id | text | dataset_id
---|---|---|
/ngxtop-0.0.3.tar.gz/ngxtop-0.0.3/README.rst
|
================================================================
``ngxtop`` - **real-time** metrics for nginx server (and others)
================================================================
**ngxtop** parses your nginx access log and outputs useful, ``top``-like, metrics of your nginx server.
So you can tell what is happening with your server in real-time.
``ngxtop`` is designed to run for a short period of time, just like the ``top`` command, for troubleshooting and monitoring
your Nginx server at a given moment. If you need a long-running monitoring process or want to store your webserver stats in an
external monitoring / graphing system, you can try `Luameter <https://luameter.com>`_.
``ngxtop`` tries to determine the correct location and format of the nginx access log file by default, so you can just run
``ngxtop`` and have a close look at all requests coming to your nginx server. But it does not limit you to nginx
and the default top view. ``ngxtop`` is flexible enough for you to configure and change most of its behaviours.
You can query for different things, specify your log file and format, and even parse a remote Apache common access log with ease.
See the sample usages below for some ideas about what you can do with it.
Installation
------------
::
pip install ngxtop
Note: ``ngxtop`` is primarily developed and tested with python2 but also supports python3.
Usage
-----
::
Usage:
ngxtop [options]
ngxtop [options] (print|top|avg|sum) <var>
ngxtop info
Options:
-l <file>, --access-log <file> access log file to parse.
-f <format>, --log-format <format> log format as specified in the log_format directive.
--no-follow ngxtop default behavior is to ignore current lines in log
and only watch for new lines as they are written to the access log.
Use this flag to tell ngxtop to process the current content of the access log instead.
-t <seconds>, --interval <seconds> report interval when running in follow mode [default: 2.0]
-g <var>, --group-by <var> group by variable [default: request_path]
-w <var>, --having <expr> having clause [default: 1]
-o <var>, --order-by <var> order of output for default query [default: count]
-n <number>, --limit <number> limit the number of records included in report for top command [default: 10]
-a <exp> ..., --a <exp> ... add exp (must be aggregation exp: sum, avg, min, max, etc.) into output
-v, --verbose more verbose output
-d, --debug print every line and parsed record
-h, --help print this help message.
--version print version information.
Advanced / experimental options:
-c <file>, --config <file> allow ngxtop to parse nginx config file for log format and location.
-i <filter-expression>, --filter <filter-expression> filter in: only records satisfying the given expression are processed.
-p <filter-expression>, --pre-filter <filter-expression> in-filter expression to check in pre-parsing phase.
Samples
-------
Default output
~~~~~~~~~~~~~~
::
$ ngxtop
running for 411 seconds, 64332 records processed: 156.60 req/sec
Summary:
| count | avg_bytes_sent | 2xx | 3xx | 4xx | 5xx |
|---------+------------------+-------+-------+-------+-------|
| 64332 | 2775.251 | 61262 | 2994 | 71 | 5 |
Detailed:
| request_path | count | avg_bytes_sent | 2xx | 3xx | 4xx | 5xx |
|------------------------------------------+---------+------------------+-------+-------+-------+-------|
| /abc/xyz/xxxx | 20946 | 434.693 | 20935 | 0 | 11 | 0 |
| /xxxxx.json | 5633 | 1483.723 | 5633 | 0 | 0 | 0 |
| /xxxxx/xxx/xxxxxxxxxxxxx | 3629 | 6835.499 | 3626 | 0 | 3 | 0 |
| /xxxxx/xxx/xxxxxxxx | 3627 | 15971.885 | 3623 | 0 | 4 | 0 |
| /xxxxx/xxx/xxxxxxx | 3624 | 7830.236 | 3621 | 0 | 3 | 0 |
| /static/js/minified/utils.min.js | 3031 | 1781.155 | 2104 | 927 | 0 | 0 |
| /static/js/minified/xxxxxxx.min.v1.js | 2889 | 2210.235 | 2068 | 821 | 0 | 0 |
| /static/tracking/js/xxxxxxxx.js | 2594 | 1325.681 | 1927 | 667 | 0 | 0 |
| /xxxxx/xxx.html | 2521 | 573.597 | 2520 | 0 | 1 | 0 |
| /xxxxx/xxxx.json | 1840 | 800.542 | 1839 | 0 | 1 | 0 |
View top source IPs of clients
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
$ ngxtop top remote_addr
running for 20 seconds, 3215 records processed: 159.62 req/sec
top remote_addr
| remote_addr | count |
|-----------------+---------|
| 118.173.177.161 | 20 |
| 110.78.145.3 | 16 |
| 171.7.153.7 | 16 |
| 180.183.67.155 | 16 |
| 183.89.65.9 | 16 |
| 202.28.182.5 | 16 |
| 1.47.170.12 | 15 |
| 119.46.184.2 | 15 |
| 125.26.135.219 | 15 |
| 125.26.213.203 | 15 |
List 4xx or 5xx responses together with HTTP referer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
$ ngxtop -i 'status >= 400' print request status http_referer
running for 2 seconds, 28 records processed: 13.95 req/sec
request, status, http_referer:
| request | status | http_referer |
|-----------+----------+----------------|
| - | 400 | - |
Parse apache log from remote server with `common` format
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
$ ssh user@remote_server tail -f /var/log/apache2/access.log | ngxtop -f common
running for 20 seconds, 1068 records processed: 53.01 req/sec
Summary:
| count | avg_bytes_sent | 2xx | 3xx | 4xx | 5xx |
|---------+------------------+-------+-------+-------+-------|
| 1068 | 28026.763 | 1029 | 20 | 19 | 0 |
Detailed:
| request_path | count | avg_bytes_sent | 2xx | 3xx | 4xx | 5xx |
|------------------------------------------+---------+------------------+-------+-------+-------+-------|
| /xxxxxxxxxx | 199 | 55150.402 | 199 | 0 | 0 | 0 |
| /xxxxxxxx/xxxxx | 167 | 47591.826 | 167 | 0 | 0 | 0 |
| /xxxxxxxxxxxxx/xxxxxx | 25 | 7432.200 | 25 | 0 | 0 | 0 |
| /xxxx/xxxxx/x/xxxxxxxxxxxxx/xxxxxxx | 22 | 698.727 | 22 | 0 | 0 | 0 |
| /xxxx/xxxxx/x/xxxxxxxxxxxxx/xxxxxx | 19 | 7431.632 | 19 | 0 | 0 | 0 |
| /xxxxx/xxxxx/ | 18 | 7840.889 | 18 | 0 | 0 | 0 |
| /xxxxxxxx/xxxxxxxxxxxxxxxxx | 15 | 7356.000 | 15 | 0 | 0 | 0 |
| /xxxxxxxxxxx/xxxxxxxx | 15 | 9978.800 | 15 | 0 | 0 | 0 |
| /xxxxx/ | 14 | 0.000 | 0 | 14 | 0 | 0 |
| /xxxxxxxxxx/xxxxxxxx/xxxxx | 13 | 20530.154 | 13 | 0 | 0 | 0 |
|
PypiClean
|
/noia-sdk-0.1.1.tar.gz/noia-sdk-0.1.1/noia_sdk/configuration.py
|
from __future__ import absolute_import
import copy
import logging
import multiprocessing
import sys
import six
import urllib3
from six.moves import http_client as httplib
class TypeWithDefault(type):
def __init__(cls, name, bases, dct):
super(TypeWithDefault, cls).__init__(name, bases, dct)
cls._default = None
def __call__(cls):
if cls._default is None:
cls._default = type.__call__(cls)
return copy.copy(cls._default)
def set_default(cls, default):
cls._default = copy.copy(default)
class Configuration(six.with_metaclass(TypeWithDefault, object)):
"""NOTE: This class is auto generated by the swagger code generator program.
Ref: https://github.com/swagger-api/swagger-codegen
Do not edit the class manually.
"""
def __init__(self):
"""Constructor"""
# Default Base url
self.host = "/"
# Temp file folder for downloading files
self.temp_folder_path = None
# Authentication Settings
# dict to store API key(s)
self.api_key = {}
# dict to store API prefix (e.g. Bearer)
self.api_key_prefix = {}
# function to refresh API key if expired
self.refresh_api_key_hook = None
# Username for HTTP basic authentication
self.username = ""
# Password for HTTP basic authentication
self.password = ""
# Logging Settings
self.logger = {}
self.logger["package_logger"] = logging.getLogger("noia_sdk")
self.logger["urllib3_logger"] = logging.getLogger("urllib3")
# Log format
self.logger_format = "%(asctime)s %(levelname)s %(message)s"
# Log stream handler
self.logger_stream_handler = None
# Log file handler
self.logger_file_handler = None
# Debug file location
self.logger_file = None
# Debug switch
self.debug = False
# SSL/TLS verification
# Set this to false to skip verifying SSL certificate when calling API
# from https server.
self.verify_ssl = True
# Set this to customize the certificate file to verify the peer.
self.ssl_ca_cert = None
# client certificate file
self.cert_file = None
# client key file
self.key_file = None
# Set this to True/False to enable/disable SSL hostname verification.
self.assert_hostname = None
# urllib3 connection pool's maximum number of connections saved
# per pool. urllib3 uses 1 connection as default value, but this is
# not the best value when you are making a lot of possibly parallel
# requests to the same host, which is often the case here.
# cpu_count * 5 is used as default value to increase performance.
self.connection_pool_maxsize = multiprocessing.cpu_count() * 5
# Proxy URL
self.proxy = None
# Safe chars for path_param
self.safe_chars_for_path_param = ""
@property
def logger_file(self):
"""The logger file.
If the logger_file is None, then add stream handler and remove file
handler. Otherwise, add file handler and remove stream handler.
:param value: The logger_file path.
:type: str
"""
return self.__logger_file
@logger_file.setter
def logger_file(self, value):
"""The logger file.
If the logger_file is None, then add stream handler and remove file
handler. Otherwise, add file handler and remove stream handler.
:param value: The logger_file path.
:type: str
"""
self.__logger_file = value
if self.__logger_file:
# If set logging file,
# then add file handler and remove stream handler.
self.logger_file_handler = logging.FileHandler(self.__logger_file)
self.logger_file_handler.setFormatter(self.logger_formatter)
for _, logger in six.iteritems(self.logger):
logger.addHandler(self.logger_file_handler)
if self.logger_stream_handler:
logger.removeHandler(self.logger_stream_handler)
else:
# If not set logging file,
# then add stream handler and remove file handler.
self.logger_stream_handler = logging.StreamHandler()
self.logger_stream_handler.setFormatter(self.logger_formatter)
for _, logger in six.iteritems(self.logger):
logger.addHandler(self.logger_stream_handler)
if self.logger_file_handler:
logger.removeHandler(self.logger_file_handler)
@property
def debug(self):
"""Debug status
:param value: The debug status, True or False.
:type: bool
"""
return self.__debug
@debug.setter
def debug(self, value):
"""Debug status
:param value: The debug status, True or False.
:type: bool
"""
self.__debug = value
if self.__debug:
# if debug status is True, turn on debug logging
for _, logger in six.iteritems(self.logger):
logger.setLevel(logging.DEBUG)
# turn on httplib debug
httplib.HTTPConnection.debuglevel = 1
else:
# if debug status is False, turn off debug logging,
# setting log level to default `logging.WARNING`
for _, logger in six.iteritems(self.logger):
logger.setLevel(logging.WARNING)
# turn off httplib debug
httplib.HTTPConnection.debuglevel = 0
@property
def logger_format(self):
"""The logger format.
The logger_formatter will be updated when sets logger_format.
:param value: The format string.
:type: str
"""
return self.__logger_format
@logger_format.setter
def logger_format(self, value):
"""The logger format.
The logger_formatter will be updated when sets logger_format.
:param value: The format string.
:type: str
"""
self.__logger_format = value
self.logger_formatter = logging.Formatter(self.__logger_format)
def get_api_key_with_prefix(self, identifier):
"""Gets API key (with prefix if set).
:param identifier: The identifier of apiKey.
:return: The token for api key authentication.
"""
if self.refresh_api_key_hook:
self.refresh_api_key_hook(self)
key = self.api_key.get(identifier)
if key:
prefix = self.api_key_prefix.get(identifier)
if prefix:
return "%s %s" % (prefix, key)
else:
return key
def get_basic_auth_token(self):
"""Gets HTTP basic authentication header (string).
:return: The token for basic HTTP authentication.
"""
return urllib3.util.make_headers(
basic_auth=self.username + ":" + self.password
).get("authorization")
def auth_settings(self):
"""Gets Auth Settings dict for api client.
:return: The Auth Settings information dict.
"""
return {
"jwt": {
"type": "api_key",
"in": "header",
"key": "Authorization",
"value": self.get_api_key_with_prefix("Authorization"),
},
}
def to_debug_report(self):
"""Gets the essential information for debugging.
:return: The report for debugging.
"""
return (
"Python SDK Debug Report:\n"
"OS: {env}\n"
"Python Version: {pyversion}\n"
"Version of the API: 1.0.0\n"
"SDK Package Version: 0.0.1".format(env=sys.platform, pyversion=sys.version)
)
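
# --- Hedged usage sketch (not part of the generated client; the host, key name and
# --- token value below are illustrative assumptions) ---------------------------------
if __name__ == "__main__":
    config = Configuration()
    config.host = "https://api.example.com"            # assumed base URL
    config.api_key["Authorization"] = "my-token"        # assumed API key value
    config.api_key_prefix["Authorization"] = "Bearer"
    print(config.get_api_key_with_prefix("Authorization"))  # -> "Bearer my-token"
    print(config.auth_settings()["jwt"]["value"])            # same token, as consumed by the API client
    config.debug = True  # switches the noia_sdk and urllib3 loggers to DEBUG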
|
PypiClean
|
/grid_utils-0.0.33-py3-none-any.whl/grid_utils/dimension/grid_dim.py
|
import six
import re
import numpy as np
from krux.types.check import *
__all__ = [
'GridDim',
'GridDimParseSerializeMixin',
'GridDimLUTMixin',
'GridDimBase',
]
class GridDimParseSerializeMixin(object):
"""requires: name"""
parser = None
serializer = None
def parse(self, value):
if not self.parser:
return value
if callable(self.parser):
return self.parser(value)
raise ValueError("Invalid parser in dim {}: {}".format(self.name, self.parser))
def serialize(self, value):
if not self.serializer:
return u"{}".format(value)
if callable(self.serializer):
return self.serializer(value)
elif isinstance(self.serializer, six.string_types):
if re.match(r'.*{[^{}]*}.*', self.serializer):
return self.serializer.format(value)
elif '%' in self.serializer:
return self.serializer % value
else:
raise ValueError("Invalid serializer in dim {}: {}".format(self.name, self.serializer))
class GridDimLUTMixin(object):
"""Requires: serialize, values"""
_true_lut = None
@property
def _lut(self):
if self._true_lut is None:
return self._build_lut(self.values)
else:
return self._true_lut
def _build_lut(self, values):
return {self.serialize(v): i for i, v in enumerate(values)}
class GridDimBase(object):
is_grid_dim = True
class GridDim(GridDimParseSerializeMixin, GridDimLUTMixin, GridDimBase):
_values = None
def __init__(self, name, values, **kwargs):
self.name = name
self._values = values
for k, v in six.iteritems(kwargs):
setattr(self, k, v)
@property
def ndim(self):
if isinstance(self.name, six.string_types):
return 1
else:
return len(self.name)
@property
def values(self):
if callable(self._values):
return self._values()
else:
return self._values
@values.setter
def values(self, new_vals):
if callable(new_vals):
self._values = new_vals
self._true_lut = None
else:
self._values = new_vals
self._true_lut = self._build_lut(self._values)
@property
def size(self):
if self.ndim == 1:
return len(self.values)
else:
raise NotImplementedError("Dim size for multi-dim should be implemented by user.")
def __getitem__(self, key):
return self.get_index(key)
def get_index(self, key, **kwargs):
if key is None:
return None
elif is_integer(key):
return key
elif isinstance(key, slice):
try:
start = self.get_index(key.start, **kwargs)
except IndexError:
start = 0
try:
stop = self.get_index(key.stop, **kwargs) + 1
except Exception:
stop = None
return slice(start, stop, key.step)
else:
try:
return self._lut[self.serialize(key)]
except KeyError as e:
raise KeyError(u"{}, valid values: {}".format(key, self.values))
def __repr__(self):
return "<{} {}>".format(self.__class__.__name__, self.name)
|
PypiClean
|
/Py-Trading-0.4.9.11.tar.gz/Py-Trading-0.4.9.11/py_trading/download_tickers.py
|
from requests import get
from bs4 import BeautifulSoup
from datetime import datetime, date
import pickle
from pathlib import Path
import pandas as pd
# IMPLEMENT THREADING
# results = [executor.submit(test_stocks, i, n_threads) for i in range(n_threads)] # Can try executor.map()
# for f in concurrent.futures.as_completed(results):
# print(f.result())
# def test_stocks(index_of_thread, num_of_threads): # Divide # of stocks per thread / total stocks to be tested. Index_of_thread is which thread from 0 to n threads.
# n_stocks_per_thread = len(Stock.objects.all())
# portion = Stock.objects.all()[index_of_thread*n_stocks_per_thread:(index_of_thread+1)*n_stocks_per_thread]
def get_sp500():
request = get('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
soup = BeautifulSoup(request.text, 'lxml')
table = soup.find('table')
df = pd.read_html(str(table))
return df[0]
def get_nasdaq(): # Nasdaq + NYSE + AMEX
dfs = []
for letter in 'abcdefghijklmnopqrstuvwxyz':
request = get(f'https://www.advfn.com/nasdaq/nasdaq.asp?companies={letter.upper()}')
soup = BeautifulSoup(request.text, 'lxml')
table = soup.find('table', {'class': 'market tab1'})
df = pd.read_html(str(table))[0]
df.columns = df.iloc[1].tolist()
df = df.iloc[2:]
df = df.reset_index()
df = df[['Symbol', 'Equity']]
df.columns = ['ticker', 'name']
dfs.append(df)
for letter in 'abcdefghijklmnopqrstuvwxyz':
request = get(f'http://eoddata.com/stocklist/NASDAQ/{letter}.htm')
soup = BeautifulSoup(request.text, 'lxml')
table = soup.find('table', {'class': 'quotes'})
df = pd.read_html(str(table))[0]
df = df[['Code', 'Name']]
df.columns = ['ticker', 'name']
dfs.append(df)
df = pd.concat(dfs)
df = df.reset_index()
df = df[['ticker', 'name']]
# if as_list:
# return df.set_index('ticker').to_dict()
return df
def get_nyse(): # Test to see if duplicate tickers on backend or Django webapp
dfs = []
for letter in 'abcdefghijklmnopqrstuvwxyz':
request = get(f'https://www.advfn.com/nyse/newyorkstockexchange.asp?companies={letter.upper()}')
soup = BeautifulSoup(request.text, 'lxml')
table = soup.find('table', {'class': 'market tab1'})
df = pd.read_html(str(table))[0]
df.columns = df.iloc[1].tolist()
df = df.iloc[2:]
df = df.reset_index()
df = df[['Symbol', 'Equity']]
df.columns = ['ticker', 'name']
dfs.append(df)
for letter in 'abcdefghijklmnopqrstuvwxyz':
request = get(f'https://eoddata.com/stocklist/NYSE/{letter}.htm')
soup = BeautifulSoup(request.text, 'lxml')
table = soup.find('table', {'class': 'quotes'})
try:
df = pd.read_html(str(table))[0]
except Exception:
df = pd.read_html(str(table))
df = df[['Code', 'Name']]
df.columns = ['ticker', 'name']
dfs.append(df)
# Will this work since they are series?
df = pd.concat(dfs)
df = df.reset_index()
df = df[['ticker', 'name']]
# df['ticker'] = df['ticker'].unique()
# df['name'] = df['name'].unique()
# if as_list:
# return sorted(df.tolist())
return df.sort_values(by='ticker', ascending=True)
# def get_biggest_movers():
# tickers = []
# request = get('https://www.tradingview.com/markets/stocks-usa/market-movers-gainers/')
# soup = BeautifulSoup(request.text, 'lxml')
# table = soup.find('tbody', {'class': 'tv-data-table__tbody'})
# for i in table.find_all('a', {'class': 'tv-screener__symbol'})[::2]:
# tickers.append(i.get_text())
# request = get('http://thestockmarketwatch.com/markets/topstocks/')
# soup = BeautifulSoup(request.text, 'lxml')
# table = soup.find_all('div', {'class': 'activestockstbl'})
# return list(set(tickers))
def get_day_hot_stocks():
url = 'https://www.tradingview.com/markets/stocks-usa/market-movers-gainers/'
page = get(url)
soup = BeautifulSoup(page.content, 'html.parser')
rows = soup.find_all('tr', {'class':'tv-data-table__row tv-data-table__stroke tv-screener-table__result-row'})
return [row.find('a').get_text() for row in rows]
def get_day_premarket_movers():
url = 'https://thestockmarketwatch.com/markets/pre-market/today.aspx'
page = get(url)
soup = BeautifulSoup(page.content, 'html.parser')
table = soup.find_all('table', {'id': 'tblMoversDesktop'})[0]
try:
print('Biggest winners from TheStockMarketWatch:')
marketwatch_list = [(ticker.get_text(), float(change.get_text()[:-1].replace(',',''))) for ticker, change in zip(table.find_all('td', {'class': 'tdSymbol'}), table.find_all('td', {'class': 'tdChangePct'}))]
for ticker, percentage in sorted(marketwatch_list, key=lambda x: x[1], reverse=True):
print(f'{ticker}: {percentage}%')
except Exception:
print('Due to unforeseen errors, the stockmarketwatch list could not be reached.')
print()
url = 'https://www.benzinga.com/money/premarket-movers/'
page = get(url)
soup = BeautifulSoup(page.content, 'html.parser')
div = soup.find('div', {'id': 'movers-stocks-table-gainers'})
tbody = div.find('tbody')
data = [(i.get_text().replace('\n ', '')[2:], float(j.get_text().replace('\n ', '')[2:-1])) for i, j in zip(tbody.find_all('a', {'class': 'font-normal'}), tbody.find_all('td')[3::5])]
try:
print('Biggest winners from Benzinga:')
for ticker, percentage in data:
print(f'{ticker}: {percentage}%')
except Exception:
print('Due to unforeseen errors, the Benzinga list could not be reached.')
def get_silver_stocks():
url = 'http://www.24hgold.com/english/listcompanies.aspx?fundamental=datas&data=company&commodity=ag&commodityname=SILVER&sort=resources&iordre=107'
page = get(url)
soup = BeautifulSoup(page.content, 'html.parser')
table = soup.find('table', {'id': 'ctl00_BodyContent_tbDataExport'})
rows = table.find_all('td', {'class': ['cell_bleue_center', 'cell_gold_right']})
for i in range(len(rows))[28::2]:
if not '.' in rows[i].get_text():
print(f'{rows[i].get_text()}: ${rows[i+1].get_text()}')
def load_biggest_movers():
path = Path(__file__).parents[1]
with open(f'{path}/dailypickle/biggest_movers-{datetime.now().strftime("%m-%d-%Y")}.pkl', 'rb') as f:
return pickle.load(f)
def pickle_biggest_movers(portfolio):
path = Path(__file__).parents[1]
with open(f'{path}/dailypickle/biggest_movers-{datetime.now().strftime("%m-%d-%Y")}.pkl', 'wb') as f:
pickle.dump(portfolio, f)
def load_positions():
path = Path(__file__).parents[1]
with open(f'{path}/dailypickle/positions-{datetime.now().strftime("%m-%d-%Y")}.pkl', 'rb') as f:
return pickle.load(f)
def pickle_positions(portfolio):
# This is useless
path = Path(__file__).parents[1]
with open(f'{path}/dailypickle/positions-{datetime.now().strftime("%m-%d-%Y")}.pkl', 'wb') as f:
pickle.dump(portfolio, f)
def pickle_dump(portfolio):
today = date.today()
with open(f"{today.strftime('%m-%d')}_pickle.pkl", 'wb') as f:
pickle.dump(portfolio, f)
def pickle_load():
today = date.today()
with open(f"{today.strftime('%m-%d')}_pickle.pkl", 'rb') as f:
return pickle.load(f)
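
# --- Hedged usage sketch (requires network access plus the bs4/lxml/pandas dependencies
# --- imported above; the scraped sites change markup over time, so treat this purely as
# --- an illustration). -----------------------------------------------------------------
if __name__ == "__main__":
    print(get_sp500().head())        # first rows of the Wikipedia S&P 500 constituents table
    print(get_day_hot_stocks()[:5])  # a few of today's top-gainer tickers (may be empty if markup changed)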
|
PypiClean
|
/Assembly-1.3.0.tar.gz/Assembly-1.3.0/assembly/_extensions.py
|
import os
import re
import six
import copy
import logging
import markdown
import flask_s3
import flask_mail
import ses_mailer
import flask_login
import flask_kvsession
from jinja2 import Markup
from jinja2.ext import Extension
from urllib.parse import urlparse
from jinja2.nodes import CallBlock
from jinja2 import TemplateSyntaxError
from . import (ext, config, app_context, utils)
from jinja2.lexer import Token, describe_token
from flask import (request, current_app, send_file, session)
# ------------------------------------------------------------------------------
@app_context
def setup(app):
check_config_keys = ["SECRET_KEY"]
for k in check_config_keys:
if k not in app.config \
or not app.config.get(k):
msg = "Missing config key value: %s " % k
logging.warning(msg)
exit()
#
"""
Flatten properties that were set as a dict in the config:
MAIL = {
"sender": "[email protected]",
"a_key": "some-value"
}
MAIL_SENDER
MAIL_A_KEY
"""
for k in ["CORS"]:
utils.flatten_config_property(k, app.config)
# ------------------------------------------------------------------------------
# Session
@app_context
def session(app):
"""
Sessions
It uses KV session to allow multiple backend for the session
"""
store = None
uri = app.config.get("SESSION_URL")
if uri:
parse_uri = urlparse(uri)
scheme = parse_uri.scheme
username = parse_uri.username
password = parse_uri.password
hostname = parse_uri.hostname
port = parse_uri.port
bucket = parse_uri.path.strip("/")
if "redis" in scheme:
import redis
from simplekv.memory.redisstore import RedisStore
conn = redis.StrictRedis.from_url(url=uri)
store = RedisStore(conn)
elif "s3" in scheme or "google_storage" in scheme:
from simplekv.net.botostore import BotoStore
import boto
if "s3" in scheme:
_con_fn = boto.connect_s3
else:
_con_fn = boto.connect_gs
conn = _con_fn(username, password)
_bucket = conn.create_bucket(bucket)
store = BotoStore(_bucket)
elif "memcache" in scheme:
import memcache
from simplekv.memory.memcachestore import MemcacheStore
host_port = "%s:%s" % (hostname, port)
conn = memcache.Client(servers=[host_port])
store = MemcacheStore(conn)
elif "sql" in scheme:
from simplekv.db.sql import SQLAlchemyStore
from sqlalchemy import create_engine, MetaData
engine = create_engine(uri)
metadata = MetaData(bind=engine)
store = SQLAlchemyStore(engine, metadata, 'kvstore')
metadata.create_all()
else:
raise Error("Invalid Session Store. '%s' provided" % scheme)
if store:
flask_kvsession.KVSessionExtension(store, app)
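# Assumed SESSION_URL examples for the scheme dispatch above (illustrative values,
# not taken from the Assembly documentation):
#   "redis://localhost:6379/0"                    -> RedisStore
#   "memcache://127.0.0.1:11211"                  -> MemcacheStore
#   "s3://ACCESS_KEY:SECRET_KEY@s3/bucket-name"   -> BotoStore (bucket taken from the URL path)
#   "postgresql://user:pass@localhost/dbname"     -> SQLAlchemyStore ("sql" appears in the scheme)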
# ------------------------------------------------------------------------------
# Mailer
class _Mailer(object):
"""
config key: MAIL_*
A simple wrapper to switch between SES-Mailer and Flask-Mail based on config
"""
mail = None
provider = None
config = None
_template = None
@property
def validated(self):
return bool(self.mail)
def init_app(self, app):
utils.flatten_config_property("MAIL", app.config)
self.config = app.config
scheme = None
mailer_uri = self.config.get("MAIL_URL")
if mailer_uri:
templates_sources = app.config.get("MAIL_TEMPLATE")
if not templates_sources:
templates_sources = app.config.get("MAIL_TEMPLATES_DIR") or app.config.get("MAIL_TEMPLATES_DICT")
mailer_uri = urlparse(mailer_uri)
scheme = mailer_uri.scheme
hostname = mailer_uri.hostname
# Using ses-mailer
if "ses" in scheme.lower():
self.provider = "SES"
access_key = mailer_uri.username or app.config.get("AWS_ACCESS_KEY_ID")
secret_key = mailer_uri.password or app.config.get("AWS_SECRET_ACCESS_KEY")
region = hostname or self.config.get("AWS_REGION", "us-east-1")
self.mail = ses_mailer.Mail(aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
region=region,
sender=self.config.get("MAIL_SENDER"),
reply_to=self.config.get("MAIL_REPLY_TO"),
template=templates_sources,
template_context=self.config.get("MAIL_TEMPLATE_CONTEXT"))
# SMTP will use flask-mail
elif "smtp" in scheme.lower():
self.provider = "SMTP"
class _App(object):
config = {
"MAIL_SERVER": mailer_uri.hostname,
"MAIL_USERNAME": mailer_uri.username,
"MAIL_PASSWORD": mailer_uri.password,
"MAIL_PORT": mailer_uri.port,
"MAIL_USE_TLS": True if "tls" in mailer_uri.scheme else False,
"MAIL_USE_SSL": True if "ssl" in mailer_uri.scheme else False,
"MAIL_DEFAULT_SENDER": app.config.get("MAIL_SENDER"),
"TESTING": app.config.get("TESTING"),
"DEBUG": app.config.get("DEBUG")
}
debug = app.config.get("DEBUG")
testing = app.config.get("TESTING")
_app = _App()
self.mail = flask_mail.Mail(app=_app)
_ses_mailer = ses_mailer.Mail(template=templates_sources,
template_context=self.config.get("MAIL_TEMPLATE_CONTEXT"))
self._template = _ses_mailer.parse_template
else:
logging.warning("Mailer Error. Invalid scheme '%s'" % scheme)
def send(self, to, subject=None, body=None, reply_to=None, template=None, **kwargs):
"""
To send email
:param to: the recipients, list or string
:param subject: the subject
:param body: the body
:param reply_to: reply_to
:param template: template, will use the templates instead
:param kwargs: context args
:return: bool - True if everything is ok
"""
sender = self.config.get("MAIL_SENDER")
recipients = [to] if not isinstance(to, list) else to
kwargs.update({
"subject": subject,
"body": body,
"reply_to": reply_to
})
if not self.validated:
raise Error("Mail configuration error")
if self.provider == "SES":
kwargs["to"] = recipients
if template:
self.mail.send_template(template=template, **kwargs)
else:
self.mail.send(**kwargs)
elif self.provider == "SMTP":
if template:
data = self._template(template=template, **kwargs)
kwargs["subject"] = data["subject"]
kwargs["body"] = data["body"]
kwargs["recipients"] = recipients
kwargs["sender"] = sender
# Remove invalid Messages keys
_safe_keys = ["recipients", "subject", "body", "html", "alts",
"cc", "bcc", "attachments", "reply_to", "sender",
"date", "charset", "extra_headers", "mail_options",
"rcpt_options"]
for k in kwargs.copy():
if k not in _safe_keys:
del kwargs[k]
message = flask_mail.Message(**kwargs)
self.mail.send(message)
else:
raise Error("Invalid mail provider. Must be 'SES' or 'SMTP'")
ext.mail = _Mailer()
app_context(ext.mail.init_app)
# ------------------------------------------------------------------------------
# Assets Delivery
class _AssetsDelivery(flask_s3.FlaskS3):
def init_app(self, app):
delivery_method = app.config.get("ASSETS_DELIVERY_METHOD")
if delivery_method and delivery_method.upper() in ["S3", "CDN"]:
# with app.app_context():
is_secure = False # request.is_secure
if delivery_method.upper() == "CDN":
domain = app.config.get("ASSETS_DELIVERY_DOMAIN")
if "://" in domain:
domain_parsed = urlparse(domain)
is_secure = domain_parsed.scheme == "https"
domain = domain_parsed.netloc
app.config.setdefault("S3_CDN_DOMAIN", domain)
app.config["FLASK_ASSETS_USE_S3"] = True
app.config["FLASKS3_ACTIVE"] = True
app.config["FLASKS3_URL_STYLE"] = "path"
app.config.setdefault("FLASKS3_USE_HTTPS", is_secure)
app.config.setdefault("FLASKS3_ONLY_MODIFIED", True)
app.config.setdefault("FLASKS3_GZIP", True)
app.config.setdefault("FLASKS3_BUCKET_NAME", app.config.get("AWS_S3_BUCKET_NAME"))
super(self.__class__, self).init_app(app)
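# Assumed config values for the asset delivery setup above (illustrative only):
#   ASSETS_DELIVERY_METHOD = "CDN"
#   ASSETS_DELIVERY_DOMAIN = "https://cdn.example.com"   # its scheme decides FLASKS3_USE_HTTPS
# or
#   ASSETS_DELIVERY_METHOD = "S3"
#   AWS_S3_BUCKET_NAME = "my-asset-bucket"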
ext.assets_delivery = _AssetsDelivery()
app_context(ext.assets_delivery.init_app)
# ------------------------------------------------------------------------------
# Flask-Login
ext.login_manager = flask_login.LoginManager()
@ext.login_manager.user_loader
def _login_manager_user_loader(user_id):
"""
setup None user loader.
Without this, it will throw an error if it doesn't exist
"""
return None
@app_context
def login_manager_init(app):
""" set the config for the login manager """
lm = app.config.get("LOGIN_MANAGER")
ext.login_manager.init_app(app)
if lm:
for k, v in lm.items():
setattr(ext.login_manager, k, v)
# ------------------------------------------------------------------------------
# Markdown
class JinjaMDTagExt(Extension):
"""
A simple extension for adding a {% markdown %}{% endmarkdown %} tag to Jinja
<div>
{% markdown %}
## Hi
{% endmarkdown %}
</div>
"""
tags = set(['markdown'])
def __init__(self, environment):
super(JinjaMDTagExt, self).__init__(environment)
environment.extend(
markdowner=markdown.Markdown(extensions=['extra'])
)
def parse(self, parser):
lineno = next(parser.stream).lineno
body = parser.parse_statements(
['name:endmarkdown'],
drop_needle=True
)
return CallBlock(
self.call_method('_markdown_support'),
[],
[],
body
).set_lineno(lineno)
def _markdown_support(self, caller):
block = caller()
block = self._strip_whitespace(block)
return self._render_markdown(block)
def _strip_whitespace(self, block):
lines = block.split('\n')
whitespace = ''
output = ''
if (len(lines) > 1):
for char in lines[1]:
if (char == ' ' or char == '\t'):
whitespace += char
else:
break
for line in lines:
output += line.replace(whitespace, '', 1) + '\r\n'
return output.strip()
def _render_markdown(self, block):
block = self.environment.markdowner.convert(block)
return block
class JinjaMDExt(Extension):
"""
JINJA Convert Markdown file to HTML
"""
options = {}
file_extensions = '.md'
def preprocess(self, source, name, filename=None):
if (not name or
(name and not os.path.splitext(name)[1] in self.file_extensions)):
return source
return md_to_html(source)
# Markdown
mkd = markdown.Markdown(extensions=[
'markdown.extensions.extra',
'markdown.extensions.nl2br',
'markdown.extensions.sane_lists',
'markdown.extensions.toc'
])
def md_to_html(text):
'''
Convert MD text to HTML
:param text:
:return:
'''
mkd.reset()
return mkd.convert(text)
@app_context
def setup_markdown(app):
"""
Load markdown extension
"""
app.jinja_env.add_extension(JinjaMDTagExt)
app.jinja_env.add_extension(JinjaMDExt)
# The extension
ext.markdown = md_to_html
# --------
"""
-- jinja2-htmlcompress
a Jinja2 extension that removes whitespace between HTML tags.
Example usage:
env = Environment(extensions=['htmlcompress_ext.HTMLCompress'])
How does it work? It throws away all whitespace between HTML tags
it can find at runtime. It will however preserve pre, textarea, style
and script tags because this kinda makes sense. In order to force
whitespace you can use ``{{ " " }}``.
Unlike filters that work at template runtime, this removes whitespace
at compile time and does not add overhead to template execution.
What if you only want to selectively strip stuff?
env = Environment(extensions=['htmlcompress_ext.SelectiveHTMLCompress'])
And then mark blocks with ``{% strip %}``:
{% strip %} ... {% endstrip %}
"""
gl_tag_re = re.compile(r'(?:<(/?)([a-zA-Z0-9_-]+)\s*|(>\s*))(?s)')
gl_ws_normalize_re = re.compile(r'[ \t\r\n]+')
class StreamProcessContext(object):
def __init__(self, stream):
self.stream = stream
self.token = None
self.stack = []
def fail(self, message):
raise TemplateSyntaxError(message, self.token.lineno, self.stream.name,
self.stream.filename)
def _make_dict_from_listing(listing):
rv = {}
for keys, value in listing:
for key in keys:
rv[key] = value
return rv
class HTMLCompress(Extension):
isolated_elements = set(['script', 'style', 'noscript', 'textarea', 'pre'])
void_elements = set(['br', 'img', 'area', 'hr', 'param', 'input',
'embed', 'col'])
block_elements = set(['div', 'p', 'form', 'ul', 'ol', 'li', 'table', 'tr',
'tbody', 'thead', 'tfoot', 'tr', 'td', 'th', 'dl',
'dt', 'dd', 'blockquote', 'h1', 'h2', 'h3', 'h4',
'h5', 'h6'])
breaking_rules = _make_dict_from_listing([
(['p'], set(['#block'])),
(['li'], set(['li'])),
(['td', 'th'], set(['td', 'th', 'tr', 'tbody', 'thead', 'tfoot'])),
(['tr'], set(['tr', 'tbody', 'thead', 'tfoot'])),
(['thead', 'tbody', 'tfoot'], set(['thead', 'tbody', 'tfoot'])),
(['dd', 'dt'], set(['dl', 'dt', 'dd']))
])
def is_isolated(self, stack):
for tag in reversed(stack):
if tag in self.isolated_elements:
return True
return False
def is_breaking(self, tag, other_tag):
breaking = self.breaking_rules.get(other_tag)
return breaking and (tag in breaking or (
'#block' in breaking and tag in self.block_elements))
def enter_tag(self, tag, ctx):
while ctx.stack and self.is_breaking(tag, ctx.stack[-1]):
self.leave_tag(ctx.stack[-1], ctx)
if tag not in self.void_elements:
ctx.stack.append(tag)
def leave_tag(self, tag, ctx):
if not ctx.stack:
ctx.fail(
'Tried to leave "%s" but something closed it already' % tag)
if tag == ctx.stack[-1]:
ctx.stack.pop()
return
for idx, other_tag in enumerate(reversed(ctx.stack)):
if other_tag == tag:
for num in range(idx + 1):
ctx.stack.pop()
elif not self.breaking_rules.get(other_tag):
break
def normalize(self, ctx):
pos = 0
buffer = []
def write_data(value):
if not self.is_isolated(ctx.stack):
value = gl_ws_normalize_re.sub(' ', value)
buffer.append(value)
for match in gl_tag_re.finditer(ctx.token.value):
closes, tag, sole = match.groups()
preamble = ctx.token.value[pos:match.start()]
write_data(preamble)
if sole:
write_data(sole)
else:
buffer.append(match.group())
(closes and self.leave_tag or self.enter_tag)(tag, ctx)
pos = match.end()
write_data(ctx.token.value[pos:])
return ''.join(buffer)
def filter_stream(self, stream):
ctx = StreamProcessContext(stream)
for token in stream:
if token.type != 'data':
yield token
continue
ctx.token = token
value = self.normalize(ctx)
yield Token(token.lineno, 'data', value)
class SelectiveHTMLCompress(HTMLCompress):
def filter_stream(self, stream):
ctx = StreamProcessContext(stream)
strip_depth = 0
while True:
if stream.current.type == 'block_begin':
if stream.look().test('name:strip') or stream.look().test(
'name:endstrip'):
stream.skip()
if stream.current.value == 'strip':
strip_depth += 1
else:
strip_depth -= 1
if strip_depth < 0:
ctx.fail('Unexpected tag endstrip')
stream.skip()
if stream.current.type != 'block_end':
ctx.fail(
'expected end of block, got %s' % describe_token(
stream.current))
stream.skip()
if strip_depth > 0 and stream.current.type == 'data':
ctx.token = stream.current
value = self.normalize(ctx)
yield Token(stream.current.lineno, 'data', value)
else:
yield stream.current
next(stream)
@app_context
def setup_compress_html(app):
if app.config.get("COMPRESS_HTML"):
app.jinja_env.add_extension(HTMLCompress)
# ------------------------------------------------------------------------------
"""
PLACEHOLDER
With this extension you can define placeholders where your blocks get rendered
and at different places in your templates append to those blocks.
This is especially useful for css and javascript.
Your sub-templates can now define css and Javascript files
to be included, and the css will be nicely put at the top and
the Javascript to the bottom, just like you should.
It will also ignore any duplicate content in a single block.
<html>
<head>
{% placeholder "css" %}
</head>
<body>
Your content comes here.
Maybe you want to throw in some css:
{% addto "css" %}
<link href="/media/css/stylesheet.css" media="screen" rel="[stylesheet](stylesheet)" type="text/css" />
{% endaddto %}
Some more content here.
{% addto "js" %}
<script type="text/javascript">
alert("Hello flask");
</script>
{% endaddto %}
And even more content.
{% placeholder "js" %}
</body>
</html>
"""
@app_context
def init_app(app):
app.jinja_env.add_extension(PlaceholderAddTo)
app.jinja_env.add_extension(PlaceholderRender)
app.jinja_env.placeholder_tags = {}
class PlaceholderAddTo(Extension):
tags = set(['addtoblock'])
def _render_tag(self, name, caller):
context = self.environment.placeholder_tags
blocks = context.get(name)
if blocks is None:
blocks = set()
blocks.add(caller().strip())
context[name] = blocks
return Markup("")
def parse(self, parser):
lineno = next(parser.stream).lineno
name = parser.parse_expression()
body = parser.parse_statements(['name:endaddtoblock'], drop_needle=True)
args = [name]
return CallBlock(self.call_method('_render_tag', args),
[], [], body).set_lineno(lineno)
class PlaceholderRender(Extension):
tags = set(['renderblock'])
def _render_tag(self, name: str, caller):
context = self.environment.placeholder_tags
return Markup('\n'.join(context.get(name, [])))
def parse(self, parser):
lineno = next(parser.stream).lineno
name = parser.parse_expression()
args = [name]
return CallBlock(self.call_method('_render_tag', args),
[], [], []).set_lineno(lineno)
|
PypiClean
|
/eko_probability01-0.5.tar.gz/eko_probability01-0.5/eko_probability01/Gaussiandistribution.py
|
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
|
PypiClean
|
/codemagic_cli_tools-0.42.1-py3-none-any.whl/codemagic/tools/_app_store_connect/arguments.py
|
import argparse
import json
import pathlib
import re
import shlex
from argparse import ArgumentTypeError
from collections import Counter
from dataclasses import dataclass
from dataclasses import fields
from datetime import datetime
from datetime import timezone
from typing import List
from typing import Optional
from typing import Type
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from codemagic import cli
from codemagic.apple.app_store_connect import AppStoreConnectApiClient
from codemagic.apple.app_store_connect import IssuerId
from codemagic.apple.app_store_connect import KeyIdentifier
from codemagic.apple.resources import AppStoreState
from codemagic.apple.resources import BetaReviewState
from codemagic.apple.resources import BuildProcessingState
from codemagic.apple.resources import BundleIdPlatform
from codemagic.apple.resources import CertificateType
from codemagic.apple.resources import DeviceStatus
from codemagic.apple.resources import Locale
from codemagic.apple.resources import Platform
from codemagic.apple.resources import ProfileState
from codemagic.apple.resources import ProfileType
from codemagic.apple.resources import ReleaseType
from codemagic.apple.resources import ResourceId
from codemagic.apple.resources import ReviewSubmissionState
from codemagic.cli import Colors
from codemagic.models import Certificate
from codemagic.models import ProvisioningProfile
@dataclass
class BetaBuildInfo:
whats_new: str
locale: Optional[Locale]
@dataclass
class AppStoreVersionInfo:
platform: Platform
copyright: Optional[str] = None
earliest_release_date: Optional[datetime] = None
release_type: Optional[ReleaseType] = None
version_string: Optional[str] = None
@dataclass
class AppStoreVersionLocalizationInfo:
description: Optional[str] = None
keywords: Optional[str] = None
locale: Optional[Locale] = None
marketing_url: Optional[str] = None
promotional_text: Optional[str] = None
support_url: Optional[str] = None
whats_new: Optional[str] = None
class Types:
class IssuerIdArgument(cli.EnvironmentArgumentValue[IssuerId]):
argument_type = IssuerId
environment_variable_key = "APP_STORE_CONNECT_ISSUER_ID"
class KeyIdentifierArgument(cli.EnvironmentArgumentValue[KeyIdentifier]):
argument_type = KeyIdentifier
environment_variable_key = "APP_STORE_CONNECT_KEY_IDENTIFIER"
class PrivateKeyArgument(cli.EnvironmentArgumentValue[str]):
PRIVATE_KEY_LOCATIONS = (
pathlib.Path("./private_keys"),
pathlib.Path("~/private_keys"),
pathlib.Path("~/.private_keys"),
pathlib.Path("~/.appstoreconnect/private_keys"),
)
environment_variable_key = "APP_STORE_CONNECT_PRIVATE_KEY"
def _apply_type(self, non_typed_value: str) -> str:
pem_private_key = self.argument_type(non_typed_value)
try:
_ = load_pem_private_key(pem_private_key.encode(), None)
except ValueError as ve:
raise argparse.ArgumentTypeError("Provided value is not a valid PEM encoded private key") from ve
return pem_private_key
class CertificateKeyArgument(cli.EnvironmentArgumentValue[str]):
environment_variable_key = "CERTIFICATE_PRIVATE_KEY"
@classmethod
def _is_valid(cls, value: str) -> bool:
return value.startswith("-----BEGIN ")
class CertificateKeyPasswordArgument(cli.EnvironmentArgumentValue):
environment_variable_key = "CERTIFICATE_PRIVATE_KEY_PASSWORD"
class AppSpecificPassword(cli.EnvironmentArgumentValue):
environment_variable_key = "APP_SPECIFIC_PASSWORD"
@classmethod
def _is_valid(cls, value: str) -> bool:
return bool(re.match(r"^([a-z]{4}-){3}[a-z]{4}$", value))
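# e.g. "abcd-efgh-ijkl-mnop" satisfies the pattern above (illustrative value)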
class WhatsNewArgument(cli.EnvironmentArgumentValue[str]):
environment_variable_key = "APP_STORE_CONNECT_WHATS_NEW"
class AppStoreConnectSkipPackageValidation(cli.TypedCliArgument[bool]):
argument_type = bool
environment_variable_key = "APP_STORE_CONNECT_SKIP_PACKAGE_VALIDATION"
class AppStoreConnectEnablePackageValidation(cli.TypedCliArgument[bool]):
argument_type = bool
environment_variable_key = "APP_STORE_CONNECT_ENABLE_PACKAGE_VALIDATION"
class AppStoreConnectSkipPackageUpload(cli.TypedCliArgument[bool]):
argument_type = bool
environment_variable_key = "APP_STORE_CONNECT_SKIP_PACKAGE_UPLOAD"
class AppStoreConnectDisableJwtCache(cli.TypedCliArgument[bool]):
argument_type = bool
environment_variable_key = "APP_STORE_CONNECT_DISABLE_JWT_CACHE"
class AltoolRetriesCount(cli.TypedCliArgument[int]):
argument_type = int
environment_variable_key = "APP_STORE_CONNECT_ALTOOL_RETRIES"
default_value = 10
@classmethod
def _is_valid(cls, value: int) -> bool:
return value > 0
class AltoolRetryWait(cli.TypedCliArgument[float]):
argument_type = float
environment_variable_key = "APP_STORE_CONNECT_ALTOOL_RETRY_WAIT"
default_value = 0.5
@classmethod
def _is_valid(cls, value: float) -> bool:
return value >= 0
class AltoolVerboseLogging(cli.TypedCliArgument[bool]):
argument_type = bool
environment_variable_key = "APP_STORE_CONNECT_ALTOOL_VERBOSE_LOGGING"
class MaxBuildProcessingWait(cli.TypedCliArgument[int]):
argument_type = int
environment_variable_key = "APP_STORE_CONNECT_MAX_BUILD_PROCESSING_WAIT"
default_value = 20
@classmethod
def _is_valid(cls, value: int) -> bool:
return value >= 0
class ApiUnauthorizedRetries(cli.TypedCliArgument[int]):
argument_type = int
environment_variable_key = "APP_STORE_CONNECT_API_UNAUTHORIZED_RETRIES"
default_value = 3
@classmethod
def _is_valid(cls, value: int) -> bool:
return value > 0
class ApiServerErrorRetries(cli.TypedCliArgument[int]):
argument_type = int
environment_variable_key = "APP_STORE_CONNECT_API_SERVER_ERROR_RETRIES"
default_value = 3
@classmethod
def _is_valid(cls, value: int) -> bool:
return value > 0
class EarliestReleaseDate(cli.TypedCliArgument[datetime]):
argument_type = Type[datetime]
@classmethod
def validate(cls, value: datetime):
if value <= datetime.now(timezone.utc):
raise ArgumentTypeError(f'Provided value "{value}" is not valid, date cannot be in the past')
elif (value.minute, value.second, value.microsecond) != (0, 0, 0):
raise ArgumentTypeError(
(
f'Provided value "{value}" is not valid, '
f"only hour precision is allowed and "
f"minutes and seconds are not permitted"
),
)
def _apply_type(self, non_typed_value: str) -> datetime:
value = cli.CommonArgumentTypes.iso_8601_datetime(non_typed_value)
self.validate(value)
return value
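# e.g. "2030-05-01T14:00:00+00:00" would be accepted, while a past date or
# "2030-05-01T14:30:00+00:00" is rejected because only hour precision is allowed
# (illustrative values)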
class AppStoreVersionInfoArgument(cli.EnvironmentArgumentValue[AppStoreVersionInfo]):
argument_type = List[AppStoreVersionInfo]
environment_variable_key = "APP_STORE_CONNECT_APP_STORE_VERSION_INFO"
example_value = json.dumps(
{
"platform": "IOS",
"copyright": "2008 Acme Inc.",
"version_string": "1.0.8",
"release_type": "SCHEDULED",
"earliest_release_date": "2021-11-10T14:00:00+00:00",
},
)
def _apply_type(self, non_typed_value: str) -> AppStoreVersionInfo:
try:
given_app_store_version_info = json.loads(non_typed_value)
assert isinstance(given_app_store_version_info, dict)
except (ValueError, AssertionError):
raise ArgumentTypeError(f"Provided value {non_typed_value!r} is not a valid JSON encoded object")
allowed_fields = {field.name for field in fields(AppStoreVersionInfo)}
invalid_keys = given_app_store_version_info.keys() - allowed_fields
if invalid_keys:
keys = ", ".join(map(str, invalid_keys))
raise ArgumentTypeError(f"Unknown App Store version option(s) {keys}")
try:
platform = Platform(given_app_store_version_info["platform"])
except KeyError:
platform = AppStoreVersionArgument.PLATFORM.get_default()
except ValueError as ve:
raise ArgumentTypeError(f"Invalid App Store version info: {ve}")
app_store_version_info = AppStoreVersionInfo(platform=platform)
try:
given_earliest_release_date = given_app_store_version_info["earliest_release_date"]
app_store_version_info.earliest_release_date = cli.CommonArgumentTypes.iso_8601_datetime(
given_earliest_release_date,
)
Types.EarliestReleaseDate.validate(app_store_version_info.earliest_release_date)
except KeyError:
pass
except ArgumentTypeError as ate:
raise ArgumentTypeError(f'Invalid "earliest_release_date" in App Store version info: {ate}') from ate
if "release_type" in given_app_store_version_info:
try:
app_store_version_info.release_type = ReleaseType(given_app_store_version_info["release_type"])
except ValueError as ve:
raise ArgumentTypeError(f"Invalid App Store version info: {ve}")
if "copyright" in given_app_store_version_info:
app_store_version_info.copyright = given_app_store_version_info["copyright"]
if "version_string" in given_app_store_version_info:
app_store_version_info.version_string = given_app_store_version_info["version_string"]
return app_store_version_info
class AppStoreVersionLocalizationInfoArgument(cli.EnvironmentArgumentValue[List[AppStoreVersionLocalizationInfo]]):
argument_type = List[AppStoreVersionLocalizationInfo]
environment_variable_key = "APP_STORE_CONNECT_APP_STORE_VERSION_LOCALIZATIONS"
example_value = json.dumps(
[
{
"description": "App description",
"keywords": "keyword, other keyword",
"locale": "en-US",
"marketing_url": "https://example.com",
"promotional_text": "Promotional text",
"support_url": "https://example.com",
"whats_new": "Fixes an issue ...",
},
],
)
def _apply_type(self, non_typed_value: str) -> List[AppStoreVersionLocalizationInfo]:
try:
given_localization_infos = json.loads(non_typed_value)
assert isinstance(given_localization_infos, list)
except (ValueError, AssertionError):
raise ArgumentTypeError(f"Provided value {non_typed_value!r} is not a valid JSON encoded list")
app_store_version_localization_infos: List[AppStoreVersionLocalizationInfo] = []
error_prefix = "Invalid App Store Version localization"
for i, given_localization_info in enumerate(given_localization_infos):
try:
locale: Optional[Locale] = Locale(given_localization_info["locale"])
except KeyError:
locale = None
except ValueError as ve: # Invalid locale
raise ArgumentTypeError(f"{error_prefix} on index {i}, {ve} in {given_localization_info!r}")
except TypeError: # Given beta build localization is not a dictionary
raise ArgumentTypeError(f"{error_prefix} value {given_localization_info!r} on index {i}")
localization_info = AppStoreVersionLocalizationInfo(
description=given_localization_info.get("description"),
keywords=given_localization_info.get("keywords"),
locale=locale,
marketing_url=given_localization_info.get("marketing_url"),
promotional_text=given_localization_info.get("promotional_text"),
support_url=given_localization_info.get("support_url"),
whats_new=given_localization_info.get("whats_new"),
)
if set(localization_info.__dict__.values()) == {None}:
raise ArgumentTypeError(f"{error_prefix} value {given_localization_info!r} on index {i}")
app_store_version_localization_infos.append(localization_info)
locales = Counter(info.locale for info in app_store_version_localization_infos)
duplicate_locales = {locale.value if locale else "primary" for locale, uses in locales.items() if uses > 1}
if duplicate_locales:
raise ArgumentTypeError(
(
f'Ambiguous definitions for locale(s) {", ".join(duplicate_locales)}. '
"Please define App Store Version localization for each locale exactly once."
),
)
return app_store_version_localization_infos
class BetaBuildLocalizations(cli.EnvironmentArgumentValue[List[BetaBuildInfo]]):
argument_type = List[BetaBuildInfo]
environment_variable_key = "APP_STORE_CONNECT_BETA_BUILD_LOCALIZATIONS"
example_value = json.dumps([{"locale": "en-US", "whats_new": "What's new in English"}])
def _apply_type(self, non_typed_value: str) -> List[BetaBuildInfo]:
try:
given_beta_build_localizations = json.loads(non_typed_value)
assert isinstance(given_beta_build_localizations, list)
except (ValueError, AssertionError):
raise ArgumentTypeError(f"Provided value {non_typed_value!r} is not a valid JSON encoded list")
beta_build_infos: List[BetaBuildInfo] = []
error_prefix = "Invalid beta build localization"
for i, bbl in enumerate(given_beta_build_localizations):
try:
whats_new: str = bbl["whats_new"]
locale = Locale(bbl["locale"])
except TypeError: # Given beta build localization is not a dictionary
raise ArgumentTypeError(f"{error_prefix} value {bbl!r} on index {i}")
except ValueError as ve: # Invalid locale
raise ArgumentTypeError(f"{error_prefix} on index {i}, {ve} in {bbl!r}")
except KeyError as ke: # Required key is missing from input
raise ArgumentTypeError(f"{error_prefix} on index {i}, missing {ke.args[0]} in {bbl!r}")
beta_build_infos.append(BetaBuildInfo(whats_new=whats_new, locale=locale))
locales = Counter(info.locale for info in beta_build_infos)
duplicate_locales = {locale.value for locale, uses in locales.items() if locale and uses > 1}
if duplicate_locales:
raise ArgumentTypeError(
(
f'Ambiguous definitions for locale(s) {", ".join(duplicate_locales)}. '
"Please define beta build localization for each locale exactly once."
),
)
return beta_build_infos
class DeviceUdidsArgument(cli.EnvironmentArgumentValue[List[str]]):
argument_type = List[str]
environment_variable_key = "APP_STORE_CONNECT_DEVICE_UDIDS"
example_value = "00000000-000000000000001E"
def _apply_type(self, non_typed_value: str) -> List[str]:
is_from_cli = not (self._is_from_environment() or self._is_from_file() or self._from_environment)
if is_from_cli:
udids = [non_typed_value.strip()]
else:
udids = [udid.strip() for udid in shlex.split(non_typed_value) if udid.strip()]
if udids and all(udids):
return udids
raise argparse.ArgumentTypeError(f'Provided value "{non_typed_value}" is not valid')
_API_DOCS_REFERENCE = f"Learn more at {AppStoreConnectApiClient.API_KEYS_DOCS_URL}."
_LOCALE_CODES_URL = (
"https://developer.apple.com/documentation/appstoreconnectapi/betabuildlocalizationcreaterequest/data/attributes"
)
class AppArgument(cli.Argument):
APPLICATION_ID_RESOURCE_ID = cli.ArgumentProperties(
key="application_id",
type=ResourceId,
description="Application Apple ID. An automatically generated ID assigned to your app",
)
APPLICATION_ID_RESOURCE_ID_OPTIONAL = cli.ArgumentProperties(
key="application_id",
flags=("--app-id", "--application-id"),
type=ResourceId,
description="Application Apple ID. An automatically generated ID assigned to your app",
argparse_kwargs={"required": False},
)
APPLICATION_NAME = cli.ArgumentProperties(
key="application_name",
flags=("--app-name", "--application-name"),
description="The name of your app as it will appear in the App Store",
argparse_kwargs={"required": False},
)
APPLICATION_SKU = cli.ArgumentProperties(
key="application_sku",
flags=("--app-sku", "--application-sku"),
description="A unique ID for your app that is not visible on the App Store.",
argparse_kwargs={"required": False},
)
class AppStoreConnectArgument(cli.Argument):
LOG_REQUESTS = cli.ArgumentProperties(
key="log_requests",
flags=("--log-api-calls",),
type=bool,
description="Turn on logging for App Store Connect API HTTP requests",
argparse_kwargs={"required": False, "action": "store_true"},
)
UNAUTHORIZED_REQUEST_RETRIES = cli.ArgumentProperties(
key="unauthorized_request_retries",
flags=("--api-unauthorized-retries", "-r"),
type=Types.ApiUnauthorizedRetries,
description=(
"Specify how many times the App Store Connect API request "
"should be retried in case the called request fails due to an "
"authentication error (401 Unauthorized response from the server). "
"In case of the above authentication error, the request is retried using"
"a new JSON Web Token as many times until the number of retries "
"is exhausted."
),
argparse_kwargs={
"required": False,
},
)
SERVER_ERROR_RETRIES = cli.ArgumentProperties(
key="server_error_retries",
flags=("--api-server-error-retries",),
type=Types.ApiServerErrorRetries,
description=(
"Specify how many times the App Store Connect API request "
"should be retried in case the called request fails due to a "
"server error (response with status code 5xx). "
"In case of server error, the request is retried until "
"the number of retries is exhausted."
),
argparse_kwargs={
"required": False,
},
)
DISABLE_JWT_CACHE = cli.ArgumentProperties(
key="disable_jwt_cache",
flags=("--disable-jwt-cache",),
description=(
"Turn off caching App Store Connect JSON Web Tokens to disk. "
"By default generated tokens are cached to disk to be reused between "
"separate processes, which can can reduce number of "
"false positive authentication errors from App Store Connect API."
),
type=Types.AppStoreConnectDisableJwtCache,
argparse_kwargs={"required": False, "action": "store_true"},
)
JSON_OUTPUT = cli.ArgumentProperties(
key="json_output",
flags=("--json",),
type=bool,
description="Whether to show the resource in JSON format",
argparse_kwargs={"required": False, "action": "store_true"},
)
ISSUER_ID = cli.ArgumentProperties(
key="issuer_id",
flags=("--issuer-id",),
type=Types.IssuerIdArgument,
description=(
f"App Store Connect API Key Issuer ID. Identifies the issuer "
f"who created the authentication token. {_API_DOCS_REFERENCE}"
),
argparse_kwargs={"required": False},
)
KEY_IDENTIFIER = cli.ArgumentProperties(
key="key_identifier",
flags=("--key-id",),
type=Types.KeyIdentifierArgument,
description=f"App Store Connect API Key ID. {_API_DOCS_REFERENCE}",
argparse_kwargs={"required": False},
)
PRIVATE_KEY = cli.ArgumentProperties(
key="private_key",
flags=("--private-key",),
type=Types.PrivateKeyArgument,
description=(
f"App Store Connect API private key used for JWT authentication to communicate with Apple services. "
f"{_API_DOCS_REFERENCE} "
f"If not provided, the key will be searched from the following directories "
f'in sequence for a private key file with the name "AuthKey_<key_identifier>.p8": '
f'{", ".join(map(str, Types.PrivateKeyArgument.PRIVATE_KEY_LOCATIONS))}, where '
f'<key_identifier> is the value of {Colors.BRIGHT_BLUE("--key-id")}'
),
argparse_kwargs={"required": False},
)
CERTIFICATES_DIRECTORY = cli.ArgumentProperties(
key="certificates_directory",
flags=("--certificates-dir",),
type=pathlib.Path,
description="Directory where the code signing certificates will be saved",
argparse_kwargs={"required": False, "default": Certificate.DEFAULT_LOCATION},
)
PROFILES_DIRECTORY = cli.ArgumentProperties(
key="profiles_directory",
flags=("--profiles-dir",),
type=pathlib.Path,
description="Directory where the provisioning profiles will be saved",
argparse_kwargs={"required": False, "default": ProvisioningProfile.DEFAULT_LOCATION},
)
class AppStoreVersionArgument(cli.Argument):
APP_STORE_STATE = cli.ArgumentProperties(
key="app_store_state",
flags=("--state", "--app-store-version-state"),
type=AppStoreState,
description="State of App Store Version",
argparse_kwargs={
"required": False,
"choices": list(AppStoreState),
},
)
APP_STORE_VERSION_ID = cli.ArgumentProperties(
key="app_store_version_id",
type=ResourceId,
description="UUID value of the App Store Version",
)
APP_STORE_VERSION_ID_OPTIONAL = cli.ArgumentProperties(
key="app_store_version_id",
flags=("--version-id", "--app-store-version-id"),
type=ResourceId,
description="UUID value of the App Store Version",
argparse_kwargs={"required": False},
)
APP_STORE_VERSION_INFO = cli.ArgumentProperties(
key="app_store_version_info",
flags=("--app-store-version-info", "-vi"),
type=Types.AppStoreVersionInfoArgument,
description=(
"General App information and version release options for App Store version submission "
"as a JSON encoded object. Alternative to individually defining "
f'`{Colors.BRIGHT_BLUE("--platform")}`, `{Colors.BRIGHT_BLUE("--copyright")}`, '
f'`{Colors.BRIGHT_BLUE("--earliest-release-date")}`, `{Colors.BRIGHT_BLUE("--release-type")}` '
f'and `{Colors.BRIGHT_BLUE("--version-string")}`. '
f'For example, "{Colors.WHITE(Types.AppStoreVersionInfoArgument.example_value)}". '
"Definitions from the JSON will be overridden by dedicated CLI options if provided."
),
argparse_kwargs={
"required": False,
},
)
APP_STORE_VERSION_SUBMISSION_ID = cli.ArgumentProperties(
key="app_store_version_submission_id",
type=ResourceId,
description="UUID value of the App Store Version Submission",
)
COPYRIGHT = cli.ArgumentProperties(
key="copyright",
flags=("--copyright",),
description=(
"The name of the person or entity that owns the exclusive rights to your app, "
f'preceded by the year the rights were obtained (for example, `{Colors.WHITE("2008 Acme Inc.")}`). '
"Do not provide a URL."
),
argparse_kwargs={"required": False},
)
EARLIEST_RELEASE_DATE = cli.ArgumentProperties(
key="earliest_release_date",
flags=("--earliest-release-date",),
type=Types.EarliestReleaseDate,
description=(
f"Specify earliest return date for scheduled release type "
f'(see `{Colors.BRIGHT_BLUE("--release-type")}` configuration option). '
f"Timezone aware ISO8601 timestamp with hour precision, "
f'for example "{Colors.WHITE("2021-11-10T14:00:00+00:00")}".'
),
argparse_kwargs={"required": False},
)
PLATFORM = cli.ArgumentProperties(
key="platform",
flags=("--platform", "--app-store-version-platform"),
type=Platform,
description="App Store Version platform",
argparse_kwargs={
"required": False,
"choices": list(Platform),
"default": Platform.IOS,
},
)
PLATFORM_OPTIONAL = cli.ArgumentProperties.duplicate(
PLATFORM,
argparse_kwargs={
"required": False,
"choices": list(Platform),
},
)
RELEASE_TYPE = cli.ArgumentProperties(
key="release_type",
flags=("--release-type",),
type=ReleaseType,
description=(
"Choose when to release the app. You can either manually release the app at a later date on "
"the App Store Connect website, or the app version can be automatically released right after "
"it has been approved by App Review."
),
argparse_kwargs={
"required": False,
"choices": list(ReleaseType),
},
)
VERSION_STRING = cli.ArgumentProperties(
key="version_string",
flags=("--version-string", "--app-store-version"),
description=(
"Version of the build published to App Store "
"that identifies an iteration of the bundle. "
"The string can only contain one to three groups of numeric characters (0-9) "
"separated by period in the format [Major].[Minor].[Patch]. "
f'For example `{Colors.WHITE("3.2.46")}`'
),
argparse_kwargs={"required": False},
)
class ReviewSubmissionArgument(cli.Argument):
APP_CUSTOM_PRODUCT_PAGE_VERSION_ID = cli.ArgumentProperties(
key="app_custom_product_page_version_id",
flags=("--app-custom-product-page-version-id",),
description="UUID value of custom product page",
type=ResourceId,
)
APP_EVENT_ID = cli.ArgumentProperties(
key="app_event_id",
flags=("--app-event-id",),
description="UUID value of app event",
type=ResourceId,
)
APP_STORE_VERSION_ID = cli.ArgumentProperties(
key="app_store_version_id",
flags=("--version-id", "--app-store-version-id"),
type=ResourceId,
description="UUID value of the App Store Version",
)
APP_STORE_VERSION_EXPERIMENT_ID = cli.ArgumentProperties(
key="app_store_version_experiment_id",
flags=("--app-store-version-experiment-id",),
type=ResourceId,
description="UUID value of the App Store Version experiment",
)
REVIEW_SUBMISSION_ID = cli.ArgumentProperties(
key="review_submission_id",
type=ResourceId,
description="UUID value of the review submission",
)
REVIEW_SUBMISSION_STATE = cli.ArgumentProperties(
key="review_submission_state",
flags=("--review-submission-state",),
type=ReviewSubmissionState,
description="String value of the review submission state",
argparse_kwargs={
"required": False,
"choices": list(ReviewSubmissionState),
"nargs": "+",
},
)
class AppStoreVersionLocalizationArgument(cli.Argument):
APP_STORE_VERSION_LOCALIZATION_ID = cli.ArgumentProperties(
key="app_store_version_localization_id",
type=ResourceId,
description="UUID value of the App Store Version localization",
)
LOCALE = cli.ArgumentProperties(
key="locale",
type=Locale,
description=(
"The locale code name for App Store metadata in different languages. "
f"See available locale code names from {_LOCALE_CODES_URL}"
),
argparse_kwargs={
"choices": list(Locale),
},
)
LOCALE_DEFAULT = cli.ArgumentProperties.duplicate(
LOCALE,
flags=("--locale", "-l"),
description=(
"The locale code name for App Store metadata in different languages. "
"In case not provided, application's primary locale is used instead. "
f"Learn more from {_LOCALE_CODES_URL}"
),
argparse_kwargs={
"required": False,
"choices": list(Locale),
},
)
DESCRIPTION = cli.ArgumentProperties(
key="description",
flags=("--description", "-d"),
description="A description of your app, detailing features and functionality.",
argparse_kwargs={
"required": False,
},
)
KEYWORDS = cli.ArgumentProperties(
key="keywords",
flags=("--keywords", "-k"),
description=(
"Include one or more keywords that describe your app. Keywords make "
"App Store search results more accurate. Separate keywords with an "
"English comma, Chinese comma, or a mix of both."
),
argparse_kwargs={
"required": False,
},
)
MARKETING_URL = cli.ArgumentProperties(
key="marketing_url",
flags=("--marketing-url",),
description="A URL with marketing information about your app. This URL will be visible on the App Store.",
argparse_kwargs={
"required": False,
},
)
PROMOTIONAL_TEXT = cli.ArgumentProperties(
key="promotional_text",
flags=("--promotional-text",),
description=(
"Promotional text lets you inform your App Store visitors of any current "
"app features without requiring an updated submission. This text will "
"appear above your description on the App Store for customers with devices "
"running iOS 11 or later, and macOS 10.13 or later."
),
argparse_kwargs={
"required": False,
},
)
SUPPORT_URL = cli.ArgumentProperties(
key="support_url",
flags=("--support-url",),
description="A URL with support information for your app. This URL will be visible on the App Store.",
argparse_kwargs={
"required": False,
},
)
WHATS_NEW = cli.ArgumentProperties(
key="whats_new",
flags=("--whats-new", "-n"),
type=Types.WhatsNewArgument,
description=(
"Describe what's new in this version of your app, such as new features, improvements, and bug fixes."
),
argparse_kwargs={
"required": False,
},
)
APP_STORE_VERSION_LOCALIZATION_INFOS = cli.ArgumentProperties(
key="app_store_version_localizations",
flags=("--app-store-version-localizations", "-vl"),
type=Types.AppStoreVersionLocalizationInfoArgument,
description=(
"Localized App Store version meta information for App Store version submission "
"as a JSON encoded list. Alternative to individually defining version release notes "
f'and other options via dedicated CLI options such as `{Colors.BRIGHT_BLUE("--whats-new")}`. '
"Definitions for duplicate locales are not allowed. "
f'For example, "{Colors.WHITE(Types.AppStoreVersionLocalizationInfoArgument.example_value)}"'
),
argparse_kwargs={
"required": False,
},
)
class PublishArgument(cli.Argument):
APPLICATION_PACKAGE_PATH_PATTERNS = cli.ArgumentProperties(
key="application_package_path_patterns",
flags=("--path",),
type=pathlib.Path,
description=(
"Path to artifact (*.ipa or *.pkg). Can be either a path literal, or "
"a glob pattern to match projects in working directory."
),
argparse_kwargs={
"required": False,
"default": (pathlib.Path("**/*.ipa"), pathlib.Path("**/*.pkg")),
"nargs": "+",
"metavar": "artifact-path",
},
)
SUBMIT_TO_TESTFLIGHT = cli.ArgumentProperties(
key="submit_to_testflight",
flags=("--testflight", "-t"),
type=bool,
description="Enable submission of an app for Testflight beta app review to allow external testing.",
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
SUBMIT_TO_APP_STORE = cli.ArgumentProperties(
key="submit_to_app_store",
flags=("--app-store", "-a"),
type=bool,
description="Enable submission of an app to App Store app review procedure.",
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
APPLE_ID = cli.ArgumentProperties(
key="apple_id",
flags=("--apple-id", "-u"),
description=(
"App Store Connect username used for application package validation "
"and upload if App Store Connect API key is not specified"
),
argparse_kwargs={"required": False},
)
APP_SPECIFIC_PASSWORD = cli.ArgumentProperties(
key="app_specific_password",
flags=("--password", "-p"),
type=Types.AppSpecificPassword,
description=(
"App-specific password used for application package validation "
"and upload if App Store Connect API Key is not specified. "
f'Used together with {Colors.BRIGHT_BLUE("--apple-id")} '
'and should match pattern "abcd-abcd-abcd-abcd". '
"Create an app-specific password in the Security section of your Apple ID account. "
"Learn more from https://support.apple.com/en-us/HT204397"
),
argparse_kwargs={"required": False},
)
ENABLE_PACKAGE_VALIDATION = cli.ArgumentProperties(
key="enable_package_validation",
flags=("--enable-package-validation", "-ev"),
type=Types.AppStoreConnectEnablePackageValidation,
description=(
"Validate package before uploading it to App Store Connect. "
"Use this switch to enable running `altool --validate-app` before uploading "
"the package to App Store connect"
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
SKIP_PACKAGE_VALIDATION = cli.ArgumentProperties(
key="skip_package_validation",
flags=("--skip-package-validation", "-sv"),
type=Types.AppStoreConnectSkipPackageValidation,
description=(
f'{Colors.BOLD("Deprecated")}. '
f"Starting from version `0.14.0` package validation before "
"uploading it to App Store Connect is disabled by default."
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
SKIP_PACKAGE_UPLOAD = cli.ArgumentProperties(
key="skip_package_upload",
flags=("--skip-package-upload", "-su"),
type=Types.AppStoreConnectSkipPackageUpload,
description=(
"Skip package upload before doing any other TestFlight or App Store related actions. "
"Using this switch will opt out from running `altool --upload-app` as part of publishing "
"action. Use this option in case your application package is already uploaded to App Store."
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
LOCALE_DEFAULT = cli.ArgumentProperties(
key="locale",
flags=("--locale", "-l"),
type=Locale,
description=(
"The locale code name for App Store metadata in different languages, "
'or for displaying localized "What\'s new" content in TestFlight. '
"In case not provided, application's primary locale is used instead. "
f"Learn more from {_LOCALE_CODES_URL}"
),
argparse_kwargs={
"required": False,
"choices": list(Locale),
},
)
WHATS_NEW = cli.ArgumentProperties(
key="whats_new",
flags=("--whats-new", "-n"),
type=Types.WhatsNewArgument,
description=(
"Release notes either for TestFlight or App Store review submission. "
"Describe what's new in this version of your app, "
"such as new features, improvements, and bug fixes."
),
argparse_kwargs={
"required": False,
},
)
MAX_BUILD_PROCESSING_WAIT = cli.ArgumentProperties(
key="max_build_processing_wait",
flags=("--max-build-processing-wait", "-w"),
type=Types.MaxBuildProcessingWait,
description=(
"Maximum amount of minutes to wait for the freshly uploaded build to be processed by "
"Apple and retry submitting the build for (beta) review. Works in conjunction with "
"TestFlight beta review submission, or App Store review submission and operations that "
"depend on either one of those. If the processing is not finished "
"within the specified timeframe, further submission will be terminated. "
"Waiting will be skipped if the value is set to 0, further actions might fail "
"if the build is not processed yet."
),
argparse_kwargs={
"required": False,
},
)
EXPIRE_BUILD_SUBMITTED_FOR_REVIEW = cli.ArgumentProperties(
key="expire_build_submitted_for_review",
flags=("--expire-build-submitted-for-review",),
type=bool,
description="Expires any previous build waiting for, or in, review before submitting the build to TestFlight.",
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
CANCEL_PREVIOUS_SUBMISSIONS = cli.ArgumentProperties(
key="cancel_previous_submissions",
flags=("--cancel-previous-submissions",),
type=bool,
description=(
"Cancels previous submissions for the application in App Store Connect "
"before creating a new submission if the submissions are in a state where it is possible."
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
ALTOOL_VERBOSE_LOGGING = cli.ArgumentProperties(
key="altool_verbose_logging",
flags=("--altool-verbose-logging",),
type=Types.AltoolVerboseLogging,
description=(
"Show verbose log output when launching Application Loader tool. "
"That is add `--verbose` flag to `altool` invocations when either validating "
"the package, or while uploading the pakcage to App Store Connect."
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
ALTOOL_RETRIES_COUNT = cli.ArgumentProperties(
key="altool_retries_count",
flags=("--altool-retries",),
type=Types.AltoolRetriesCount,
description=(
"Define how many times should the package validation or upload action be attempted in case it "
"failed due to a known `altool` issue (authentication failure or request timeout)."
),
argparse_kwargs={
"required": False,
},
)
ALTOOL_RETRY_WAIT = cli.ArgumentProperties(
key="altool_retry_wait",
flags=("--altool-retry-wait",),
type=Types.AltoolRetryWait,
description=(
"For how long (in seconds) should the tool wait between the retries of package validation or "
"upload action retries in case they failed due to a known `altool` issues "
"(authentication failure or request timeout). "
f"See also {cli.ArgumentProperties.get_flag(ALTOOL_RETRIES_COUNT)} for more configuration options."
),
argparse_kwargs={
"required": False,
},
)
class BuildArgument(cli.Argument):
EXPIRED = cli.ArgumentProperties(
key="expired",
flags=("--expired",),
type=bool,
description=(
f'List only expired builds. Mutually exclusive with option `{Colors.BRIGHT_BLUE("--not-expired")}`.'
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
NOT_EXPIRED = cli.ArgumentProperties(
key="not_expired",
flags=("--not-expired",),
type=bool,
description=(
f'List only not expired builds. Mutually exclusive with option `{Colors.BRIGHT_BLUE("--expired")}`.'
),
argparse_kwargs={
"required": False,
"action": "store_true",
},
)
BUILD_ID_RESOURCE_ID = cli.ArgumentProperties(
key="build_id",
type=ResourceId,
description="Alphanumeric ID value of the Build",
)
BUILD_ID_RESOURCE_ID_OPTIONAL = cli.ArgumentProperties(
key="build_id",
flags=("--build-id",),
type=ResourceId,
description="Alphanumeric ID value of the Build",
argparse_kwargs={"required": False},
)
BUILD_ID_RESOURCE_ID_EXCLUDE_OPTIONAL = cli.ArgumentProperties(
key="excluded_build_id",
flags=("--exclude-build-id",),
type=ResourceId,
description="Alphanumeric ID value of the Build(s)",
argparse_kwargs={
"required": False,
"nargs": "+",
},
)
PRE_RELEASE_VERSION = cli.ArgumentProperties(
key="pre_release_version",
flags=("--pre-release-version",),
description=(
"Version of the build published to Testflight "
"that identifies an iteration of the bundle. "
"The string can only contain one to three groups of numeric characters (0-9) "
"separated by period in the format [Major].[Minor].[Patch]. "
"For example `3.2.46`"
),
argparse_kwargs={"required": False},
)
PROCESSING_STATE = cli.ArgumentProperties(
key="processing_state",
flags=("--processing-state",),
type=BuildProcessingState,
description="Build processing state",
argparse_kwargs={
"required": False,
"choices": list(BuildProcessingState),
},
)
BUILD_VERSION_NUMBER = cli.ArgumentProperties(
key="build_version_number",
flags=("--build-version-number",),
description="Build version number is the version number of the uploaded build. For example `46` or `1.0.13.5`.",
argparse_kwargs={"required": False},
)
BETA_BUILD_LOCALIZATION_ID_RESOURCE_ID = cli.ArgumentProperties(
key="localization_id",
type=ResourceId,
description="Alphanumeric ID value of the Beta Build Localization",
)
LOCALE_OPTIONAL = cli.ArgumentProperties(
key="locale",
flags=("--locale", "-l"),
type=Locale,
description=(
'The locale code name for displaying localized "What\'s new" content in TestFlight. '
f"Learn more from {_LOCALE_CODES_URL}"
),
argparse_kwargs={
"required": False,
"choices": list(Locale),
},
)
LOCALE_DEFAULT = cli.ArgumentProperties.duplicate(
LOCALE_OPTIONAL,
description=(
'The locale code name for displaying localized "What\'s new" content in TestFlight. '
"In case not provided, application's primary locale from test information is used instead. "
f"Learn more from {_LOCALE_CODES_URL}"
),
)
WHATS_NEW = cli.ArgumentProperties(
key="whats_new",
flags=("--whats-new", "-n"),
type=Types.WhatsNewArgument,
description=(
"Describe the changes and additions to the build and indicate "
"the features you would like your users to tests."
),
argparse_kwargs={
"required": False,
},
)
BETA_REVIEW_STATE = cli.ArgumentProperties(
key="beta_review_state",
flags=("--beta-review-state",),
type=BetaReviewState,
description="Build beta review state",
argparse_kwargs={
"required": False,
"choices": list(BetaReviewState),
"nargs": "+",
},
)
BETA_BUILD_LOCALIZATIONS = cli.ArgumentProperties(
key="beta_build_localizations",
flags=("--beta-build-localizations",),
type=Types.BetaBuildLocalizations,
description=(
"Localized beta test info for what's new in the uploaded build as a JSON encoded list. "
f'For example, "{Colors.WHITE(Types.BetaBuildLocalizations.example_value)}". '
f'See "{Colors.WHITE(cli.ArgumentProperties.get_flag(LOCALE_OPTIONAL))}" for possible locale options.'
),
argparse_kwargs={
"required": False,
},
)
BETA_GROUP_NAMES_REQUIRED = cli.ArgumentProperties(
key="beta_group_names",
flags=("--beta-group",),
type=str,
description="Name of your Beta group",
argparse_kwargs={
"nargs": "+",
"metavar": "beta-group",
"required": True,
},
)
BETA_GROUP_NAMES_OPTIONAL = cli.ArgumentProperties.duplicate(
BETA_GROUP_NAMES_REQUIRED,
argparse_kwargs={
"nargs": "+",
"metavar": "beta-group",
"required": False,
},
)
class BundleIdArgument(cli.Argument):
BUNDLE_ID_IDENTIFIER = cli.ArgumentProperties(
key="bundle_id_identifier",
description="Identifier of the Bundle ID. For example `com.example.app`",
)
BUNDLE_ID_IDENTIFIER_OPTIONAL = cli.ArgumentProperties(
key="bundle_id_identifier",
flags=("--bundle-id-identifier",),
description="Identifier of the Bundle ID. For example `com.example.app`",
argparse_kwargs={"required": False},
)
BUNDLE_ID_NAME = cli.ArgumentProperties(
key="bundle_id_name",
flags=("--name",),
description=(
"Name of the Bundle ID. If the resource is being created, "
"the default will be deduced from given Bundle ID identifier."
),
argparse_kwargs={"required": False},
)
BUNDLE_ID_RESOURCE_ID = cli.ArgumentProperties(
key="bundle_id_resource_id",
type=ResourceId,
description="Alphanumeric ID value of the Bundle ID",
)
BUNDLE_ID_RESOURCE_IDS = cli.ArgumentProperties(
key="bundle_id_resource_ids",
flags=("--bundle-ids",),
type=ResourceId,
description="Alphanumeric ID value of the Bundle ID",
argparse_kwargs={
"required": True,
"nargs": "+",
"metavar": "bundle-identifier-id",
},
)
PLATFORM = cli.ArgumentProperties(
key="platform",
flags=("--platform",),
type=BundleIdPlatform,
description="Bundle ID platform",
argparse_kwargs={
"required": False,
"choices": list(BundleIdPlatform),
"default": BundleIdPlatform.IOS,
},
)
PLATFORM_OPTIONAL = cli.ArgumentProperties(
key="platform",
flags=("--platform",),
type=BundleIdPlatform,
description="Bundle ID platform",
argparse_kwargs={
"required": False,
"choices": list(BundleIdPlatform),
},
)
IDENTIFIER_STRICT_MATCH = cli.ArgumentProperties(
key="bundle_id_identifier_strict_match",
flags=("--strict-match-identifier",),
type=bool,
description=(
"Only match Bundle IDs that have exactly the same identifier specified by "
"`BUNDLE_ID_IDENTIFIER`. By default identifier `com.example.app` also matches "
"Bundle IDs with identifier such as `com.example.app.extension`"
),
argparse_kwargs={"required": False, "action": "store_true"},
)
class DeviceArgument(cli.Argument):
DEVICE_RESOURCE_IDS = cli.ArgumentProperties(
key="device_resource_ids",
flags=("--device-ids",),
type=ResourceId,
description="Alphanumeric ID value of the Device",
argparse_kwargs={
"required": True,
"nargs": "+",
"metavar": "device-id",
},
)
DEVICE_NAME = cli.ArgumentProperties(
key="device_name",
flags=("--name", "-n"),
description="Common name of Devices",
argparse_kwargs={"required": True},
)
DEVICE_NAME_OPTIONAL = cli.ArgumentProperties.duplicate(
DEVICE_NAME,
argparse_kwargs={"required": False},
)
DEVICE_UDIDS = cli.ArgumentProperties(
key="device_udids",
flags=("--udid", "-u"),
type=Types.DeviceUdidsArgument,
description=f"Device ID (UDID), for example: {Types.DeviceUdidsArgument.example_value}",
argparse_kwargs={
"required": False,
"nargs": "+",
"metavar": "UDID",
},
)
DEVICE_STATUS = cli.ArgumentProperties(
key="device_status",
flags=("--status",),
type=DeviceStatus,
description="Status of the Device",
argparse_kwargs={
"required": False,
"choices": list(DeviceStatus),
},
)
IGNORE_REGISTRATION_ERRORS = cli.ArgumentProperties(
key="ignore_registration_errors",
flags=("--ignore-registration-errors",),
type=bool,
description=(
"Ignore device registration failures, e.g. invalid UDID or duplicate UDID submission. "
"Proceed registering remaining UDIDs when the flag is set."
),
argparse_kwargs={
"required": False,
"action": "store_true",
"default": False,
},
)
class CertificateArgument(cli.Argument):
CERTIFICATE_RESOURCE_ID = cli.ArgumentProperties(
key="certificate_resource_id",
type=ResourceId,
description="Alphanumeric ID value of the Signing Certificate",
)
CERTIFICATE_RESOURCE_IDS = cli.ArgumentProperties(
key="certificate_resource_ids",
flags=("--certificate-ids",),
type=ResourceId,
description="Alphanumeric ID value of the Signing Certificate",
argparse_kwargs={
"required": True,
"nargs": "+",
"metavar": "certificate-id",
},
)
CERTIFICATE_TYPE = cli.ArgumentProperties(
key="certificate_type",
flags=("--type",),
type=CertificateType,
description="Type of the certificate",
argparse_kwargs={
"required": False,
"choices": list(CertificateType),
"default": CertificateType.IOS_DEVELOPMENT,
},
)
CERTIFICATE_TYPE_OPTIONAL = cli.ArgumentProperties(
key="certificate_type",
flags=("--type",),
type=CertificateType,
description="Type of the certificate",
argparse_kwargs={
"required": False,
"choices": list(CertificateType),
},
)
CERTIFICATE_TYPES_OPTIONAL = cli.ArgumentProperties(
key="certificate_types",
flags=("--type",),
type=CertificateType,
description="Type of the certificate",
argparse_kwargs={
"required": False,
"choices": list(CertificateType),
"nargs": "+",
},
)
PROFILE_TYPE_OPTIONAL = cli.ArgumentProperties(
key="profile_type",
flags=("--profile-type",),
type=ProfileType,
description="Type of the provisioning profile that the certificate is used with",
argparse_kwargs={
"required": False,
"choices": list(ProfileType),
},
)
DISPLAY_NAME = cli.ArgumentProperties(
key="display_name",
flags=("--display-name",),
description="Code signing certificate display name",
argparse_kwargs={"required": False},
)
PRIVATE_KEY = cli.ArgumentProperties(
key="certificate_key",
flags=("--certificate-key",),
type=Types.CertificateKeyArgument,
description=(
f"Private key used to generate the certificate. "
f'Used together with {Colors.BRIGHT_BLUE("--save")} '
f'or {Colors.BRIGHT_BLUE("--create")} options.'
),
argparse_kwargs={"required": False},
)
PRIVATE_KEY_PASSWORD = cli.ArgumentProperties(
key="certificate_key_password",
flags=("--certificate-key-password",),
type=Types.CertificateKeyPasswordArgument,
description=(
f"Password of the private key used to generate the certificate. "
f'Used together with {Colors.BRIGHT_BLUE("--certificate-key")} '
f'or {Colors.BRIGHT_BLUE("--certificate-key-path")} options '
f"if the provided key is encrypted."
),
argparse_kwargs={"required": False},
)
P12_CONTAINER_PASSWORD = cli.ArgumentProperties(
key="p12_container_password",
flags=("--p12-password",),
description=(
"If provided, the saved p12 container will be encrypted using this password. "
f'Used together with {Colors.BRIGHT_BLUE("--save")} option.'
),
argparse_kwargs={"required": False, "default": ""},
)
P12_CONTAINER_SAVE_PATH = cli.ArgumentProperties(
key="p12_container_save_path",
flags=("--p12-path",),
type=cli.CommonArgumentTypes.non_existing_path,
description=(
"If provided, the exported p12 container will saved at this path. "
"Otherwise it will be saved with a random name in the directory specified "
f'by {Colors.BRIGHT_BLUE("--certificates-dir")}. '
f'Used together with {Colors.BRIGHT_BLUE("--save")} option.'
),
argparse_kwargs={"required": False},
)
class ProfileArgument(cli.Argument):
PROFILE_RESOURCE_ID = cli.ArgumentProperties(
key="profile_resource_id",
type=ResourceId,
description="Alphanumeric ID value of the Profile",
)
PROFILE_TYPE = cli.ArgumentProperties(
key="profile_type",
flags=("--type",),
type=ProfileType,
description="Type of the provisioning profile",
argparse_kwargs={
"required": False,
"choices": list(ProfileType),
"default": ProfileType.IOS_APP_DEVELOPMENT,
},
)
PROFILE_TYPE_OPTIONAL = cli.ArgumentProperties(
key="profile_type",
flags=("--type",),
type=ProfileType,
description="Type of the provisioning profile",
argparse_kwargs={
"required": False,
"choices": list(ProfileType),
},
)
PROFILE_STATE_OPTIONAL = cli.ArgumentProperties(
key="profile_state",
flags=("--state",),
type=ProfileState,
description="State of the provisioning profile",
argparse_kwargs={
"required": False,
"choices": list(ProfileState),
},
)
PROFILE_NAME = cli.ArgumentProperties(
key="profile_name",
flags=("--name",),
description="Name of the provisioning profile",
argparse_kwargs={"required": False},
)
class CommonArgument(cli.Argument):
CREATE_RESOURCE = cli.ArgumentProperties(
key="create_resource",
flags=("--create",),
type=bool,
description="Whether to create the resource if it does not exist yet",
argparse_kwargs={"required": False, "action": "store_true"},
)
IGNORE_NOT_FOUND = cli.ArgumentProperties(
key="ignore_not_found",
flags=("--ignore-not-found",),
type=bool,
description="Do not raise exceptions if the specified resource does not exist.",
argparse_kwargs={"required": False, "action": "store_true"},
)
SAVE = cli.ArgumentProperties(
key="save",
flags=("--save",),
type=bool,
description=(
f"Whether to save the resources to disk. See "
f"{Colors.CYAN(AppStoreConnectArgument.PROFILES_DIRECTORY.key.upper())} and "
f"{Colors.CYAN(AppStoreConnectArgument.CERTIFICATES_DIRECTORY.key.upper())} "
f"for more information."
),
argparse_kwargs={"required": False, "action": "store_true"},
)
PLATFORM = cli.ArgumentProperties(
key="platform",
flags=("--platform",),
type=Platform,
description="Apple operating systems",
argparse_kwargs={
"required": False,
"choices": list(Platform),
},
)
class ArgumentGroups:
ADD_BETA_TEST_INFO_OPTIONAL_ARGUMENTS = (
BuildArgument.BETA_BUILD_LOCALIZATIONS,
BuildArgument.LOCALE_DEFAULT,
BuildArgument.WHATS_NEW,
)
ADD_BUILD_TO_BETA_GROUPS_OPTIONAL_ARGUMENTS = (BuildArgument.BETA_GROUP_NAMES_OPTIONAL,)
ALTOOL_CONFIGURATION_ARGUMENTS = (
PublishArgument.ALTOOL_RETRIES_COUNT,
PublishArgument.ALTOOL_RETRY_WAIT,
PublishArgument.ALTOOL_VERBOSE_LOGGING,
)
LIST_BUILDS_FILTERING_ARGUMENTS = (
BuildArgument.BETA_REVIEW_STATE,
BuildArgument.BUILD_ID_RESOURCE_ID_OPTIONAL,
BuildArgument.BUILD_VERSION_NUMBER,
BuildArgument.EXPIRED,
BuildArgument.NOT_EXPIRED,
BuildArgument.PRE_RELEASE_VERSION,
BuildArgument.PROCESSING_STATE,
)
PACKAGE_UPLOAD_ARGUMENTS = (
PublishArgument.ENABLE_PACKAGE_VALIDATION,
PublishArgument.SKIP_PACKAGE_VALIDATION,
PublishArgument.SKIP_PACKAGE_UPLOAD,
)
SUBMIT_TO_APP_STORE_OPTIONAL_ARGUMENTS = (
PublishArgument.MAX_BUILD_PROCESSING_WAIT,
PublishArgument.CANCEL_PREVIOUS_SUBMISSIONS,
# Generic App Store Version information arguments
AppStoreVersionArgument.APP_STORE_VERSION_INFO,
AppStoreVersionArgument.COPYRIGHT,
AppStoreVersionArgument.EARLIEST_RELEASE_DATE,
AppStoreVersionArgument.PLATFORM,
AppStoreVersionArgument.RELEASE_TYPE,
AppStoreVersionArgument.VERSION_STRING,
# Localized App Store Version arguments
AppStoreVersionLocalizationArgument.DESCRIPTION,
AppStoreVersionLocalizationArgument.KEYWORDS,
AppStoreVersionLocalizationArgument.LOCALE_DEFAULT,
AppStoreVersionLocalizationArgument.MARKETING_URL,
AppStoreVersionLocalizationArgument.PROMOTIONAL_TEXT,
AppStoreVersionLocalizationArgument.SUPPORT_URL,
AppStoreVersionLocalizationArgument.WHATS_NEW,
AppStoreVersionLocalizationArgument.APP_STORE_VERSION_LOCALIZATION_INFOS,
)
SUBMIT_TO_TESTFLIGHT_OPTIONAL_ARGUMENTS = (
PublishArgument.MAX_BUILD_PROCESSING_WAIT,
PublishArgument.EXPIRE_BUILD_SUBMITTED_FOR_REVIEW,
)
|
PypiClean
|
/idaes-pse-2.2.0rc0.tar.gz/idaes-pse-2.2.0rc0/idaes/apps/matopt/materials/lattices/perovskite_lattice.py
|
from copy import deepcopy
import numpy as np
from .unit_cell_lattice import UnitCell, UnitCellLattice
from ..geometry import RectPrism
from ..tiling import CubicTiling
from ..transform_func import ScaleFunc, RotateFunc, ReflectFunc
class PerovskiteLattice(UnitCellLattice):
RefA = 1
RefB = 1
RefC = 1
# === STANDARD CONSTRUCTOR
def __init__(self, A, B, C):
RefUnitCellShape = RectPrism(
PerovskiteLattice.RefA,
PerovskiteLattice.RefB,
PerovskiteLattice.RefC,
np.array([0, 0, 0], dtype=float),
)
RefUnitCellTiling = CubicTiling(RefUnitCellShape)
RefFracPositions = [
np.array([0.0, 0.0, 0.0]),
np.array([0.5, 0.5, 0.5]),
np.array([0.5, 0.5, 0.0]),
np.array([0.5, 0.0, 0.5]),
np.array([0.0, 0.5, 0.5]),
]
RefUnitCell = UnitCell(RefUnitCellTiling, RefFracPositions)
UnitCellLattice.__init__(self, RefUnitCell)
self._A = PerovskiteLattice.RefA
self._B = PerovskiteLattice.RefB
self._C = PerovskiteLattice.RefC
self.applyTransF(
ScaleFunc(
np.array(
[
A / PerovskiteLattice.RefA,
B / PerovskiteLattice.RefB,
C / PerovskiteLattice.RefC,
]
)
)
)
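# Illustrative usage (editor's note; the numeric lattice constants below are hypothetical, not from the original source):
#     lattice = PerovskiteLattice(A=3.9, B=3.9, C=3.9)  # scales the reference unit cell to the given A, B, C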
# === MANIPULATION METHODS
def applyTransF(self, TransF):
if isinstance(TransF, ScaleFunc):
self._A *= TransF.Scale[0]
self._B *= TransF.Scale[1]
self._C *= TransF.Scale[2]
UnitCellLattice.applyTransF(self, TransF)
# === PROPERTY EVALUATION METHODS
# def isOnLattice(self,P):
def areNeighbors(self, P1, P2):
raise NotImplementedError(
"PerovskiteLattice: there is no universal definition of nearest neighbors."
)
def getNeighbors(self, P, layer=1):
if layer > 2:
raise ValueError(
"PerovskiteLattice: there is no universal definition of N-th nearest neighbors."
)
RefP = self._getConvertToReference(P)
if self.isASite(P):
return []
elif self.isBSite(P):
RefNeighs = [
np.array([0.5, 0.0, 0.0]),
np.array([0.0, 0.5, 0.0]),
np.array([0.0, 0.0, 0.5]),
np.array([-0.5, 0.0, 0.0]),
np.array([0.0, -0.5, 0.0]),
np.array([0.0, 0.0, -0.5]),
]
elif self.isOSite(P):
PointType = self.RefUnitCell.getPointType(self._getConvertToReference(P))
if PointType == 4: # i.e., motif aligned with x-axis
RefNeighs = [
np.array([-0.5, 0.0, 0.0]),
np.array([0.5, 0.0, 0.0]),
np.array([-0.5, 1.0, 0.0]),
np.array([-0.5, 0.0, 1.0]),
np.array([-0.5, -1.0, 0.0]),
np.array([-0.5, 0.0, -1.0]),
np.array([0.5, 1.0, 0.0]),
np.array([0.5, 0.0, 1.0]),
np.array([0.5, -1.0, 0.0]),
np.array([0.5, 0.0, -1.0]),
]
elif PointType == 3: # i.e., motif aligned with y-axis
RefNeighs = [
np.array([0.0, -0.5, 0.0]),
np.array([0.0, 0.5, 0.0]),
np.array([1.0, -0.5, 0.0]),
np.array([0.0, -0.5, 1.0]),
np.array([-1.0, -0.5, 0.0]),
np.array([0.0, -0.5, -1.0]),
np.array([1.0, 0.5, 0.0]),
np.array([0.0, 0.5, 1.0]),
np.array([-1.0, 0.5, 0.0]),
np.array([0.0, 0.5, -1.0]),
]
elif PointType == 2: # i.e., motif aligned with z-axis
RefNeighs = [
np.array([0.0, 0.0, -0.5]),
np.array([0.0, 0.0, 0.5]),
np.array([1.0, 0.0, -0.5]),
np.array([0.0, 1.0, -0.5]),
np.array([-1.0, 0.0, -0.5]),
np.array([0.0, -1.0, -0.5]),
np.array([1.0, 0.0, 0.5]),
np.array([0.0, 1.0, 0.5]),
np.array([-1.0, 0.0, 0.5]),
np.array([0.0, -1.0, 0.5]),
]
else:
raise ValueError(
"PerovskiteLattice.getNeighbors Should never reach here!"
)
else:
raise ValueError(
"PerovskiteLattice.getNeighbors given point apparently not on lattice"
)
result = deepcopy(RefNeighs)
for NeighP in result:
NeighP += RefP
self._convertFromReference(NeighP)
return result
def isASite(self, P):
return self.RefUnitCell.getPointType(self._getConvertToReference(P)) == 0
def isBSite(self, P):
return self.RefUnitCell.getPointType(self._getConvertToReference(P)) == 1
def isOSite(self, P):
PointType = self.RefUnitCell.getPointType(self._getConvertToReference(P))
return PointType == 2 or PointType == 3 or PointType == 4
def setDesign(self, D, AType, BType, OType):
for i, P in enumerate(D.Canvas.Points):
if self.isASite(P):
D.setContent(i, AType)
elif self.isBSite(P):
D.setContent(i, BType)
elif self.isOSite(P):
D.setContent(i, OType)
else:
raise ValueError("setDesign can not set site not on lattice")
# === BASIC QUERY METHODS
@property
def A(self):
return self._A
@property
def B(self):
return self._B
@property
def C(self):
return self._C
def getOxygenSymTransFs():
result = []
ReflX = ReflectFunc.acrossX()
ReflY = ReflectFunc.acrossY()
ReflZ = ReflectFunc.acrossZ()
RotYY = RotateFunc.fromXYZAngles(0, np.pi, 0)
RotZZ = RotateFunc.fromXYZAngles(0, 0, np.pi)
RotX = RotateFunc.fromXYZAngles(np.pi * 0.5, 0, 0)
RotXX = RotateFunc.fromXYZAngles(np.pi, 0, 0)
RotXXX = RotateFunc.fromXYZAngles(np.pi * 1.5, 0, 0)
result.append(RotX)
result.append(RotXX)
result.append(RotXXX)
result.append(ReflX)
result.append(ReflX + RotX)
result.append(ReflX + RotXX)
result.append(ReflX + RotXXX)
result.append(ReflY)
result.append(ReflY + RotX)
result.append(ReflY + RotXX)
result.append(ReflY + RotXXX)
result.append(ReflZ)
result.append(ReflZ + RotX)
result.append(ReflZ + RotXX)
result.append(ReflZ + RotXXX)
result.append(ReflX + ReflY)
result.append(ReflX + ReflY + RotX)
result.append(ReflX + ReflY + RotXX)
result.append(ReflX + ReflY + RotXXX)
result.append(ReflX + ReflZ)
result.append(ReflX + ReflZ + RotX)
result.append(ReflX + ReflZ + RotXX)
result.append(ReflX + ReflZ + RotXXX)
result.append(ReflY + ReflZ)
result.append(ReflY + ReflZ + RotX)
result.append(ReflY + ReflZ + RotXX)
result.append(ReflY + ReflZ + RotXXX)
result.append(ReflX + ReflY + ReflZ)
result.append(ReflX + ReflY + ReflZ + RotX)
result.append(ReflX + ReflY + ReflZ + RotXX)
result.append(ReflX + ReflY + ReflZ + RotXXX)
result.append(RotYY + RotX)
result.append(RotYY + RotXX)
result.append(RotYY + RotXXX)
result.append(RotYY + ReflX)
result.append(RotYY + ReflX + RotX)
result.append(RotYY + ReflX + RotXX)
result.append(RotYY + ReflX + RotXXX)
result.append(RotYY + ReflY)
result.append(RotYY + ReflY + RotX)
result.append(RotYY + ReflY + RotXX)
result.append(RotYY + ReflY + RotXXX)
result.append(RotYY + ReflZ)
result.append(RotYY + ReflZ + RotX)
result.append(RotYY + ReflZ + RotXX)
result.append(RotYY + ReflZ + RotXXX)
result.append(RotYY + ReflX + ReflY)
result.append(RotYY + ReflX + ReflY + RotX)
result.append(RotYY + ReflX + ReflY + RotXX)
result.append(RotYY + ReflX + ReflY + RotXXX)
result.append(RotYY + ReflX + ReflZ)
result.append(RotYY + ReflX + ReflZ + RotX)
result.append(RotYY + ReflX + ReflZ + RotXX)
result.append(RotYY + ReflX + ReflZ + RotXXX)
result.append(RotYY + ReflY + ReflZ)
result.append(RotYY + ReflY + ReflZ + RotX)
result.append(RotYY + ReflY + ReflZ + RotXX)
result.append(RotYY + ReflY + ReflZ + RotXXX)
result.append(RotYY + ReflX + ReflY + ReflZ)
result.append(RotYY + ReflX + ReflY + ReflZ + RotX)
result.append(RotYY + ReflX + ReflY + ReflZ + RotXX)
result.append(RotYY + ReflX + ReflY + ReflZ + RotXXX)
result.append(RotZZ + RotX)
result.append(RotZZ + RotXX)
result.append(RotZZ + RotXXX)
result.append(RotZZ + ReflX)
result.append(RotZZ + ReflX + RotX)
result.append(RotZZ + ReflX + RotXX)
result.append(RotZZ + ReflX + RotXXX)
result.append(RotZZ + ReflY)
result.append(RotZZ + ReflY + RotX)
result.append(RotZZ + ReflY + RotXX)
result.append(RotZZ + ReflY + RotXXX)
result.append(RotZZ + ReflZ)
result.append(RotZZ + ReflZ + RotX)
result.append(RotZZ + ReflZ + RotXX)
result.append(RotZZ + ReflZ + RotXXX)
result.append(RotZZ + ReflX + ReflY)
result.append(RotZZ + ReflX + ReflY + RotX)
result.append(RotZZ + ReflX + ReflY + RotXX)
result.append(RotZZ + ReflX + ReflY + RotXXX)
result.append(RotZZ + ReflX + ReflZ)
result.append(RotZZ + ReflX + ReflZ + RotX)
result.append(RotZZ + ReflX + ReflZ + RotXX)
result.append(RotZZ + ReflX + ReflZ + RotXXX)
result.append(RotZZ + ReflY + ReflZ)
result.append(RotZZ + ReflY + ReflZ + RotX)
result.append(RotZZ + ReflY + ReflZ + RotXX)
result.append(RotZZ + ReflY + ReflZ + RotXXX)
result.append(RotZZ + ReflX + ReflY + ReflZ)
result.append(RotZZ + ReflX + ReflY + ReflZ + RotX)
result.append(RotZZ + ReflX + ReflY + ReflZ + RotXX)
result.append(RotZZ + ReflX + ReflY + ReflZ + RotXXX)
return result
|
PypiClean
|
/census_consumer_complaint_ineuron-0.0.1-py3-none-any.whl/census_consumer_complaint_custom_component/component.py
|
from typing import Optional, Union
from tfx import types
from tfx.components.example_gen import driver
from tfx.components.example_gen import utils
from tfx.dsl.components.base import base_beam_component
from tfx.dsl.components.base import base_beam_executor
from tfx.dsl.components.base import executor_spec
from tfx.orchestration import data_types
from tfx.proto import example_gen_pb2
from tfx.proto import range_config_pb2
from tfx.types import standard_artifacts
from census_consumer_complaint_types.types import RemoteZipFileBasedExampleGenSpec
from tfx.components import FileBasedExampleGen
class RemoteZipFileBasedExampleGen(base_beam_component.BaseBeamComponent):
"""A TFX component to ingest examples from a file system.
The RemoteZipFileBasedExampleGen component is an API for getting file-based records
into TFX pipelines. It consumes external files to generate examples which will
be used by other internal components like StatisticsGen or Trainers. The
component will also convert the input data into
[tf.record](https://www.tensorflow.org/tutorials/load_data/tf_records)
and generate train and eval example splits for downstream components.
## Example
```
_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
_data_root = os.path.join(_taxi_root, 'data', 'simple')
_zip_uri = "https://xyz//abz.csv.zip"
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = RemoteZipFileBasedExampleGen(input_base=_data_root, zip_file_uri=_zip_uri)
```
Component `outputs` contains:
- `examples`: Channel of type `standard_artifacts.Examples` for output train
and eval examples.
"""
SPEC_CLASS = RemoteZipFileBasedExampleGenSpec
# EXECUTOR_SPEC should be overridden by subclasses.
EXECUTOR_SPEC = executor_spec.BeamExecutorSpec(
base_beam_executor.BaseBeamExecutor)
DRIVER_CLASS = driver.FileBasedDriver
def __init__(
self,
input_base: Optional[str] = None,
zip_file_uri: Optional[str] = None,
input_config: Optional[Union[example_gen_pb2.Input,
data_types.RuntimeParameter]] = None,
output_config: Optional[Union[example_gen_pb2.Output,
data_types.RuntimeParameter]] = None,
custom_config: Optional[Union[example_gen_pb2.CustomConfig,
data_types.RuntimeParameter]] = None,
range_config: Optional[Union[range_config_pb2.RangeConfig,
data_types.RuntimeParameter]] = None,
output_data_format: Optional[int] = example_gen_pb2.FORMAT_TF_EXAMPLE,
output_file_format: Optional[int] = example_gen_pb2.FORMAT_TFRECORDS_GZIP,
custom_executor_spec: Optional[executor_spec.ExecutorSpec] = None):
"""Construct a FileBasedExampleGen component.
Args:
input_base: directory containing the CSV files extracted from the downloaded zip file.
zip_file_uri: remote URI of the compressed zip CSV file to download.
input_config: An
[`example_gen_pb2.Input`](https://github.com/tensorflow/tfx/blob/master/tfx/proto/example_gen.proto)
instance, providing input configuration. If unset, input files will be
treated as a single split.
output_config: An example_gen_pb2.Output instance, providing the output
configuration. If unset, default splits will be 'train' and
'eval' with size 2:1.
custom_config: An optional example_gen_pb2.CustomConfig instance,
providing custom configuration for executor.
range_config: An optional range_config_pb2.RangeConfig instance,
specifying the range of span values to consider. If unset, driver will
default to searching for latest span with no restrictions.
output_data_format: Payload format of generated data in output artifact,
one of example_gen_pb2.PayloadFormat enum.
output_file_format: File format of generated data in output artifact,
one of example_gen_pb2.FileFormat enum.
custom_executor_spec: Optional custom executor spec overriding the default
executor spec specified in the component attribute.
"""
# Configure inputs and outputs.
input_config = input_config or utils.make_default_input_config()
output_config = output_config or utils.make_default_output_config(
input_config)
example_artifacts = types.Channel(type=standard_artifacts.Examples)
spec = RemoteZipFileBasedExampleGenSpec(
input_base=input_base,
zip_file_uri=zip_file_uri,
input_config=input_config,
output_config=output_config,
custom_config=custom_config,
range_config=range_config,
output_data_format=output_data_format,
output_file_format=output_file_format,
examples=example_artifacts)
super().__init__(spec=spec, custom_executor_spec=custom_executor_spec)
|
PypiClean
|
/politico-civic-demography-0.1.2.tar.gz/politico-civic-demography-0.1.2/README.md
|

# django-politico-civic-demography
Gather U.S. Census data for elections, the POLITICO way.
### Quickstart
1. Install the app.
```
$ pip install django-politico-civic-demography
```
2. Add the app to your Django project and configure settings.
```python
INSTALLED_APPS = [
# ...
'rest_framework',
'geography',
'demography',
]
#########################
# demography settings
CENSUS_API_KEY = ''
DEMOGRAPHY_AWS_ACCESS_KEY_ID = ''
DEMOGRAPHY_AWS_SECRET_ACCESS_KEY = ''
DEMOGRAPHY_AWS_S3_BUCKET = ''
DEMOGRAPHY_AWS_REGION = 'us-east-1' # default
DEMOGRAPHY_AWS_S3_UPLOAD_ROOT = 'elections' # default
DEMOGRAPHY_AWS_ACL = 'public-read' # default
DEMOGRAPHY_AWS_CACHE_HEADER = 'max-age=31536000' # default
DEMOGRAPHY_API_AUTHENTICATION_CLASS = 'rest_framework.authentication.BasicAuthentication' # default
DEMOGRAPHY_API_PERMISSION_CLASS = 'rest_framework.permissions.IsAdminUser' # default
DEMOGRAPHY_API_PAGINATION_CLASS = 'demography.pagination.ResultsPagination' # default
```
### Developing
##### Running a development server
Move into the example directory, install dependencies and run the development server with pipenv.
```
$ cd example
$ pipenv install
$ pipenv run python manage.py runserver
```
##### Setting up a PostgreSQL database
1. Run the make command to setup a fresh database.
```
$ make database
```
2. Add a connection URL to `example/.env`.
```
DATABASE_URL="postgres://localhost:5432/geography"
```
3. Run migrations from the example app.
```
$ cd example
$ pipenv run python manage.py migrate
```
### Baking Data
This app will bake multi-level census data files to the S3 bucket configured in your settings. The files will bake in the following structure:
```javascript
{ DEMOGRAPHY_AWS_S3_UPLOAD_ROOT }
├── { series } // each census series (e.g. acs5) has its own directory
│ ├── { year } // each series has a directory for each year
│ │ ├── { table } // each year has a directory for each table by table code
│ │ │ ├── districts.json // national-level data broken up by districts
│   │   │   ├── states.json // national-level data broken up by state
│ │ │ ├── { state_fips } // each table has a directory for each state by FIPS code
│   │   │   │   ├── districts.json // state-level data broken up by district
│   │   │   │   └── counties.json // state-level data broken up by county
│ │ │ └── ...
│ │ └── ...
│ └── ...
└── ...
```
The data structure will differ depending on the type of file and the setup of your census tables in the admin. Here are four samples for a data table of "median age" with a "total" code of `001E`. In our sample admin, the code is entered twice: once with a label of `total` and once with no label.
##### National Districts File
```python
# upload_root/series/year/table/districts.json
{
"10": { # state FIPS code
"00": { # district number
"001E": 39.6, # census variable without a label
"total": 39.6, # census variable with an aggregate variable
}
},
"11": {
"98": {
"001E": 33.8,
"total": 33.8,
}
},
"12": {
"10": {
"001E": 35,
"total": 35,
},
"11": {
"001E": 55,
"total": 55,
},
"12": {
"001E": 46.3,
"total": 46.3,
},
... # more districts here
},
... # more states here
}
```
##### National States File
```python
# upload_root/series/year/table/states.json
{
"10": {
"001E": 39.6,
"total": 39.6,
},
"11": {
"001E": 33.8,
"total": 33.8,
},
"12": {
"001E": 41.6,
"total": 41.6,
},
... # more states here
}
```
##### State Districts File
```python
# upload_root/series/year/table/state/districts.json
{
"10": {
"001E": 35,
"total": 35,
},
"11": {
"001E": 55,
"total": 55,
},
"12": {
"001E": 46.3,
"total": 46.3,
},
... # more districts here
}
```
##### State County File
```python
# upload_root/series/year/table/state/counties.json
{
"12001": { # county FIPS code
"001E": 31,
"total": 31,
},
"12003": {
"001E": 36.5,
"total": 36.5,
},
"12005": {
"001E": 39.7,
"total": 39.7,
},
"12007": {
"001E": 41,
"total": 41,
},
... # more counties here
}
```
|
PypiClean
|
/airsim_emulator-1.0.1.tar.gz/airsim_emulator-1.0.1/airsim_emulator/run_emulator.py
|
import airsim_adaptor
airsim = airsim_adaptor
import setup_path
from pynput.keyboard import Listener, Events
import time
import threading
from multiprocessing import Process, Queue
import pprint
from visca_adapter import viscaAdapter
from sBGC_adapter import sBGCAdaper
import math
# connect to the AirSim simulator
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
fpv = "0"
pi = math.pi
class Quaternion:
def __init__(self, ):
self.w_val = 0
self.x_val = 0
self.y_val = 0
self.z_val = 0
class Platform:
def __init__(self):
self.roll = 0
self.pitch = 0
self.yaw = 0
self.fov = 0
self.fov_read = 0
self.value = 0
self.q = Quaternion()
self.q1 = Quaternion()
self.update_pitch = False
def getCameraInfo(self, camera_):
cam_info = client.simGetCameraInfo(camera_)
veh_info = client.simGetVehiclePose("PX4")
sss = (str(cam_info).split("<CameraInfo>")[1])
sss1 = sss.split(':')
self.fov_read = float(sss1[1].split("'")[0].split(",")[0])
self.q.w_val = float(sss1[4].split(",")[0])
self.q.x_val = float(sss1[5].split(",")[0])
self.q.y_val = float(sss1[6].split(",")[0])
self.q.z_val = float(sss1[7].split(",")[0].split('}')[0])
ddd = (str(veh_info).split("<Pose>")[1])
ddd1 = ddd.split(':')
self.q1.w_val = float(ddd1[2].split(",")[0])
self.q1.x_val = float(ddd1[3].split(",")[0])
self.q1.y_val = float(ddd1[4].split(",")[0])
self.q1.z_val = float(ddd1[5].split(",")[0].split('}')[0])
def gimbal_update(self, target_pitch):
self.getCameraInfo(fpv)
(self.pitch, self.roll, self.yaw) = airsim.to_eularian_angles(self.q)
(pitch_bd, _, __) = airsim.to_eularian_angles(self.q1)
pitch_rad = target_pitch*pi/-180
pitch_bdrad = pitch_bd*pi/-180
client.simSetCameraPitch(fpv, airsim.to_quaternion(pitch_rad+pitch_bdrad, 0, 0))
@staticmethod
def zoom_update(zoom):
client.simSetCameraFov(fpv, zoom)
if __name__ == '__main__':
platform = Platform()
que = Queue()
que_raw = Queue()
camera = viscaAdapter()
gimbal = sBGCAdaper()
q = Quaternion()
viscaAdapterT = threading.Thread(target=camera.run)
viscaAdapterT.start()
j = False
sBGCAdapterT = Process(target=gimbal.run, args=(que, que_raw,))
sBGCAdapterT.start()
# viscaAdapterI.epoch()
platform.fov = platform.fov_read
target_pitch = 0
while True:
if not que.empty():
[platform.update_pitch, target_pitch] = que.get()
platform.getCameraInfo(fpv)
(platform.pitch, platform.roll, platform.yaw) = airsim.to_eularian_angles(platform.q)
if not que_raw.full():
que_raw.put([platform.pitch, platform.roll, platform.yaw])
if camera.zoom_update and camera.zoom_fov != 0:
platform.zoom_update(camera.zoom_fov)
camera.zoom_update = False
gimbal.get_raw_orientation(platform.pitch * -180 / pi, platform.roll * -180 / pi, platform.yaw * -180 / pi)
if platform.update_pitch:
platform.gimbal_update(target_pitch)
platform.update_pitch = False
|
PypiClean
|
/pyGSTi-0.9.11.2-cp37-cp37m-win_amd64.whl/pygsti/forwardsims/forwardsim.py
|
import collections as _collections
import warnings as _warnings
import numpy as _np
from pygsti.layouts.cachedlayout import CachedCOPALayout as _CachedCOPALayout
from pygsti.layouts.copalayout import CircuitOutcomeProbabilityArrayLayout as _CircuitOutcomeProbabilityArrayLayout
from pygsti.baseobjs import outcomelabeldict as _ld
from pygsti.baseobjs.resourceallocation import ResourceAllocation as _ResourceAllocation
from pygsti.baseobjs.nicelyserializable import NicelySerializable as _NicelySerializable
from pygsti.tools import slicetools as _slct
class ForwardSimulator(_NicelySerializable):
"""
A calculator of circuit outcome probability calculations and their derivatives w.r.t. model parameters.
Some forward simulators may also be used to perform operation-product calculations.
This functionality exists in a class separate from Model to allow for additional
model classes (e.g. ones which use entirely different -- non-gate-local
-- parameterizations of operation matrices and SPAM vectors) access to these
fundamental operations. It also allows for the easier addition of new forward simulators.
Note: a model holds or "contains" a forward simulator instance to perform its computations,
and a forward simulator holds a reference to its parent model, so we need to make sure the
forward simulator doesn't serialize the model or we have a circular reference.
Parameters
----------
model : Model, optional
The model this forward simulator will use to compute circuit outcome probabilities.
"""
@classmethod
def cast(cls, obj, num_qubits=None):
""" num_qubits only used if `obj == 'auto'` """
from .matrixforwardsim import MatrixForwardSimulator as _MatrixFSim
from .mapforwardsim import MapForwardSimulator as _MapFSim
if isinstance(obj, ForwardSimulator):
return obj
elif obj == "auto":
return _MapFSim() if (num_qubits is None or num_qubits > 2) else _MatrixFSim()
elif obj == "map":
return _MapFSim()
elif obj == "matrix":
return _MatrixFSim()
else:
raise ValueError("Cannot convert %s to a forward simulator!" % str(obj))
@classmethod
def _array_types_for_method(cls, method_name):
# The array types of *intermediate* or *returned* values within various class methods (for memory estimates)
if method_name == 'bulk_probs': return ('E',) + cls._array_types_for_method('bulk_fill_probs')
if method_name == 'bulk_dprobs': return ('EP',) + cls._array_types_for_method('bulk_fill_dprobs')
if method_name == 'bulk_hprobs': return ('EPP',) + cls._array_types_for_method('bulk_fill_hprobs')
if method_name == 'iter_hprobs_by_rectangle': return cls._array_types_for_method('_iter_hprobs_by_rectangle')
if method_name == '_iter_hprobs_by_rectangle': return ('epp',) + cls._array_types_for_method('bulk_fill_hprobs')
if method_name == 'bulk_fill_probs': return cls._array_types_for_method('_bulk_fill_probs_block')
if method_name == 'bulk_fill_dprobs': return cls._array_types_for_method('_bulk_fill_dprobs_block')
if method_name == 'bulk_fill_hprobs': return cls._array_types_for_method('_bulk_fill_hprobs_block')
if method_name == '_bulk_fill_probs_block': return ()
if method_name == '_bulk_fill_dprobs_block':
return ('e',) + cls._array_types_for_method('_bulk_fill_probs_block')
if method_name == '_bulk_fill_hprobs_block':
return ('ep', 'ep') + cls._array_types_for_method('_bulk_fill_dprobs_block')
return ()
def __init__(self, model=None):
super().__init__()
#self.dim = model.dim
self.model = model
#self.paramvec = paramvec
#self.Np = len(paramvec)
#self.evotype = layer_op_server.evotype()
#Conversion of labels -> integers for speed & C-compatibility
#self.operation_lookup = { lbl:i for i,lbl in enumerate(gates.keys()) }
#self.prep_lookup = { lbl:i for i,lbl in enumerate(preps.keys()) }
#self.effect_lookup = { lbl:i for i,lbl in enumerate(effects.keys()) }
#
#self.operationreps = { i:self.operations[lbl].torep() for lbl,i in self.operation_lookup.items() }
#self.prepreps = { lbl:p.torep('prep') for lbl,p in preps.items() }
#self.effectreps = { lbl:e.torep('effect') for lbl,e in effects.items() }
def _to_nice_serialization(self):
# (don't serialize parent model)
return super()._to_nice_serialization()
@classmethod
def _from_nice_serialization(cls, state):
return cls(None)
def __getstate__(self):
state_dict = self.__dict__.copy()
state_dict['_model'] = None # don't serialize parent model (will cause recursion)
return state_dict
@property
def model(self):
return self._model
@model.setter
def model(self, val):
self._model = val
try:
evotype = None if val is None else self._model.evotype
self._set_evotype(evotype) # alert the class: new evotype! (allows loading evotype-specific calc functions)
except AttributeError:
pass # not all models have an evotype (OK)
def _set_evotype(self, evotype):
""" Called when the evotype being used (defined by the parent model) changes.
`evotype` will be `None` when the current model is None"""
pass
#def to_vector(self):
# """
# Returns the parameter vector of the associated Model.
#
# Returns
# -------
# numpy array
# The vectorized model parameters.
# """
# return self.paramvec
#
#def from_vector(self, v, close=False, nodirty=False):
# """
# The inverse of to_vector.
#
# Initializes the Model-like members of this
# calculator based on `v`. Used for computing finite-difference derivatives.
#
# Parameters
# ----------
# v : numpy.ndarray
# The parameter vector.
#
# close : bool, optional
# Set to `True` if `v` is close to the current parameter vector.
# This can make some operations more efficient.
#
# nodirty : bool, optional
# If True, the framework for marking and detecting when operations
# have changed and a Model's parameter-vector needs to be updated
# is disabled. Disabling this will increases the speed of the call.
#
# Returns
# -------
# None
# """
# #Note: this *will* initialize the parent Model's objects too,
# # since only references to preps, effects, and gates are held
# # by the calculator class. ORDER is important, as elements of
# # POVMs and Instruments rely on a fixed from_vector ordering
# # of their simplified effects/gates.
# self.paramvec = v.copy() # now self.paramvec is *not* the same as the Model's paramvec
# self.sos.from_vector(v, close, nodirty) # so don't always want ", nodirty=True)" - we
# # need to set dirty flags so *parent* will re-init it's paramvec...
#
# #Re-init reps for computation
# #self.operationreps = { i:self.operations[lbl].torep() for lbl,i in self.operation_lookup.items() }
# #self.operationreps = { lbl:g.torep() for lbl,g in gates.items() }
# #self.prepreps = { lbl:p.torep('prep') for lbl,p in preps.items() }
# #self.effectreps = { lbl:e.torep('effect') for lbl,e in effects.items() }
def _compute_circuit_outcome_probabilities(self, array_to_fill, circuit, outcomes, resource_alloc, time=None):
raise NotImplementedError("Derived classes should implement this!")
def _compute_sparse_circuit_outcome_probabilities(self, circuit, resource_alloc, time=None):
raise NotImplementedError("Derived classes should implement this to provide sparse (non-zero) probabilites!")
def _compute_circuit_outcome_probability_derivatives(self, array_to_fill, circuit, outcomes, param_slice,
resource_alloc):
# array to fill has shape (num_outcomes, len(param_slice)) and should be filled with the "w.r.t. param_slice"
# derivatives of each specified circuit outcome probability.
raise NotImplementedError("Derived classes can implement this to speed up derivative computation")
def probs(self, circuit, outcomes=None, time=None, resource_alloc=None):
"""
Construct a dictionary containing the outcome probabilities for a single circuit.
Parameters
----------
circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
outcomes : list or tuple
A sequence of outcomes, which can themselves be either tuples
(to include intermediate measurements) or simple strings, e.g. `'010'`.
If None, only non-zero outcome probabilities will be reported.
time : float, optional
The *start* time at which `circuit` is evaluated.
resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probabilities.
Returns
-------
probs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and
values equal to probabilities. If no target outcomes provided,
only non-zero probabilities will be reported.
"""
if outcomes is None:
try:
return self._compute_sparse_circuit_outcome_probabilities(circuit, resource_alloc, time)
except NotImplementedError:
pass # continue on to create full layout and calculate all outcomes
copa_layout = self.create_layout([circuit], array_types=('e',), resource_alloc=resource_alloc)
probs_array = _np.empty(copa_layout.num_elements, 'd')
if time is None:
self.bulk_fill_probs(probs_array, copa_layout)
else:
self._bulk_fill_probs_at_times(probs_array, copa_layout, [time])
if _np.any(_np.isnan(probs_array)):
to_print = str(circuit) if len(circuit) < 10 else str(circuit[0:10]) + " ... (len %d)" % len(circuit)
_warnings.warn("pr(%s) == nan" % to_print)
probs = _ld.OutcomeLabelDict()
elindices, outcomes = copa_layout.indices_and_outcomes_for_index(0)
for element_index, outcome in zip(_slct.indices(elindices), outcomes):
probs[outcome] = probs_array[element_index]
return probs
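# Illustrative call (a sketch; `fwdsim` is a hypothetical ForwardSimulator with its model set,
# and `circuit` a hypothetical Circuit):
#   p = fwdsim.probs(circuit)   # e.g. an OutcomeLabelDict such as {('0',): 0.98, ('1',): 0.02}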
def dprobs(self, circuit, resource_alloc=None):
"""
Construct a dictionary containing outcome probability derivatives for a single circuit.
Parameters
----------
circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probability derivatives.
Returns
-------
dprobs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and
values equal to an array containing the (partial) derivatives
of the outcome probability with respect to all model parameters.
"""
copa_layout = self.create_layout([circuit], array_types=('ep',), resource_alloc=resource_alloc)
dprobs_array = _np.empty((copa_layout.num_elements, self.model.num_params), 'd')
self.bulk_fill_dprobs(dprobs_array, copa_layout)
dprobs = _ld.OutcomeLabelDict()
elindices, outcomes = copa_layout.indices_and_outcomes_for_index(0)
for element_index, outcome in zip(_slct.indices(elindices), outcomes):
dprobs[outcome] = dprobs_array[element_index]
return dprobs
def hprobs(self, circuit, resource_alloc=None):
"""
Construct a dictionary containing outcome probability Hessians for a single circuit.
Parameters
----------
circuit : Circuit or tuple of operation labels
The sequence of operation labels specifying the circuit.
resource_alloc : ResourceAllocation, optional
The resources available for computing circuit outcome probability Hessians.
Returns
-------
hprobs : OutcomeLabelDict
A dictionary with keys equal to outcome labels and
values equal to a 2D array that is the Hessian matrix for
the corresponding outcome probability (with respect to all model parameters).
"""
copa_layout = self.create_layout([circuit], array_types=('epp',), resource_alloc=resource_alloc)
hprobs_array = _np.empty((copa_layout.num_elements, self.model.num_params, self.model.num_params), 'd')
self.bulk_fill_hprobs(hprobs_array, copa_layout)
hprobs = _ld.OutcomeLabelDict()
elindices, outcomes = copa_layout.indices_and_outcomes_for_index(0)
for element_index, outcome in zip(_slct.indices(elindices), outcomes):
hprobs[outcome] = hprobs_array[element_index]
return hprobs
# ---------------------------------------------------------------------------
# BULK operations -----------------------------------------------------------
# ---------------------------------------------------------------------------
def create_layout(self, circuits, dataset=None, resource_alloc=None,
array_types=(), derivative_dimensions=None, verbosity=0):
"""
Constructs a circuit-outcome-probability-array (COPA) layout for `circuits` and `dataset`.
Parameters
----------
circuits : list
The circuits whose outcome probabilities should be computed.
dataset : DataSet
The source of data counts that will be compared to the circuit outcome
probabilities. The computed outcome probabilities are limited to those
with counts present in `dataset`.
resource_alloc : ResourceAllocation
Available resources and allocation information. These factors influence how
the layout (evaluation strategy) is constructed.
array_types : tuple, optional
A tuple of string-valued array types, as given by
:meth:`CircuitOutcomeProbabilityArrayLayout.allocate_local_array`. These types determine
what types of arrays we anticipate computing using this layout (and forward simulator). These
are used to check available memory against the limit (if it exists) within `resource_alloc`.
The array types also determine the number of derivatives that this layout is able to compute.
So, for example, if you ever want to compute derivatives or Hessians of element arrays then
`array_types` must contain at least one `'ep'` or `'epp'` type, respectively, or the layout
will not allocate needed intermediate storage for derivative-containing types. If you don't
care about accurate memory limits, use `('e',)` when you only ever compute probabilities and
never their derivatives, and `('e','ep')` or `('e','ep','epp')` if you need to compute
Jacobians or Hessians too.
derivative_dimensions : tuple, optional
A tuple containing, optionally, the parameter-space dimension used when taking first
and second derivatives with respect to the circuit outcome probabilities. This must
have at least 1 or 2 elements when `array_types` contains `'ep'` or `'epp'` types,
respectively.
verbosity : int or VerbosityPrinter
Determines how much output to send to stdout. 0 means no output, higher
integers mean more output.
Returns
-------
CircuitOutcomeProbabilityArrayLayout
"""
return _CircuitOutcomeProbabilityArrayLayout.create_from(circuits, self.model, dataset, derivative_dimensions,
resource_alloc=resource_alloc)
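# Illustrative use of a layout (a sketch mirroring `probs` above; `fwdsim` and `circuits`
# are hypothetical):
#   layout = fwdsim.create_layout(circuits, array_types=('e',))
#   probs = _np.empty(layout.num_elements, 'd')
#   fwdsim.bulk_fill_probs(probs, layout)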
#TODO UPDATE
#def bulk_prep_probs(self, eval_tree, comm=None, mem_limit=None):
# """
# Performs initial computation needed for bulk_fill_probs and related calls.
#
# For example, as computing probability polynomials. This is usually coupled with
# the creation of an evaluation tree, but is separated from it because this
# "preparation" may use `comm` to distribute a computationally intensive task.
#
# Parameters
# ----------
# eval_tree : EvalTree
# The evaluation tree used to define a list of circuits and hold (cache)
# any computed quantities.
#
# comm : mpi4py.MPI.Comm, optional
# When not None, an MPI communicator for distributing the computation
# across multiple processors. Distribution is performed over
# subtrees of `eval_tree` (if it is split).
#
# mem_limit : int
# Rough memory limit in bytes.
#
# Returns
# -------
# None
# """
# pass # default is to have no pre-computed quantities (but not an error to call this fn)
def bulk_probs(self, circuits, clip_to=None, resource_alloc=None, smartc=None):
"""
Construct a dictionary containing the probabilities for an entire list of circuits.
Parameters
----------
circuits : list of Circuits
The list of circuits. May also be a :class:`CircuitOutcomeProbabilityArrayLayout`
object containing pre-computed quantities that make this function run faster.
clip_to : 2-tuple, optional
(min,max) to clip return value if not None.
resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy
for partitioning them.
smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this
function.
Returns
-------
probs : dictionary
A dictionary such that `probs[circuit]` is an ordered dictionary of
outcome probabilities whose keys are outcome labels.
"""
if isinstance(circuits, _CircuitOutcomeProbabilityArrayLayout):
copa_layout = circuits
else:
copa_layout = self.create_layout(circuits, array_types=('e',), resource_alloc=resource_alloc)
global_layout = copa_layout.global_layout
resource_alloc = _ResourceAllocation.cast(resource_alloc)
with resource_alloc.temporarily_track_memory(global_layout.num_elements): # 'E' (vp)
local_vp = copa_layout.allocate_local_array('e', 'd')
if smartc:
smartc.cached_compute(self.bulk_fill_probs, local_vp, copa_layout, _filledarrays=(0,))
else:
self.bulk_fill_probs(local_vp, copa_layout)
vp = copa_layout.gather_local_array('e', local_vp) # gather data onto rank-0 processor
copa_layout.free_local_array(local_vp)
if resource_alloc.comm is None or resource_alloc.comm.rank == 0:
ret = _collections.OrderedDict()
for elInds, c, outcomes in global_layout.iter_unique_circuits():
if isinstance(elInds, slice): elInds = _slct.indices(elInds)
ret[c] = _ld.OutcomeLabelDict([(outLbl, vp[ei]) for ei, outLbl in zip(elInds, outcomes)])
return ret
else:
return None # on non-root ranks
def bulk_dprobs(self, circuits, resource_alloc=None, smartc=None):
"""
Construct a dictionary containing the probability derivatives for an entire list of circuits.
Parameters
----------
circuits : list of Circuits
The list of circuits. May also be a :class:`CircuitOutcomeProbabilityArrayLayout`
object containing pre-computed quantities that make this function run faster.
resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy
for partitioning them.
smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this
function.
Returns
-------
dprobs : dictionary
A dictionary such that `dprobs[circuit]` is an ordered dictionary of
derivative arrays (one element per differentiated parameter) whose
keys are outcome labels
"""
if isinstance(circuits, _CircuitOutcomeProbabilityArrayLayout):
copa_layout = circuits
else:
copa_layout = self.create_layout(circuits, array_types=('ep',), resource_alloc=resource_alloc)
global_layout = copa_layout.global_layout
resource_alloc = _ResourceAllocation.cast(resource_alloc)
with resource_alloc.temporarily_track_memory(global_layout.num_elements * self.model.num_params): # 'EP' (vdp)
#Note: don't use smartc for now.
local_vdp = copa_layout.allocate_local_array('ep', 'd')
self.bulk_fill_dprobs(local_vdp, copa_layout, None)
vdp = copa_layout.gather_local_array('ep', local_vdp) # gather data onto rank-0 processor
copa_layout.free_local_array(local_vdp)
if resource_alloc.comm_rank == 0:
ret = _collections.OrderedDict()
for elInds, c, outcomes in global_layout.iter_unique_circuits():
if isinstance(elInds, slice): elInds = _slct.indices(elInds)
ret[c] = _ld.OutcomeLabelDict([(outLbl, vdp[ei]) for ei, outLbl in zip(elInds, outcomes)])
return ret
else:
return None # on non-root ranks
def bulk_hprobs(self, circuits, resource_alloc=None, smartc=None):
"""
Construct a dictionary containing the probability Hessians for an entire list of circuits.
Parameters
----------
circuits : list of Circuits
The list of circuits. May also be a :class:`CircuitOutcomeProbabilityArrayLayout`
object containing pre-computed quantities that make this function run faster.
resource_alloc : ResourceAllocation, optional
A resource allocation object describing the available resources and a strategy
for partitioning them.
smartc : SmartCache, optional
A cache object to cache & use previously cached values inside this
function.
Returns
-------
hprobs : dictionary
A dictionary such that `hprobs[circuit]` is an ordered dictionary of
Hessian arrays (a square matrix with one row/column per differentiated
parameter) whose keys are outcome labels
"""
if isinstance(circuits, _CircuitOutcomeProbabilityArrayLayout):
copa_layout = circuits
else:
copa_layout = self.create_layout(circuits, array_types=('epp',), resource_alloc=resource_alloc)
global_layout = copa_layout.global_layout
resource_alloc = _ResourceAllocation.cast(resource_alloc)
with resource_alloc.temporarily_track_memory(global_layout.num_elements * self.model.num_params**2): # 'EPP'
#Note: don't use smartc for now.
local_vhp = copa_layout.allocate_local_array('epp', 'd')
self.bulk_fill_hprobs(local_vhp, copa_layout, None, None, None)
vhp = copa_layout.gather_local_array('epp', local_vhp) # gather data onto rank-0 processor
copa_layout.free_local_array(local_vhp)
if resource_alloc.comm_rank == 0:
ret = _collections.OrderedDict()
for elInds, c, outcomes in global_layout.iter_unique_circuits():
if isinstance(elInds, slice): elInds = _slct.indices(elInds)
ret[c] = _ld.OutcomeLabelDict([(outLbl, vhp[ei]) for ei, outLbl in zip(elInds, outcomes)])
return ret
else:
return None # on non-root ranks
def bulk_fill_probs(self, array_to_fill, layout):
"""
Compute the outcome probabilities for a list of circuits.
This routine fills a 1D array, `array_to_fill` with circuit outcome probabilities
as dictated by a :class:`CircuitOutcomeProbabilityArrayLayout` ("COPA layout")
object, which is usually specifically tailored for efficiency.
The `array_to_fill` array must have length equal to the number of elements in
`layout`, and the meanings of each element are given by `layout`.
Parameters
----------
array_to_fill : numpy ndarray
an already-allocated 1D numpy array of length equal to the
total number of computed elements (i.e. `len(layout)`).
layout : CircuitOutcomeProbabilityArrayLayout
A layout for `array_to_fill`, describing what circuit outcome each
element corresponds to. Usually given by a prior call to :meth:`create_layout`.
Returns
-------
None
"""
return self._bulk_fill_probs(array_to_fill, layout)
def _bulk_fill_probs(self, array_to_fill, layout):
return self._bulk_fill_probs_block(array_to_fill, layout)
def _bulk_fill_probs_block(self, array_to_fill, layout):
for element_indices, circuit, outcomes in layout.iter_unique_circuits():
self._compute_circuit_outcome_probabilities(array_to_fill[element_indices], circuit,
outcomes, layout.resource_alloc(), time=None)
def _bulk_fill_probs_at_times(self, array_to_fill, layout, times):
# A separate function because computation with time-dependence is often approached differently
return self._bulk_fill_probs_block_at_times(array_to_fill, layout, times)
def _bulk_fill_probs_block_at_times(self, array_to_fill, layout, times):
for (element_indices, circuit, outcomes), time in zip(layout.iter_unique_circuits(), times):
self._compute_circuit_outcome_probabilities(array_to_fill[element_indices], circuit,
outcomes, layout.resource_alloc(), time)
def bulk_fill_dprobs(self, array_to_fill, layout, pr_array_to_fill=None):
"""
Compute the outcome probability-derivatives for an entire tree of circuits.
This routine fills a 2D array, `array_to_fill` with circuit outcome probabilities
as dictated by a :class:`CircuitOutcomeProbabilityArrayLayout` ("COPA layout")
object, which is usually specifically tailored for efficiency.
The `array_to_fill` array must have length equal to the number of elements in
`layout`, and the meanings of each element are given by `layout`.
Parameters
----------
array_to_fill : numpy ndarray
an already-allocated 2D numpy array of shape `(len(layout), Np)`, where
`Np` is the number of model parameters being differentiated with respect to.
layout : CircuitOutcomeProbabilityArrayLayout
A layout for `array_to_fill`, describing what circuit outcome each
element corresponds to. Usually given by a prior call to :meth:`create_layout`.
pr_array_to_fill : numpy array, optional
when not None, an already-allocated length-`len(layout)` numpy array that is
filled with probabilities, just as in :meth:`bulk_fill_probs`.
Returns
-------
None
"""
return self._bulk_fill_dprobs(array_to_fill, layout, pr_array_to_fill)
def _bulk_fill_dprobs(self, array_to_fill, layout, pr_array_to_fill):
if pr_array_to_fill is not None:
self._bulk_fill_probs_block(pr_array_to_fill, layout)
return self._bulk_fill_dprobs_block(array_to_fill, None, layout, None)
def _bulk_fill_dprobs_block(self, array_to_fill, dest_param_slice, layout, param_slice):
#If _compute_circuit_outcome_probability_derivatives is implemented, use it!
resource_alloc = layout.resource_alloc()
try:
for element_indices, circuit, outcomes in layout.iter_unique_circuits():
self._compute_circuit_outcome_probability_derivatives(
array_to_fill[element_indices, dest_param_slice], circuit, outcomes, param_slice, resource_alloc)
return
except NotImplementedError:
pass # otherwise, proceed to compute derivatives via finite difference.
eps = 1e-7 # hardcoded?
if param_slice is None:
param_slice = slice(0, self.model.num_params)
param_indices = _slct.to_array(param_slice)
if dest_param_slice is None:
dest_param_slice = slice(0, len(param_indices))
dest_param_indices = _slct.to_array(dest_param_slice)
iParamToFinal = {i: dest_param_indices[ii] for ii, i in enumerate(param_indices)}
probs = _np.empty(len(layout), 'd')
self._bulk_fill_probs_block(probs, layout)
probs2 = _np.empty(len(layout), 'd')
orig_vec = self.model.to_vector().copy()
for i in range(self.model.num_params):
if i in iParamToFinal:
iFinal = iParamToFinal[i]
vec = orig_vec.copy(); vec[i] += eps
self.model.from_vector(vec, close=True)
self._bulk_fill_probs_block(probs2, layout)
array_to_fill[:, iFinal] = (probs2 - probs) / eps
self.model.from_vector(orig_vec, close=True)
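# The loop above is a plain forward-difference estimate of the Jacobian:
#   dP/dparam_i ~= (P(params + eps * e_i) - P(params)) / eps,  with eps = 1e-7,
# where e_i is the i-th unit vector in parameter space.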
def bulk_fill_hprobs(self, array_to_fill, layout,
pr_array_to_fill=None, deriv1_array_to_fill=None, deriv2_array_to_fill=None):
"""
Compute the outcome probability-Hessians for an entire list of circuits.
Similar to `bulk_fill_probs(...)`, but fills a 3D array with
the Hessians for each circuit outcome probability.
Parameters
----------
array_to_fill : numpy ndarray
an already-allocated numpy array of shape `(len(layout),M1,M2)` where
`M1` and `M2` are the number of selected model parameters (by `wrt_filter1`
and `wrt_filter2`).
layout : CircuitOutcomeProbabilityArrayLayout
A layout for `array_to_fill`, describing what circuit outcome each
element corresponds to. Usually given by a prior call to :meth:`create_layout`.
pr_array_to_fill : numpy array, optional
when not None, an already-allocated length-`len(layout)` numpy array that is
filled with probabilities, just as in :meth:`bulk_fill_probs`.
deriv1_array_to_fill : numpy array, optional
when not None, an already-allocated numpy array of shape `(len(layout),M1)`
that is filled with probability derivatives, similar to
:meth:`bulk_fill_dprobs` (see `array_to_fill` for a definition of `M1`).
deriv2_array_to_fill : numpy array, optional
when not None, an already-allocated numpy array of shape `(len(layout),M2)`
that is filled with probability derivatives, similar to
:meth:`bulk_fill_dprobs` (see `array_to_fill` for a definition of `M2`).
Returns
-------
None
"""
return self._bulk_fill_hprobs(array_to_fill, layout, pr_array_to_fill,
deriv1_array_to_fill, deriv2_array_to_fill)
def _bulk_fill_hprobs(self, array_to_fill, layout,
pr_array_to_fill, deriv1_array_to_fill, deriv2_array_to_fill):
if pr_array_to_fill is not None:
self._bulk_fill_probs_block(pr_array_to_fill, layout)
if deriv1_array_to_fill is not None:
self._bulk_fill_dprobs_block(deriv1_array_to_fill, None, layout, None)
if deriv2_array_to_fill is not None:
#if wrtSlice1 == wrtSlice2:
deriv2_array_to_fill[:, :] = deriv1_array_to_fill[:, :]
#else:
# self._bulk_fill_dprobs_block(deriv2_array_to_fill, None, layout, None)
return self._bulk_fill_hprobs_block(array_to_fill, None, None, layout, None, None)
def _bulk_fill_hprobs_block(self, array_to_fill, dest_param_slice1, dest_param_slice2, layout,
param_slice1, param_slice2):
eps = 1e-4 # hardcoded?
if param_slice1 is None: param_slice1 = slice(0, self.model.num_params)
if param_slice2 is None: param_slice2 = slice(0, self.model.num_params)
param_indices1 = _slct.to_array(param_slice1)
param_indices2 = _slct.to_array(param_slice2)
if dest_param_slice1 is None:
dest_param_slice1 = slice(0, len(param_indices1))
if dest_param_slice2 is None:
dest_param_slice2 = slice(0, len(param_indices2))
dest_param_indices1 = _slct.to_array(dest_param_slice1)
#dest_param_indices2 = _slct.to_array(dest_param_slice2) # unused
iParamToFinal = {i: dest_param_indices1[ii] for ii, i in enumerate(param_indices1)}
nP2 = len(param_indices2)
dprobs = _np.empty((len(layout), nP2), 'd')
self._bulk_fill_dprobs_block(dprobs, None, layout, param_slice2)
dprobs2 = _np.empty((len(layout), nP2), 'd')
orig_vec = self.model.to_vector().copy()
for i in range(self.model.num_params):
if i in iParamToFinal:
iFinal = iParamToFinal[i]
vec = orig_vec.copy(); vec[i] += eps
self.model.from_vector(vec, close=True)
self._bulk_fill_dprobs_block(dprobs2, None, layout, param_slice2)
array_to_fill[:, iFinal, dest_param_slice2] = (dprobs2 - dprobs) / eps
self.model.from_vector(orig_vec, close=True)
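# As above, a forward-difference estimate, now applied to Jacobian blocks:
#   d2P/(dparam_i dparam_j) ~= (dP_j(params + eps * e_i) - dP_j(params)) / eps,
# using the coarser step eps = 1e-4.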
def iter_hprobs_by_rectangle(self, layout, wrt_slices_list,
return_dprobs_12=False):
"""
Iterates over the 2nd derivatives of a layout's circuit probabilities one rectangle at a time.
This routine can be useful when memory constraints make constructing
the entire Hessian at once impractical, since it only computes a subset of
the Hessian's rows and columns (a "rectangle") at a time. For example, the
Hessian of a function of many circuit probabilities can often be computed
rectangle-by-rectangle and without the need to ever store the entire Hessian at once.
Parameters
----------
layout : CircuitOutcomeProbabilityArrayLayout
A layout for generated arrays, describing what circuit outcome each
element corresponds to. Usually given by a prior call to :meth:`create_layout`.
wrt_slices_list : list
A list of `(rowSlice,colSlice)` 2-tuples, each of which specify
a "rectangle" of the Hessian to compute. Iterating over the output
of this function iterates over these computed rectangles, in the order
given by `wrt_slices_list`. `rowSlice` and `colSlice` must be Python
`slice` objects.
return_dprobs_12 : boolean, optional
If true, the generator computes a 2-tuple: (hessian_col, d12_col),
where d12_col is a column of the matrix d12 defined by:
d12[iSpamLabel,iOpStr,p1,p2] = dP/d(p1)*dP/d(p2) where P is
the probability generated by the sequence and spam label indexed
by iOpStr and iSpamLabel. d12 has the same dimensions as the
Hessian, and turns out to be useful when computing the Hessian
of functions of the probabilities.
Returns
-------
rectangle_generator
A generator which, when iterated, yields the 3-tuple
`(rowSlice, colSlice, hprobs)` or `(rowSlice, colSlice, hprobs, dprobs12)`
(the latter if `return_dprobs_12 == True`). `rowSlice` and `colSlice`
are slices directly from `wrt_slices_list`. `hprobs` and `dprobs12` are
arrays of shape E x B x B', where:
- E is the length of layout elements
- B is the number of parameter rows (the length of rowSlice)
- B' is the number of parameter columns (the length of colSlice)
If `mx`, `dp1`, and `dp2` are the outputs of :func:`bulk_fill_hprobs`
(i.e. args `mx_to_fill`, `deriv1_mx_to_fill`, and `deriv2_mx_to_fill`), then:
- `hprobs == mx[:,rowSlice,colSlice]`
- `dprobs12 == dp1[:,rowSlice,None] * dp2[:,None,colSlice]`
"""
yield from self._iter_hprobs_by_rectangle(layout, wrt_slices_list, return_dprobs_12)
def _iter_hprobs_by_rectangle(self, layout, wrt_slices_list, return_dprobs_12):
# under distributed layout each proc already has a local set of parameter slices, and
# this routine could just compute parts of that piecemeal so we never compute an entire
# proc's hprobs (may be too large) - so I think this function signature may still be fine,
# but need to construct wrt_slices_list from global slices assigned to it by the layout.
# (note the values in the wrt_slices_list must be global param indices - just not all of them)
nElements = len(layout) # (global number of elements, though "global" isn't really defined)
#NOTE: don't override this method in DistributableForwardSimulator
# by a method that distributes wrt_slices_list across comm procs,
# as we assume the user has already done any such distribution
# and has given each processor a list appropriate for it.
# Use comm only for speeding up the calcs of the given
# wrt_slices_list
for wrtSlice1, wrtSlice2 in wrt_slices_list:
if return_dprobs_12:
dprobs1 = _np.zeros((nElements, _slct.length(wrtSlice1)), 'd')
self._bulk_fill_dprobs_block(dprobs1, None, layout, wrtSlice1)
if wrtSlice1 == wrtSlice2:
dprobs2 = dprobs1
else:
dprobs2 = _np.zeros((nElements, _slct.length(wrtSlice2)), 'd')
self._bulk_fill_dprobs_block(dprobs2, None, layout, wrtSlice2)
else:
dprobs1 = dprobs2 = None
hprobs = _np.zeros((nElements, _slct.length(wrtSlice1), _slct.length(wrtSlice2)), 'd')
self._bulk_fill_hprobs_block(hprobs, None, None, layout, wrtSlice1, wrtSlice2)
if return_dprobs_12:
dprobs12 = dprobs1[:, :, None] * dprobs2[:, None, :] # (KM,N,1) * (KM,1,N') = (KM,N,N')
yield wrtSlice1, wrtSlice2, hprobs, dprobs12
else:
yield wrtSlice1, wrtSlice2, hprobs
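# Illustrative iteration (a sketch; `fwdsim`, `layout` and the slices are hypothetical):
#   for row_slc, col_slc, hprobs in fwdsim.iter_hprobs_by_rectangle(
#           layout, [(slice(0, 4), slice(0, 8))]):
#       pass  # hprobs has shape (len(layout), 4, 8)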
def __str__(self):
return self.__class__.__name__
class CacheForwardSimulator(ForwardSimulator):
"""
A forward simulator that works with non-distributed :class:`CachedCOPALayout` layouts.
This is just a small addition to :class:`ForwardSimulator`, adding a
persistent cache passed to new derived-class-overridable compute routines.
"""
def create_layout(self, circuits, dataset=None, resource_alloc=None,
array_types=(), derivative_dimensions=None, verbosity=0):
"""
Constructs a circuit-outcome-probability-array (COPA) layout for a list of circuits.
Parameters
----------
circuits : list
The circuits whose outcome probabilities should be included in the layout.
dataset : DataSet
The source of data counts that will be compared to the circuit outcome
probabilities. The computed outcome probabilities are limited to those
with counts present in `dataset`.
resource_alloc : ResourceAllocation
Available resources and allocation information. These factors influence how
the layout (evaluation strategy) is constructed.
array_types : tuple, optional
A tuple of string-valued array types. See :meth:`ForwardSimulator.create_layout`.
derivative_dimensions : tuple, optional
A tuple containing, optionally, the parameter-space dimension used when taking first
and second derivatives with respect to the circuit outcome probabilities. This must
have at least 1 or 2 elements when `array_types` contains `'ep'` or `'epp'` types,
respectively.
verbosity : int or VerbosityPrinter
Determines how much output to send to stdout. 0 means no output, higher
integers mean more output.
Returns
-------
CachedCOPALayout
"""
#Note: resource_alloc not even used -- make a slightly more complex "default" strategy?
cache = None # Derived classes should override this function and create a cache here.
# A dictionary whose keys are the elements of `circuits` and values can be
# whatever the user wants. These values are provided when calling
# :meth:`iter_unique_circuits_with_cache`.
return _CachedCOPALayout.create_from(circuits, self.model, dataset, derivative_dimensions, cache)
# Override these two functions to plumb `cache` down to _compute* methods
def _bulk_fill_probs_block(self, array_to_fill, layout):
for element_indices, circuit, outcomes, cache in layout.iter_unique_circuits_with_cache():
self._compute_circuit_outcome_probabilities_with_cache(array_to_fill[element_indices], circuit,
outcomes, layout.resource_alloc(), cache, time=None)
def _bulk_fill_dprobs_block(self, array_to_fill, dest_param_slice, layout, param_slice):
for element_indices, circuit, outcomes, cache in layout.iter_unique_circuits_with_cache():
self._compute_circuit_outcome_probability_derivatives_with_cache(
array_to_fill[element_indices, dest_param_slice], circuit, outcomes, param_slice,
layout.resource_alloc(), cache)
def _compute_circuit_outcome_probabilities_with_cache(self, array_to_fill, circuit, outcomes, resource_alloc,
cache, time=None):
raise NotImplementedError("Derived classes should implement this!")
def _compute_circuit_outcome_probability_derivatives_with_cache(self, array_to_fill, circuit, outcomes, param_slice,
resource_alloc, cache):
# array to fill has shape (num_outcomes, len(param_slice)) and should be filled with the "w.r.t. param_slice"
# derivatives of each specified circuit outcome probability.
raise NotImplementedError("Derived classes can implement this to speed up derivative computation")
def _array_type_parameter_dimension_letters():
""" Return all the array-type letters that stand for a parameter dimension """
return ('P', 'p', 'b')
def _bytes_for_array_type(array_type, global_elements, max_local_elements, max_atom_size,
total_circuits, max_local_circuits,
global_num_params, max_local_num_params, max_param_block_size,
max_per_processor_cachesize, dim, dtype='d'):
bytes_per_item = _np.dtype(dtype).itemsize
size = 1; cur_deriv_dim = 0
for letter in array_type:
if letter == 'E': size *= global_elements
if letter == 'e': size *= max_local_elements
if letter == 'a': size *= max_atom_size
if letter == 'C': size *= total_circuits
if letter == 'c': size *= max_local_circuits
if letter == 'P':
size *= global_num_params[cur_deriv_dim]; cur_deriv_dim += 1
if letter == 'p':
size *= max_local_num_params[cur_deriv_dim]; cur_deriv_dim += 1
if letter == 'b':
size *= max_param_block_size[cur_deriv_dim]; cur_deriv_dim += 1
if letter == 'z': size *= max_per_processor_cachesize
if letter == 'd': size *= dim
return size * bytes_per_item
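# Worked example (illustrative numbers only): array_type 'ep' with max_local_elements=100,
# max_local_num_params=(50,) and dtype 'd' (8 bytes/item) gives 100 * 50 * 8 = 40000 bytes.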
def _bytes_for_array_types(array_types, global_elements, max_local_elements, max_atom_size,
total_circuits, max_local_circuits,
global_num_params, max_local_num_params, max_param_block_size,
max_per_processor_cachesize, dim, dtype='d'): # cache is only local to processors
return sum([_bytes_for_array_type(array_type, global_elements, max_local_elements, max_atom_size,
total_circuits, max_local_circuits,
global_num_params, max_local_num_params, max_param_block_size,
max_per_processor_cachesize, dim, dtype) for array_type in array_types])
/gevent-20.9.0.tar.gz/gevent-20.9.0/deps/README.rst
================================
Managing Embedded Dependencies
================================
* Generate patches with ``git diff --patch --minimal -b``
Updating libev
==============
Download and unpack the tarball into libev/. Remove these extra
files::
rm -f libev/Makefile.am
rm -f libev/Symbols.ev
rm -f libev/Symbols.event
rm -f libev/TODO
rm -f libev/aclocal.m4
rm -f libev/autogen.sh
rm -f libev/compile
rm -f libev/configure.ac
rm -f libev/libev.m4
rm -f libev/mkinstalldirs
Check if 'config.guess' and/or 'config.sub' went backwards in time
(the 'timestamp' and 'copyright' dates). If so, revert them (or update
from the latest source
http://git.savannah.gnu.org/gitweb/?p=config.git;a=tree )
Updating c-ares
===============
- Download and clean up the c-ares Makefile.in[c] to empty out the
MANPAGES variables so that we don't have to ship those in the sdist::
export CARES_VER=1.15.0
cd deps/
wget https://c-ares.haxx.se/download/c-ares-$CARES_VER.tar.gz
tar -xf c-ares-$CARES_VER.tar.gz
rm -rf c-ares c-ares-$CARES_VER.tar.gz
mv c-ares-$CARES_VER c-ares
cp c-ares/ares_build.h c-ares/ares_build.h.dist
rm -f c-ares/*.3 c-ares/*.1
rm -rf c-ares/test
rm -rf c-ares/vc
rm -f c-ares/maketgz
rm -f c-ares/CMakeLists.txt
rm -f c-ares/RELEASE-PROCEDURE.md
rm -f c-ares/*.cmake c-ares/*.cmake.in
git apply cares-make.patch
At this point there might be new files in c-ares that need to be added to
git; evaluate them and add them.
- Follow the same 'config.guess' and 'config.sub' steps as libev.
Updating libuv
==============
- Clean up the libuv tree, and apply the patches to libuv (this whole
sequence is meant to be copied and pasted into the terminal)::
export LIBUV_VER=v1.38.0
cd deps/
wget https://dist.libuv.org/dist/$LIBUV_VER/libuv-$LIBUV_VER.tar.gz
tar -xf libuv-$LIBUV_VER.tar.gz
rm libuv-$LIBUV_VER.tar.gz
rm -rf libuv
mv libuv-$LIBUV_VER libuv
rm -rf libuv/.github
rm -rf libuv/docs
rm -rf libuv/samples
rm -rf libuv/test/*.[ch] libuv/test/test.gyp # must leave the fixtures/ dir
rm -rf libuv/tools
rm -f libuv/android-configure*
rm -f libuv/uv_win_longpath.manifest
At this point there might be new files in libuv that need to be added to git
and the build process. Evaluate those and add them to git and to
``src/gevent/libuv/_corecffi_build.py`` as needed. Then check if there
are changes to the build system (e.g., the .gyp files) that need to be
accounted for in our build file.
.. caution::
Pay special attention to the m4 directory. New .m4 files that need
to be added may not actually show up in git output. See
https://github.com/libuv/libuv/issues/2862
- Follow the same 'config.guess' and 'config.sub' steps as libev.
/nni_daily-1.5.2005180104-py3-none-manylinux1_x86_64.whl/nni_daily-1.5.2005180104.data/data/nni/node_modules/rx/ts/core/concurrency/scheduler.ts
module Rx {
export interface IScheduler {
/** Gets the current time according to the local machine's system clock. */
now(): number;
/**
* Schedules an action to be executed.
* @param state State passed to the action to be executed.
* @param {Function} action Action to be executed.
* @returns {Disposable} The disposable object used to cancel the scheduled action (best effort).
*/
schedule<TState>(state: TState, action: (scheduler: IScheduler, state: TState) => IDisposable): IDisposable;
/**
* Schedules an action to be executed after dueTime.
* @param state State passed to the action to be executed.
* @param {Function} action Action to be executed.
* @param {Number} dueTime Relative time after which to execute the action.
* @returns {Disposable} The disposable object used to cancel the scheduled action (best effort).
*/
scheduleFuture<TState>(state: TState, dueTime: number | Date, action: (scheduler: IScheduler, state: TState) => IDisposable): IDisposable;
}
export interface SchedulerStatic {
/** Gets the current time according to the local machine's system clock. */
now(): number;
/**
* Normalizes the specified TimeSpan value to a positive value.
* @param {Number} timeSpan The time span value to normalize.
* @returns {Number} The specified TimeSpan value if it is zero or positive; otherwise, 0
*/
normalize(timeSpan: number): number;
/** Determines whether the given object is a scheduler */
isScheduler(s: any): boolean;
}
/** Provides a set of static properties to access commonly used schedulers. */
export var Scheduler: SchedulerStatic;
}
(function() {
var s: Rx.IScheduler;
var d: Rx.IDisposable = s.schedule('state', (sh, s ) => Rx.Disposable.empty);
var d: Rx.IDisposable = s.scheduleFuture('state', 100, (sh, s ) => Rx.Disposable.empty);
var n : () => number = Rx.Scheduler.now;
var a : number = Rx.Scheduler.normalize(1000);
})
/english-words-2.0.1.tar.gz/english-words-2.0.1/README.md
[](https://pypi.org/project/english-words/)
# english-words-py
Returns sets of English words created by combining different words
lists together. Example usage: to get a set of English words from the
"web2" word list, including only lower-case letters, you write the
following:
```python3
>>> from english_words import get_english_words_set
>>> web2lowerset = get_english_words_set(['web2'], lower=True)
```
## Usage
From the main package, import `get_english_words_set` as demonstrated
above. This function takes a number of arguments; the first is a list of
word list identifiers for the word lists to combine and the rest are
flags. These arguments are described here (in the following order):
- `sources` is an iterable containing strings
corresponding to word list identifiers (see "Word lists" subsection
below)
- `alpha` (default `False`) is a flag specifying that all
non-alphanumeric characters (e.g.: `-`, `'`) should be stripped
- `lower` (default `False`) is a flag specifying that all upper-case
letters should be converted to lower-case
Each word list is pre-processed to handle the above flags, so using any
combination of options will not cause the function to run slower.
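For example, to combine both bundled word lists (identifiers as listed in the table below) with punctuation stripped and everything lower-cased, the call might look like this (illustrative only):
```python3
>>> from english_words import get_english_words_set
>>> combined = get_english_words_set(['web2', 'gcide'], alpha=True, lower=True)
```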
Note that some care needs to be used when combining word lists. For
example, only proper nouns in the `web2` word list are capitalized, but
most words in the `gcide` word list are capitalized.
### Word lists
| Name/URL | Identifier | Notes |
| :--- | :--- | :--- |
| [GCIDE 0.53 index](https://ftp.gnu.org/gnu/gcide/) | `gcide` | Words found in GNU Collaborative International Dictionary of English 0.53. Most words capitalized (not exactly sure what the capitalization convention is). Contains some entries with multiple words (currently you must use the alpha option to exclude these).<br/><br/>Unicode characters are currently unprocessed; for example `<ae/` is present in the dictionary instead of `æ`. Ideally, these should all be converted. |
| [web2 revision 326913](https://svnweb.freebsd.org/base/head/share/dict/web2?view=markup&pathrev=326913) | `web2` | |
## Adding additional word lists
To add a word list, say with identifier `x`, put the word list (one word
per line) into a plain text file `x.txt` in the [`raw_data`](raw_data)
directory at the root of the repository. Then, to process the word list
(and all others in the directory) run the script
[`process_raw_data.py`](scripts/process_raw_data.py).
## Installation
Install this with pip with
```
pip install english-words
```
This package is unfortunately rather large (~20MB), and will run into
scaling issues if more word lists or (especially) options are added.
When that bridge is crossed, word lists should possibly be chosen by the
user instead of simply including all of them; word lists could also be
preprocessed on the client side instead of being included in the
package.
/tiddlywebplugins.openid2-0.9.tar.gz/tiddlywebplugins.openid2-0.9/tiddlywebplugins/openid2.py
import logging
import urlparse
from httpexceptor import HTTP302
from openid import oidutil
from openid.consumer import consumer
from tiddlyweb.web.challengers import ChallengerInterface
from tiddlyweb.web.util import server_base_url, server_host_url, make_cookie
LOGGER = logging.getLogger(__name__)
def log_message(message, level=0):
"""
Redefine the Python OpenID log function,
which just writes to stderr, spewing all
over the place.
"""
LOGGER.debug(message)
oidutil.log = log_message
class Challenger(ChallengerInterface):
desc = "OpenID"
def __init__(self):
self.name = __name__
def challenge_get(self, environ, start_response):
openid_mode = environ['tiddlyweb.query'].get('openid.mode', [None])[0]
if openid_mode:
return self._handle_response(environ, start_response)
else:
return self._render_form(environ, start_response)
def challenge_post(self, environ, start_response):
openid_url = environ['tiddlyweb.query'].get('openid', [None])[0]
redirect = environ['tiddlyweb.query'].get(
'tiddlyweb_redirect', ['/'])[0]
if not openid_url:
return self._render_form(environ, start_response,
message='Enter an openid')
# Make a bare bones stateless consumer
oidconsumer = consumer.Consumer({}, None)
try:
request = oidconsumer.begin(openid_url)
except consumer.DiscoveryFailure, exc:
return self._render_form(environ, start_response,
openid=openid_url,
message='Error in discovery: %s' % exc[0])
if request is None:
return self._render_form(environ, start_response,
openid=openid_url,
message='No open id services for %s' % openid_url)
else:
trust_root = server_base_url(environ)
return_to = urlparse.urljoin(trust_root, '%s/challenge/%s' % (
environ['tiddlyweb.config']['server_prefix'],
self.name))
request.return_to_args['tiddlyweb_redirect'] = redirect
if request.shouldSendRedirect():
redirect_url = request.redirectURL(trust_root, return_to,
immediate=False)
raise HTTP302(redirect_url)
else:
form_html = request.htmlMarkup(trust_root, return_to,
form_tag_attrs={'id': 'openid_message'},
immediate=False)
start_response('200 OK', [
('Content-Type', 'text/html; charset=UTF-8')])
return [form_html]
def _handle_response(self, environ, start_response):
oidconsumer = consumer.Consumer({}, None)
host = server_base_url(environ)
url = urlparse.urljoin(host, '%s/challenge/%s' % (
environ['tiddlyweb.config']['server_prefix'],
self.name))
query = {}
for key in environ['tiddlyweb.query']:
query[key] = environ['tiddlyweb.query'][key][0]
info = oidconsumer.complete(query, url)
display_identifier = info.getDisplayIdentifier()
if info.status == consumer.FAILURE and display_identifier:
return self._render_form(environ, start_response,
openid=display_identifier,
message='Verification of %s failed with: %s' % (
display_identifier, info.message))
elif info.status == consumer.SUCCESS:
return self._success(environ, start_response, info)
elif info.status == consumer.CANCEL:
return self._render_form(environ, start_response,
message='You cancelled, try again with something else?')
elif info.status == consumer.SETUP_NEEDED:
if info.setup_url:
message = ('<a href=%s>Setup needed at openid server.</a>'
% info.setup_url)
else:
message = 'More information needed at server'
return self._render_form(environ, start_response,
message=message)
else:
return self._render_form(environ, start_response,
message='Unable to process. Unknown error')
def _success(self, environ, start_response, info):
usersign = info.getDisplayIdentifier()
if info.endpoint.canonicalID:
usersign = info.endpoint.canonicalID
# canonicalize usersign to tiddlyweb form
if usersign.startswith('http'):
usersign = usersign.split('://', 1)[1]
usersign = usersign.rstrip('/')
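# Illustrative result of the canonicalization above (hypothetical identifier):
#   'https://openid.example.org/user/' -> 'openid.example.org/user'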
uri = urlparse.urljoin(server_host_url(environ),
environ['tiddlyweb.query'].get('tiddlyweb_redirect', ['/'])[0])
secret = environ['tiddlyweb.config']['secret']
cookie_age = environ['tiddlyweb.config'].get('cookie_age', None)
cookie_header_string = make_cookie('tiddlyweb_user', usersign,
mac_key=secret, path=self._cookie_path(environ),
expires=cookie_age)
start_response('303 See Other',
[('Location', uri.encode('utf-8')),
('Content-Type', 'text/plain'),
('Set-Cookie', cookie_header_string)])
return [uri]
def _render_form(self, environ, start_response, openid='',
message='', form=''):
redirect = environ['tiddlyweb.query'].get(
'tiddlyweb_redirect', ['/'])[0]
start_response('200 OK', [
('Content-Type', 'text/html')])
environ['tiddlyweb.title'] = 'OpenID Login'
return ["""
<div id='content'>
<div class='message'>%s</div>
<pre>
<form action="" method="POST">
OpenID: <input name="openid" size="60" value="%s"/>
<input type="hidden" name="tiddlyweb_redirect" value="%s" />
<input type="submit" value="submit" />
</form>
</pre>
</div>""" % (message, openid, redirect)]
/pulumi_azure_native-2.5.1a1693590910.tar.gz/pulumi_azure_native-2.5.1a1693590910/pulumi_azure_native/signalrservice/signal_r_private_endpoint_connection.py
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._enums import *
from ._inputs import *
__all__ = ['SignalRPrivateEndpointConnectionArgs', 'SignalRPrivateEndpointConnection']
@pulumi.input_type
class SignalRPrivateEndpointConnectionArgs:
def __init__(__self__, *,
resource_group_name: pulumi.Input[str],
resource_name: pulumi.Input[str],
private_endpoint: Optional[pulumi.Input['PrivateEndpointArgs']] = None,
private_endpoint_connection_name: Optional[pulumi.Input[str]] = None,
private_link_service_connection_state: Optional[pulumi.Input['PrivateLinkServiceConnectionStateArgs']] = None):
"""
The set of arguments for constructing a SignalRPrivateEndpointConnection resource.
:param pulumi.Input[str] resource_group_name: The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal.
:param pulumi.Input[str] resource_name: The name of the resource.
:param pulumi.Input['PrivateEndpointArgs'] private_endpoint: Private endpoint
:param pulumi.Input[str] private_endpoint_connection_name: The name of the private endpoint connection
:param pulumi.Input['PrivateLinkServiceConnectionStateArgs'] private_link_service_connection_state: Connection state of the private endpoint connection
"""
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "resource_name", resource_name)
if private_endpoint is not None:
pulumi.set(__self__, "private_endpoint", private_endpoint)
if private_endpoint_connection_name is not None:
pulumi.set(__self__, "private_endpoint_connection_name", private_endpoint_connection_name)
if private_link_service_connection_state is not None:
pulumi.set(__self__, "private_link_service_connection_state", private_link_service_connection_state)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="resourceName")
def resource_name(self) -> pulumi.Input[str]:
"""
The name of the resource.
"""
return pulumi.get(self, "resource_name")
@resource_name.setter
def resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_name", value)
@property
@pulumi.getter(name="privateEndpoint")
def private_endpoint(self) -> Optional[pulumi.Input['PrivateEndpointArgs']]:
"""
Private endpoint
"""
return pulumi.get(self, "private_endpoint")
@private_endpoint.setter
def private_endpoint(self, value: Optional[pulumi.Input['PrivateEndpointArgs']]):
pulumi.set(self, "private_endpoint", value)
@property
@pulumi.getter(name="privateEndpointConnectionName")
def private_endpoint_connection_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the private endpoint connection
"""
return pulumi.get(self, "private_endpoint_connection_name")
@private_endpoint_connection_name.setter
def private_endpoint_connection_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_endpoint_connection_name", value)
@property
@pulumi.getter(name="privateLinkServiceConnectionState")
def private_link_service_connection_state(self) -> Optional[pulumi.Input['PrivateLinkServiceConnectionStateArgs']]:
"""
Connection state of the private endpoint connection
"""
return pulumi.get(self, "private_link_service_connection_state")
@private_link_service_connection_state.setter
def private_link_service_connection_state(self, value: Optional[pulumi.Input['PrivateLinkServiceConnectionStateArgs']]):
pulumi.set(self, "private_link_service_connection_state", value)
class SignalRPrivateEndpointConnection(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
private_endpoint: Optional[pulumi.Input[pulumi.InputType['PrivateEndpointArgs']]] = None,
private_endpoint_connection_name: Optional[pulumi.Input[str]] = None,
private_link_service_connection_state: Optional[pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_name_: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
A private endpoint connection to an azure resource
Azure REST API version: 2023-02-01. Prior API version in Azure Native 1.x: 2020-05-01
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['PrivateEndpointArgs']] private_endpoint: Private endpoint
:param pulumi.Input[str] private_endpoint_connection_name: The name of the private endpoint connection
:param pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']] private_link_service_connection_state: Connection state of the private endpoint connection
:param pulumi.Input[str] resource_group_name: The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal.
:param pulumi.Input[str] resource_name_: The name of the resource.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SignalRPrivateEndpointConnectionArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
A private endpoint connection to an azure resource
Azure REST API version: 2023-02-01. Prior API version in Azure Native 1.x: 2020-05-01
:param str resource_name: The name of the resource.
:param SignalRPrivateEndpointConnectionArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SignalRPrivateEndpointConnectionArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
private_endpoint: Optional[pulumi.Input[pulumi.InputType['PrivateEndpointArgs']]] = None,
private_endpoint_connection_name: Optional[pulumi.Input[str]] = None,
private_link_service_connection_state: Optional[pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_name_: Optional[pulumi.Input[str]] = None,
__props__=None):
opts = pulumi.ResourceOptions.merge(_utilities.get_resource_opts_defaults(), opts)
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SignalRPrivateEndpointConnectionArgs.__new__(SignalRPrivateEndpointConnectionArgs)
__props__.__dict__["private_endpoint"] = private_endpoint
__props__.__dict__["private_endpoint_connection_name"] = private_endpoint_connection_name
__props__.__dict__["private_link_service_connection_state"] = private_link_service_connection_state
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
if resource_name_ is None and not opts.urn:
raise TypeError("Missing required property 'resource_name_'")
__props__.__dict__["resource_name"] = resource_name_
__props__.__dict__["group_ids"] = None
__props__.__dict__["name"] = None
__props__.__dict__["provisioning_state"] = None
__props__.__dict__["system_data"] = None
__props__.__dict__["type"] = None
alias_opts = pulumi.ResourceOptions(aliases=[pulumi.Alias(type_="azure-native:signalrservice/v20200501:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20200701preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20210401preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20210601preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20210901preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20211001:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20220201:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20220801preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20230201:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20230301preview:SignalRPrivateEndpointConnection"), pulumi.Alias(type_="azure-native:signalrservice/v20230601preview:SignalRPrivateEndpointConnection")])
opts = pulumi.ResourceOptions.merge(opts, alias_opts)
super(SignalRPrivateEndpointConnection, __self__).__init__(
'azure-native:signalrservice:SignalRPrivateEndpointConnection',
resource_name,
__props__,
opts)
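# Illustrative instantiation (a sketch; the resource, group and service names are hypothetical):
#   conn = SignalRPrivateEndpointConnection("myConnection",
#                                           resource_group_name="example-rg",
#                                           resource_name_="example-signalr")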
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'SignalRPrivateEndpointConnection':
"""
Get an existing SignalRPrivateEndpointConnection resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = SignalRPrivateEndpointConnectionArgs.__new__(SignalRPrivateEndpointConnectionArgs)
__props__.__dict__["group_ids"] = None
__props__.__dict__["name"] = None
__props__.__dict__["private_endpoint"] = None
__props__.__dict__["private_link_service_connection_state"] = None
__props__.__dict__["provisioning_state"] = None
__props__.__dict__["system_data"] = None
__props__.__dict__["type"] = None
return SignalRPrivateEndpointConnection(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="groupIds")
def group_ids(self) -> pulumi.Output[Sequence[str]]:
"""
Group IDs
"""
return pulumi.get(self, "group_ids")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the resource.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="privateEndpoint")
def private_endpoint(self) -> pulumi.Output[Optional['outputs.PrivateEndpointResponse']]:
"""
Private endpoint
"""
return pulumi.get(self, "private_endpoint")
@property
@pulumi.getter(name="privateLinkServiceConnectionState")
def private_link_service_connection_state(self) -> pulumi.Output[Optional['outputs.PrivateLinkServiceConnectionStateResponse']]:
"""
Connection state of the private endpoint connection
"""
return pulumi.get(self, "private_link_service_connection_state")
@property
@pulumi.getter(name="provisioningState")
def provisioning_state(self) -> pulumi.Output[str]:
"""
Provisioning state of the resource.
"""
return pulumi.get(self, "provisioning_state")
@property
@pulumi.getter(name="systemData")
def system_data(self) -> pulumi.Output['outputs.SystemDataResponse']:
"""
Metadata pertaining to creation and last modification of the resource.
"""
return pulumi.get(self, "system_data")
@property
@pulumi.getter
def type(self) -> pulumi.Output[str]:
"""
The type of the resource - e.g. "Microsoft.SignalRService/SignalR"
"""
return pulumi.get(self, "type")
|
PypiClean
|
/pymzml-tapir-0.8.0.tar.gz/pymzml-tapir-0.8.0/pymzml/run.py
|
# pymzml
#
# Copyright (C) 2010-2014 T. Bald, J. Barth, A. Niehues, M. Specht, H. Roest, C. Fufezan
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import print_function
import re
import os
import bisect
import codecs
from xml.etree import cElementTree
from collections import defaultdict as ddict
import pymzml.spec
import pymzml.obo
import pymzml.minimum
class RegexPatterns(object):
spectrumIndexPattern = re.compile(
b'(?P<type>(scan=|nativeID="))(?P<nativeID>[0-9]*)">(?P<offset>[0-9]*)</offset>'
)
simIndexPattern = re.compile(b'(?P<type>idRef=")(?P<nativeID>.*)">(?P<offset>[0-9]*)</offset>')
class Reader(object):
"""
.. function:: __init__(
path*
[,noiseThreshold=0.0, extraAccessions=None, MS1_Precision=5e-6, MSn_Precision=20e-6]
)
Initializes an mzML run and returns an iterator.
:param path: path to mzML file. File can be gzipped.
:type path: string
:param extraAccessions: list of additional (accession,fieldName) tuples.
For example, ('MS:1000285',['value']) will extract the "total ion
current" and store it under two keys in the spectrum, i.e.
spectrum["total ion current"] or spectrum['MS:1000285'].
The translated name is extracted from the current OBO file,
hence the name that is defined by the HUPO-PSI consortium is used.
(http://www.psidev.info/).
pymzML comes with an example script queryOBO.py which can be used to look up
the names or MS tags (see: :py:obj:`queryOBO`).
The value, i.e. which XML property has to be extracted, has to be provided
by the user. Multiple values can be used as input, e.g. ('MS:1000016',
['value', 'unitName']) will extract the scan time and its unit.
:type extraAccessions: list of tuples
:param MS1_Precision: measured precision of MS1 spectra
:type MS1_Precision: float
:param MSn_Precision: measured precision of MSn spectra
:type MSn_Precision: float
:param build_index_from_scratch: build index from scratch
:type build_index_from_scratch: boolean
:param file_object: file object or any other iterable stream, this will make
path obsolete, seeking is disabled
:type file_object: File_object like
Example:
>>> run = pymzml.run.Reader("../mzML_example_files/100729_t300_100729172744.mzML.gz",
MS1_Precision = 20e-6)
"""
def __init__(
self,
path=None,
noiseThreshold=0.0,
extraAccessions=None,
MS1_Precision=5e-6,
MSn_Precision=20e-6,
build_index_from_scratch=False,
file_object=None,
obo_version=None,
use_spectra_sanity_check=True,
):
# self.param contains user-specified parsing parameters
self.param = dict()
self.param['noiseThreshold'] = noiseThreshold
self.param['MS1_Precision'] = MS1_Precision
self.param['MSn_Precision'] = MSn_Precision
self.param['accessions'] = {}
# self.info contains information extracted from the mzML file
self.info = dict()
self.info['offsets'] = ddict()
self.info['offsetList'] = []
self.info['referenceableParamGroupList'] = False
self.info['spectrum_count'] = 0
self.info['chromatogram_count'] = 0
self.info['obo_version'] = obo_version
self.info['encoding'] = None
self.MS1_Precision = MS1_Precision
self.elementList = []
# Default stuff
# Can actually be either a spectrum _or_ a chromatogram; the Spectrum
# class supports both
self.spectrum = pymzml.spec.Spectrum(
measuredPrecision=MS1_Precision,
param=self.param,
)
self.spectrum.clear()
assert path is not None or file_object is not None, \
'Must provide either a path or a file object to parse'
self.info['fileObject'], self.info['seekable'] = self.__open_file(
path,
file_object
)
self.info['filename'] = path
if self.info['seekable']:
# Seekable files can use the index for random access
self.seeker = self._build_index(build_index_from_scratch, use_spectra_sanity_check)
self.iter = self.__init_iter()
self.OT = self.__init_obo_translator(extraAccessions)
return
def __determine_file_encoding(self, path):
'''
Determines the mzML XML encoding using the information in the
first line of the mzML file. Falls back to utf-8 otherwise.
'''
mzML_encoding = 'utf-8'
if os.path.exists( path ):
# we might have been initialized with a file object instead of a path;
# in that case no questions about the encoding have to be addressed
# and the stream is not seekable either
sniffer = open(path, 'rb')
header = sniffer.readline()
encodingPattern = re.compile(
b'encoding="(?P<encoding>[A-Za-z0-9-]*)"'
)
match = encodingPattern.search(header)
if match:
mzML_encoding = bytes.decode(
match.group('encoding')
)
sniffer.close()
return mzML_encoding
def _open_file(self, path, given_file_object=None):
return self.__open_file( path, given_file_object=given_file_object)
def __open_file(self, path, given_file_object=None):
# Arbitrary supplied file objects are not seekable
file_object = given_file_object
seekable = False
self.info['encoding'] = self.__determine_file_encoding( path )
if file_object is None:
import codecs
if path.endswith('.gz'):
# Gzipped files are not seekable
import gzip
file_object = codecs.getreader("utf-8")(
gzip.open(path)
)
else:
file_object = codecs.open(
path,
mode = 'r',
encoding = self.info['encoding']
)
seekable = True
return file_object, seekable
def _build_index(self, from_scratch, use_spectra_sanity_check):
"""
.. method:: _build_index(from_scratch, use_spectra_sanity_check)
Builds an index: a list of offsets to which a file pointer can seek
directly to access a particular spectrum or chromatogram without
parsing the entire file.
:param from_scratch: Whether or not to force building the index from
scratch, by parsing the file, if no existing
index can be found.
:type from_scratch: A boolean
:param use_spectra_sanity_check: Whether or not to assume all data are
spectra and follow the (scan=|nativeID=")
pattern. Disable this if you have
chromatograms or spectra with
different ids.
:type use_spectra_sanity_check: A boolean
:returns: A file-like object used to access the indexed content by
seeking to a particular offset for the file.
"""
# Declare the pre-seeker
seeker = open(self.info['filename'], 'rb')
# Reading last 1024 bytes to find chromatogram Pos and SpectrumIndex Pos
indexListOffsetPattern = re.compile(
b'<indexListOffset>(?P<indexListOffset>[0-9]*)</indexListOffset>'
)
chromatogramOffsetPattern = re.compile(
b'(?P<WTF>nativeID|idRef)="TIC">(?P<offset>[0-9]*)</offset'
)
self.info['offsets']['indexList'] = None
self.info['offsets']['TIC'] = None
seeker.seek(0, 2)
spectrumIndexPattern = RegexPatterns.spectrumIndexPattern
for _ in range(1, 10): # max 10kbyte
# some converters fail to write a correct index
# one failure mode we found:
# a) every offset has the same value (a silent failure)
sanity_check_set = set()
try:
seeker.seek(-1024 * _, 1)
except:
break
# File is smaller than 10kbytes ...
for line in seeker:
match = chromatogramOffsetPattern.search(line)
if match:
self.info['offsets']['TIC'] = int(
bytes.decode(match.group('offset'))
)
match_spec = spectrumIndexPattern.search(line)
if match_spec is not None:
spec_byte_offset = int(
bytes.decode(match_spec.group('offset'))
)
sanity_check_set.add(spec_byte_offset)
match = indexListOffsetPattern.search(line)
if match:
self.info['offsets']['indexList'] = int(
bytes.decode(match.group('indexListOffset'))
)
# break
if self.info['offsets']['indexList'] is not None and \
self.info['offsets']['TIC'] is not None:
break
if use_spectra_sanity_check and len(sanity_check_set) <= 2:
# print( 'Convert error obvious ... ')
self.info['offsets']['indexList'] = None
if self.info['offsets']['indexList'] is None:
# fall back to non-seekable
self.info['seekable'] = False
if from_scratch:
self._build_index_from_scratch(seeker)
elif self.info['offsets']['TIC'] is not None and \
self.info['offsets']['TIC'] > os.path.getsize(self.info['filename']):
self.info['seekable'] = False
else:
# Jump to the index list and read all spectrum offsets
seeker.seek(self.info['offsets']['indexList'], 0)
spectrumIndexPattern = RegexPatterns.spectrumIndexPattern
simIndexPattern = RegexPatterns.simIndexPattern
# NOTE: this might again be different in other mzML versions!!
# 1.1 >> small_zlib.pwiz.1.1.mzML:
# <offset idRef="controllerType=0 controllerNumber=1 scan=1">4363</offset>
# 1.0 >>
# <offset idRef="S16004" nativeID="16004">236442042</offset>
# <offset idRef="SIM SIC 651.5">330223452</offset>\n'
for line in seeker:
match_spec = spectrumIndexPattern.search(line)
if match_spec and match_spec.group('nativeID') == b'':
match_spec = None
match_sim = simIndexPattern.search(line)
if match_spec:
offset = int(bytes.decode(match_spec.group('offset')))
nativeID = int(bytes.decode(match_spec.group('nativeID')))
self.info['offsets'][nativeID] = offset
self.info['offsetList'].append(offset)
elif match_sim:
offset = int(bytes.decode(match_sim.group('offset')))
nativeID = bytes.decode(match_sim.group('nativeID'))
try:
nativeID = int(nativeID)
except:
pass
self.info['offsets'][nativeID] = offset
self.info['offsetList'].append(offset)
# opening seeker in normal mode again
seeker.close()
seeker = codecs.open(
self.info['filename'],
mode = 'r',
encoding = self.info['encoding']
)
# seeker = open(self.info['filename'], 'r')
return seeker
def _build_index_from_scratch(self, seeker):
"""Build an index of spectra/chromatogram data with offsets by parsing the file."""
def get_data_indices(fh, chunksize=8192, lookback_size=100):
"""Get a dictionary with binary file indices of spectra and
chromatograms in an mzML file.
Will parse quickly through the file and find all occurrences of
<chromatogram ... id="..." and <spectrum ... id="..." using a
regex.
We don't use an XML parser here because we need to know the
exact location of the file pointer, which is usually not possible
with common XML parsers.
"""
chrom_positions = {}
spec_positions = {}
chromcnt = 0
speccnt = 0
# regexes to be used
chromexp = re.compile(b"<\s*chromatogram[^>]*id=\"([^\"]*)\"")
chromcntexp = re.compile(b"<\s*chromatogramList\s*count=\"([^\"]*)\"")
specexp = re.compile(b"<\s*spectrum[^>]*id=\"([^\"]*)\"")
speccntexp = re.compile(b"<\s*spectrumList\s*count=\"([^\"]*)\"")
# go to start of file
fh.seek(0)
prev_chunk = ""
while True:
# read a chunk of data
offset = fh.tell()
chunk = fh.read(chunksize)
if not chunk:
break
# append a part of the previous chunk since the read may have cut a tag
# in the middle (to make sure we don't miss anything, prev_chunk
# is analyzed twice).
if len(prev_chunk) > 0:
chunk = prev_chunk[-lookback_size:] + chunk
offset -= lookback_size
prev_chunk = chunk
# find all occurrences of the expressions and add them to the dictionary
for m in chromexp.finditer(chunk):
chrom_positions[m.group(1).decode('utf-8')] = offset + m.start()
for m in specexp.finditer(chunk):
spec_positions[m.group(1).decode('utf-8')] = offset + m.start()
# also look for the total count of chromatograms and spectra
# -> must be the same as the content of our dict!
m = chromcntexp.search(chunk)
if m is not None:
chromcnt = int(m.group(1))
m = speccntexp.search(chunk)
if m is not None:
speccnt = int(m.group(1))
# Check if everything is ok (e.g. we found the right number of
# chromatograms and spectra) and then return the dictionary.
if (chromcnt == len(chrom_positions) and speccnt == len(spec_positions)):
positions = {}
positions.update(chrom_positions)
positions.update(spec_positions)
# return positions # return only once in function leaves my brain sane :)
self.info['spectrum_count'] = speccnt
self.info['chromatogram_count'] = chromcnt
else:
positions = None
return positions
indices = get_data_indices(seeker)
if indices is not None:
self.info['offsets'].update(indices)
self.info['offsetList'].extend(indices.values())
# make sure the list is sorted (for bisect)
self.info['offsetList'] = sorted(self.info['offsetList'])
self.info['seekable'] = True
return
def __init_iter(self):
"""
.. method:: __init_iter()
initializes the iterator for the mzml xml parsing and moves it to the
first relevant item.
:returns: an iterator.
"""
# declare the iter
mzml_iter = iter(cElementTree.iterparse(
self.info['fileObject'],
events=(b'start', b'end')
)) # NOTE: end might be sufficient
# Move iter to spectrumList / chromatogramList, setting the version
# along the way
while True:
event, element = next(mzml_iter)
if element.tag.endswith('}mzML'):
if 'version' in element.attrib and len(element.attrib['version']) > 0:
self.info['mzmlVersion'] = element.attrib['version']
else:
s = element.attrib['{http://www.w3.org/2001/XMLSchema-instance}schemaLocation']
self.info['mzmlVersion'] = re.search(r'[0-9]*\.[0-9]*\.[0-9]*', s).group()
elif element.tag.endswith('}cv'):
if not self.info['obo_version'] and element.attrib['id'] == 'MS':
self.info['obo_version'] = element.attrib.get('version', '1.1.0')
elif element.tag.endswith('}referenceableParamGroupList'):
self.info['referenceableParamGroupList'] = True
self.info['referenceableParamGroupListElement'] = element
elif element.tag.endswith('}spectrumList'):
speccnt = element.attrib.get('count')
self.info['spectrum_count'] = int(speccnt) if speccnt else None
break
elif element.tag.endswith('}chromatogramList'):
chromcnt = element.attrib.get('count')
self.info['chromatogram_count'] = int(chromcnt) if chromcnt else None
break
else:
pass
return mzml_iter
def __init_obo_translator(self, extraAccessions):
"""
.. method:: __init_obo_translator(extraAccessions)
Initializes the OBO translator of this parser
:param extraAccessions: list of additional (accession,fieldName) tuples,
from the constructor
:type extraAccessions: list of tuples
:returns: A pymzml.obo.oboTranslator object
"""
# parse the OBO file and register the minimum required MS tags defined in minimum.py ...
obo_translator = pymzml.obo.oboTranslator(version=self.info['obo_version'])
for minimumMS, ListOfvaluesToExtract in pymzml.minimum.MIN_REQ:
self.param['accessions'][minimumMS] = {
'valuesToExtract': ListOfvaluesToExtract,
'name': obo_translator[minimumMS],
'values': []
}
# parse extra accessions ...
if extraAccessions is not None:
for accession, fieldIdentifiers in extraAccessions:
if accession not in self.param['accessions'].keys():
self.param['accessions'][accession] = {
'valuesToExtract': [],
'name': obo_translator[accession],
'values': []
}
for valueToExtract in fieldIdentifiers:
if valueToExtract not in self.param['accessions'][accession]['valuesToExtract']:
self.param['accessions'][accession]['valuesToExtract'].append(
valueToExtract
)
return obo_translator
def __iter__(self):
return self
def __next__(self):
""" The python 2.6+ iterator """
return self.next()
def next(self):
"""
Iterator for the class :py:class:`Reader`. Iterates over all of the spectra
or chromatograms in the file.
:return: a spectrum, stored in run.spectrum.
:rtype: :py:class:`spec.Spectrum`:
Example:
>>> for spectrum in run:
... print(spectrum['id'], end='\\r')
"""
while True:
event, element = next(self.iter, ('END', 'END'))
# Error? check cElementTree; conversion of data to 32bit-float mzml
# files might help
# Stop iteration when parsing is done
if event == 'END':
raise StopIteration
if (
(element.tag.endswith('}spectrum') or element.tag.endswith('}chromatogram'))
and event == b'end'
):
if self.info['referenceableParamGroupList']:
self.spectrum.initFromTreeObjectWithRef(
element,
self.info['referenceableParamGroupListElement']
)
else:
self.spectrum.initFromTreeObject(element)
try:
self.elementList[-1].clear()
except:
pass
self.elementList.append(element)
return self.spectrum
def __getitem__(self, value):
'''
Random access to spectra if the mzML file is indexed,
not compressed and not truncated.
Example:
>>> spectrum_with_nativeID_100 = msrun[100]
'''
answer = None
if self.info['seekable'] is True:
if len(self.info['offsets']) == 0:
raise IOError("File does support random access: index list missing...")
if value in self.info['offsets']:
startPos = self.info['offsets'][value]
endPos_index = bisect.bisect_right(
self.info['offsetList'],
self.info['offsets'][value]
)
if endPos_index == len(self.info['offsetList']):
endPos = os.path.getsize(self.info['filename'])
else:
endPos = self.info['offsetList'][endPos_index]
print (" will start at position", startPos)
self.seeker.seek(startPos, 0)
data = self.seeker.read(endPos - self.info['offsets'][value])
try:
print (" will try to get from tree obje")
self.spectrum.initFromTreeObject(cElementTree.fromstring(data))
except:
print (" has failed, will try to get using closing tag ")
# the chunk may still contain closing tags such as </mzML>, </run> and/or </spectrumList>
startingTag = data.split()[0]
print ("starting tag", startingTag)
# print ("data ", data)
print ("looking for", '</' + startingTag[1:] + '>')
stopIndex = data.index('</' + startingTag[1:] + '>')
print ("found ", stopIndex)
self.spectrum.initFromTreeObject(
cElementTree.fromstring(data[:stopIndex + len(startingTag) + 2])
)
answer = self.spectrum
else:
# Reopen the file from the beginning if possible
force_seeking = self.info.get('force_seeking', False)
if force_seeking is False:
self.info['fileObject'].close()
assert self.info['filename'], \
'Must specify either filename or index for random spectrum access'
self.info['fileObject'], _ = self.__open_file(self.info['filename'])
self.iter = self.__init_iter()
for spec in self:
if spec['id'] == value:
answer = spec
break
if answer is None:
raise KeyError("Run does not contain spec with native ID {0}".format(value))
else:
return answer
def getSpectrumCount(self):
return self.info['spectrum_count']
def getChromatogramCount(self):
return self.info['chromatogram_count']
class Writer(object):
"""
.. function:: __init__(filename* ,run* [, overwrite = boolean])
Initializes an mzML writer (beta stage).
:param path: filename for the new mzML file.
:type path: string
:param run: Currently a pymzml.run.Reader object is required since we do
not write the header by ourselves yet.
:type run: pymzml.run.Reader
:param overwrite: force the re-initialization of mzML file, even if file exists.
:type overwrite: boolean
At the moment no index is written.
Example:
>>> run = pymzml.run.Reader(
... '../mzML_example_files/100729_t300_100729172744.mzML',
... MS1_Precision=5e-6,
... )
>>> run2 = pymzml.run.Writer(filename='write_test.mzML', run=run , overwrite=True)
>>> spec = run[1000]
>>> run2.addSpec(spec)
>>> run2.save()
"""
def __init__(self, filename=None, run=None, overwrite=False):
cElementTree.register_namespace("", "http://psi.hupo.org/ms/mzml")
self.filename = filename
self.lookup = {}
self.newTree = None
self.TreeBuilder = cElementTree.TreeBuilder()
self.run = run
self.info = {'counters': ddict(int)}
if self.run.info['filename'].endswith('.gz'):
import gzip
import codecs
io = codecs.getreader("utf-8")(gzip.open(self.run.info['filename']))
else:
io = open(self.run.info['filename'], 'r')
#read the rest as original file
input_xml_string = ''
pymzml_tag_written = False
#open again to read as text!
for line in open(self.run.info['filename'], 'r').readlines():
if 'indexedmzML' in line:
# writing of indexed mzML is not possible at the moment
continue
if 'run' in line:
# the run is appended from the original parser to avoid messing
# with the new xml tree, we break before the run data starts
break
input_xml_string += line
if 'softwareList' in line and pymzml_tag_written is False:
addon = cElementTree.Element(
'software',
{
'id' : 'pymzML',
'version' : "0.7.6"
}
)
cElementTree.SubElement(
addon,
'cvParam',
{
'accession' : 'MS:1000531',
'cvRef' : 'MS',
'name' : 'pymzML Writer',
'version' : '0.7.6',
}
)
new_line = cElementTree.tostring(addon, encoding='utf-8')
input_xml_string += new_line
pymzml_tag_written = True
input_xml_string += '</mzML>\n'
self.newTree = cElementTree.fromstring(input_xml_string)
for event, element in cElementTree.iterparse(io, events=(b'start', b'end')):
if event ==b'start':
if element.tag.endswith('}run'):
self.lookup['run'] = cElementTree.Element(element.tag, element.attrib)
if element.tag.endswith('}spectrumList'):
self.lookup['spectrumList'] = \
cElementTree.Element(element.tag, element.attrib)
self.lookup['spectrumIndeces'] = \
cElementTree.Element('index', {'name': 'spectrum'})
break
return
def addSpec(self, spec):
self._addTree(spec.deRef(), typeOfSpec='spectrum')
return
def addChromatogram(self, spec):
self._addTree(spec.deRef(), typeOfSpec='chromatogram')
return
def _addTree(self, spec, typeOfSpec=None):
if '{0}List'.format(typeOfSpec) not in self.lookup:
self.lookup['{0}List'.format(typeOfSpec)] = \
cElementTree.Element('{0}List'.format(typeOfSpec), {'count': 0})
self.lookup['{0}Indeces'.format(typeOfSpec)] = \
cElementTree.Element('index', {'name': typeOfSpec})
self.lookup[typeOfSpec + 'List'].append(spec._xmlTree)
offset = cElementTree.Element('offset')
offset.text = '0'
offset.attrib = {'idRef': 'NaN', 'nativeID': str(spec['id'])}
self.lookup[typeOfSpec + 'Indeces'].append(offset)
self.info['counters'][typeOfSpec] += 1
return
def save(self):
for typeOfSpec in ['spectrum', 'chromatogram']:
if typeOfSpec + 'List' in self.lookup.keys():
self.lookup['{0}List'.format(typeOfSpec)].set(
'count', str(self.info['counters'][typeOfSpec]),
)
if typeOfSpec == 'spectrum':
self.lookup['{0}List'.format(typeOfSpec)].set(
'defaultDataProcessingRef',
"pwiz_Reader_Thermo_conversion",
)
self.lookup['run'].append(self.lookup[typeOfSpec + 'List'])
self.newTree.append(
self.lookup['run']
)
# IndexList = cElementTree.Element('IndexList', {
# 'count': str(len(self.info['counters'].keys()))
# })
# for typeOfSpec in ['spectrum', 'chromatogram']:
# if typeOfSpec + 'Indeces' in self.lookup.keys():
# IndexList.append(self.lookup['{0}Indeces'.format(typeOfSpec)])
# self.newTree.append(IndexList)
self.prettyXMLformater(
self.newTree
)
self.xmlTree = cElementTree.ElementTree(
self.newTree
)
self.xmlTree.write(
self.filename,
encoding = "utf-8",
xml_declaration = True
)
return
def prettyXMLformater(self, element, level=0):
# Modified version from
# http://infix.se/2007/02/06/gentlemen-indent-your-xml
# which is a modified version of
# http://effbot.org/zone/element-lib.htm#prettyprint
i = '\n{0}'.format(level * ' ')
if len(element):
if not element.text or not element.text.strip():
element.text = i + ' '
for e in element:
self.prettyXMLformater(e, level + 1)
if not e.tail or not e.tail.strip():
e.tail = i + ' '
if not e.tail or not e.tail.strip():
e.tail = i
else:
if level and (not element.tail or not element.tail.strip()):
element.tail = i
return
if __name__ == '__main__':
print(__doc__)
|
PypiClean
|
/react-frontend-20230406083236.tar.gz/react-frontend-20230406083236/react_frontend/083278ec.js
|
"use strict";(self.webpackChunkreact_frontend=self.webpackChunkreact_frontend||[]).push([[4724],{34724:function(t,e,r){r.r(e);var n,i,o,a,c=r(7599),s=r(46323),l=r(38513),u=r(18394),f=r(63226),d=r(2733),p=(r(68336),r(37662),r(63383)),h=r(84322),y=r(73132),v=r(11629);function m(t){return m="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},m(t)}function g(t,e){return e||(e=t.slice(0)),Object.freeze(Object.defineProperties(t,{raw:{value:Object.freeze(e)}}))}function b(){b=function(){return t};var t={},e=Object.prototype,r=e.hasOwnProperty,n=Object.defineProperty||function(t,e,r){t[e]=r.value},i="function"==typeof Symbol?Symbol:{},o=i.iterator||"@@iterator",a=i.asyncIterator||"@@asyncIterator",c=i.toStringTag||"@@toStringTag";function s(t,e,r){return Object.defineProperty(t,e,{value:r,enumerable:!0,configurable:!0,writable:!0}),t[e]}try{s({},"")}catch(A){s=function(t,e,r){return t[e]=r}}function l(t,e,r,i){var o=e&&e.prototype instanceof d?e:d,a=Object.create(o.prototype),c=new _(i||[]);return n(a,"_invoke",{value:x(t,r,c)}),a}function u(t,e,r){try{return{type:"normal",arg:t.call(e,r)}}catch(A){return{type:"throw",arg:A}}}t.wrap=l;var f={};function d(){}function p(){}function h(){}var y={};s(y,o,(function(){return this}));var v=Object.getPrototypeOf,g=v&&v(v(C([])));g&&g!==e&&r.call(g,o)&&(y=g);var w=h.prototype=d.prototype=Object.create(y);function k(t){["next","throw","return"].forEach((function(e){s(t,e,(function(t){return this._invoke(e,t)}))}))}function E(t,e){function i(n,o,a,c){var s=u(t[n],t,o);if("throw"!==s.type){var l=s.arg,f=l.value;return f&&"object"==m(f)&&r.call(f,"__await")?e.resolve(f.__await).then((function(t){i("next",t,a,c)}),(function(t){i("throw",t,a,c)})):e.resolve(f).then((function(t){l.value=t,a(l)}),(function(t){return i("throw",t,a,c)}))}c(s.arg)}var o;n(this,"_invoke",{value:function(t,r){function n(){return new e((function(e,n){i(t,r,e,n)}))}return o=o?o.then(n,n):n()}})}function x(t,e,r){var n="suspendedStart";return function(i,o){if("executing"===n)throw new Error("Generator is already running");if("completed"===n){if("throw"===i)throw o;return j()}for(r.method=i,r.arg=o;;){var a=r.delegate;if(a){var c=O(a,r);if(c){if(c===f)continue;return c}}if("next"===r.method)r.sent=r._sent=r.arg;else if("throw"===r.method){if("suspendedStart"===n)throw n="completed",r.arg;r.dispatchException(r.arg)}else"return"===r.method&&r.abrupt("return",r.arg);n="executing";var s=u(t,e,r);if("normal"===s.type){if(n=r.done?"completed":"suspendedYield",s.arg===f)continue;return{value:s.arg,done:r.done}}"throw"===s.type&&(n="completed",r.method="throw",r.arg=s.arg)}}}function O(t,e){var r=e.method,n=t.iterator[r];if(void 0===n)return e.delegate=null,"throw"===r&&t.iterator.return&&(e.method="return",e.arg=void 0,O(t,e),"throw"===e.method)||"return"!==r&&(e.method="throw",e.arg=new TypeError("The iterator does not provide a '"+r+"' method")),f;var i=u(n,t.iterator,e.arg);if("throw"===i.type)return e.method="throw",e.arg=i.arg,e.delegate=null,f;var o=i.arg;return o?o.done?(e[t.resultName]=o.value,e.next=t.nextLoc,"return"!==e.method&&(e.method="next",e.arg=void 0),e.delegate=null,f):o:(e.method="throw",e.arg=new TypeError("iterator result is not an object"),e.delegate=null,f)}function L(t){var e={tryLoc:t[0]};1 in t&&(e.catchLoc=t[1]),2 in t&&(e.finallyLoc=t[2],e.afterLoc=t[3]),this.tryEntries.push(e)}function P(t){var 
e=t.completion||{};e.type="normal",delete e.arg,t.completion=e}function _(t){this.tryEntries=[{tryLoc:"root"}],t.forEach(L,this),this.reset(!0)}function C(t){if(t){var e=t[o];if(e)return e.call(t);if("function"==typeof t.next)return t;if(!isNaN(t.length)){var n=-1,i=function e(){for(;++n<t.length;)if(r.call(t,n))return e.value=t[n],e.done=!1,e;return e.value=void 0,e.done=!0,e};return i.next=i}}return{next:j}}function j(){return{value:void 0,done:!0}}return p.prototype=h,n(w,"constructor",{value:h,configurable:!0}),n(h,"constructor",{value:p,configurable:!0}),p.displayName=s(h,c,"GeneratorFunction"),t.isGeneratorFunction=function(t){var e="function"==typeof t&&t.constructor;return!!e&&(e===p||"GeneratorFunction"===(e.displayName||e.name))},t.mark=function(t){return Object.setPrototypeOf?Object.setPrototypeOf(t,h):(t.__proto__=h,s(t,c,"GeneratorFunction")),t.prototype=Object.create(w),t},t.awrap=function(t){return{__await:t}},k(E.prototype),s(E.prototype,a,(function(){return this})),t.AsyncIterator=E,t.async=function(e,r,n,i,o){void 0===o&&(o=Promise);var a=new E(l(e,r,n,i),o);return t.isGeneratorFunction(r)?a:a.next().then((function(t){return t.done?t.value:a.next()}))},k(w),s(w,c,"Generator"),s(w,o,(function(){return this})),s(w,"toString",(function(){return"[object Generator]"})),t.keys=function(t){var e=Object(t),r=[];for(var n in e)r.push(n);return r.reverse(),function t(){for(;r.length;){var n=r.pop();if(n in e)return t.value=n,t.done=!1,t}return t.done=!0,t}},t.values=C,_.prototype={constructor:_,reset:function(t){if(this.prev=0,this.next=0,this.sent=this._sent=void 0,this.done=!1,this.delegate=null,this.method="next",this.arg=void 0,this.tryEntries.forEach(P),!t)for(var e in this)"t"===e.charAt(0)&&r.call(this,e)&&!isNaN(+e.slice(1))&&(this[e]=void 0)},stop:function(){this.done=!0;var t=this.tryEntries[0].completion;if("throw"===t.type)throw t.arg;return this.rval},dispatchException:function(t){if(this.done)throw t;var e=this;function n(r,n){return a.type="throw",a.arg=t,e.next=r,n&&(e.method="next",e.arg=void 0),!!n}for(var i=this.tryEntries.length-1;i>=0;--i){var o=this.tryEntries[i],a=o.completion;if("root"===o.tryLoc)return n("end");if(o.tryLoc<=this.prev){var c=r.call(o,"catchLoc"),s=r.call(o,"finallyLoc");if(c&&s){if(this.prev<o.catchLoc)return n(o.catchLoc,!0);if(this.prev<o.finallyLoc)return n(o.finallyLoc)}else if(c){if(this.prev<o.catchLoc)return n(o.catchLoc,!0)}else{if(!s)throw new Error("try statement without catch or finally");if(this.prev<o.finallyLoc)return n(o.finallyLoc)}}}},abrupt:function(t,e){for(var n=this.tryEntries.length-1;n>=0;--n){var i=this.tryEntries[n];if(i.tryLoc<=this.prev&&r.call(i,"finallyLoc")&&this.prev<i.finallyLoc){var o=i;break}}o&&("break"===t||"continue"===t)&&o.tryLoc<=e&&e<=o.finallyLoc&&(o=null);var a=o?o.completion:{};return a.type=t,a.arg=e,o?(this.method="next",this.next=o.finallyLoc,f):this.complete(a)},complete:function(t,e){if("throw"===t.type)throw t.arg;return"break"===t.type||"continue"===t.type?this.next=t.arg:"return"===t.type?(this.rval=this.arg=t.arg,this.method="return",this.next="end"):"normal"===t.type&&e&&(this.next=e),f},finish:function(t){for(var e=this.tryEntries.length-1;e>=0;--e){var r=this.tryEntries[e];if(r.finallyLoc===t)return this.complete(r.completion,r.afterLoc),P(r),f}},catch:function(t){for(var e=this.tryEntries.length-1;e>=0;--e){var r=this.tryEntries[e];if(r.tryLoc===t){var n=r.completion;if("throw"===n.type){var i=n.arg;P(r)}return i}}throw new Error("illegal catch 
attempt")},delegateYield:function(t,e,r){return this.delegate={iterator:C(t),resultName:e,nextLoc:r},"next"===this.method&&(this.arg=void 0),f}},t}function w(t,e,r,n,i,o,a){try{var c=t[o](a),s=c.value}catch(l){return void r(l)}c.done?e(s):Promise.resolve(s).then(n,i)}function k(t,e){for(var r=0;r<e.length;r++){var n=e[r];n.enumerable=n.enumerable||!1,n.configurable=!0,"value"in n&&(n.writable=!0),Object.defineProperty(t,S(n.key),n)}}function E(t,e){return E=Object.setPrototypeOf?Object.setPrototypeOf.bind():function(t,e){return t.__proto__=e,t},E(t,e)}function x(t){var e=function(){if("undefined"==typeof Reflect||!Reflect.construct)return!1;if(Reflect.construct.sham)return!1;if("function"==typeof Proxy)return!0;try{return Boolean.prototype.valueOf.call(Reflect.construct(Boolean,[],(function(){}))),!0}catch(t){return!1}}();return function(){var r,n=M(t);if(e){var i=M(this).constructor;r=Reflect.construct(n,arguments,i)}else r=n.apply(this,arguments);return function(t,e){if(e&&("object"===m(e)||"function"==typeof e))return e;if(void 0!==e)throw new TypeError("Derived constructors may only return object or undefined");return O(t)}(this,r)}}function O(t){if(void 0===t)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return t}function L(){L=function(){return t};var t={elementsDefinitionOrder:[["method"],["field"]],initializeInstanceElements:function(t,e){["method","field"].forEach((function(r){e.forEach((function(e){e.kind===r&&"own"===e.placement&&this.defineClassElement(t,e)}),this)}),this)},initializeClassElements:function(t,e){var r=t.prototype;["method","field"].forEach((function(n){e.forEach((function(e){var i=e.placement;if(e.kind===n&&("static"===i||"prototype"===i)){var o="static"===i?t:r;this.defineClassElement(o,e)}}),this)}),this)},defineClassElement:function(t,e){var r=e.descriptor;if("field"===e.kind){var n=e.initializer;r={enumerable:r.enumerable,writable:r.writable,configurable:r.configurable,value:void 0===n?void 0:n.call(t)}}Object.defineProperty(t,e.key,r)},decorateClass:function(t,e){var r=[],n=[],i={static:[],prototype:[],own:[]};if(t.forEach((function(t){this.addElementPlacement(t,i)}),this),t.forEach((function(t){if(!C(t))return r.push(t);var e=this.decorateElement(t,i);r.push(e.element),r.push.apply(r,e.extras),n.push.apply(n,e.finishers)}),this),!e)return{elements:r,finishers:n};var o=this.decorateConstructor(r,e);return n.push.apply(n,o.finishers),o.finishers=n,o},addElementPlacement:function(t,e,r){var n=e[t.placement];if(!r&&-1!==n.indexOf(t.key))throw new TypeError("Duplicated element ("+t.key+")");n.push(t.key)},decorateElement:function(t,e){for(var r=[],n=[],i=t.decorators,o=i.length-1;o>=0;o--){var a=e[t.placement];a.splice(a.indexOf(t.key),1);var c=this.fromElementDescriptor(t),s=this.toElementFinisherExtras((0,i[o])(c)||c);t=s.element,this.addElementPlacement(t,e),s.finisher&&n.push(s.finisher);var l=s.extras;if(l){for(var u=0;u<l.length;u++)this.addElementPlacement(l[u],e);r.push.apply(r,l)}}return{element:t,finishers:n,extras:r}},decorateConstructor:function(t,e){for(var r=[],n=e.length-1;n>=0;n--){var i=this.fromClassDescriptor(t),o=this.toClassDescriptor((0,e[n])(i)||i);if(void 0!==o.finisher&&r.push(o.finisher),void 0!==o.elements){t=o.elements;for(var a=0;a<t.length-1;a++)for(var c=a+1;c<t.length;c++)if(t[a].key===t[c].key&&t[a].placement===t[c].placement)throw new TypeError("Duplicated element ("+t[a].key+")")}}return{elements:t,finishers:r}},fromElementDescriptor:function(t){var 
e={kind:t.kind,key:t.key,placement:t.placement,descriptor:t.descriptor};return Object.defineProperty(e,Symbol.toStringTag,{value:"Descriptor",configurable:!0}),"field"===t.kind&&(e.initializer=t.initializer),e},toElementDescriptors:function(t){var e;if(void 0!==t)return(e=t,function(t){if(Array.isArray(t))return t}(e)||function(t){if("undefined"!=typeof Symbol&&null!=t[Symbol.iterator]||null!=t["@@iterator"])return Array.from(t)}(e)||function(t,e){if(t){if("string"==typeof t)return T(t,e);var r=Object.prototype.toString.call(t).slice(8,-1);return"Object"===r&&t.constructor&&(r=t.constructor.name),"Map"===r||"Set"===r?Array.from(t):"Arguments"===r||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r)?T(t,e):void 0}}(e)||function(){throw new TypeError("Invalid attempt to destructure non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()).map((function(t){var e=this.toElementDescriptor(t);return this.disallowProperty(t,"finisher","An element descriptor"),this.disallowProperty(t,"extras","An element descriptor"),e}),this)},toElementDescriptor:function(t){var e=String(t.kind);if("method"!==e&&"field"!==e)throw new TypeError('An element descriptor\'s .kind property must be either "method" or "field", but a decorator created an element descriptor with .kind "'+e+'"');var r=S(t.key),n=String(t.placement);if("static"!==n&&"prototype"!==n&&"own"!==n)throw new TypeError('An element descriptor\'s .placement property must be one of "static", "prototype" or "own", but a decorator created an element descriptor with .placement "'+n+'"');var i=t.descriptor;this.disallowProperty(t,"elements","An element descriptor");var o={kind:e,key:r,placement:n,descriptor:Object.assign({},i)};return"field"!==e?this.disallowProperty(t,"initializer","A method descriptor"):(this.disallowProperty(i,"get","The property descriptor of a field descriptor"),this.disallowProperty(i,"set","The property descriptor of a field descriptor"),this.disallowProperty(i,"value","The property descriptor of a field descriptor"),o.initializer=t.initializer),o},toElementFinisherExtras:function(t){return{element:this.toElementDescriptor(t),finisher:A(t,"finisher"),extras:this.toElementDescriptors(t.extras)}},fromClassDescriptor:function(t){var e={kind:"class",elements:t.map(this.fromElementDescriptor,this)};return Object.defineProperty(e,Symbol.toStringTag,{value:"Descriptor",configurable:!0}),e},toClassDescriptor:function(t){var e=String(t.kind);if("class"!==e)throw new TypeError('A class descriptor\'s .kind property must be "class", but a decorator created a class descriptor with .kind "'+e+'"');this.disallowProperty(t,"key","A class descriptor"),this.disallowProperty(t,"placement","A class descriptor"),this.disallowProperty(t,"descriptor","A class descriptor"),this.disallowProperty(t,"initializer","A class descriptor"),this.disallowProperty(t,"extras","A class descriptor");var r=A(t,"finisher");return{elements:this.toElementDescriptors(t.elements),finisher:r}},runClassFinishers:function(t,e){for(var r=0;r<e.length;r++){var n=(0,e[r])(t);if(void 0!==n){if("function"!=typeof n)throw new TypeError("Finishers must return a constructor.");t=n}}return t},disallowProperty:function(t,e,r){if(void 0!==t[e])throw new TypeError(r+" can't have a ."+e+" property.")}};return t}function P(t){var 
e,r=S(t.key);"method"===t.kind?e={value:t.value,writable:!0,configurable:!0,enumerable:!1}:"get"===t.kind?e={get:t.value,configurable:!0,enumerable:!1}:"set"===t.kind?e={set:t.value,configurable:!0,enumerable:!1}:"field"===t.kind&&(e={configurable:!0,writable:!0,enumerable:!0});var n={kind:"field"===t.kind?"field":"method",key:r,placement:t.static?"static":"field"===t.kind?"own":"prototype",descriptor:e};return t.decorators&&(n.decorators=t.decorators),"field"===t.kind&&(n.initializer=t.value),n}function _(t,e){void 0!==t.descriptor.get?e.descriptor.get=t.descriptor.get:e.descriptor.set=t.descriptor.set}function C(t){return t.decorators&&t.decorators.length}function j(t){return void 0!==t&&!(void 0===t.value&&void 0===t.writable)}function A(t,e){var r=t[e];if(void 0!==r&&"function"!=typeof r)throw new TypeError("Expected '"+e+"' to be a function");return r}function S(t){var e=function(t,e){if("object"!==m(t)||null===t)return t;var r=t[Symbol.toPrimitive];if(void 0!==r){var n=r.call(t,e||"default");if("object"!==m(n))return n;throw new TypeError("@@toPrimitive must return a primitive value.")}return("string"===e?String:Number)(t)}(t,"string");return"symbol"===m(e)?e:String(e)}function T(t,e){(null==e||e>t.length)&&(e=t.length);for(var r=0,n=new Array(e);r<e;r++)n[r]=t[r];return n}function D(){return D="undefined"!=typeof Reflect&&Reflect.get?Reflect.get.bind():function(t,e,r){var n=function(t,e){for(;!Object.prototype.hasOwnProperty.call(t,e)&&null!==(t=M(t)););return t}(t,e);if(n){var i=Object.getOwnPropertyDescriptor(n,e);return i.get?i.get.call(arguments.length<3?t:r):i.value}},D.apply(this,arguments)}function M(t){return M=Object.setPrototypeOf?Object.getPrototypeOf.bind():function(t){return t.__proto__||Object.getPrototypeOf(t)},M(t)}var z={moisture:"M12,3.25C12,3.25 6,10 6,14C6,17.32 8.69,20 12,20A6,6 0 0,0 18,14C18,10 12,3.25 12,3.25M14.47,9.97L15.53,11.03L9.53,17.03L8.47,15.97M9.75,10A1.25,1.25 0 0,1 11,11.25A1.25,1.25 0 0,1 9.75,12.5A1.25,1.25 0 0,1 8.5,11.25A1.25,1.25 0 0,1 9.75,10M14.25,14.5A1.25,1.25 0 0,1 15.5,15.75A1.25,1.25 0 0,1 14.25,17A1.25,1.25 0 0,1 13,15.75A1.25,1.25 0 0,1 14.25,14.5Z",temperature:"M15 13V5A3 3 0 0 0 9 5V13A5 5 0 1 0 15 13M12 4A1 1 0 0 1 13 5V8H11V5A1 1 0 0 1 12 4Z",brightness:"M3.55 19.09L4.96 20.5L6.76 18.71L5.34 17.29M12 6C8.69 6 6 8.69 6 12S8.69 18 12 18 18 15.31 18 12C18 8.68 15.31 6 12 6M20 13H23V11H20M17.24 18.71L19.04 20.5L20.45 19.09L18.66 17.29M20.45 5L19.04 3.6L17.24 5.39L18.66 6.81M13 1H11V4H13M6.76 5.39L4.96 3.6L3.55 5L5.34 6.81L6.76 5.39M1 13H4V11H1M13 20H11V23H13",conductivity:"M2,22V20C2,20 7,18 12,18C17,18 22,20 22,20V22H2M11.3,9.1C10.1,5.2 4,6.1 4,6.1C4,6.1 4.2,13.9 9.9,12.7C9.5,9.8 8,9 8,9C10.8,9 11,12.4 11,12.4V17C11.3,17 11.7,17 12,17C12.3,17 12.7,17 13,17V12.8C13,12.8 13,8.9 16,7.9C16,7.9 14,10.9 14,12.9C21,13.6 21,4 21,4C21,4 12.1,3 11.3,9.1Z",battery:void 0};!function(t,e,r,n){var i=L();if(n)for(var o=0;o<n.length;o++)i=n[o](i);var a=e((function(t){i.initializeInstanceElements(t,c.elements)}),r),c=i.decorateClass(function(t){for(var e=[],r=function(t){return"method"===t.kind&&t.key===o.key&&t.placement===o.placement},n=0;n<t.length;n++){var i,o=t[n];if("method"===o.kind&&(i=e.find(r)))if(j(o.descriptor)||j(i.descriptor)){if(C(o)||C(i))throw new ReferenceError("Duplicated methods ("+o.key+") can't be decorated.");i.descriptor=o.descriptor}else{if(C(o)){if(C(i))throw new ReferenceError("Decorators can't be placed on different accessors with for the same property ("+o.key+").");i.decorators=o.decorators}_(o,i)}else e.push(o)}return 
e}(a.d.map(P)),t);i.initializeClassElements(a.F,c.elements),i.runClassFinishers(a.F,c.finishers)}([(0,s.Mo)("hui-plant-status-card")],(function(t,e){var m,L,P=function(e){!function(t,e){if("function"!=typeof e&&null!==e)throw new TypeError("Super expression must either be null or a function");t.prototype=Object.create(e&&e.prototype,{constructor:{value:t,writable:!0,configurable:!0}}),Object.defineProperty(t,"prototype",{writable:!1}),e&&E(t,e)}(a,e);var r,n,i,o=x(a);function a(){var e;!function(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")}(this,a);for(var r=arguments.length,n=new Array(r),i=0;i<r;i++)n[i]=arguments[i];return e=o.call.apply(o,[this].concat(n)),t(O(e)),e}return r=a,n&&k(r.prototype,n),i&&k(r,i),Object.defineProperty(r,"prototype",{writable:!1}),r}(e);return{F:P,d:[{kind:"method",static:!0,key:"getConfigElement",value:(m=b().mark((function t(){return b().wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,Promise.all([r.e(9663),r.e(3976),r.e(687)]).then(r.bind(r,30687));case 2:return t.abrupt("return",document.createElement("hui-plant-status-card-editor"));case 3:case"end":return t.stop()}}),t)})),L=function(){var t=this,e=arguments;return new Promise((function(r,n){var i=m.apply(t,e);function o(t){w(i,r,n,o,a,"next",t)}function a(t){w(i,r,n,o,a,"throw",t)}o(void 0)}))},function(){return L.apply(this,arguments)})},{kind:"method",static:!0,key:"getStubConfig",value:function(t,e,r){return{type:"plant-status",entity:(0,h.j)(t,1,e,r,["plant"])[0]||""}}},{kind:"field",decorators:[(0,s.Cb)({attribute:!1})],key:"hass",value:void 0},{kind:"field",decorators:[(0,s.SB)()],key:"_config",value:void 0},{kind:"method",key:"getCardSize",value:function(){return 3}},{kind:"method",key:"setConfig",value:function(t){if(!t.entity||"plant"!==t.entity.split(".")[0])throw new Error("Specify an entity from within the plant domain");this._config=t}},{kind:"method",key:"shouldUpdate",value:function(t){return(0,y.G)(this,t)}},{kind:"method",key:"updated",value:function(t){if(D(M(P.prototype),"updated",this).call(this,t),this._config&&this.hass){var e=t.get("hass"),r=t.get("_config");e&&r&&e.themes===this.hass.themes&&r.theme===this._config.theme||(0,l.R)(this,this.hass.themes,this._config.theme)}}},{kind:"method",key:"render",value:function(){var t=this;if(!this.hass||!this._config)return c.Ld;var e=this.hass.states[this._config.entity];return e?(0,c.dy)(i||(i=g(["\n <ha-card\n class=",'\n >\n <div\n class="banner"\n style="background-image:url(',')"\n >\n <div class="header">\n ','\n </div>\n </div>\n <div class="content">\n ',"\n </div>\n </ha-card>\n "])),e.attributes.entity_picture?"has-plant-image":"",e.attributes.entity_picture,this._config.name||(0,d.C)(e),this.computeAttributes(e).map((function(r){return(0,c.dy)(o||(o=g(['\n <div\n class="attributes"\n @action=',"\n .actionHandler=",'\n tabindex="0"\n .value=',"\n >\n <div>\n <ha-svg-icon\n .path=","\n ></ha-svg-icon>\n </div>\n <div\n class=","\n >\n ",'\n </div>\n <div class="uom">\n ',"\n </div>\n </div>\n "])),t._handleMoreInfo,(0,p.K)(),r,t.computeIcon(r,e.attributes.battery),-1===e.attributes.problem.indexOf(r)?"":"problem",e.attributes[r],e.attributes.unit_of_measurement_dict[r]||"")}))):(0,c.dy)(n||(n=g(["\n <hui-warning>\n ","\n </hui-warning>\n "])),(0,v.i)(this.hass,this._config.entity))}},{kind:"get",static:!0,key:"styles",value:function(){return(0,c.iv)(a||(a=g(['\n ha-card {\n height: 100%;\n box-sizing: border-box;\n }\n .banner {\n display: flex;\n align-items: flex-end;\n 
background-repeat: no-repeat;\n background-size: cover;\n background-position: center;\n padding-top: 12px;\n }\n\n .has-plant-image .banner {\n padding-top: 30%;\n }\n\n .header {\n /* start paper-font-headline style */\n font-family: "Roboto", "Noto", sans-serif;\n -webkit-font-smoothing: antialiased; /* OS X subpixel AA bleed bug */\n text-rendering: optimizeLegibility;\n font-size: 24px;\n font-weight: 400;\n letter-spacing: -0.012em;\n /* end paper-font-headline style */\n\n line-height: 40px;\n padding: 8px 16px;\n }\n\n .has-plant-image .header {\n font-size: 16px;\n font-weight: 500;\n line-height: 16px;\n padding: 16px;\n color: white;\n width: 100%;\n background: rgba(0, 0, 0, var(--dark-secondary-opacity));\n }\n\n .content {\n display: flex;\n justify-content: space-between;\n padding: 16px 32px 24px 32px;\n }\n\n .has-plant-image .content {\n padding-bottom: 16px;\n }\n\n ha-svg-icon {\n color: var(--paper-item-icon-color);\n margin-bottom: 8px;\n }\n\n .attributes {\n cursor: pointer;\n }\n\n .attributes:focus {\n outline: none;\n background: var(--divider-color);\n border-radius: 100%;\n }\n\n .attributes div {\n text-align: center;\n }\n\n .problem {\n color: var(--error-color);\n font-weight: bold;\n }\n\n .uom {\n color: var(--secondary-text-color);\n }\n '])))}},{kind:"method",key:"computeAttributes",value:function(t){return Object.keys(z).filter((function(e){return e in t.attributes}))}},{kind:"method",key:"computeIcon",value:function(t,e){return"battery"===t?(0,f.M)(e):z[t]}},{kind:"method",key:"_handleMoreInfo",value:function(t){var e=t.currentTarget,r=this.hass.states[this._config.entity];e.value&&(0,u.B)(this,"hass-more-info",{entityId:r.attributes.sensors[e.value]})}}]}}),c.oi)}}]);
|
PypiClean
|
/itk_core-5.4rc1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl/itk/itkScaleVersor3DTransformPython.py
|
import collections
from sys import version_info as _version_info
if _version_info < (3, 7, 0):
raise RuntimeError("Python 3.7 or later required")
from . import _ITKCommonPython
from . import _ITKTransformPython
from sys import version_info as _swig_python_version_info
if _swig_python_version_info < (2, 7, 0):
raise RuntimeError("Python 2.7 or later required")
# Import the low-level C/C++ module
if __package__ or "." in __name__:
from . import _itkScaleVersor3DTransformPython
else:
import _itkScaleVersor3DTransformPython
try:
import builtins as __builtin__
except ImportError:
import __builtin__
_swig_new_instance_method = _itkScaleVersor3DTransformPython.SWIG_PyInstanceMethod_New
_swig_new_static_method = _itkScaleVersor3DTransformPython.SWIG_PyStaticMethod_New
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
def _swig_setattr_nondynamic_instance_variable(set):
def set_instance_attr(self, name, value):
if name == "thisown":
self.this.own(value)
elif name == "this":
set(self, name, value)
elif hasattr(self, name) and isinstance(getattr(type(self), name), property):
set(self, name, value)
else:
raise AttributeError("You cannot add instance attributes to %s" % self)
return set_instance_attr
def _swig_setattr_nondynamic_class_variable(set):
def set_class_attr(cls, name, value):
if hasattr(cls, name) and not isinstance(getattr(cls, name), property):
set(cls, name, value)
else:
raise AttributeError("You cannot add class attributes to %s" % cls)
return set_class_attr
def _swig_add_metaclass(metaclass):
"""Class decorator for adding a metaclass to a SWIG wrapped class - a slimmed down version of six.add_metaclass"""
def wrapper(cls):
return metaclass(cls.__name__, cls.__bases__, cls.__dict__.copy())
return wrapper
class _SwigNonDynamicMeta(type):
"""Meta class to enforce nondynamic attributes (no new attributes) for a class"""
__setattr__ = _swig_setattr_nondynamic_class_variable(type.__setattr__)
import collections.abc
import itk.ITKCommonBasePython
import itk.itkMatrixPython
import itk.itkVectorPython
import itk.vnl_vectorPython
import itk.stdcomplexPython
import itk.pyBasePython
import itk.vnl_matrixPython
import itk.itkFixedArrayPython
import itk.vnl_vector_refPython
import itk.vnl_matrix_fixedPython
import itk.itkCovariantVectorPython
import itk.itkPointPython
import itk.itkArray2DPython
import itk.itkOptimizerParametersPython
import itk.itkArrayPython
import itk.itkVersorRigid3DTransformPython
import itk.itkVersorTransformPython
import itk.itkVersorPython
import itk.itkRigid3DTransformPython
import itk.itkMatrixOffsetTransformBasePython
import itk.itkVariableLengthVectorPython
import itk.itkDiffusionTensor3DPython
import itk.itkSymmetricSecondRankTensorPython
import itk.itkTransformBasePython
def itkScaleVersor3DTransformD_New():
return itkScaleVersor3DTransformD.New()
class itkScaleVersor3DTransformD(itk.itkVersorRigid3DTransformPython.itkVersorRigid3DTransformD):
r"""Proxy of C++ itkScaleVersor3DTransformD class."""
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__New_orig__ = _swig_new_static_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD___New_orig__)
Clone = _swig_new_instance_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_Clone)
SetMatrix = _swig_new_instance_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_SetMatrix)
SetScale = _swig_new_instance_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_SetScale)
GetScale = _swig_new_instance_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_GetScale)
__swig_destroy__ = _itkScaleVersor3DTransformPython.delete_itkScaleVersor3DTransformD
cast = _swig_new_static_method(_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_cast)
def New(*args, **kargs):
"""New() -> itkScaleVersor3DTransformD
Create a new object of the class itkScaleVersor3DTransformD and set the input and the parameters if some
named or non-named arguments are passed to that method.
New() tries to assign all the non-named parameters to the inputs of the new object - the
first non-named parameter to the first input, etc.
The named parameters are used by calling the method with the same name prefixed by 'Set'.
Ex:
itkScaleVersor3DTransformD.New(reader, threshold=10)
is (most of the time) equivalent to:
obj = itkScaleVersor3DTransformD.New()
obj.SetInput(0, reader.GetOutput())
obj.SetThreshold(10)
"""
obj = itkScaleVersor3DTransformD.__New_orig__()
from itk.support import template_class
template_class.New(obj, *args, **kargs)
return obj
New = staticmethod(New)
# Register itkScaleVersor3DTransformD in _itkScaleVersor3DTransformPython:
_itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_swigregister(itkScaleVersor3DTransformD)
itkScaleVersor3DTransformD___New_orig__ = _itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD___New_orig__
itkScaleVersor3DTransformD_cast = _itkScaleVersor3DTransformPython.itkScaleVersor3DTransformD_cast
|
PypiClean
|
/basic_sftp-1.3.6.tar.gz/basic_sftp-1.3.6/basic_sftp/basic_sftp.py
|
import pysftp
import logging
import time
import os
logging.basicConfig(level=logging.INFO)
remotepath = '/home/brian/files/'
class BasicSftp():
def __init__(self, remotepath, ip, username, password, ssh_key, port):
self.sftpConnect = None
self.remotePath = remotepath
self.ip = ip
self.username = username
self.password = password
self.ssh_key = ssh_key
self.port = port
def sftp(self):
'''This method creates an SFTP connection to a remote server, allowing you
to transfer files later.'''
try:
if self.ssh_key:
self.sftpConnect = pysftp.Connection(
self.ip, username=self.username, password=self.password, private_key=self.ssh_key, port=self.port)
else:
self.sftpConnect = pysftp.Connection(
self.ip, username=self.username, password=self.password, port=self.port)
return self.sftpConnect.exists(self.remotePath)
except Exception as e:
logging.error(e)
return(False)
def transferContents(self, fname, direct):
'''This method transfers a single local file, or the entire contents of a
local folder when ``direct`` is True, to the remote server.'''
try:
startTime = time.perf_counter()
if direct:
# This allows you to move the entire contents of a folder to your remote
# server rather than just one file
fileNum = len([f for f in os.listdir(fname)
if os.path.isfile(os.path.join(fname, f))])
foldername = fname.split('/')[-2]
newfolder = self.remotePath + foldername
# Creates a new folder, places the items in the folder, gives privileges to the admin
self.sftpConnect.mkdir(newfolder)
self.sftpConnect.put_r(fname, newfolder)
self.sftpConnect.chmod(newfolder, mode=777)
else:
# This will just move one specific file to the remote server
fileNum = 1
filename = fname.split('/')[-1]
self.sftpConnect.put(fname, self.remotePath + filename)
endTime = time.perf_counter() - startTime
logging.info('A total of %d file(s) were added in %2.4f seconds.' %
(fileNum, endTime))
return self.sftpConnect.exists(self.remotePath)
except Exception as e:
logging.error(str(e))
return False
def check_open(self):
'''Checks to see if the connection is open and returns the object'''
return str(self.sftpConnect)
def close(self):
'''Closes the connection if there is one'''
if self.sftpConnect:
self.sftpConnect.close()
def getip(self):
'''Returns the IP address'''
return self.ip
def __str__(self):
return('%s \n %s \n %s \n %s \n %d' % (self.remotePath, self.ip, self.username, self.password, self.port))
######### Other changes that need to be done to the program ###############
# * Have a set method for the sftp connect that allows you to change the settings of the current connection
# * Make it so that the ssh key is required for the click method
# * Fix the sftp method so that you can create a new connection if one already exists
# or just set all of the variables and start one if none exists
# *
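# A minimal usage sketch for the class above (host, credentials and the local
# file path are hypothetical placeholders; error handling is omitted):
#
# client = BasicSftp('/home/brian/files/', '192.0.2.10', 'brian', 'secret', None, 22)
# if client.sftp():
#     client.transferContents('/tmp/report.csv', direct=False)
# client.close()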
|
PypiClean
|
/dataiku-scoring-12.1.0.tar.gz/dataiku-scoring-12.1.0/dataikuscoring/models/mlflow.py
|
from .common import PredictionModelMixin, ProbabilisticModelMixin
import numpy as np
class MLflowModel(PredictionModelMixin, ProbabilisticModelMixin):
def __init__(self, resources):
self.model = resources["model"]
self.metadata = resources["mlflow_metadata"]
self.threshold = resources["mlflow_usermeta"]["activeClassifierThreshold"] \
if "activeClassifierThreshold" in resources["mlflow_usermeta"] else None
def _compute_predict(self, X):
import pandas as pd
from dataikuscoring.mlflow import mlflow_classification_predict_to_scoring_data, mlflow_regression_predict_to_scoring_data
features = [x['name'] for x in self.metadata['features']]
if isinstance(X, (list, np.ndarray)):
if isinstance(X[0], (list, np.ndarray)):
X = [{feature: value for feature, value in zip(features, observation)} for observation in X]
input_df = pd.DataFrame(X)
elif isinstance(X, pd.DataFrame):
input_df = X
else:
raise ValueError("X must be a list, a numpy array or a pandas DataFrame")
input_df = input_df[features]
input_df.index = range(input_df.shape[0])
if "predictionType" not in self.metadata:
raise Exception("Prediction type is not set on the MLFlow model version, cannot use parsed output")
prediction_type = self.metadata["predictionType"]
if prediction_type in ["BINARY_CLASSIFICATION", "MULTICLASS"]:
scoring_data = mlflow_classification_predict_to_scoring_data(self.model, self.metadata, input_df, self.threshold)
y_pred = scoring_data.pred_and_proba_df
elif prediction_type == "REGRESSION":
scoring_data = mlflow_regression_predict_to_scoring_data(self.model, self.metadata, input_df)
y_pred = scoring_data.preds_df
return y_pred
def _predict(self, X):
y_pred = self._compute_predict(X)
return np.array([output[0] for output in y_pred.values.tolist()])
def _predict_proba(self, X):
if self.metadata["predictionType"] == "REGRESSION":
raise Exception("You cannot output probabilities for regressions.")
y_probas = self._compute_predict(X).values[:, 1:]
labels = [x["label"] for x in self.metadata["classLabels"]]
result = {label: value for label, value in zip(labels, y_probas.T)}
return result
def _describe(self):
return "{} with MLFlow model".format(self.metadata["predictionType"])
|
PypiClean
|
/sithom-0.0.4.tar.gz/sithom-0.0.4/README.md
|
# sithom README
[](https://opensource.org/licenses/MIT)[](https://github.com/psf/black)[](https://github.com/sdat2/sithom/actions/workflows/python-package.yml)[](https://sithom.readthedocs.io/en/latest/?badge=latest)[](https://badge.fury.io/py/sithom)[](https://zenodo.org/badge/latestdoi/496635214)
## Description
A package for shared utility scripts that I use in my research projects.
I realised I was copying functionality from project to project. So instead, here it is.
## Install using pip
```txt
pip install sithom
```
## Install using conda
```txt
conda install -c conda-forge sithom
```
## Package structure
```txt
├── CHANGELOG.txt <- List of main changes at each new package version.
├── CITATION.cff <- File to allow you to easily cite this repository.
├── LICENSE <- MIT Open software license.
├── Makefile <- Makefile with commands.
├── pytest.ini <- Enable doctest unit-tests.
├── README.md <- The top-level README for developers using this project.
├── setup.py <- Python setup file for pip install.
|
├── sithom <- Package folder.
| |
│ ├── __init__.py <- Init file.
│ ├── _version.py <- Key package information.
│ ├── curve.py <- Curve fitting w. uncertainty propagation.
│ ├── misc.py <- Miscellaneous utilities.
│ ├── place.py <- Place objects.
│ ├── plot.py <- Plot utilities.
│ ├── time.py <- Time utilities.
│ ├── unc.py <- Uncertainties utilities.
│ └── xr.py <- Xarray utilities.
|
└── tests <- Test folder.
```
## Requirements
- Python 3.8+
- `matplotlib`
- `seaborn`
- `cmocean`
- `xarray`
- `uncertainties`
- `jupyterthemes`
|
PypiClean
|
/msgraph_beta_sdk-1.0.0a9-py3-none-any.whl/msgraph/generated/me/calendars/item/events/item/forward/forward_post_request_body.py
|
from __future__ import annotations
from kiota_abstractions.serialization import AdditionalDataHolder, Parsable, ParseNode, SerializationWriter
from typing import Any, Callable, Dict, List, Optional, TYPE_CHECKING, Union
if TYPE_CHECKING:
from .......models import recipient
class ForwardPostRequestBody(AdditionalDataHolder, Parsable):
def __init__(self,) -> None:
"""
Instantiates a new forwardPostRequestBody and sets the default values.
"""
# Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
self._additional_data: Dict[str, Any] = {}
# The Comment property
self._comment: Optional[str] = None
# The ToRecipients property
self._to_recipients: Optional[List[recipient.Recipient]] = None
@property
def additional_data(self,) -> Dict[str, Any]:
"""
Gets the additionalData property value. Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
Returns: Dict[str, Any]
"""
return self._additional_data
@additional_data.setter
def additional_data(self,value: Dict[str, Any]) -> None:
"""
Sets the additionalData property value. Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
Args:
value: Value to set for the AdditionalData property.
"""
self._additional_data = value
@property
def comment(self,) -> Optional[str]:
"""
Gets the comment property value. The Comment property
Returns: Optional[str]
"""
return self._comment
@comment.setter
def comment(self,value: Optional[str] = None) -> None:
"""
Sets the comment property value. The Comment property
Args:
value: Value to set for the Comment property.
"""
self._comment = value
@staticmethod
def create_from_discriminator_value(parse_node: Optional[ParseNode] = None) -> ForwardPostRequestBody:
"""
Creates a new instance of the appropriate class based on discriminator value
Args:
parseNode: The parse node to use to read the discriminator value and create the object
Returns: ForwardPostRequestBody
"""
if parse_node is None:
raise Exception("parse_node cannot be undefined")
return ForwardPostRequestBody()
def get_field_deserializers(self,) -> Dict[str, Callable[[ParseNode], None]]:
"""
The deserialization information for the current model
Returns: Dict[str, Callable[[ParseNode], None]]
"""
from .......models import recipient
fields: Dict[str, Callable[[Any], None]] = {
"Comment": lambda n : setattr(self, 'comment', n.get_str_value()),
"ToRecipients": lambda n : setattr(self, 'to_recipients', n.get_collection_of_object_values(recipient.Recipient)),
}
return fields
def serialize(self,writer: SerializationWriter) -> None:
"""
Serializes information the current object
Args:
writer: Serialization writer to use to serialize this model
"""
if writer is None:
raise Exception("writer cannot be undefined")
writer.write_str_value("Comment", self.comment)
writer.write_collection_of_object_values("ToRecipients", self.to_recipients)
writer.write_additional_data_value(self.additional_data)
@property
def to_recipients(self,) -> Optional[List[recipient.Recipient]]:
"""
Gets the toRecipients property value. The ToRecipients property
Returns: Optional[List[recipient.Recipient]]
"""
return self._to_recipients
@to_recipients.setter
def to_recipients(self,value: Optional[List[recipient.Recipient]] = None) -> None:
"""
Sets the toRecipients property value. The ToRecipients property
Args:
value: Value to set for the to_recipients property.
"""
self._to_recipients = value
|
PypiClean
|
/gismath_open3d-0.0.2-cp311-cp311-win_amd64.whl/open3d/visualization/tensorboard_plugin/metadata.py
|
"""Internal information about the Open3D plugin."""
from tensorboard.compat.proto.summary_pb2 import SummaryMetadata
from .plugin_data_pb2 import LabelToNames
PLUGIN_NAME = "Open3D"
# The most recent value for the `version` field of the
# `Open3DPluginData` proto. Sync with Open3D version (MAJOR*100 + MINOR)
_VERSION = 14
SUPPORTED_FILEFORMAT_VERSIONS = [14]
GEOMETRY_PROPERTY_DIMS = {
'vertex_positions': (3,),
'vertex_normals': (3,),
'vertex_colors': (3,),
'vertex_texture_uvs': (2,),
'triangle_indices': (3,),
'triangle_normals': (3,),
'triangle_colors': (3,),
'triangle_texture_uvs': (3, 2),
'line_indices': (2,),
'line_colors': (3,)
}
VERTEX_PROPERTIES = ('vertex_normals', 'vertex_colors', 'vertex_texture_uvs')
TRIANGLE_PROPERTIES = ('triangle_normals', 'triangle_colors',
'triangle_texture_uvs')
LINE_PROPERTIES = ('line_colors',)
MATERIAL_SCALAR_PROPERTIES = (
'point_size',
'line_width',
'metallic',
'roughness',
'reflectance',
# 'sheen_roughness',
'clear_coat',
'clear_coat_roughness',
'anisotropy',
'ambient_occlusion',
# 'ior',
'transmission',
# 'micro_thickness',
'thickness',
'absorption_distance',
)
MATERIAL_VECTOR_PROPERTIES = (
'base_color',
# 'sheen_color',
# 'anisotropy_direction',
'normal',
# 'bent_normal',
# 'clear_coat_normal',
# 'emissive',
# 'post_lighting_color',
'absorption_color',
)
MATERIAL_TEXTURE_MAPS = (
'albedo', # same as base_color
# Ambient occlusion, roughness, and metallic maps in a single 3 channel
# texture. Commonly used in glTF models.
'ao_rough_metal',
)
SUPPORTED_PROPERTIES = set(
tuple(GEOMETRY_PROPERTY_DIMS.keys()) + ("material_name",) +
tuple("material_scalar_" + p for p in MATERIAL_SCALAR_PROPERTIES) +
tuple("material_vector_" + p for p in MATERIAL_VECTOR_PROPERTIES) +
tuple("material_texture_map_" + p for p in
(MATERIAL_SCALAR_PROPERTIES[2:] + # skip point_size, line_width
MATERIAL_VECTOR_PROPERTIES[1:] + # skip base_color
MATERIAL_TEXTURE_MAPS)))
def create_summary_metadata(description, metadata):
"""Creates summary metadata. Reserved for future use. Required by
TensorBoard.
Args:
description: The description to show in TensorBoard.
metadata: Geometry metadata dict. If it contains a 'label_to_names' mapping,
it is serialized into the plugin data.
Returns:
A `SummaryMetadata` protobuf object.
"""
ln_proto = LabelToNames()
if 'label_to_names' in metadata:
ln_proto.label_to_names.update(metadata['label_to_names'])
return SummaryMetadata(
summary_description=description,
plugin_data=SummaryMetadata.PluginData(
plugin_name=PLUGIN_NAME, content=ln_proto.SerializeToString()),
)
def parse_plugin_metadata(content):
"""Parse summary metadata to a Python object. Reserved for future use.
Arguments:
content: The `content` field of a `SummaryMetadata` proto
corresponding to the Open3D plugin.
"""
ln_proto = LabelToNames()
ln_proto.ParseFromString(content)
return ln_proto.label_to_names
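# Usage sketch (illustrative): create plugin metadata for a labelled geometry and
# read the label mapping back from the serialized content.
#     md = create_summary_metadata("semantic point cloud",
#                                  {"label_to_names": {0: "ground", 1: "building"}})
#     label_map = parse_plugin_metadata(md.plugin_data.content)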
|
PypiClean
|
/backends-1.5.4.tar.gz/backends-1.5.4/lab/random.py
|
import sys
from functools import reduce
from operator import mul
import numpy as np
from typing import Union
from . import dispatch, B
from .types import DType, Int, Numeric, RandomState
from .util import abstract
__all__ = [
"set_random_seed",
"create_random_state",
"global_random_state",
"set_global_random_state",
"rand",
"randn",
"randcat",
"choice",
"randint",
"randperm",
"randgamma",
"randbeta",
]
@dispatch
def set_random_seed(seed: Int):
"""Set the random seed for all frameworks.
Args:
seed (int): Seed.
"""
# Set seed in NumPy.
np.random.seed(seed)
# Set seed for TensorFlow, if it is loaded.
if "tensorflow" in sys.modules:
import tensorflow as tf
tf.random.set_seed(seed)
tf.random.set_global_generator(tf.random.Generator.from_seed(seed))
# Set seed for PyTorch, if it is loaded.
if "torch" in sys.modules:
import torch
torch.manual_seed(seed)
# Set seed for JAX, if it is loaded.
if hasattr(B, "jax_global_random_state"):
import jax
B.jax_global_random_state = jax.random.PRNGKey(seed=seed)
@dispatch
@abstract()
def create_random_state(dtype: DType, seed: Int = 0):
"""Create a random state.
Args:
dtype (dtype): Data type of the desired framework to create a random state
for.
seed (int, optional): Seed to initialise the random state with. Defaults
to `0`.
Returns:
random state: Random state.
"""
@dispatch
@abstract()
def global_random_state(dtype: DType):
"""Get the global random state.
Args:
dtype (dtype): Data type of the desired framework for which to get the global
random state.
Returns:
random state: Global random state.
"""
@dispatch
@abstract()
def set_global_random_state(state: RandomState):
"""Set the global random state.
NOTE:
In TensorFlow, setting the global random state does NOT fix the randomness
for non-LAB random calls, like `tf.random.normal`. Use `B.set_random_seed`
instead!
Args:
state (random state): Random state to set.
"""
@dispatch
def global_random_state(a):
return global_random_state(B.dtype(a))
@dispatch
@abstract()
def rand(state: RandomState, dtype: DType, *shape: Int): # pragma: no cover
"""Construct a U[0, 1] random tensor.
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
*shape (shape, optional): Shape of the sample. Defaults to `()`.
Returns:
state (random state, optional): Random state.
tensor: Random tensor.
"""
@dispatch
def rand(*shape: Int):
return rand(B.default_dtype, *shape)
@dispatch
def rand(state: RandomState, ref: Numeric):
return rand(state, B.dtype(ref), *B.shape(ref))
@dispatch
def rand(ref: Numeric):
return rand(B.dtype(ref), *B.shape(ref))
@dispatch
def rand(shape: Int):
# Single integer is not a reference.
return rand(B.default_dtype, shape)
@dispatch
@abstract()
def randn(state: RandomState, dtype: DType, *shape: Int): # pragma: no cover
"""Construct a N(0, 1) random tensor.
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
*shape (shape, optional): Shape of the sample. Defaults to `()`.
Returns:
state (random state, optional): Random state.
tensor: Random tensor.
"""
@dispatch
def randn(*shape: Int):
return randn(B.default_dtype, *shape)
@dispatch
def randn(state: RandomState, ref: Numeric):
return randn(state, B.dtype(ref), *B.shape(ref))
@dispatch
def randn(ref: Numeric):
return randn(B.dtype(ref), *B.shape(ref))
@dispatch
def randn(shape: Int):
return randn(B.default_dtype, shape)
@dispatch
def randcat(state: RandomState, p: Union[Numeric, None], *shape: Int):
"""Randomly draw from a categorical random variable.
Args:
state (random state, optional): Random state.
p (tensor): Probabilities. The last axis determines the probabilities and
any prior axes add to the shape of the sample.
*shape (int): Shape of the sample. Defaults to `()`.
Returns:
state (random state, optional): Random state.
tensor: Realisation.
"""
n = reduce(mul, shape, 1)
state, sample = randcat(state, p, n)
return state, B.reshape(sample, *shape, *B.shape(sample)[1:])
def _randcat_last_first(a):
"""Put the last dimension first.
Args:
a (tensor): Tensor.
Returns:
tensor: `a`, but with last dimension first.
"""
perm = list(range(B.rank(a)))
return B.transpose(a, perm=perm[-1:] + perm[:-1])
@dispatch
def choice(
state: RandomState,
a: Numeric,
*shape: Int,
p: Union[Numeric, None] = None,
):
"""Randomly choose from a tensor *with* replacement.
Args:
state (random state, optional): Random state.
a (tensor): Tensor to choose from. Choices will be made along the first
dimension.
*shape (int): Shape of the sample. Defaults to `()`.
p (tensor, optional): Probabilities to sample with.
Returns:
state (random state, optional): Random state.
tensor: Choices.
"""
if p is None:
with B.on_device(a):
p = B.ones(B.dtype_float(a), B.shape(a, 0))
state, inds = B.randcat(state, p, *shape)
choices = B.reshape(
B.take(a, B.flatten(inds), axis=0),
*B.shape(inds),
*B.shape(a)[1:],
)
return state, choices
@dispatch
def choice(
a: Numeric,
*shape: Int,
p: Union[Numeric, None] = None,
):
state = B.global_random_state(a)
state, choices = choice(state, a, *shape, p=p)
B.set_global_random_state(state)
return choices
@dispatch
@abstract()
def randint(
state: RandomState,
dtype: DType,
*shape: Int,
lower: Int = 0,
upper: Int,
): # pragma: no cover
"""Construct a tensor of random integers in [`lower`, `upper`).
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
*shape (shape, optional): Shape of the tensor. Defaults to `()`.
lower (int, optional): Lower bound. Defaults to `0`.
upper (int): Upper bound. Must be given as a keyword argument.
Returns:
state (random state, optional): Random state.
tensor: Random tensor.
"""
@dispatch
def randint(*shape: Int, lower: Int = 0, upper: Int):
return randint(B.default_dtype, *shape, lower=lower, upper=upper)
@dispatch
def randint(state: RandomState, ref: Numeric, *, lower: Int = 0, upper: Int):
return randint(state, B.dtype(ref), *B.shape(ref), lower=lower, upper=upper)
@dispatch
def randint(ref: Numeric, *, lower: Int = 0, upper: Int):
return randint(B.dtype(ref), *B.shape(ref), lower=lower, upper=upper)
@dispatch
def randint(shape: Int, *, lower: Int = 0, upper: Int):
# Single integer is not a reference.
return randint(B.default_dtype, shape, lower=lower, upper=upper)
@dispatch
@abstract()
def randperm(state: RandomState, dtype: DType, n: Int): # pragma: no cover
"""Construct a random permutation counting to `n`.
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
n (int): Length of the permutation.
Returns:
state (random state, optional): Random state.
tensor: Random permutation.
"""
@dispatch
def randperm(n: Int):
return randperm(B.default_dtype, n)
@dispatch
@abstract()
def randgamma(
state: RandomState,
dtype: DType,
*shape: Int,
alpha: Numeric,
scale: Numeric,
): # pragma: no cover
"""Construct a tensor of gamma random variables with shape parameter `alpha` and
scale `scale`.
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
*shape (shape, optional): Shape of the tensor. Defaults to `()`.
alpha (scalar): Shape parameter.
scale (scalar): Scale parameter.
Returns:
state (random state, optional): Random state.
tensor: Random tensor.
"""
@dispatch
def randgamma(*shape: Int, alpha: Numeric, scale: Numeric):
return randgamma(B.default_dtype, *shape, alpha=alpha, scale=scale)
@dispatch
def randgamma(state: RandomState, ref: Numeric, *, alpha: Numeric, scale: Numeric):
return randgamma(state, B.dtype(ref), *B.shape(ref), alpha=alpha, scale=scale)
@dispatch
def randgamma(ref: Numeric, *, alpha: Numeric, scale: Numeric):
return randgamma(B.dtype(ref), *B.shape(ref), alpha=alpha, scale=scale)
@dispatch
def randgamma(shape: Int, *, alpha: Numeric, scale: Numeric):
# Single integer is not a reference.
return randgamma(B.default_dtype, shape, alpha=alpha, scale=scale)
@dispatch
def randbeta(
state: RandomState,
dtype: DType,
*shape: Int,
alpha: Numeric,
beta: Numeric,
):
"""Construct a tensor of beta random variables with shape parameters `alpha` and
`beta`.
Args:
state (random state, optional): Random state.
dtype (dtype, optional): Data type. Defaults to the default data type.
*shape (shape, optional): Shape of the tensor. Defaults to `()`.
alpha (scalar): Shape parameter `alpha`.
beta (scalar): Shape parameter `beta`.
Returns:
state (random state, optional): Random state.
tensor: Random tensor.
"""
state, x = randgamma(state, dtype, *shape, alpha=alpha, scale=1)
state, y = randgamma(state, dtype, *shape, alpha=beta, scale=1)
return state, x / (x + y)
@dispatch
def randbeta(dtype: DType, *shape: Int, alpha: Numeric, beta: Numeric):
return randbeta(
B.global_random_state(dtype),
dtype,
*shape,
alpha=alpha,
beta=beta,
)[1]
@dispatch
def randbeta(*shape: Int, alpha: Numeric, beta: Numeric):
return randbeta(B.default_dtype, *shape, alpha=alpha, beta=beta)
@dispatch
def randbeta(state: RandomState, ref: Numeric, *, alpha: Numeric, beta: Numeric):
return randbeta(state, B.dtype(ref), *B.shape(ref), alpha=alpha, beta=beta)
@dispatch
def randbeta(ref: Numeric, *, alpha: Numeric, beta: Numeric):
return randbeta(B.dtype(ref), *B.shape(ref), alpha=alpha, beta=beta)
@dispatch
def randbeta(shape: Int, *, alpha: Numeric, beta: Numeric):
# Single integer is not a reference.
return randbeta(B.default_dtype, shape, alpha=alpha, beta=beta)
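# Usage sketch (illustrative; assumes the package is imported as `lab` and NumPy is
# the active backend, so the framework-specific implementations are registered):
#     import numpy as np
#     import lab as B
#     B.set_random_seed(0)
#     u = B.rand(np.float64, 2, 3)                         # U[0, 1], shape (2, 3)
#     b = B.randbeta(np.float64, 5, alpha=2.0, beta=3.0)   # five Beta(2, 3) samples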
|
PypiClean
|
/jupyterlab_remote_contents-0.1.1.tar.gz/jupyterlab_remote_contents-0.1.1/node_modules/type-check/README.md
|
# type-check [](https://travis-ci.org/gkz/type-check)
<a name="type-check" />
`type-check` is a library which allows you to check the types of JavaScript values at runtime with a Haskell-like type syntax. It is great for checking external input, for testing, or even for adding a bit of safety to your internal code. It is a major component of [levn](https://github.com/gkz/levn). MIT license. Version 0.3.2. Check out the [demo](http://gkz.github.io/type-check/).
For updates on `type-check`, [follow me on twitter](https://twitter.com/gkzahariev).
npm install type-check
## Quick Examples
```js
// Basic types:
var typeCheck = require('type-check').typeCheck;
typeCheck('Number', 1); // true
typeCheck('Number', 'str'); // false
typeCheck('Error', new Error); // true
typeCheck('Undefined', undefined); // true
// Comment
typeCheck('count::Number', 1); // true
// One type OR another type:
typeCheck('Number | String', 2); // true
typeCheck('Number | String', 'str'); // true
// Wildcard, matches all types:
typeCheck('*', 2) // true
// Array, all elements of a single type:
typeCheck('[Number]', [1, 2, 3]); // true
typeCheck('[Number]', [1, 'str', 3]); // false
// Tuples, or fixed length arrays with elements of different types:
typeCheck('(String, Number)', ['str', 2]); // true
typeCheck('(String, Number)', ['str']); // false
typeCheck('(String, Number)', ['str', 2, 5]); // false
// Object properties:
typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true
typeCheck('{x: Number, y: Boolean}', {x: 2}); // false
typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true
typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false
typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true
// A particular type AND object properties:
typeCheck('RegExp{source: String, ...}', /re/i); // true
typeCheck('RegExp{source: String, ...}', {source: 're'}); // false
// Custom types:
var opt = {customTypes:
{Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}};
typeCheck('Even', 2, opt); // true
// Nested:
var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}'
typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true
```
Check out the [type syntax format](#syntax) and [guide](#guide).
## Usage
`require('type-check');` returns an object that exposes four properties. `VERSION` is the current version of the library as a string. `typeCheck`, `parseType`, and `parsedTypeCheck` are functions.
```js
// typeCheck(type, input, options);
typeCheck('Number', 2); // true
// parseType(type);
var parsedType = parseType('Number'); // object
// parsedTypeCheck(parsedType, input, options);
parsedTypeCheck(parsedType, 2); // true
```
### typeCheck(type, input, options)
`typeCheck` checks a JavaScript value `input` against `type` written in the [type format](#type-format) (and taking account the optional `options`) and returns whether the `input` matches the `type`.
##### arguments
* type - `String` - the type written in the [type format](#type-format) which to check against
* input - `*` - any JavaScript value, which is to be checked against the type
* options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types)
##### returns
`Boolean` - whether the input matches the type
##### example
```js
typeCheck('Number', 2); // true
```
### parseType(type)
`parseType` parses string `type` written in the [type format](#type-format) into an object representing the parsed type.
##### arguments
* type - `String` - the type written in the [type format](#type-format) which to parse
##### returns
`Object` - an object in the parsed type format representing the parsed type
##### example
```js
parseType('Number'); // [{type: 'Number'}]
```
### parsedTypeCheck(parsedType, input, options)
`parsedTypeCheck` checks a JavaScript value `input` against parsed `type` in the parsed type format (and taking account the optional `options`) and returns whether the `input` matches the `type`. Use this in conjunction with `parseType` if you are going to use a type more than once.
##### arguments
* type - `Object` - the type in the parsed type format which to check against
* input - `*` - any JavaScript value, which is to be checked against the type
* options - `Maybe Object` - an optional parameter specifying additional options, currently the only available option is specifying [custom types](#custom-types)
##### returns
`Boolean` - whether the input matches the type
##### example
```js
parsedTypeCheck([{type: 'Number'}], 2); // true
var parsedType = parseType('String');
parsedTypeCheck(parsedType, 'str'); // true
```
<a name="type-format" />
## Type Format
### Syntax
White space is ignored. The root node is a __Types__.
* __Identifier__ = `[\$\w]+` - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. `String`
* __Type__ = an `Identifier`, an `Identifier` followed by a `Structure`, just a `Structure`, or a wildcard `*` - eg. `String`, `Object{x: Number}`, `{x: Number}`, `Array{0: String, 1: Boolean, length: Number}`, `*`
* __Types__ = optionally a comment (an `Identifier` followed by a `::`), optionally the identifier `Maybe`, one or more `Type`, separated by `|` - eg. `Number`, `String | Date`, `Maybe Number`, `Maybe Boolean | String`
* __Structure__ = `Fields`, or a `Tuple`, or an `Array` - eg. `{x: Number}`, `(String, Number)`, `[Date]`
* __Fields__ = a `{`, followed one or more `Field` separated by a comma `,` (trailing comma `,` is permitted), optionally an `...` (always preceded by a comma `,`), followed by a `}` - eg. `{x: Number, y: String}`, `{k: Function, ...}`
* __Field__ = an `Identifier`, followed by a colon `:`, followed by `Types` - eg. `x: Date | String`, `y: Boolean`
* __Tuple__ = a `(`, followed by one or more `Types` separated by a comma `,` (trailing comma `,` is permitted), followed by a `)` - eg `(Date)`, `(Number, Date)`
* __Array__ = a `[` followed by exactly one `Types` followed by a `]` - eg. `[Boolean]`, `[Boolean | Null]`
### Guide
`type-check` uses `Object.toString` to find out the basic type of a value. Specifically,
```js
{}.toString.call(VALUE).slice(8, -1)
{}.toString.call(true).slice(8, -1) // 'Boolean'
```
A basic type, eg. `Number`, uses this check. This is much more versatile than using `typeof` - for example, with `document`, `typeof` produces `'object'` which isn't that useful, and our technique produces `'HTMLDocument'`.
You may check for multiple types by separating types with a `|`. The checker proceeds from left to right, and passes if the value is any of the types - eg. `String | Boolean` first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false.
Adding a `Maybe` in front of a list of multiple types is the same as also checking for `Null` and `Undefined` - eg. `Maybe String` is equivalent to `Undefined | Null | String`.
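A couple of checks that follow directly from this rule (using `typeCheck` exactly as in the examples above):
```js
typeCheck('Maybe String', 'str'); // true
typeCheck('Maybe String', null); // true
typeCheck('Maybe String', undefined); // true
typeCheck('Maybe String', 2); // false
```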
You may add a comment to remind you of what the type is for by following an identifier with a `::` before a type (or multiple types). The comment is simply thrown out.
The wildcard `*` matches all types.
There are three types of structures for checking the contents of a value: 'fields', 'tuple', and 'array'.
If used by itself, a 'fields' structure will pass with any type of object as long as it is an instance of `Object` and the properties pass - this allows for duck typing - eg. `{x: Boolean}`.
To check if the properties pass, and the value is of a certain type, you can specify the type - eg. `Error{message: String}`.
If you want to make a field optional, you can simply use `Maybe` - eg. `{x: Boolean, y: Maybe String}` will still pass if `y` is undefined (or null).
If you don't care if the value has properties beyond what you have specified, you can use the 'etc' operator `...` - eg. `{x: Boolean, ...}` will match an object with an `x` property that is a boolean, and with zero or more other properties.
For an array, you must specify one or more types (separated by `|`) - it will pass for something of any length as long as each element passes the types provided - eg. `[Number]`, `[Number | String]`.
A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. `(String, Number)`.
An array and tuple structure check that the value is of type `Array` by default, but if another type is specified, they will check for that instead - eg. `Int32Array[Number]`. You can use the wildcard `*` to search for any type at all.
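For instance, the following checks illustrate the default `Array` check versus an explicitly typed container (an illustration based on the rule above; typed-array support depends on your JavaScript environment):
```js
typeCheck('[Number]', [1, 2, 3]); // true - plain Array by default
typeCheck('Int32Array[Number]', new Int32Array([1, 2, 3])); // true
typeCheck('Int32Array[Number]', [1, 2, 3]); // false - not an Int32Array
```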
Check out the [type precedence](https://github.com/zaboco/type-precedence) library for type-check.
## Options
Options is an object. It is an optional parameter to the `typeCheck` and `parsedTypeCheck` functions. The only current option is `customTypes`.
<a name="custom-types" />
### Custom Types
__Example:__
```js
var options = {
customTypes: {
Even: {
typeOf: 'Number',
validate: function(x) {
return x % 2 === 0;
}
}
}
};
typeCheck('Even', 2, options); // true
typeCheck('Even', 3, options); // false
```
`customTypes` allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a `typeOf` property - a string, and `validate` property - a function.
The `typeOf` property is the type the value should be, and `validate` is a function which should return true if the value is of that type. `validate` receives one parameter, which is the value that we are checking.
## Technical About
`type-check` is written in [LiveScript](http://livescript.net/) - a language that compiles to JavaScript. It also uses the [prelude.ls](http://preludels.com/) library.
|
PypiClean
|
/dbl_sat_sdk-0.1.32-py3-none-any.whl/clientpkgs/unity_catalog_client.py
|
from core.dbclient import SatDBClient
from core.logging_utils import LoggingUtils
import json
LOGGR=None
if LOGGR is None:
LOGGR = LoggingUtils.get_logger()
class UnityCatalogClient(SatDBClient):
'''unity catalog helper'''
def get_catalogs_list(self):
"""
Returns an array of json objects for catalogs list
"""
# fetch all catalogs list
catalogslist = self.get("/unity-catalog/catalogs", version='2.1').get('catalogs', [])
return catalogslist
def get_schemas_list(self, catalogname):
"""
Returns list of schemas
"""
# fetch all schemaslist
schemaslist = self.get(f"/unity-catalog/schemas?catalog_name={catalogname}", version='2.1').get('schemas', [])
return schemaslist
def get_tables(self, catalog_name, schema_name):
"""
Returns list of tables
"""
# fetch all schemaslist
query = f"/unity-catalog/tables?catalog_name={catalog_name}&schema_name={schema_name}"
tableslist = self.get(query, version='2.1').get('tables', [])
return tableslist
def get_functions(self, catalog_name, schema_name):
"""
Returns an array of json objects for functions
"""
# fetch all functions
query = f"/unity-catalog/functions?catalog_name={catalog_name}&schema_name={schema_name}"
funcs = self.get(query, version='2.1').get('functions', [])
return funcs
def get_sharing_providers_list(self):
"""
Returns an array of json objects for sharing providers
"""
# fetch all sharing providers list
query = f"/unity-catalog/providers"
sharingproviderslist = self.get(query, version='2.1').get('providers', [])
return sharingproviderslist
def get_sharing_recepients_list(self):
"""
Returns an array of json objects for sharing recipients
"""
# fetch all sharing recipients list
sharingrecepientslist = self.get("/unity-catalog/recipients", version='2.1').get('recipients', [])
return sharingrecepientslist
def get_sharing_recepient_permissions(self, sharename):
"""
Returns an array of json objects for sharing recipient permissions
"""
# fetch all acls list
sharingacl = self.get(f"/unity-catalog/recipients/{sharename}/share-permissions", version='2.1').get('permissions_out', [])
return sharingacl
def get_list_shares(self):
"""
Returns an array of json objects for shares
"""
# fetch all shares
shareslist = self.get("/unity-catalog/shares", version='2.1').get('shares', [])
return shareslist
def get_share_permissions(self, sharename):
"""
Returns an array of json objects for share permission
"""
# fetch all acls list
sharingacl = self.get(f"/unity-catalog/shares/{sharename}/permissions", version='2.1').get('privilege_assignments', [])
return sharingacl
def get_external_locations(self):
"""
Returns an array of json objects for external locations
"""
# fetch all external locations
extlocns = self.get("/unity-catalog/external-locations", version='2.1').get('external_locations', [])
return extlocns
def get_workspace_metastore_assignments(self):
"""
Returns workspace metastore assignment
"""
# fetch all metastore assignment list
metastorejson = self.get("/unity-catalog/current-metastore-assignment", version='2.1')
metastoreassgnlist = []
metastoreassgnlist.append(json.loads(json.dumps(metastorejson)))
return metastoreassgnlist
def get_workspace_metastore_summary(self):
"""
Returns workspace metastore summary
"""
# fetch all metastore assignment list
metastoresumjson = self.get("/unity-catalog/metastore_summary", version='2.1')
metastoresumlist = []
metastoresumlist.append(json.loads(json.dumps(metastoresumjson)))
return metastoresumlist
#Has to be an account admin to run this api
def get_metastore_list(self):
"""
Returns list of workspace metastore
"""
# fetch all metastores
# Exception: Error: GET request failed with code 403 {"error_code":"PERMISSION_DENIED","message":"Only account admin can list metastores.","details":[{"@type":"type.googleapis.com/google.rpc.RequestInfo","request_id":"b9353080-94ea-47b6-b551-083336de7d84","serving_data":""}]
try:
metastores = self.get("/unity-catalog/metastores", version='2.1').get('metastores', [])
except Exception as e:
LOGGR.exception(e)
return []
return metastores
def get_credentials(self):
"""
Returns list of credentials
"""
# fetch all schemaslist
credentialslist = self.get("/unity-catalog/storage-credentials", version='2.1').get('storage_credentials', [])
return credentialslist
def get_grants_effective_permissions(self, securable_type, full_name):
"""
Returns effective permissions for securable type
:param securable_type like METASTORE, CATALOG, SCHEMA
:param full_name like metastore guid
"""
# fetch all schemaslist
permslist = self.get(f"/unity-catalog/permissions/{securable_type}/{full_name}", version='2.1').get('privilege_assignments', [])
return permslist
def get_grants_permissions(self, securable_type, full_name):
"""
Returns permissions for securable type
:param securable_type like METASTORE, CATALOG, SCHEMA
:param full_name like metastore guid
"""
# fetch all schemaslist
permslist = self.get(f"/unity-catalog/effective-permissions/{securable_type}/{full_name}", version='2.1').get('privilege_assignments', [])
return permslist
#the user should have account admin privileges
def get_grants_effective_permissions_ext(self):
arrperms=[]
arrlist = self.get_metastore_list()
for meta in arrlist:
metastore_id = meta['metastore_id']
effperms = self.get_grants_effective_permissions('METASTORE', metastore_id)
for effpermselem in effperms:
effpermselem['metastore_id'] = meta['metastore_id']
effpermselem['metastore_name'] = meta['name']
arrperms.extend(effperms)
jsonarr = json.dumps(arrperms)
return arrperms
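# Usage sketch (illustrative; the client is constructed like any other SatDBClient,
# whose connection configuration lives outside this module, so `workspace_config`
# below is a placeholder):
#     uc = UnityCatalogClient(workspace_config)
#     for catalog in uc.get_catalogs_list():
#         for schema in uc.get_schemas_list(catalog["name"]):
#             tables = uc.get_tables(catalog["name"], schema["name"])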
|
PypiClean
|
/antares-0.3.23.1-py3-none-manylinux1_x86_64.whl/antares_core/3rdparty/tvm/python/tvm/relay/frontend/coreml.py
|
"""CoreML frontend."""
import math
import numpy as np
import tvm
from tvm.ir import IRModule
from .. import analysis
from .. import expr as _expr
from .. import function as _function
from .. import op as _op
from ... import nd as _nd
from ..._ffi import base as _base
from .common import ExprTable
from .common import infer_shape as _infer_shape
__all__ = ["from_coreml"]
def _NeuralNetworkImageScaler(op, inexpr, etab):
# TODO: we need to support more colorspace, such as rgb.
# this changes the symbol
biases = np.array([op.blueBias, op.greenBias, op.redBias]).reshape([3, 1, 1])
bias = etab.new_const(biases)
ret = _op.multiply(inexpr, _expr.const(op.channelScale, dtype="float32"))
ret = _op.add(ret, bias)
return ret
def _NeuralNetworkMeanImage(op, inexpr, etab):
# this changes the symbol
ret = _op.subtract(inexpr, _expr.const(op.meanImage, dtype="float32"))
return ret
def _ConvolutionLayerParams(op, inexpr, etab):
"""Convolution layer params."""
if op.isDeconvolution:
weights = etab.new_const(
np.array(list(op.weights.floatValue)).reshape(
tuple([op.kernelChannels, op.outputChannels] + list(op.kernelSize))
)
)
else:
weights = etab.new_const(
np.array(list(op.weights.floatValue)).reshape(
tuple([op.outputChannels, op.kernelChannels] + list(op.kernelSize))
)
)
dilation = list(op.dilationFactor)
if not dilation:
dilation = [1, 1]
N, C, H, W = _infer_shape(inexpr)
params = {
"channels": op.outputChannels,
"kernel_size": list(op.kernelSize),
"strides": list(op.stride),
"dilation": dilation,
"groups": op.nGroups,
}
if op.WhichOneof("ConvolutionPaddingType") == "valid":
valid = op.valid
if valid.paddingAmounts.borderAmounts:
assert len(valid.paddingAmounts.borderAmounts) == 2
pad_t = valid.paddingAmounts.borderAmounts[0].startEdgeSize
pad_l = valid.paddingAmounts.borderAmounts[1].startEdgeSize
pad_b = valid.paddingAmounts.borderAmounts[0].endEdgeSize
pad_r = valid.paddingAmounts.borderAmounts[1].endEdgeSize
if not all(v == 0 for v in (pad_t, pad_l, pad_b, pad_r)):
params["padding"] = (pad_t, pad_l, pad_b, pad_r)
elif op.WhichOneof("ConvolutionPaddingType") == "same":
assert op.same.asymmetryMode == 0, (
"Only support BOTTOM_RIGHT_HEAVY mode, " "which is used by tf/caffe and so on"
)
kernel = params["kernel_size"]
strides = params["strides"]
pad_t, pad_b = get_pad_value(H, kernel[0], strides[0])
pad_l, pad_r = get_pad_value(W, kernel[1], strides[1])
params["padding"] = (pad_t, pad_l, pad_b, pad_r)
else:
raise NotImplementedError("Only Valid/Same convolution padding is implemented")
if op.isDeconvolution:
ret = _op.nn.conv2d_transpose(data=inexpr, weight=weights, **params)
else:
ret = _op.nn.conv2d(data=inexpr, weight=weights, **params)
if op.hasBias:
biases = etab.new_const(list(op.bias.floatValue))
ret = _op.nn.bias_add(ret, biases)
return ret
def _BatchnormLayerParams(op, inexpr, etab):
"""Get layer of batchnorm parameter"""
# this changes the symbol
if op.instanceNormalization:
raise tvm.error.OpNotImplemented(
'Operator "instance normalization" is not supported in frontend CoreML.'
)
params = {
"gamma": etab.new_const(list(op.gamma.floatValue)),
"beta": etab.new_const(list(op.beta.floatValue)),
"moving_mean": etab.new_const(list(op.mean.floatValue)),
"moving_var": etab.new_const(list(op.variance.floatValue)),
"epsilon": op.epsilon,
}
result, moving_mean, moving_var = _op.nn.batch_norm(data=inexpr, **params)
return result
def _ActivationParams(op, inexpr, etab):
"""Get activation parameters"""
whichActivation = op.WhichOneof("NonlinearityType")
par = getattr(op, whichActivation)
if whichActivation == "linear":
alpha = _expr.const(par.alpha, dtype="float32")
beta = _expr.const(par.beta, dtype="float32")
return _op.add(_op.multiply(inexpr, alpha), beta)
if whichActivation == "ReLU":
return _op.nn.relu(inexpr)
if whichActivation == "leakyReLU":
return _op.nn.leaky_relu(inexpr, alpha=par.alpha)
elif whichActivation == "thresholdedReLU":
alpha_tensor = _op.full_like(inexpr, fill_value=_expr.const(par.alpha, dtype="float32"))
return _op.multiply(inexpr, _op.greater(inexpr, alpha_tensor).astype("float32"))
if whichActivation == "PReLU":
return _op.nn.prelu(inexpr, alpha=_expr.const(par.alpha, dtype="float32"))
if whichActivation == "tanh":
return _op.tanh(inexpr)
if whichActivation == "scaledTanh":
alpha = _expr.const(par.alpha, dtype="float32")
beta = _expr.const(par.beta, dtype="float32")
return _op.multiply(_op.tanh(_op.multiply(inexpr, beta)), alpha)
if whichActivation == "sigmoid":
return _op.sigmoid(inexpr)
if whichActivation == "sigmoidHard":
alpha = _expr.const(par.alpha, dtype="float32")
beta = _expr.const(par.beta, dtype="float32")
transformX = (alpha * inexpr) + beta
return _op.clip(transformX, a_min=0.0, a_max=1.0)
if whichActivation == "ELU":
return _op.multiply(
_op.add(_op.exp(inexpr), _expr.const(-1, dtype="float32")),
_expr.const(par.alpha, dtype="float32"),
)
if whichActivation == "softsign":
return inexpr / (
_expr.const(1, dtype="float32")
+ (_op.nn.relu(inexpr) + _op.nn.relu(_op.negative(inexpr)))
)
if whichActivation == "softplus":
return _op.log(_op.add(_op.exp(inexpr), _expr.const(1, dtype="float32")))
if whichActivation == "parametricSoftplus":
alpha = list(par.alpha.floatValue)
beta = list(par.alpha.floatValue)
if len(alpha) == 1:
return _op.multiply(
_op.log(_op.add(_op.exp(inexpr), _expr.const(beta[0], dtype="float32"))),
_expr.const(alpha[0], dtype="float32"),
)
alpha = np.array(alpha).reshape((len(alpha), 1, 1))
beta = np.array(beta).reshape((len(beta), 1, 1))
alpha_expr = etab.new_const(alpha)
beta_expr = etab.new_const(beta)
return _op.multiply(_op.log(_op.add(_op.exp(inexpr), beta_expr)), alpha_expr)
raise tvm.error.OpNotImplemented(
"Operator {} is not supported in frontend CoreML.".format(whichActivation)
)
def _ScaleLayerParams(op, inexpr, etab):
"""Scale layer params."""
scale = etab.new_const(
np.array(list(op.scale.floatValue)).reshape(tuple(list(op.shapeScale) + [1, 1]))
)
ret = _op.multiply(inexpr, scale)
if op.hasBias:
bias = etab.new_const(
np.array(list(op.bias.floatValue)).reshape(tuple(list(op.shapeBias) + [1, 1]))
)
ret = _op.add(ret, bias)
return ret
def _PoolingLayerParams(op, inexpr, etab):
"""get pooling parameters"""
if op.globalPooling:
if op.type == 0:
return _op.nn.global_max_pool2d(inexpr)
if op.type == 1:
return _op.nn.global_avg_pool2d(inexpr)
raise tvm.error.OpNotImplemented(
"Only Max and Average Pooling are supported in frontend CoreML."
)
params = {"pool_size": list(op.kernelSize), "strides": list(op.stride)}
if op.WhichOneof("PoolingPaddingType") == "valid":
valid = op.valid
if valid.paddingAmounts.borderAmounts:
assert len(valid.paddingAmounts.borderAmounts) == 2
pad_t = valid.paddingAmounts.borderAmounts[0].startEdgeSize
pad_l = valid.paddingAmounts.borderAmounts[1].startEdgeSize
pad_b = valid.paddingAmounts.borderAmounts[0].endEdgeSize
pad_r = valid.paddingAmounts.borderAmounts[1].endEdgeSize
if not all(v == 0 for v in (pad_t, pad_l, pad_b, pad_r)):
params["padding"] = [pad_t, pad_l, pad_b, pad_r]
elif op.WhichOneof("PoolingPaddingType") == "includeLastPixel":
# I don't know if this is correct
valid = op.includeLastPixel
padding = list(valid.paddingAmounts)
params["padding"] = padding
params["ceil_mode"] = True
else:
msg = "PoolingPaddingType {} is not supported in operator Pooling."
op_name = op.WhichOneof("PoolingPaddingType")
raise tvm.error.OpAttributeUnImplemented(msg.format(op_name))
if op.type == 0:
return _op.nn.max_pool2d(inexpr, **params)
if op.type == 1:
return _op.nn.avg_pool2d(inexpr, **params)
raise tvm.error.OpNotImplemented("Only Max and Average Pooling are supported in CoreML.")
def _SoftmaxLayerParams(op, inexpr, etab):
return _op.nn.softmax(_op.nn.batch_flatten(inexpr))
def _InnerProductLayerParams(op, inexpr, etab):
weights = etab.new_const(
np.array(op.weights.floatValue).reshape((op.outputChannels, op.inputChannels))
)
out = _op.nn.dense(data=inexpr, weight=weights, units=op.outputChannels)
if op.hasBias:
bias = etab.new_const(np.array(op.bias.floatValue))
out = _op.nn.bias_add(out, bias)
return out
def _AddLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list):
inexpr = [inexpr]
ret = inexpr[0]
for i in range(1, len(inexpr)):
ret = _op.add(ret, inexpr[i])
if op.alpha > 0:
ret = _op.add(ret, _expr.const(op.alpha, dtype="float32"))
return ret
def _MultiplyLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list):
inexpr = [inexpr]
ret = inexpr[0]
for i in range(1, len(inexpr)):
ret = _op.multiply(ret, inexpr[i])
if op.alpha != 1:
ret = _op.multiply(ret, _expr.const(op.alpha, dtype="float32"))
return ret
def _ConcatLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list):
inexpr = [inexpr]
if op.sequenceConcat:
raise tvm.error.OpNotImplemented(
"Operator Sequence Concat is not supported in frontend CoreML."
)
ret = _op.concatenate(inexpr, axis=1)
return ret
def _FlattenLayerParams(op, inexpr, etab):
if op.mode == 1:
inexpr = _op.transpose(_op.reshape(inexpr, newshape=(0, 0, -1)), axes=(0, 2, 1))
return _op.nn.batch_flatten(inexpr)
def _PaddingLayerParams(op, inexpr, etab):
"""Padding layer params."""
if op.WhichOneof("PaddingType") == "constant":
constant = op.constant
if constant.value != 0:
raise tvm.error.OpAttributeUnImplemented(
"{} is not supported in operator Padding.".format(constant.value)
)
pad_t = op.paddingAmounts.borderAmounts[0].startEdgeSize
pad_l = op.paddingAmounts.borderAmounts[1].startEdgeSize
pad_b = op.paddingAmounts.borderAmounts[0].endEdgeSize
pad_r = op.paddingAmounts.borderAmounts[1].endEdgeSize
return _op.nn.pad(data=inexpr, pad_width=((0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
raise tvm.error.OpNotImplemented("Non-constant padding is not supported in frontend CoreML.")
def _PermuteLayerParams(op, inexpr, etab):
axes = tuple(op.axis)
return _op.transpose(inexpr, axes=axes)
def _UpsampleLayerParams(op, inexpr, etab):
if op.scalingFactor[0] != op.scalingFactor[1]:
raise tvm.error.OpAttributeUnImplemented("Upsample height and width must be equal.")
interpolationMode = "nearest_neighbor" if op.mode == 0 else "bilinear"
return _op.nn.upsampling(
inexpr, scale_h=op.scalingFactor[0], scale_w=op.scalingFactor[1], method=interpolationMode
)
def _L2NormalizeLayerParams(op, inexpr, etab):
return _op.nn.l2_normalize(inexpr, eps=op.epsilon, axis=[1])
def _LRNLayerParams(op, inexpr, etab):
par = {}
par["size"] = op.localSize
par["bias"] = op.k
par["alpha"] = op.alpha
par["beta"] = op.beta
par["axis"] = 1 # default layout is nchw
return _op.nn.lrn(data=inexpr, **par)
def _AverageLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list) or len(inexpr) < 2:
raise ValueError("Expect minimum 2 inputs")
count = len(inexpr)
_sum = inexpr[0]
for i in range(1, count):
_sum = _op.add(_sum, inexpr[i])
return _sum / _expr.const(count, dtype="float32")
def _MaxLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list) or len(inexpr) < 2:
raise ValueError("Expect minimum 2 inputs")
_max = inexpr[0]
for i in range(1, len(inexpr)):
_max = _op.maximum(_max, inexpr[i])
return _max
def _MinLayerParams(op, inexpr, etab):
if not isinstance(inexpr, list) or len(inexpr) < 2:
raise ValueError("Expect minimum 2 inputs")
_min = inexpr[0]
for i in range(1, len(inexpr)):
_min = _op.minimum(_min, inexpr[i])
return _min
def _UnaryFunctionLayerParams(op, inexpr, etab):
op_type = op.type
if op_type == op.SQRT:
return _op.sqrt(inexpr)
elif op_type == op.RSQRT:
epsilon = _expr.const(op.epsilon)
return _op.rsqrt(inexpr + epsilon)
elif op_type == op.INVERSE:
epsilon = _expr.const(op.epsilon)
return _expr.const(1.0) / (inexpr + epsilon)
elif op_type == op.POWER:
alpha = _expr.const(op.alpha)
return _op.power(inexpr, alpha)
elif op_type == op.EXP:
return _op.exp(inexpr)
elif op_type == op.LOG:
return _op.log(inexpr)
elif op_type == op.ABS:
return _op.abs(inexpr)
elif op_type == op.THRESHOLD:
alpha = _expr.const(op.alpha)
return _op.maximum(inexpr, alpha)
else:
msg = "Unary Op type value {} is not supported in frontend CoreML."
raise tvm.error.OpAttributeUnImplemented(msg.format(op_type))
def _ReduceLayerParams(op, inexpr, etab):
axis = op.axis
if axis == op.CHW:
axis = [-3, -2, -1]
elif axis == op.HW:
axis = [-2, -1]
elif axis == op.C:
axis = -3
elif axis == op.H:
axis = -2
elif axis == op.W:
axis = -1
else:
msg = "Reduce axis value {} is not supported in frontend CoreML."
raise tvm.error.OpAttributeUnImplemented(msg.format(axis))
mode = op.mode
if mode == op.SUM:
return _op.sum(inexpr, axis=axis, keepdims=True)
elif mode == op.AVG:
return _op.mean(inexpr, axis=axis, keepdims=True)
elif mode == op.PROD:
return _op.prod(inexpr, axis=axis, keepdims=True)
elif mode == op.MIN:
return _op.min(inexpr, axis=axis, keepdims=True)
elif mode == op.MAX:
return _op.max(inexpr, axis=axis, keepdims=True)
elif mode == op.ARGMAX:
return _op.argmax(inexpr, axis=axis, keepdims=True)
else:
msg = "Reduce mode value {} is not supported in frontend CoreML."
raise tvm.error.OpAttributeUnImplemented(msg.format(mode))
def _ReshapeLayerParams(op, inexpr, etab):
return _op.reshape(inexpr, op.targetShape)
def _SplitLayerParams(op, inexpr, etab):
return _op.split(inexpr, op.nOutputs, axis=-3)
_convert_map = {
"NeuralNetworkMeanImage": _NeuralNetworkMeanImage,
"NeuralNetworkImageScaler": _NeuralNetworkImageScaler,
"ConvolutionLayerParams": _ConvolutionLayerParams,
"BatchnormLayerParams": _BatchnormLayerParams,
"ActivationParams": _ActivationParams,
"ScaleLayerParams": _ScaleLayerParams,
"PoolingLayerParams": _PoolingLayerParams,
"SoftmaxLayerParams": _SoftmaxLayerParams,
"InnerProductLayerParams": _InnerProductLayerParams,
"AddLayerParams": _AddLayerParams,
"MultiplyLayerParams": _MultiplyLayerParams,
"FlattenLayerParams": _FlattenLayerParams,
"ConcatLayerParams": _ConcatLayerParams,
"PaddingLayerParams": _PaddingLayerParams,
"PermuteLayerParams": _PermuteLayerParams,
"UpsampleLayerParams": _UpsampleLayerParams,
"L2NormalizeLayerParams": _L2NormalizeLayerParams,
"LRNLayerParams": _LRNLayerParams,
"AverageLayerParams": _AverageLayerParams,
"MaxLayerParams": _MaxLayerParams,
"MinLayerParams": _MinLayerParams,
"UnaryFunctionLayerParams": _UnaryFunctionLayerParams,
"ReduceLayerParams": _ReduceLayerParams,
"ReshapeLayerParams": _ReshapeLayerParams,
"SplitLayerParams": _SplitLayerParams,
}
# SAME padding: https://www.tensorflow.org/api_guides/python/nn
def get_pad_value(data, kernel, stride):
"""Get the pad tuple of value for SAME padding
Parameters
----------
data:
1D input data
kernel:
1D input kernel
stride:
1D input stride
Returns
-------
pad tuple of value
"""
out = int(math.ceil(float(data) / float(stride)))
pad = max(0, (out - 1) * stride + kernel - data)
pad_before = pad // 2
pad_after = pad - pad_before
return pad_before, pad_after
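# Worked example (illustrative, not part of the original module): with data=224,
# kernel=3 and stride=2, out = ceil(224 / 2) = 112 and
# pad = max(0, (112 - 1) * 2 + 3 - 224) = 1, so get_pad_value(224, 3, 2) == (0, 1):
# no padding before, one pixel after, i.e. TensorFlow's BOTTOM_RIGHT_HEAVY flavour
# of SAME padding.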
def coreml_op_to_relay(op, inname, outnames, etab):
"""Convert coreml layer to a Relay expression and update the expression table.
Parameters
----------
op: a coreml protobuf bit
inname : str or list of str
Name of the input Relay expression.
outnames : str or list of str
Name of the output Relay expression.
etab : relay.frontend.common.ExprTable
The global expression table to be updated.
"""
classname = type(op).__name__
if classname not in _convert_map:
raise tvm.error.OpNotImplemented(
"Operator {} is not supported in frontend CoreML.".format(classname)
)
if isinstance(inname, _base.string_types):
insym = etab.get_expr(inname)
else:
insym = [etab.get_expr(i) for i in inname]
outs = _convert_map[classname](op, insym, etab)
if outnames:
if isinstance(outnames, _base.string_types) or len(outnames) == 1:
outname = outnames if isinstance(outnames, _base.string_types) else outnames[0]
etab.set_expr(outname, outs, force_override=True)
else:
# the number of outputs from model op and tvm relay must be same
assert len(outnames) == len(outs)
for outname, out in zip(outnames, outs):
etab.set_expr(outname, out, force_override=True)
def from_coreml(model, shape=None):
"""Convert from coreml model into Relay Function.
Parameters
----------
model:
coremltools.models.MLModel of a NeuralNetworkClassifier
shape : dict of str to int list/tuple, optional
The input shapes
Returns
-------
mod : tvm.IRModule
The relay module for compilation.
params : dict of str to tvm.nd.NDArray
The parameter dict to be used by Relay.
"""
try:
import coremltools as cm
except ImportError:
raise ImportError("The coremltools package must be installed")
assert isinstance(model, cm.models.MLModel)
spec = model.get_spec()
modeltype = spec.WhichOneof("Type")
assert modeltype in ["neuralNetworkClassifier", "neuralNetwork", "neuralNetworkRegressor"]
cc = getattr(spec, modeltype)
etab = ExprTable()
for i in spec.description.input:
input_shape = list(shape[i.name]) if shape is not None and i.name in shape else None
etab.set_expr(i.name, _expr.var(i.name, shape=input_shape))
for pp in cc.preprocessing:
whichpp = pp.WhichOneof("preprocessor")
ppmethod = getattr(pp, whichpp)
if whichpp == "scaler":
# Be careful we maybe only preprocess one input when we have multi inputs
# which is stored in pp.featureName. See unit testing verify_image_scaler
# in test_forward.py for CoreML.
for i in spec.description.input:
# we have multi inputs
if len(spec.description.input) > 1:
assert pp.featureName != ""
if i.name == pp.featureName:
coreml_op_to_relay(ppmethod, i.name, i.name, etab)
else:
assert pp.featureName == ""
coreml_op_to_relay(ppmethod, i.name, i.name, etab)
else:
coreml_op_to_relay(ppmethod, pp.featureName, pp.featureName, etab)
for l in cc.layers:
layertype = l.WhichOneof("layer")
layerop = getattr(l, layertype)
if len(l.input) == 1:
coreml_op_to_relay(layerop, l.input[0], l.output, etab)
else:
coreml_op_to_relay(layerop, list(l.input), l.output, etab)
outexpr = [
etab.get_expr(o.name) if o.name in etab.exprs else _expr.var(o.name)
for o in spec.description.output
]
# check there are multiple outputs in the model and all are there in etab
multi_out = all([bool(o.name in etab.exprs) for o in spec.description.output])
outexpr = _expr.Tuple(outexpr) if multi_out else outexpr[0]
func = _function.Function(analysis.free_vars(outexpr), outexpr)
params = {k: _nd.array(np.array(v, dtype=np.float32)) for k, v in etab.params.items()}
return IRModule.from_expr(func), params
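# Usage sketch (illustrative; requires coremltools, and the model path, input name
# and shape below are placeholders):
#     import coremltools as cm
#     mlmodel = cm.models.MLModel("mobilenet.mlmodel")
#     mod, params = from_coreml(mlmodel, shape={"image": (1, 3, 224, 224)})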
|
PypiClean
|
/msgraph-sdk-1.0.0a3.tar.gz/msgraph-sdk-1.0.0a3/msgraph/generated/drives/item/list/subscriptions/count/count_request_builder.py
|
from __future__ import annotations
from dataclasses import dataclass
from kiota_abstractions.get_path_parameters import get_path_parameters
from kiota_abstractions.method import Method
from kiota_abstractions.request_adapter import RequestAdapter
from kiota_abstractions.request_information import RequestInformation
from kiota_abstractions.request_option import RequestOption
from kiota_abstractions.response_handler import ResponseHandler
from kiota_abstractions.serialization import Parsable, ParsableFactory
from typing import Any, Callable, Dict, List, Optional, Union
from ......models.o_data_errors import o_data_error
class CountRequestBuilder():
"""
Provides operations to count the resources in the collection.
"""
def __init__(self,request_adapter: RequestAdapter, path_parameters: Optional[Union[Dict[str, Any], str]] = None) -> None:
"""
Instantiates a new CountRequestBuilder and sets the default values.
Args:
pathParameters: The raw url or the Url template parameters for the request.
requestAdapter: The request adapter to use to execute the requests.
"""
if path_parameters is None:
raise Exception("path_parameters cannot be undefined")
if request_adapter is None:
raise Exception("request_adapter cannot be undefined")
# Url template to use to build the URL for the current request builder
self.url_template: str = "{+baseurl}/drives/{drive%2Did}/list/subscriptions/$count"
url_tpl_params = get_path_parameters(path_parameters)
self.path_parameters = url_tpl_params
self.request_adapter = request_adapter
def create_get_request_information(self,request_configuration: Optional[CountRequestBuilderGetRequestConfiguration] = None) -> RequestInformation:
"""
Get the number of the resource
Args:
requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options.
Returns: RequestInformation
"""
request_info = RequestInformation()
request_info.url_template = self.url_template
request_info.path_parameters = self.path_parameters
request_info.http_method = Method.GET
request_info.headers["Accept"] = "text/plain"
if request_configuration:
request_info.add_request_headers(request_configuration.headers)
request_info.add_request_options(request_configuration.options)
return request_info
async def get(self,request_configuration: Optional[CountRequestBuilderGetRequestConfiguration] = None, response_handler: Optional[ResponseHandler] = None) -> Optional[int]:
"""
Get the number of the resource
Args:
requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options.
responseHandler: Response handler to use in place of the default response handling provided by the core service
Returns: Optional[int]
"""
request_info = self.create_get_request_information(
request_configuration
)
error_mapping: Dict[str, ParsableFactory] = {
"4XX": o_data_error.ODataError,
"5XX": o_data_error.ODataError,
}
if not self.request_adapter:
raise Exception("Http core is null")
return await self.request_adapter.send_primitive_async(request_info, "int", response_handler, error_mapping)
@dataclass
class CountRequestBuilderGetRequestConfiguration():
"""
Configuration for the request such as headers, query parameters, and middleware options.
"""
# Request headers
headers: Optional[Dict[str, str]] = None
# Request options
options: Optional[List[RequestOption]] = None
|
PypiClean
|
/pic_dl-0.2.0-py3-none-any.whl/pic_dl/utils.py
|
import os
import html
import requests
import re
import threading
import logging
logging.getLogger("requests").setLevel("WARNING")
class LibError(Exception):
pass
def r0(pattern, text):
_r = re.findall(pattern, text)
return _r
def r1(pattern, text):
_r = re.search(pattern, text)
return _r
def escape_file_path(path):
path = path.replace("/", "-")
path = path.replace('"', "-")
path = path.replace("\\", "-")
path = path.replace("*", "-")
path = path.replace("?", "-")
return path
def r_get(link, headers=None, proxy=None):
if not headers:
headers = {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Charset": "UTF-8,*;q=0.5",
"Accept-Encoding": "gzip,deflate,sdch",
"Accept-Language": "en-US,en;q=0.8",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0"
}
res = requests.get(link,
proxies={"http": proxy, "https": proxy},
headers=headers, timeout=20)
return res
def to_url(u):
u = u.replace("http", "")
u = u.replace("https", "")
u = u.replace("://", "")
return u
def multithre_downloader(threads=4, dic=None, **kwargs):
logger = logging.getLogger()
proxy = kwargs.get("proxy", None)
mod = kwargs.get("mod", None)
dic["author"] = html.unescape(dic["author"])
dic["title"] = html.unescape(dic["title"])
pic_links = list(set(dic["pics"]))
from queue import Queue
q = Queue()
for i in pic_links:
path = ""
path = path + dic["author"] + " - " if dic["author"] != "" else path
path = path + dic["title"] + " - " if dic["title"] != "" else path
path += i[1]
path = escape_file_path(path)
q.put((i[0], path, proxy))
def worker():
def downloader(link, path, proxy=None):
logger.info("{}: Downloading {}/{}".format(mod, len(pic_links)-q.qsize(), len(pic_links)))
if os.path.isfile(path):
logger.debug("{}: Already exists, passing {}".format(mod, path))
return 0
content = r_get(link, proxy=proxy).content
if len(path) > 255:
path = path[-255:]
with open(path, "wb") as f:
f.write(content)
return 0
nonlocal q
while not q.empty():
job = q.get()
logger.debug("{}: Processing {}, {} left.".format(mod, job[0], q.qsize()))
try:
downloader(job[0], job[1], proxy=job[2])
except:
logger.warning("{}: Error {}, {} left.".format(mod, job[0], q.qsize()))
finally:
q.task_done()
return 0
for i in range(threads):
threading.Thread(target=worker, daemon=True).start()
q.join()
logger.info("{}: DONE".format(mod))
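# Usage sketch (illustrative): `dic` carries the metadata produced by a site module,
# with "pics" holding (url, filename) pairs; every value below is a placeholder.
#     multithre_downloader(
#         threads=4,
#         dic={"author": "someone", "title": "an album",
#              "pics": [("http://example.com/1.jpg", "1.jpg")]},
#         proxy=None, mod="example",
#     )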
|
PypiClean
|
/nepsy-0.1.5.0-py3-none-any.whl/nep_aldebaran/Bumpers.py
|
# Luis Enrique Coronado Zuniga
# You are free to use, change, or redistribute the code in any way you wish
# but please maintain the name of the original author.
# This code comes with no warranty of any kind.
from naoqi import ALProxy
from naoqi import ALBroker
from naoqi import ALModule
import time
import os
import sys
import copy
# Template action function:
"""
class name:
def __init__(self,ip,port=9559):
self.ip = ip
self.port = port
def onLoad(self):
try:
proxy_name "AL.."
self.proxy = ALProxy(proxy_name, self.ip, self.port)
print ( proxy_name + " success")
except:
print ( proxy_name + " error")
# onRun for actions, onInput for perception.
def onRun(self, input_ = "", parameters = {}, parallel = "false"):
def onStop(self, input_ = "", parameters = {}, parallel = "false"):
"""
# Template module:
"""
class NameModule(ALModule):
def __init__(self, name, robot, ip, port = 9559):
ALModule.__init__(self, name)
self.name = name
self.robot = robot
self.ip = ip
self.port = port
try:
proxy_name = "AL.."
self.proxy = ALProxy(proxy_name,self.ip,self.port)
self.memory = ALProxy("ALMemory",self.ip, self.port)
print ( proxy_name + " success")
try:
self.memory.subscribeToEvent(EventName, self.name, "EventListener")
except:
self.memory.unsubscribeToEvent(EventName, self.name)
self.memory.subscribeToEvent(EventName, self.name, "EventListener")
except:
print ( proxy_name + " error")
def EventListener(self, key, value, message):
"""
class Bumpers:
def __init__(self, memory, sharo, robot):
self.memoryProxy = memory
self.robot = robot
self.sharo = sharo
self.run = True
def onRun(self):
import copy
old_proximity = "none"
proximity = "none"
error = False
body = { "left": 0, "right": 0, "back": 0}
old_body = copy.deepcopy(body)
while self.run:
try:
r_value = self.memoryProxy.getData("Device/SubDeviceList/Platform/FrontRight/Bumper/Sensor/Value")
l_value = self.memoryProxy.getData("Device/SubDeviceList/Platform/FrontLeft/Bumper/Sensor/Value")
b_value = self.memoryProxy.getData("Device/SubDeviceList/Platform/Back/Bumper/Sensor/Value")
if l_value > 0.4:
value = 1
if body["left"] != value:
data = {"primitive":"bumpers", "input":{"left":value}, "robot":self.robot}
self.sharo.send_json(data)
body["left"] = value
print data
else:
value = 0
if body["left"] != value:
data = {"primitive":"bumpers", "input":{"left":value}, "robot":self.robot}
self.sharo.send_json(data)
body["left"] = value
if r_value > 0.4:
value = 1
if body["right"] != value:
data = {"primitive":"bumpers", "input":{"right":value}, "robot":self.robot}
self.sharo.send_json(data)
body["right"] = value
print data
else:
value = 0
if body["right"] != value:
data = {"primitive":"bumpers", "input":{"right":value}, "robot":self.robot}
self.sharo.send_json(data)
body["right"] = value
if b_value > 0.4:
value = 1
if body["back"] != value:
data = {"primitive":"bumpers", "input":{"back":value}, "robot":self.robot}
self.sharo.send_json(data)
body["back"] = value
print data
else:
value = 0
if body["back"] != value:
data = {"primitive":"bumpers", "input":{"back":value}, "robot":self.robot}
self.sharo.send_json(data)
body["back"] = value
old_body = copy.deepcopy(body)
except:
pass
time.sleep(.01)
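# Illustrative usage sketch (IP address is a placeholder; "sharo" must be an
# object exposing send_json(), e.g. a nep publisher):
#     memory = ALProxy("ALMemory", "192.168.1.10", 9559)
#     bumpers = Bumpers(memory, sharo, "pepper")
#     bumpers.onRun()  # blocks, polling the bumper sensors every 10 ms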
/fincity-django-allauth-0.40.0.tar.gz/fincity-django-allauth-0.40.0/allauth/socialaccount/providers/oauth/client.py
import requests
from django.http import HttpResponseRedirect
from django.utils.http import urlencode
from django.utils.translation import gettext as _
from requests_oauthlib import OAuth1
from allauth.compat import parse_qsl, urlparse
from allauth.utils import build_absolute_uri, get_request_param
def get_token_prefix(url):
"""
Returns a prefix for the token to store in the session so we can hold
more than one single oauth provider's access key in the session.
Example:
The request token url ``http://twitter.com/oauth/request_token``
returns ``twitter.com``
"""
return urlparse(url).netloc
class OAuthError(Exception):
pass
class OAuthClient(object):
def __init__(self, request, consumer_key, consumer_secret,
request_token_url, access_token_url, callback_url,
parameters=None, provider=None):
self.request = request
self.request_token_url = request_token_url
self.access_token_url = access_token_url
self.consumer_key = consumer_key
self.consumer_secret = consumer_secret
self.parameters = parameters
self.callback_url = callback_url
self.provider = provider
self.errors = []
self.request_token = None
self.access_token = None
def _get_request_token(self):
"""
Obtain a temporary request token to authorize an access token and to
sign the request to obtain the access token
"""
if self.request_token is None:
get_params = {}
if self.parameters:
get_params.update(self.parameters)
get_params['oauth_callback'] = build_absolute_uri(
self.request, self.callback_url)
rt_url = self.request_token_url + '?' + urlencode(get_params)
oauth = OAuth1(self.consumer_key,
client_secret=self.consumer_secret)
response = requests.post(url=rt_url, auth=oauth)
if response.status_code not in [200, 201]:
raise OAuthError(
_('Invalid response while obtaining request token'
' from "%s".') % get_token_prefix(
self.request_token_url))
self.request_token = dict(parse_qsl(response.text))
self.request.session['oauth_%s_request_token' % get_token_prefix(
self.request_token_url)] = self.request_token
return self.request_token
def get_access_token(self):
"""
Obtain the access token to access private resources at the API
endpoint.
"""
if self.access_token is None:
request_token = self._get_rt_from_session()
oauth = OAuth1(
self.consumer_key,
client_secret=self.consumer_secret,
resource_owner_key=request_token['oauth_token'],
resource_owner_secret=request_token['oauth_token_secret'])
at_url = self.access_token_url
# Passing along oauth_verifier is required according to:
# http://groups.google.com/group/twitter-development-talk/browse_frm/thread/472500cfe9e7cdb9#
# Though, the custom oauth_callback seems to work without it?
oauth_verifier = get_request_param(self.request, 'oauth_verifier')
if oauth_verifier:
at_url = at_url + '?' + urlencode(
{'oauth_verifier': oauth_verifier})
response = requests.post(url=at_url, auth=oauth)
if response.status_code not in [200, 201]:
raise OAuthError(
_('Invalid response while obtaining access token'
' from "%s".') % get_token_prefix(
self.request_token_url))
self.access_token = dict(parse_qsl(response.text))
self.request.session['oauth_%s_access_token' % get_token_prefix(
self.request_token_url)] = self.access_token
return self.access_token
def _get_rt_from_session(self):
"""
Returns the request token cached in the session by
``_get_request_token``
"""
try:
return self.request.session['oauth_%s_request_token'
% get_token_prefix(
self.request_token_url)]
except KeyError:
raise OAuthError(_('No request token saved for "%s".')
% get_token_prefix(self.request_token_url))
def is_valid(self):
try:
self._get_rt_from_session()
self.get_access_token()
except OAuthError as e:
self.errors.append(e.args[0])
return False
return True
def get_redirect(self, authorization_url, extra_params):
"""
Returns a ``HttpResponseRedirect`` object to redirect the user
to the URL the OAuth provider handles authorization.
"""
request_token = self._get_request_token()
params = {'oauth_token': request_token['oauth_token'],
'oauth_callback': self.request.build_absolute_uri(
self.callback_url)}
params.update(extra_params)
url = authorization_url + '?' + urlencode(params)
return HttpResponseRedirect(url)
class OAuth(object):
"""
Base class to perform oauth signed requests from access keys saved
in a user's session. See the ``OAuthTwitter`` class below for an
example.
"""
def __init__(self, request, consumer_key, secret_key, request_token_url):
self.request = request
self.consumer_key = consumer_key
self.secret_key = secret_key
self.request_token_url = request_token_url
def _get_at_from_session(self):
"""
Get the saved access token for private resources from the session.
"""
try:
return self.request.session['oauth_%s_access_token'
% get_token_prefix(
self.request_token_url)]
except KeyError:
raise OAuthError(
_('No access token saved for "%s".')
% get_token_prefix(self.request_token_url))
def query(self, url, method="GET", params=dict(), headers=dict()):
"""
Request an API endpoint at ``url`` with ``params`` being either the
POST or GET data.
"""
access_token = self._get_at_from_session()
oauth = OAuth1(
self.consumer_key,
client_secret=self.secret_key,
resource_owner_key=access_token['oauth_token'],
resource_owner_secret=access_token['oauth_token_secret'])
response = getattr(requests, method.lower())(url,
auth=oauth,
headers=headers,
params=params)
if response.status_code != 200:
raise OAuthError(
_('No access to private resources at "%s".')
% get_token_prefix(self.request_token_url))
return response.text
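class ExampleOAuthAPI(OAuth):
    """
    Illustrative sketch only (not shipped with this package): providers
    subclass ``OAuth`` and call ``query`` against their API. The URL below is
    a placeholder.
    """
    url = 'https://api.example.com/1/account/verify_credentials.json'

    def get_user_info(self):
        return self.query(self.url)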
/django_declarative_apis-0.29.0-py3-none-any.whl/django_declarative_apis/authentication/oauthlib/endpoint.py
import logging
from oauthlib.oauth1 import SignatureOnlyEndpoint
from oauthlib.oauth1.rfc5849 import SIGNATURE_RSA
from oauthlib.oauth1.rfc5849 import errors, signature
from django_declarative_apis.resources.utils import preprocess_rsa_key
log = logging.getLogger(__name__)
class TweakedSignatureOnlyEndpoint(SignatureOnlyEndpoint):
"""An endpoint only responsible for verifying an oauthlib signature.
This class modifies oauthlib.oauth1.SignatureOnlyEndpoint so that
the validate_request() method will support returning an error
message to support our API OAuth error messages
Altered lines are marked with # TOOPHER
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.validation_error_message = ""
def validate_request(self, uri, http_method="GET", body=None, headers=None):
"""Validate a signed OAuth request.
:param uri: The full URI of the token request.
:param http_method: A valid HTTP verb, i.e. GET, POST, PUT, HEAD, etc.
:param body: The request body as a string.
:param headers: The request headers as a dict.
:returns: A tuple of 2 elements.
1. True if valid, False otherwise.
2. An oauthlib.common.Request object.
"""
try:
request = self._create_request(uri, http_method, body, headers)
except errors.OAuth1Error as e: # noqa
return False, None
try:
self._check_transport_security(request)
self._check_mandatory_parameters(request)
except errors.OAuth1Error as e:
self.validation_error_message = e.description # TOOPHER
return False, request
if not self.request_validator.validate_timestamp_and_nonce(
request.client_key, request.timestamp, request.nonce, request
):
return False, request
# The server SHOULD return a 401 (Unauthorized) status code when
# receiving a request with invalid client credentials.
# Note: This is postponed in order to avoid timing attacks, instead
# a dummy client is assigned and used to maintain near constant
# time request verification.
#
# Note that early exit would enable client enumeration
valid_client = self.request_validator.validate_client_key(
request.client_key, request
)
if not valid_client:
request.client_key = self.request_validator.dummy_client
valid_signature = self._check_signature(request)
# We delay checking validity until the very end, using dummy values for
# calculations and fetching secrets/keys to ensure the flow of every
# request remains almost identical regardless of whether valid values
# have been supplied. This ensures near constant time execution and
# prevents malicious users from guessing sensitive information
v = all((valid_client, valid_signature))
if not v:
log.info("[Failure] request verification failed.")
log.info("Valid client: %s", valid_client)
log.info("Valid signature: %s", valid_signature)
if valid_client and not valid_signature: # TOOPHER
norm_params = signature.normalize_parameters(request.params) # TOOPHER
uri = signature.base_string_uri(request.uri) # TOOPHER
base_signing_string = signature.signature_base_string(
request.http_method, uri, norm_params
) # TOOPHER
self.validation_error_message = (
"Invalid signature. Expected signature base string: {0}".format(
base_signing_string
)
) # TOOPHER
return v, request
def _check_signature(self, request):
if request.signature_method == SIGNATURE_RSA: # pragma: nocover
key_str = self.request_validator.get_rsa_key(request.client_key, request)
key_str = preprocess_rsa_key(key_str)
return signature.verify_rsa_sha1(request, key_str)
return super()._check_signature(request)
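# Illustrative usage sketch (assumes an application-defined oauthlib
# RequestValidator; the URI and header values are placeholders, not real
# credentials).
def _example_validate(validator, uri, authorization_header):
    endpoint = TweakedSignatureOnlyEndpoint(validator)
    valid, _request = endpoint.validate_request(
        uri, http_method="GET", headers={"Authorization": authorization_header}
    )
    if not valid:
        log.info("OAuth validation failed: %s", endpoint.validation_error_message)
    return valid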
/HalRing-2.0.4-py3-none-any.whl/halring/svn/halring_svn.py
import operator
import os
import re
from collections import Counter
from halring.windows.halring_os import OsUtil
class SvnUtil(object):
def __init__(self, username, password):
"""
Basic connection information.
Author: Chenying
:param username: account name
:param password: password
"""
self._username = username
self._password = password
def svn_info(self, remote_path):
"""
Get detailed information.
Author: Chenying
:param remote_path: remote path
:return: info dict | IS_NOT_EXIST
"""
# SVN INFO command: path, username, password
svn_info_cmd = 'svn info ' + remote_path + ' --username ' + self._username + ' --password ' + self._password
# Run the SVN INFO command and read its output line by line
run_cmd = OsUtil().popen_no_block(svn_info_cmd)
# Split the output into a list of lines
info_splitlines_list = run_cmd.splitlines()
return 'IS_NOT_EXIST' if not info_splitlines_list else dict(
[info_lines.split(': ') for info_lines in info_splitlines_list if not re.match(info_lines, ': ')])
def svn_info_get_revision(self, remote_path):
"""
Get the revision number.
:param remote_path: remote path
:return: REVISION|IS_NOT_EXIST
"""
if self.svn_info(remote_path) == 'IS_NOT_EXIST':
return 'IS_NOT_EXIST'
else:
# Get detailed information for the path
return self.svn_info(remote_path)['Revision']
def svn_info_get_commit_id(self, remote_path):
"""
Get the commit id.
:param remote_path: remote path
:return: COMMIT_ID|IS_NOT_EXIST
"""
if self.svn_info(remote_path) == 'IS_NOT_EXIST':
return 'IS_NOT_EXIST'
else:
# Get detailed information for the path
return self.svn_info(remote_path)['Last Changed Rev']
def svn_info_is_file_or_directory(self, remote_path):
"""
Determine whether the remote path is a file or a directory.
Author: Chenying
:param remote_path: remote path
:return: IS_FILE_EXIST|IS_DIRECTORY_EXIST|IS_NOT_EXIST
"""
if self.svn_info(remote_path) == 'IS_NOT_EXIST':
return 'IS_NOT_EXIST'
else:
# Get detailed information for the path
info_dict = self.svn_info(remote_path)['Node Kind']
# Determine the type of the remote path: IS_NOT_EXIST|IS_FILE_EXIST|IS_DIRECTORY_EXIST
return 'IS_FILE_EXIST' if info_dict == 'file' else 'IS_DIRECTORY_EXIST'
def svn_get_filelist_under_directory(self, remote_path):
"""
Get the file list under a remote directory.
Author: Chenying
:param remote_path: destination path
:return: depth, list of files under the destination path | IS_NOT_EXIST
"""
remote_type = self.svn_info_is_file_or_directory(remote_path)
if remote_type == 'IS_NOT_EXIST':
return 'IS_NOT_EXIST'
elif remote_type == 'IS_FILE_EXIST':
return 'IS_FILE_EXIST'
else:
# SVN LIST command
svn_list_cmd = 'svn list ' + remote_path + ' --username ' + self._username + ' --password ' + \
self._password + ' --recursive'
# Run the SVN LIST command and read its output line by line
run_cmd = OsUtil().popen_no_block(svn_list_cmd)
# Split the output into a list of lines
list_splitlines_list = run_cmd.splitlines()
directory_level_deep = max([list_line.count('/') for list_line in list_splitlines_list if not re.match(
list_line, '/')])
return directory_level_deep, list_splitlines_list
def svn_export(self, remote_path, local_path):
"""
Export (check out) from the remote path.
Author: Chenying
:param remote_path: destination (remote) path
:param local_path: local path
:return: success, local directory depth, local file list | fail, list of missing local files
"""
if not local_path[-1] == "/":
local_path += "/"
# SVN EXPORT command
svn_export_cmd = 'svn export ' + remote_path + ' ' + local_path + ' --username ' + self._username \
+ ' --password ' + self._password + ' --force'
# Run the SVN EXPORT command
OsUtil().popen_block(svn_export_cmd)
# Verify that the local file list matches the remote file list
local_directories_and_files = []
# Walk the local tree and collect sub-directories, skipping hidden ones
[local_directories_and_files.append(
os.path.join(rs, d[:]).replace(local_path, '').replace("\\", '/') + '/') for rs, ds, fs in
os.walk(local_path) for d in ds if not d[0] == '.']
# Walk the local tree and collect all files, skipping hidden ones
[local_directories_and_files.append(
os.path.join(rs, f).replace(local_path, '').replace("\\", '/')) for rs, ds, fs in
os.walk(local_path) for f in fs if not f[0] == '.']
local_list = [local_info for local_info in local_directories_and_files]
local_list.sort()
local_deep_level = max(list_line.count('/') for list_line in local_list)
remote_list = self.svn_get_filelist_under_directory(remote_path)[1]
miss_list = [miss for miss in local_list if miss not in remote_list]
return 'SUCCESS', local_deep_level, local_list if not miss_list else 'FAILED', miss_list
def svn_mkdir(self, remote_path):
"""
Create the destination path.
Author: Chenying
:param remote_path: destination path
:return: success: destination path created, remote path info | fail: destination path creation failed
"""
code = ['success', 'fail']
# SVN MKDIR command
svn_mkdir_cmd = 'svn mkdir ' + remote_path + ' -m "Create directory"' + ' --username ' + self._username + \
'--password ' + self._password
# Check whether the remote path already exists
if not self.svn_info(remote_path):
OsUtil().popen_block(svn_mkdir_cmd)
else:
return 'Destination path already exists!'
# Verify that the destination path was created
remote_path_mkdir_dicts = self.svn_info(remote_path)
if not remote_path_mkdir_dicts:
return 'Failed to create destination path!'
else:
return code[0], self.svn_info(remote_path)
def svn_delete(self, remote_path):
"""
Delete the destination path.
Author: Chenying
:param remote_path: destination path
:return: success: destination path deleted | fail: deletion failed, destination path info
"""
code = ['success', 'fail']
# SVN DELETE command
svn_delete_cmd = 'svn delete -m "delete trunk" ' + remote_path + ' --username ' + self._username + \
' --password ' + self._password
# Check whether the remote path exists
if not self.svn_info(remote_path):
return 'Destination path does not exist!'
else:
OsUtil().popen_block(svn_delete_cmd)
# Verify that the destination path was deleted
remote_path_deleted_info = self.svn_info(remote_path)
if not remote_path_deleted_info:
return code[0]
else:
return code[1]
def svn_add(self, remote_path, source_path):
"""
Upload files.
Author: Chenying
:param remote_path: destination path
:param source_path: source path
:return: success: files uploaded to destination | fail: upload failed, list of missing files
"""
code = ['success', 'fail']
# SVN ADD command
svn_add_cmd = 'svn --force add ' + source_path + ' --username ' + self._username + ' --password ' + \
self._password
# SVN COMMIT command
svn_commit_cmd = 'svn -m %Revision% commit ' + source_path + ' --username ' + self._username + \
' --password ' + self._password
OsUtil().popen_block(svn_add_cmd)
OsUtil().popen_block(svn_commit_cmd)
# Verify that the destination file list matches the source file list
remote_list = self.svn_get_filelist_under_directory(remote_path)[1]
source_list = self.svn_get_filelist_under_directory(source_path)[1]
miss_list = [miss for miss in source_list if miss not in remote_list]
if not miss_list:
return code[0]
else:
return code[1], miss_list
def svn_cp(self, remote_path, source_path):
"""
Copy files.
Author: Chenying
:param remote_path: destination path
:param source_path: source path
:return: success: files copied to destination | fail: copy failed
"""
code = ['success', 'fail']
# SVN CP command
svn_cp_cmd = 'svn -m "CP" cp ' + source_path + ' ' + remote_path + ' --username ' + self._username + \
' --password ' + self._password
OsUtil().popen_block(svn_cp_cmd)
# Check whether the source revision matches the destination's last-changed revision
remote_info = self.svn_info(remote_path)
source_info = self.svn_info(source_path)
if not operator.eq(remote_info['Last Changed Rev'], source_info['Revision']):
return code[1]
else:
return code[0]
def svn_diff_text(self, path1, path2):
"""
Diff statistics.
:param path1: directory
:param path2: directory
:return: sum_counts: total changed lines | subtractions_counts: deleted lines | additions_counts: added lines | difference_files: number of changed files
"""
# SVN DIFF command
svn_diff_cmd = 'svn diff ' + path1 + ' ' + path2 + ' --username ' + self._username + \
' --password ' + self._password
# Run the SVN DIFF command and capture its output
result = OsUtil().popen_block2(svn_diff_cmd)
if result.get("status"):
diff_splitlines_list = result.get("message").decode('utf-8', 'ignore').strip().splitlines()
else:
return {
"status": "error",
"message": result.get("message").decode('utf-8')
}
# Collect all diff lines
special_chars_lists1 = [special_chars for special_chars in diff_splitlines_list if special_chars != ''
# filter out all header/metadata lines
if not special_chars.startswith('Index: ')
if not special_chars.startswith('===')
if not special_chars.startswith('--- ')
if not special_chars.startswith('+++ ')
if not special_chars.startswith('@@ ')
if not special_chars.startswith('Cannot display: ')
if not special_chars.startswith('svn:')
if not special_chars.startswith('Property changes on: ')
if not special_chars.startswith('Deleted: ')
if not special_chars.startswith('##')
if not special_chars.startswith('-application/')
if not special_chars.startswith('+application/')
if not special_chars.startswith('\\')
if not special_chars.startswith('___')
if not special_chars.startswith('Added:')
if not special_chars.startswith(' ')
if not special_chars.startswith('Modified:')]
special_chars_lists2 = list(set(special_chars for special_chars in diff_splitlines_list if special_chars !=
'' if special_chars.startswith('Index: ')))
special_chars_lists = special_chars_lists1 + special_chars_lists2
print(special_chars_lists2)
# Count additions, deletions and changed files; Counter returns a dict like {'+': n, '-': n, 'I': n}
additions_subtractions_counts_dict = Counter(
[additions_subtractions_lists[:1] for additions_subtractions_lists in special_chars_lists])
additions_counts = additions_subtractions_counts_dict['+']
subtractions_counts = additions_subtractions_counts_dict['-']
sum_counts = additions_counts + subtractions_counts
difference_files = additions_subtractions_counts_dict['I']
return {
"status": "success",
"message": {
'total': "{0}".format(sum_counts),
'add_count': "{0}".format(additions_counts),
'del_count': "{0}".format(subtractions_counts),
'difference_files': "{0}".format(difference_files)
}}
def svn_get_filenums_under_directory(self, remote_path):
"""
Get the total number of files under a path.
:param remote_path: remote path
:return: total number of files
"""
under_directories_list = self.svn_get_filelist_under_directory(remote_path)[1]
files_under_directories_numbers = len([files for files in under_directories_list if not files[-1] == '/'])
return files_under_directories_numbers
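# Illustrative usage sketch (repository URL and credentials are placeholders):
if __name__ == "__main__":
    util = SvnUtil("user", "password")
    info = util.svn_info("http://svn.example.com/repo/trunk")
    if info != 'IS_NOT_EXIST':
        print(info['Revision'], info['Last Changed Rev'])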
/hexagonit.socialbutton-0.11.zip/hexagonit.socialbutton-0.11/src/hexagonit/socialbutton/browser/template.py
from Products.Five import BrowserView
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from hexagonit.socialbutton import _
from hexagonit.socialbutton.data import SocialButtonCode
from hexagonit.socialbutton.data import SocialButtonConfig
from hexagonit.socialbutton.interfaces import IAddSocialButtonCode
from hexagonit.socialbutton.interfaces import IAddSocialButtonConfig
from hexagonit.socialbutton.interfaces import ISocialButtonCode
from hexagonit.socialbutton.interfaces import ISocialButtonConfig
from hexagonit.socialbutton.utility import IConvertToUnicode
from plone.registry.interfaces import IRegistry
from plone.z3cform.crud import crud
from zope.component import getUtility
class BaseCrudForm(crud.CrudForm):
"""Base Crud Form"""
def add(self, data):
"""Add new data to registry.
:param data: data.
:type data: dict
"""
data = getUtility(IConvertToUnicode)(data)
registry = getUtility(IRegistry)
items = registry[self._record_name] or {}
items[data.pop('code_id')] = data
registry[self._record_name] = items
def get_items(self):
"""Get items to show on the form."""
registry = getUtility(IRegistry)
items = registry[self._record_name]
data = []
for key in items:
code_id = str(key)
instance = self._class(code_id, **items[key])
data.append((code_id, instance))
return data
def remove(self, (code_id, item)):
"""Delete data from registry.
:param code_id: ID for social button.
:type code_id: unicode
:param item: item instance.
:type item: obj
"""
registry = getUtility(IRegistry)
items = registry[self._record_name]
del items[code_id]
registry[self._record_name] = items
def before_update(self, item, data):
"""Update field values.
:param item: data instance.
:type item: object
:param data: Field key and value.
:type data: dict
"""
registry = getUtility(IRegistry)
items = registry[self._record_name]
data = getUtility(IConvertToUnicode)(data)
items[item.code_id] = data
registry[self._record_name] = items
class SocialButtonCodeForm(BaseCrudForm):
"""Form for updating social button code at ControlPanel."""
label = _(u'Social Button Code Setting')
update_schema = ISocialButtonCode
_record_name = 'hexagonit.socialbutton.codes'
_class = SocialButtonCode
@property
def add_schema(self):
return IAddSocialButtonCode
def update(self):
super(self.__class__, self).update()
edit_forms = self.subforms[0]
forms = edit_forms.subforms
cols = 70
for form in forms:
code_text_widget = form.widgets['code_text']
code_text_widget.cols = cols
add_form = self.subforms[1]
add_form.widgets['code_text'].cols = cols
class SocialButtonConfigForm(BaseCrudForm):
"""Form for updating social button configuration at ControlPanel."""
label = _(u'Social Button Configuration')
update_schema = ISocialButtonConfig
_record_name = 'hexagonit.socialbutton.config'
_class = SocialButtonConfig
@property
def add_schema(self):
return IAddSocialButtonConfig
class BaseControlPanelView(BrowserView):
__call__ = ViewPageTemplateFile('templates/controlpanel.pt')
class SocialButtonCodeControlPanelView(BaseControlPanelView):
def form(self):
return SocialButtonCodeForm(self.context, self.request)()
class SocialButtonConfigControlPanelView(BaseControlPanelView):
def form(self):
return SocialButtonConfigForm(self.context, self.request)()
/pspdfutils-3.2.0.tar.gz/pspdfutils-3.2.0/psutils/transformers.py
import io
import sys
import shutil
from abc import ABC, abstractmethod
from contextlib import contextmanager
from typing import List, Optional, Union, Iterator, IO
from warnings import warn
from pypdf import PdfWriter, Transformation
from pypdf.annotations import PolyLine
from .argparse import parserange
from .io import setup_input_and_output
from .readers import PsReader, PdfReader, document_reader
from .types import Rectangle, Range, Offset, PageSpec, PageList
from .warnings import die
def page_index_to_page_number(
spec: PageSpec, maxpage: int, modulo: int, pagebase: int
) -> int:
return (maxpage - pagebase - modulo if spec.reversed else pagebase) + spec.pageno
class DocumentTransform(ABC):
def __init__(self) -> None:
self.in_size: Optional[Rectangle]
self.specs: List[List[PageSpec]]
@abstractmethod
def pages(self) -> int:
pass
@abstractmethod
def write_header(self, maxpage: int, modulo: int) -> None:
pass
@abstractmethod
def write_page_comment(self, pagelabel: str, outputpage: int) -> None:
pass
@abstractmethod
def write_page(
self,
page_list: PageList,
outputpage: int,
page_specs: List[PageSpec],
maxpage: int,
modulo: int,
pagebase: int,
) -> None:
pass
@abstractmethod
def finalize(self) -> None:
pass
def transform_pages(
self,
pagerange: Optional[List[Range]],
flipping: bool,
reverse: bool,
odd: bool,
even: bool,
modulo: int,
verbose: bool,
) -> None:
if self.in_size is None and flipping:
die("input page size must be set when flipping the page")
# Page spec routines for page rearrangement
def abs_page(n: int) -> int:
if n < 0:
n += self.pages() + 1
n = max(n, 1)
return n
def transform_pages(
pagerange: Optional[List[Range]], odd: bool, even: bool, reverse: bool
) -> None:
outputpage = 0
# If no page range given, select all pages
if pagerange is None:
pagerange = parserange("1-_1")
# Normalize end-relative pageranges
for range_ in pagerange:
range_.start = abs_page(range_.start)
range_.end = abs_page(range_.end)
# Get list of pages
page_list = PageList(self.pages(), pagerange, reverse, odd, even)
# Calculate highest page number output (including any blanks)
maxpage = (
page_list.num_pages()
+ (modulo - page_list.num_pages() % modulo) % modulo
)
# Rearrange pages
self.write_header(maxpage, modulo)
pagebase = 0
while pagebase < maxpage:
for page in self.specs:
# Construct the page label from the input page numbers
pagelabels = []
for spec in page:
n = page_list.real_page(
page_index_to_page_number(spec, maxpage, modulo, pagebase)
)
pagelabels.append(str(n + 1) if n >= 0 else "*")
pagelabel = ",".join(pagelabels)
outputpage += 1
self.write_page_comment(pagelabel, outputpage)
if verbose:
sys.stderr.write(f"[{pagelabel}] ")
self.write_page(
page_list, outputpage, page, maxpage, modulo, pagebase
)
pagebase += modulo
self.finalize()
if verbose:
print(f"\nWrote {outputpage} pages", file=sys.stderr)
# Output the pages
transform_pages(pagerange, odd, even, reverse)
# FIXME: Extract PsWriter.
class PsTransform(DocumentTransform): # pylint: disable=too-many-instance-attributes
# PStoPS procset
# Wrap showpage, erasepage and copypage in our own versions.
# Nullify paper size operators.
procset = """userdict begin
[/showpage/erasepage/copypage]{dup where{pop dup load
type/operatortype eq{ /PStoPSenablepage cvx 1 index
load 1 array astore cvx {} bind /ifelse cvx 4 array
astore cvx def}{pop}ifelse}{pop}ifelse}forall
/PStoPSenablepage true def
[/letter/legal/executivepage/a4/a4small/b5/com10envelope
/monarchenvelope/c5envelope/dlenvelope/lettersmall/note
/folio/quarto/a5]{dup where{dup wcheck{exch{}put}
{pop{}def}ifelse}{pop}ifelse}forall
/setpagedevice {pop}bind 1 index where{dup wcheck{3 1 roll put}
{pop def}ifelse}{def}ifelse
/PStoPSmatrix matrix currentmatrix def
/PStoPSxform matrix def/PStoPSclip{clippath}def
/defaultmatrix{PStoPSmatrix exch PStoPSxform exch concatmatrix}bind def
/initmatrix{matrix defaultmatrix setmatrix}bind def
/initclip[{matrix currentmatrix PStoPSmatrix setmatrix
[{currentpoint}stopped{$error/newerror false put{newpath}}
{/newpath cvx 3 1 roll/moveto cvx 4 array astore cvx}ifelse]
{[/newpath cvx{/moveto cvx}{/lineto cvx}
{/curveto cvx}{/closepath cvx}pathforall]cvx exch pop}
stopped{$error/errorname get/invalidaccess eq{cleartomark
$error/newerror false put cvx exec}{stop}ifelse}if}bind aload pop
/initclip dup load dup type dup/operatortype eq{pop exch pop}
{dup/arraytype eq exch/packedarraytype eq or
{dup xcheck{exch pop aload pop}{pop cvx}ifelse}
{pop cvx}ifelse}ifelse
{newpath PStoPSclip clip newpath exec setmatrix} bind aload pop]cvx def
/initgraphics{initmatrix newpath initclip 1 setlinewidth
0 setlinecap 0 setlinejoin []0 setdash 0 setgray
10 setmiterlimit}bind def
end"""
def __init__(
self,
reader: PsReader,
outfile: IO[bytes],
size: Optional[Rectangle],
in_size: Optional[Rectangle],
specs: List[List[PageSpec]],
draw: float,
in_size_guessed: bool,
):
super().__init__()
self.reader = reader
self.outfile = outfile
self.draw = draw
self.specs = specs
self.in_size_guessed = in_size_guessed
self.use_procset = any(
len(page) > 1 or page[0].has_transform() for page in specs
)
self.size = size
if in_size is None:
if reader.size is not None:
in_size = reader.size
elif size is not None:
in_size = size
self.in_size = in_size
def pages(self) -> int:
return self.reader.num_pages
def write_header(self, maxpage: int, modulo: int) -> None:
# FIXME: doesn't cope properly with loaded definitions
ignorelist = [] if self.size is None else self.reader.sizeheaders
self.reader.infile.seek(0)
if self.reader.pagescmt:
self.fcopy(self.reader.pagescmt, ignorelist)
try:
_ = self.reader.infile.readline()
except IOError:
die("I/O error in header", 2)
if self.size is not None:
if self.in_size_guessed:
warn(f"required input paper size was guessed as {self.in_size}")
self.write(
f"%%DocumentMedia: plain {int(self.size.width)} {int(self.size.height)} 0 () ()"
)
self.write(
f"%%BoundingBox: 0 0 {int(self.size.width)} {int(self.size.height)}"
)
pagesperspec = len(self.specs)
self.write(f"%%Pages: {int(maxpage / modulo) * pagesperspec} 0")
self.fcopy(self.reader.headerpos, ignorelist)
if self.use_procset:
self.write(f"%%BeginProcSet: PStoPS 1 15\n{self.procset}")
self.write("%%EndProcSet")
# Write prologue to end of setup section, skipping our procset if present
# and we're outputting it (this allows us to upgrade our procset)
if self.reader.procset_pos and self.use_procset:
self.fcopy(self.reader.procset_pos.start, [])
self.reader.infile.seek(self.reader.procset_pos.stop)
self.fcopy(self.reader.endsetup, [])
# Save transformation from original to current matrix
if not self.reader.procset_pos and self.use_procset:
self.write(
"""userdict/PStoPSxform PStoPSmatrix matrix currentmatrix
matrix invertmatrix matrix concatmatrix
matrix invertmatrix put"""
)
# Write from end of setup to start of pages
self.fcopy(self.reader.pageptr[0], [])
def write(self, text: str) -> None:
self.outfile.write((text + "\n").encode("utf-8"))
def write_page_comment(self, pagelabel: str, outputpage: int) -> None:
self.write(f"%%Page: ({pagelabel}) {outputpage}")
def write_page(
self,
page_list: PageList,
outputpage: int,
page_specs: List[PageSpec],
maxpage: int,
modulo: int,
pagebase: int,
) -> None:
spec_page_number = 0
for spec in page_specs:
page_number = page_index_to_page_number(spec, maxpage, modulo, pagebase)
real_page = page_list.real_page(page_number)
if page_number < page_list.num_pages() and 0 <= real_page < self.pages():
# Seek the page
pagenum = real_page
self.reader.infile.seek(self.reader.pageptr[pagenum])
try:
line = self.reader.infile.readline()
keyword, _ = self.reader.comment(line)
assert keyword == b"Page"
except IOError:
die(f"I/O error seeking page {pagenum}", 2)
if self.use_procset:
self.write("userdict/PStoPSsaved save put")
if spec.has_transform():
self.write("PStoPSmatrix setmatrix")
if spec.off != Offset(0.0, 0.0):
self.write(f"{spec.off.x:f} {spec.off.y:f} translate")
if spec.rotate != 0:
self.write(f"{spec.rotate % 360} rotate")
if spec.hflip == 1:
assert self.in_size is not None
self.write(
f"[ -1 0 0 1 {self.in_size.width * spec.scale:g} 0 ] concat"
)
if spec.vflip == 1:
assert self.in_size is not None
self.write(
f"[ 1 0 0 -1 0 {self.in_size.height * spec.scale:g} ] concat"
)
if spec.scale != 1.0:
self.write(f"{spec.scale:f} dup scale")
self.write("userdict/PStoPSmatrix matrix currentmatrix put")
if self.in_size is not None:
w, h = self.in_size.width, self.in_size.height
self.write(
f"""userdict/PStoPSclip{{0 0 moveto
{w:f} 0 rlineto 0 {h:f} rlineto {-w:f} 0 rlineto
closepath}}put initclip"""
)
if self.draw > 0:
self.write(
f"gsave clippath 0 setgray {self.draw} setlinewidth stroke grestore"
)
if spec_page_number < len(page_specs) - 1:
self.write("/PStoPSenablepage false def")
if (
self.reader.procset_pos
and page_number < page_list.num_pages()
and real_page < self.pages()
):
# Search for page setup
while True:
try:
line = self.reader.infile.readline()
except IOError:
die(f"I/O error reading page setup {outputpage}", 2)
if line.startswith(b"PStoPSxform"):
break
try:
self.write(line.decode())
except IOError:
die(f"I/O error writing page setup {outputpage}", 2)
if not self.reader.procset_pos and self.use_procset:
self.write("PStoPSxform concat")
if page_number < page_list.num_pages() and 0 <= real_page < self.pages():
# Write the body of a page
self.fcopy(self.reader.pageptr[real_page + 1], [])
else:
self.write("showpage")
if self.use_procset:
self.write("PStoPSsaved restore")
spec_page_number += 1
def finalize(self) -> None:
# Write trailer
self.reader.infile.seek(self.reader.pageptr[self.pages()])
shutil.copyfileobj(self.reader.infile, self.outfile) # type: ignore
self.outfile.flush()
# Copy input file from current position up to new position to output file,
# ignoring the lines starting at something ignorelist points to.
# Updates ignorelist.
def fcopy(self, upto: int, ignorelist: List[int]) -> None:
here = self.reader.infile.tell()
while len(ignorelist) > 0 and ignorelist[0] < upto:
while len(ignorelist) > 0 and ignorelist[0] < here:
ignorelist.pop(0)
if len(ignorelist) > 0:
self.fcopy(ignorelist[0], [])
try:
self.reader.infile.readline()
except IOError:
die("I/O error", 2)
ignorelist.pop(0)
here = self.reader.infile.tell()
try:
self.outfile.write(self.reader.infile.read(upto - here))
except IOError:
die("I/O error", 2)
class PdfTransform(DocumentTransform):
def __init__(
self,
reader: PdfReader,
outfile: IO[bytes],
size: Optional[Rectangle],
in_size: Optional[Rectangle],
specs: List[List[PageSpec]],
draw: float,
):
super().__init__()
self.outfile = outfile
self.reader = reader
self.writer = PdfWriter(self.outfile)
self.draw = draw
self.specs = specs
if in_size is None:
in_size = reader.size
if size is None:
size = in_size
self.size = size
self.in_size = in_size
def pages(self) -> int:
return len(self.reader.pages)
def write_header(self, maxpage: int, modulo: int) -> None:
pass
def write_page_comment(self, pagelabel: str, outputpage: int) -> None:
pass
def write_page(
self,
page_list: PageList,
outputpage: int,
page_specs: List[PageSpec],
maxpage: int,
modulo: int,
pagebase: int,
) -> None:
assert self.in_size
page_number = page_index_to_page_number(
page_specs[0], maxpage, modulo, pagebase
)
real_page = page_list.real_page(page_number)
if ( # pylint: disable=too-many-boolean-expressions
len(page_specs) == 1
and not page_specs[0].has_transform()
and page_number < page_list.num_pages()
and 0 <= real_page < len(self.reader.pages)
and self.draw == 0
and (
self.in_size.width is None
or (
self.in_size.width == self.reader.pages[real_page].mediabox.width
and self.in_size.height
== self.reader.pages[real_page].mediabox.height
)
)
):
self.writer.add_page(self.reader.pages[real_page])
else:
# Add a blank page of the correct size to the end of the document
outpdf_page = self.writer.add_blank_page(self.size.width, self.size.height)
for spec in page_specs:
page_number = page_index_to_page_number(spec, maxpage, modulo, pagebase)
real_page = page_list.real_page(page_number)
if page_number < page_list.num_pages() and 0 <= real_page < len(
self.reader.pages
):
# Calculate input page transformation
t = Transformation()
if spec.hflip:
t = t.transform(
Transformation((-1, 0, 0, 1, self.in_size.width, 0))
)
elif spec.vflip:
t = t.transform(
Transformation((1, 0, 0, -1, 0, self.in_size.height))
)
if spec.rotate != 0:
t = t.rotate(spec.rotate % 360)
if spec.scale != 1.0:
t = t.scale(spec.scale, spec.scale)
if spec.off != Offset(0.0, 0.0):
t = t.translate(spec.off.x, spec.off.y)
# Merge input page into the output document
outpdf_page.merge_transformed_page(self.reader.pages[real_page], t)
if self.draw > 0: # FIXME: draw the line at the requested width
mediabox = self.reader.pages[real_page].mediabox
line = PolyLine(
vertices=[
(
mediabox.left + spec.off.x,
mediabox.bottom + spec.off.y,
),
(mediabox.left + spec.off.x, mediabox.top + spec.off.y),
(
mediabox.right + spec.off.x,
mediabox.top + spec.off.y,
),
(
mediabox.right + spec.off.x,
mediabox.bottom + spec.off.y,
),
(
mediabox.left + spec.off.x,
mediabox.bottom + spec.off.y,
),
],
)
self.writer.add_annotation(outpdf_page, line)
def finalize(self) -> None:
# PyPDF seeks, so write to a buffer first in case outfile is stdout.
buf = io.BytesIO()
self.writer.write(buf)
buf.seek(0)
self.outfile.write(buf.read())
self.outfile.flush()
def document_transform(
indoc: Union[PdfReader, PsReader],
outfile: IO[bytes],
size: Optional[Rectangle],
in_size: Optional[Rectangle],
specs: List[List[PageSpec]],
draw: float,
in_size_guessed: bool,
) -> Union[PdfTransform, PsTransform]:
if isinstance(indoc, PsReader):
return PsTransform(indoc, outfile, size, in_size, specs, draw, in_size_guessed)
if isinstance(indoc, PdfReader):
return PdfTransform(indoc, outfile, size, in_size, specs, draw)
die("unknown document type")
@contextmanager
def file_transform(
infile_name: str,
outfile_name: str,
size: Optional[Rectangle],
in_size: Optional[Rectangle],
specs: List[List[PageSpec]],
draw: float,
in_size_guessed: bool,
) -> Iterator[Union[PdfTransform, PsTransform]]:
with setup_input_and_output(infile_name, outfile_name) as (
infile,
file_type,
outfile,
):
doc = document_reader(infile, file_type)
yield document_transform(
doc, outfile, size, in_size, specs, draw, in_size_guessed
)
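# Illustrative sketch (file names are placeholders; ``specs`` must be built by
# the caller, e.g. from the command-line option parsing, since PageSpec
# construction is outside this module):
def _example_rearrange(infile_name: str, outfile_name: str, specs: List[List[PageSpec]]) -> None:
    with file_transform(infile_name, outfile_name, None, None, specs, 0.0, False) as transform:
        transform.transform_pages(None, False, False, False, False, 1, False)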
/peakrdl-regblock-0.18.0.tar.gz/peakrdl-regblock-0.18.0/src/peakrdl_regblock/field_logic/sw_onwrite.py
from typing import TYPE_CHECKING, List
from systemrdl.rdltypes import OnWriteType
from .bases import NextStateConditional
if TYPE_CHECKING:
from systemrdl.node import FieldNode
# TODO: implement sw=w1 "write once" fields
class _OnWrite(NextStateConditional):
onwritetype = None
def is_match(self, field: 'FieldNode') -> bool:
return field.is_sw_writable and field.get_property('onwrite') == self.onwritetype
def get_predicate(self, field: 'FieldNode') -> str:
if field.parent.get_property('buffer_writes'):
# Is buffered write. Use alternate strobe
wstrb = self.exp.write_buffering.get_write_strobe(field)
if field.get_property('swwe') or field.get_property('swwel'):
# dereferencer will wrap swwel complement if necessary
qualifier = self.exp.dereferencer.get_field_propref_value(field, 'swwe')
return f"{wstrb} && {qualifier}"
return wstrb
else:
# is regular register
strb = self.exp.dereferencer.get_access_strobe(field)
if field.get_property('swwe') or field.get_property('swwel'):
# dereferencer will wrap swwel complement if necessary
qualifier = self.exp.dereferencer.get_field_propref_value(field, 'swwe')
return f"{strb} && decoded_req_is_wr && {qualifier}"
return f"{strb} && decoded_req_is_wr"
def _wbus_bitslice(self, field: 'FieldNode', subword_idx: int = 0) -> str:
# Get the source bitslice range from the internal cpuif's data bus
if field.parent.get_property('buffer_writes'):
# register is buffered.
# write buffer is the full width of the register. no need to deal with subwords
high = field.high
low = field.low
if field.msb < field.lsb:
# slice is for an msb0 field.
# mirror it
regwidth = field.parent.get_property('regwidth')
low = regwidth - 1 - low
high = regwidth - 1 - high
low, high = high, low
else:
# Regular non-buffered register
# For normal fields this ends up passing-through the field's low/high
# values unchanged.
# For fields within a wide register (accesswidth < regwidth), low/high
# may be shifted down and clamped depending on which sub-word is being accessed
accesswidth = field.parent.get_property('accesswidth')
# Shift based on subword
high = field.high - (subword_idx * accesswidth)
low = field.low - (subword_idx * accesswidth)
# clamp to accesswidth
high = max(min(high, accesswidth), 0)
low = max(min(low, accesswidth), 0)
if field.msb < field.lsb:
# slice is for an msb0 field.
# mirror it
bus_width = self.exp.cpuif.data_width
low = bus_width - 1 - low
high = bus_width - 1 - high
low, high = high, low
return f"[{high}:{low}]"
def _wr_data(self, field: 'FieldNode', subword_idx: int=0) -> str:
if field.parent.get_property('buffer_writes'):
# Is buffered. Use value from write buffer
# No need to check msb0 ordering. Bus is pre-swapped, and bitslice
# accounts for it
bslice = self._wbus_bitslice(field)
wbuf_prefix = self.exp.write_buffering.get_wbuf_prefix(field)
return wbuf_prefix + ".data" + bslice
else:
# Regular non-buffered register
bslice = self._wbus_bitslice(field, subword_idx)
if field.msb < field.lsb:
# Field gets bitswapped since it is in [low:high] orientation
value = "decoded_wr_data_bswap" + bslice
else:
value = "decoded_wr_data" + bslice
return value
def _wr_biten(self, field: 'FieldNode', subword_idx: int=0) -> str:
if field.parent.get_property('buffer_writes'):
# Is buffered. Use value from write buffer
# No need to check msb0 ordering. Bus is pre-swapped, and bitslice
# accounts for it
bslice = self._wbus_bitslice(field)
wbuf_prefix = self.exp.write_buffering.get_wbuf_prefix(field)
return wbuf_prefix + ".biten" + bslice
else:
# Regular non-buffered register
bslice = self._wbus_bitslice(field, subword_idx)
if field.msb < field.lsb:
# Field gets bitswapped since it is in [low:high] orientation
value = "decoded_wr_biten_bswap" + bslice
else:
value = "decoded_wr_biten" + bslice
return value
def get_assignments(self, field: 'FieldNode') -> List[str]:
accesswidth = field.parent.get_property("accesswidth")
# Due to 10.6.1-f, it is impossible for a field with an onwrite action to
# be split across subwords.
# Therefore it is ok to get the subword idx from only one of the bit offsets
sidx = field.low // accesswidth
# field does not get split between subwords
R = self.exp.field_logic.get_storage_identifier(field)
D = self._wr_data(field, sidx)
S = self._wr_biten(field, sidx)
lines = [
f"next_c = {self.get_onwrite_rhs(R, D, S)};",
"load_next_c = '1;",
]
return lines
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
raise NotImplementedError
#-------------------------------------------------------------------------------
class WriteOneSet(_OnWrite):
comment = "SW write 1 set"
onwritetype = OnWriteType.woset
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} | ({data} & {strb})"
class WriteOneClear(_OnWrite):
comment = "SW write 1 clear"
onwritetype = OnWriteType.woclr
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} & ~({data} & {strb})"
class WriteOneToggle(_OnWrite):
comment = "SW write 1 toggle"
onwritetype = OnWriteType.wot
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} ^ ({data} & {strb})"
class WriteZeroSet(_OnWrite):
comment = "SW write 0 set"
onwritetype = OnWriteType.wzs
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} | (~{data} & {strb})"
class WriteZeroClear(_OnWrite):
comment = "SW write 0 clear"
onwritetype = OnWriteType.wzc
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} & ({data} | ~{strb})"
class WriteZeroToggle(_OnWrite):
comment = "SW write 0 toggle"
onwritetype = OnWriteType.wzt
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"{reg} ^ (~{data} & {strb})"
class WriteClear(_OnWrite):
comment = "SW write clear"
onwritetype = OnWriteType.wclr
def get_assignments(self, field: 'FieldNode') -> List[str]:
return [
"next_c = '0;",
"load_next_c = '1;",
]
class WriteSet(_OnWrite):
comment = "SW write set"
onwritetype = OnWriteType.wset
def get_assignments(self, field: 'FieldNode') -> List[str]:
return [
"next_c = '1;",
"load_next_c = '1;",
]
class Write(_OnWrite):
comment = "SW write"
onwritetype = None
def get_onwrite_rhs(self, reg: str, data: str, strb: str) -> str:
return f"({reg} & ~{strb}) | ({data} & {strb})"
/deep_disfluency-0.0.1.tar.gz/deep_disfluency-0.0.1/deep_disfluency/utils/tools.py
import random
import numpy as np
import itertools
import re
from collections import defaultdict
import os
def get_tags(s, open_delim='<', close_delim='/>'):
"""Iterator to spit out the xml style disfluency tags in a given string.
Keyword arguments:
s -- input string
"""
while True:
# Search for the next two delimiters in the source text
start = s.find(open_delim)
end = s.find(close_delim)
# We found a non-empty match
if -1 < start < end:
# Skip the length of the open delimiter
start += len(open_delim)
# Spit out the tag
yield open_delim + s[start:end].strip() + close_delim
# Truncate string to start from last match
s = s[end+len(close_delim):]
else:
return
def remove_uttseg_tag(tag):
tags = get_tags(tag)
final_tag = ""
for t in tags:
m = re.search(r'<[ct]*/>', t)
if m:
continue
final_tag += t
return final_tag
def convert_to_simple_label(tag, rep="disf1_uttseg"):
"""Takes the complex tag set and gives back the simple,
smaller version with ten tags:
"""
disftag = "<f/>"
if "<rm-" in tag:
disftag = "<rm-0/>"
elif "<e" in tag:
disftag = "<e/>"
if "uttseg" in rep: # if combined task with TTO
m = re.search(r'<[ct]*/>', tag)
if m:
return disftag + m.group(0)
else:
print "WARNING NO TAG", tag
return ""
return disftag  # if not TTO
def convert_to_simple_idx(tag, rep='1_trp'):
tag = convert_to_simple_label(tag, rep)
simple_tags = """<e/><cc/>
<e/><ct/>
<e/><tc/>
<e/><tt/>
<f/><cc/>
<f/><ct/>
<f/><tc/>
<f/><tt/>
<rm-0/><cc/>
<rm-0/><ct/>""".split("\n")
simple_tag_dict = {}
for s in range(0, len(simple_tags)):
simple_tag_dict[simple_tags[s].strip()] = s
return simple_tag_dict[tag]
def convert_from_full_tag_set_to_idx(tag, rep, idx_to_label):
"""Maps from the full tag set of trp repairs to the new dictionary"""
if "simple" in rep:
tag = convert_to_simple_label(tag)
for k, v in idx_to_label.items():
if v in tag: # a substring relation
return k
def add_word_continuation_tags(tags):
"""In place, add a continutation tag to each word:
<cc/> -word continues current dialogue act and the next word will also
continue it
<ct/> -word continues current dialogue act and is the last word of it
<tc/> -word starts this dialogue act tag and the next word continues it
<tt/> -word starts and ends dialogue act (single word dialogue act)
"""
tags = list(tags)
for i in range(0, len(tags)):
if i == 0:
tags[i] = tags[i] + "<t"
else:
tags[i] = tags[i] + "<c"
if i == len(tags)-1:
tags[i] = tags[i] + "t/>"
else:
tags[i] = tags[i] + "c/>"
return tags
def verify_disfluency_tags(tags, normalize_ID=False):
"""Check that the repair tags sequence is valid.
Keyword arguments:
normalize_ID -- boolean, whether to convert the repair ID
numbers to be derivable from their unique RPS position in the utterance.
"""
id_map = dict() # map between old ID and new ID
# in first pass get old and new IDs
for i in range(0, len(tags)):
rps = re.findall("<rps id\=\"[0-9]+\"\/>", tags[i])
if rps:
id_map[rps[0][rps[0].find("=")+2:-3]] = str(i)
# key: old repair ID, value, list [reparandum,interregnum,repair]
# all True when repair is all there
repairs = defaultdict(list)
for r in id_map.keys():
repairs[r] = [None, None, None] # three valued None<False<True
print repairs
# second pass verify the validity of the tags
# and (optionally) modify the IDs
for i in range(0, len(tags)): # iterate over all tag strings
new_tags = []
if tags[i] == "":
assert(all([repairs[ID][2] or
repairs[ID] == [None, None, None]
for ID in repairs.keys()])),\
"Unresolved repairs at fluent tag\n\t" + str(repairs)
for tag in get_tags(tags[i]): # iterate over all tags
print i, tag
if tag == "<e/>":
new_tags.append(tag)
continue
ID = tag[tag.find("=")+2:-3]
if "<rms" in tag:
assert repairs[ID][0] == None,\
"reparandum started parsed more than once " + ID
assert repairs[ID][1] == None,\
"reparandum start again during interregnum phase " + ID
assert repairs[ID][2] == None,\
"reparandum start again during repair phase " + ID
repairs[ID][0] = False # set in progress
elif "<rm " in tag:
assert repairs[ID][0] != None,\
"mid reparandum tag before reparandum start " + ID
assert repairs[ID][2] == None,\
"mid reparandum tag in a interregnum phase or beyond " + ID
assert repairs[ID][2] == None,\
"mid reparandum tag in a repair phase or beyond " + ID
elif "<i" in tag:
assert repairs[ID][0] != None,\
"interregnum start before reparandum start " + ID
assert repairs[ID][2] == None,\
"interregnum in a repair phase " + ID
if repairs[ID][1] == None: # interregnum not reached yet
repairs[ID][0] = True # reparandum completed
repairs[ID][1] = False # interregnum in progress
elif "<rps" in tag:
assert repairs[ID][0] != None,\
"repair start before reparandum start " + ID
assert repairs[ID][1] != True,\
"interregnum over before repair start " + ID
assert repairs[ID][2] == None,\
"repair start parsed twice " + ID
repairs[ID][0] = True # reparandum complete
repairs[ID][1] = True # interregnum complete
repairs[ID][2] = False # repair in progress
elif "<rp " in tag:
assert repairs[ID][0] == True,\
"mid repair word start before reparandum end " + ID
assert repairs[ID][1] == True,\
"mid repair word start before interregnum end " + ID
assert repairs[ID][2] == False,\
"mid repair tag before repair start tag " + ID
elif "<rpn" in tag:
# make sure the rps is order in tag string is before
assert repairs[ID][0] == True,\
"repair end before reparandum end " + ID
assert repairs[ID][1] == True,\
"repair end before interregnum end " + ID
assert repairs[ID][2] == False,\
"repair end before repair start " + ID
repairs[ID][2] = True
# do the replacement of the tag's ID after checking
new_tags.append(tag.replace(ID, id_map[ID]))
if normalize_ID:
tags[i] = "".join(new_tags)
assert all([repairs[ID][2] for ID in repairs.keys()]),\
"Unresolved repairs:\n\t" + str(repairs)
def shuffle(lol, seed):
"""Shuffle inplace each list in the same order.
lol :: list of list as input
seed :: seed the shuffling
"""
for l in lol:
random.seed(seed)
random.shuffle(l)
def minibatch(l, bs):
"""Returns a list of minibatches of indexes
which size is equal to bs
border cases are treated as follow:
eg: [0,1,2,3] and bs = 3
will output:
[[0],[0,1],[0,1,2],[1,2,3]]
l :: list of word idxs
"""
out = [l[:i] for i in xrange(1, min(bs, len(l)+1))]
out += [l[i-bs:i] for i in xrange(bs, len(l)+1)]
assert len(l) == len(out)
return out
def indices_from_length(sentence_length, bs, start_index=0):
"""Return a list of indexes pairs (start/stop) for each word
max difference between start and stop equal to bs
border cases are treated as follow:
eg: sentenceLength=4 and bs = 3
will output:
[[0,0],[0,1],[0,2],[1,3]]
"""
l = map(lambda x: start_index+x, xrange(sentence_length))
out = []
for i in xrange(0, min(bs, len(l))):
out.append([l[0], l[i]])
for i in xrange(bs+1, len(l)+1):
out.append([l[i-bs], l[i-1]])
assert len(l) == sentence_length
return out
def context_win(l, win):
"""Return a list of list of indexes corresponding
to context windows surrounding each word in the sentence
given a list of indexes composing a sentence.
win :: int corresponding to the size of the window
"""
assert (win % 2) == 1
assert win >= 1
l = list(l)
lpadded = win/2 * [-1] + l + win/2 * [-1]
out = [lpadded[i:i+win] for i in range(len(l))]
assert len(out) == len(l)
return out
def context_win_backwards(l, win):
'''Same as contextwin except only backwards context
(i.e. like an n-gram model)
'''
assert win >= 1
l = list(l)
lpadded = (win-1) * [-1] + l
out = [lpadded[i: i+win] for i in range(len(l))]
assert len(out) == len(l)
return out
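# Worked example: with a window of 3, indices [4, 5, 6] are padded on the
# left with -1, so
#   context_win_backwards([4, 5, 6], 3) == [[-1, -1, 4], [-1, 4, 5], [4, 5, 6]]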
def corpus_to_indexed_matrix(my_array_list, win, bs, sentence=False):
"""Returns a matrix of contextwins for a list of utterances of
dimensions win * n_words_in_corpus
(i.e. total length of all arrays in my_array_list)
and corresponding matrix of indexes (of just start/stop for each one)
so 2 * n_words_in_corpus
of where to access these, using bs (backprop distance)
as the limiting history size
"""
sentences = [] # a list (of arrays, or lists?), returned as matrix
indices = [] # a list of index pairs (arrays?), returned as matrix
totalSize = 0
if sentence:
for sent in my_array_list:
mysent = np.asarray([-1] * (bs-1) + list(sent)) # padding with eos
# get list of context windows
mywords = context_win_backwards(mysent, win)
# just one per utterance for now..
cindices = [[totalSize, totalSize+len(mywords)-1]]
cwords = []
for i in range(bs, len(mywords)+1):
words = list(itertools.chain(*mywords[(i-bs):i]))
cwords.append(words) # always (bs * n) words long
# print cwords
sentences.extend(cwords)
indices.extend(cindices)
totalSize += len(cwords)
else:
for sentence in my_array_list:
# get list of context windows
cwords = context_win_backwards(sentence, win)
cindices = indices_from_length(len(cwords), bs, totalSize)
indices.extend(cindices)
sentences.extend(cwords)
totalSize += len(cwords)
for s in sentences:
if any([x is None for x in s]):
print s
return np.matrix(sentences, dtype='int32'), indices
def convert_from_eval_tags_to_inc_disfluency_tags(tags, words,
representation="disf1",
limit=8):
"""Conversion from disfluency tagged corpus with xml-style tags
as from STIR (https://bitbucket.org/julianhough/stir)
to the strictly left-to-right schemas as
described in the Hough and Schlangen 2015 Interspeech paper,
which are used by RNN architectures at runtime.
Keyword arguments:
tags -- the STIR eval style disfluency tags
words -- the words in the utterance
representation -- the number corresponding to the type of tagging system
1=standard, 2=rm-N values where N does not count intervening edit terms
3=same as 2 but with a 'c' tag after edit terms have ended.
limit -- the limit on the distance back from the repair start
"""
repair_dict = defaultdict(list)
new_tags = []
for t in range(0, len(tags)):
if "uttseg" in representation:
m = re.search(r'<[ct]*/>', tags[t])
if m:
TTO_tag = m.group(0)
tags[t] = tags[t].replace(TTO_tag, "")
if "dact" in representation:
m = re.search(r'<diact type="[^\s]*"/>', tags[t])
if m:
dact_tag = m.group(0)
tags[t] = tags[t].replace(dact_tag, "")
if "laugh" in representation:
m = re.search(r'<speechLaugh/>|<laughter/>', tags[t])
if m:
laughter_tag = m.group(0)
else:
laughter_tag = "<nolaughter/>"
tags[t] = tags[t].replace(laughter_tag, "")
current_tag = ""
if "<e/>" in tags[t] or "<i" in tags[t]:
current_tag = "<e/>" # TODO may make this an interregnum
if "<rms" in tags[t]:
rms = re.findall("<rms id\=\"[0-9]+\"\/>", tags[t], re.S)
for r in rms:
repairID = r[r.find("=")+2:-3]
repair_dict[repairID] = [t, 0]
if "<rps" in tags[t]:
rps = re.findall("<rps id\=\"[0-9]+\"\/>", tags[t], re.S)
for r in rps:
repairID = r[r.find("=")+2:-3]
assert repair_dict.get(repairID), str(repairID)+str(tags)+str(words)
repair_dict[repairID][1] = t
dist = min(t-repair_dict[repairID][0], limit)
# adjust in case the reparandum is shortened due to the limit
repair_dict[repairID][0] = t-dist
current_tag += "<rm-{}/>".format(dist) + "<rpMid/>"
if "<rpn" in tags[t]:
rpns = re.findall("<rpnrep id\=\"[0-9]+\"\/>", tags[t], re.S) +\
re.findall("<rpnsub id\=\"[0-9]+\"\/>", tags[t], re.S)
rpns_del = re.findall("<rpndel id\=\"[0-9]+\"\/>", tags[t], re.S)
# slight simplifying assumption is to take the repair with
# the longest reparandum as the end category
repair_type = ""
longestlength = 0
for r in rpns:
repairID = r[r.find("=")+2:-3]
l = repair_dict[repairID]
if l[1]-l[0] > longestlength:
longestlength = l[1]-l[0]
repair_type = "Sub"
for r in rpns_del:
repairID = r[r.find("=")+2:-3]
l = repair_dict[repairID]
if l[1]-l[0] > longestlength:
longestlength = l[1]-l[0]
repair_type = "Del"
if repair_type == "":
raise Exception("Repair not passed \
correctly."+str(words)+str(tags))
current_tag += "<rpEnd"+repair_type+"/>"
current_tag = current_tag.replace("<rpMid/>", "")
if current_tag == "":
current_tag = "<f/>"
if "uttseg" in representation:
current_tag += TTO_tag
if "dact" in representation:
current_tag += dact_tag
if "laugh" in representation:
current_tag += laughter_tag
new_tags.append(current_tag)
return new_tags
def convert_from_inc_disfluency_tags_to_eval_tags(
tags, words,
start=0,
representation="disf1_uttseg"):
"""Converts the incremental style output tags of the RNN to the standard
STIR eval output tags.
    The exact inverse of convert_from_eval_tags_to_inc_disfluency_tags.
Keyword arguments:
tags -- the RNN style disfluency tags
words -- the words in the utterance
start -- position from where to begin changing the tags from
representation -- the number corresponding to the type of tagging system,
1=standard, 2=rm-N values where N does not count intervening edit terms
3=same as 2 but with a 'c' tag after edit terms have ended.
"""
# maps from the repair ID to a list of
# [reparandumStart,repairStart,repairOver]
repair_dict = defaultdict(list)
new_tags = []
if start > 0:
# assuming the tags up to this point are already converted
new_tags = tags[:start]
if "mid" not in representation:
rps_s = re.findall("<rps id\=\"[0-9]+\"\/>", tags[start-1])
rpmid = re.findall("<rp id\=\"[0-9]+\"\/>", tags[start-1])
if rps_s:
for r in rps_s:
repairID = r[r.find("=")+2:-3]
resolved_repair = re.findall(
"<rpn[repsubdl]+ id\=\"{}\"\/>"
.format(repairID), tags[start-1])
if not resolved_repair:
if not rpmid:
rpmid = []
rpmid.append(r.replace("rps ", "rp "))
if rpmid:
newstart = start-1
for rp in rpmid:
rps = rp.replace("rp ", "rps ")
repairID = rp[rp.find("=")+2:-3]
# go back and find the repair
for b in range(newstart, -1, -1):
if rps in tags[b]:
repair_dict[repairID] = [b, b, False]
break
for t in range(start, len(tags)):
current_tag = ""
if "uttseg" in representation:
m = re.search(r'<[ct]*/>', tags[t])
if m:
TTO_tag = m.group(0)
if "<e/>" in tags[t] or "<i/>" in tags[t]:
current_tag = "<e/>"
if "<rm-" in tags[t]:
rps = re.findall("<rm-[0-9]+\/>", tags[t], re.S)
for r in rps: # should only be one
current_tag += '<rps id="{}"/>'.format(t)
# print t-dist
if "simple" in representation:
# simply tagging the rps
pass
else:
dist = int(r[r.find("-")+1:-2])
repair_dict[str(t)] = [max([0, t-dist]), t, False]
# backwards looking search if full set
# print new_tags, t, dist, t-dist, max([0, t-dist])
# print tags[:t+1]
rms_start_idx = max([0, t-dist])
new_tags[rms_start_idx] = '<rms id="{}"/>'\
.format(t) + new_tags[rms_start_idx]\
.replace("<f/>", "")
reparandum = False # interregnum if edit term
for b in range(t-1, max([0, t-dist]), -1):
if "<e" not in new_tags[b]:
reparandum = True
new_tags[b] = '<rm id="{}"/>'.format(t) +\
new_tags[b].replace("<f/>", "")
if reparandum is False and "<e" in new_tags[b]:
new_tags[b] = '<i id="{}"/>'.\
format(t) + new_tags[b]
# repair ends
if "<rpEnd" in tags[t]:
rpns = re.findall("<rpEndSub/>", tags[t], re.S)
rpns_del = re.findall("<rpEndDel/>", tags[t], re.S)
rpnAll = rpns + rpns_del
if rpnAll:
for k, v in repair_dict.items():
if t >= int(k) and v[2] is False:
repair_dict[k][2] = True
# classify the repair
if rpns_del: # a delete
current_tag += '<rpndel id="{}"/>'.format(k)
rpns_del.pop(0)
continue
reparandum = [words[i] for i in range(0, len(new_tags))
if '<rms id="{}"/>'.
format(k) in new_tags[i] or
'<rm id="{}"/>'.
format(k) in new_tags[i]]
repair = [words[i] for i in range(0, len(new_tags))
if '<rps id="{}"/>'.format(k)
in new_tags[i] or '<rp id="{}"/>'.format(k)
in new_tags[i]] + [words[t]]
if reparandum == repair:
current_tag += '<rpnrep id="{}"/>'.format(k)
else:
current_tag += '<rpnsub id="{}"/>'.format(k)
# mid repair phases still in progress
for k, v in repair_dict.items():
if t > int(k) and v[2] is False:
current_tag += '<rp id="{}"/>'.format(k)
if current_tag == "":
current_tag = "<f/>"
if "uttseg" in representation:
current_tag += TTO_tag
new_tags.append(current_tag)
return new_tags
def verify_dialogue_data_matrix(dialogue_data_matrix, word_dict=None,
pos_dict=None, tag_dict=None, n_lm=0,
n_acoustic=0):
"""Boolean check of whether dialogue data consistent
with args. Checks all idxs are valid and number of features is correct.
Standard form of each row of the matrix should be:
utt_index, word_idx, pos_idx, word_duration,
acoustic_feats.., lm_feats....,label
"""
l = 3 + n_acoustic + n_lm + 1 # row length
try:
for i, row in enumerate(dialogue_data_matrix):
assert len(row) == l,\
"row {} wrong length {}, should be {}".format(i, len(row), l)
assert word_dict[row[1]] is not None,\
"row[1][{}] {} not in word dict".format(i, row[1])
assert pos_dict[row[2]] is not None,\
"row[2][{}] {} not in POS dict".format(i, row[2])
assert tag_dict[row[-1]] is not None,\
"row[-1][{}] {} not in tag dict".format(i, row[-1])
except AssertionError as a:
print a
return False
return True
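# A toy self-check (hypothetical index dicts, no acoustic or LM features);
# with n_acoustic=0 and n_lm=0 the function above expects 4-column rows read
# as row[1] -> word index, row[2] -> POS index, row[-1] -> label index:
if __name__ == "__main__":
    toy_matrix = [[0, 1, 2, 0],
                  [0, 3, 2, 1]]
    toy_word_dict = {1: "john", 3: "likes"}
    toy_pos_dict = {2: "NNP"}
    toy_tag_dict = {0: "<f/>", 1: "<e/>"}
    assert verify_dialogue_data_matrix(toy_matrix,
                                       word_dict=toy_word_dict,
                                       pos_dict=toy_pos_dict,
                                       tag_dict=toy_tag_dict,
                                       n_lm=0,
                                       n_acoustic=0)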
def verify_dialogue_data_matrices_from_folder(matrices_folder_filepath,
word_dict=None,
pos_dict=None,
tag_dict=None,
n_lm=0,
n_acoustic=0):
"""A boolean check that the dialogue matrices make sense for the
particular configuration in args and tag2idx dicts.
"""
for dialogue_file in os.listdir(matrices_folder_filepath):
v = np.load(matrices_folder_filepath + "/" + dialogue_file)
if not verify_dialogue_data_matrix(v,
word_dict=word_dict,
pos_dict=pos_dict,
tag_dict=tag_dict,
n_lm=n_lm,
n_acoustic=n_acoustic):
print "{} failed test".format(dialogue_file)
return False
return True
def dialogue_data_and_indices_from_matrix(d_matrix,
n_extra,
pre_seg=False,
window_size=2,
bs=9,
tag_rep="disf1_uttseg",
tag_to_idx_map=None,
in_utterances=False):
"""Transforming from input format of row:
utt_index, word_idx, pos_idx, word_duration,
acoustic_feats.., lm_feats....,label
to 5-tuple of:
word_idx, pos_idx, extra, labels, indices
where :word_idx: and :pos_idx: have the correct window context
according to @window_size
and :indices: is the start and stop points for consumption by the
net in training for each label in :labels:. :extra: is the matrix
of extra features.
"""
utt_indices = d_matrix[:, 0]
words = d_matrix[:, 1]
pos = d_matrix[:, 2]
extra = None if n_extra == 0 else d_matrix[:, 3: -1]
labels = d_matrix[:, -1]
word_idx = []
pos_idx = []
current = []
indices = []
previous_idx = -1
for i, a_tuple in enumerate(zip(utt_indices, words, pos, labels)):
utt_idx, w, p, l = a_tuple
current.append((w, p, l))
if pre_seg:
if previous_idx != utt_idx or i == len(labels)-1:
if in_utterances:
start = 0 if indices == [] else indices[-1][1]+1
indices.append([start, start + (len(current)-1)])
else:
indices.extend(indices_from_length(len(current), bs,
start_index=len(indices)))
word_idx.extend(context_win_backwards([x[0] for x in current],
window_size))
pos_idx.extend(context_win_backwards([x[1] for x in current],
window_size))
current = []
elif i == len(labels)-1:
# indices = indices_from_length(len(current), bs)
# currently a simple window of same size
indices = [[j, j + bs] for j in range(0, len(current))]
padding = [[-1, -1]] * (bs - window_size)
word_idx = padding + context_win_backwards([x[0] for x in current],
window_size)
pos_idx = padding + context_win_backwards([x[1] for x in current],
window_size)
previous_idx = utt_idx
return np.asarray(word_idx, dtype=np.int32), np.asarray(pos_idx,
dtype=np.int32),\
extra,\
labels,\
np.asarray(indices, dtype=np.int32)
if __name__ == '__main__':
tags = '<f/>,<rms id="3"/>,<i id="3"/><e/>,<rps id="3"/>' +\
'<rpnsub id="3"/>,<f/>,<e/>,<f/>,' + \
'<f/>'
tags = tags.split(",")
words = "i,like,uh,love,to,uh,love,alot".split(",")
print tags
print len(tags), len(words)
new_tags = convert_from_eval_tags_to_inc_disfluency_tags(
tags,
words,
representation="disf1")
print new_tags
old_tags = convert_from_inc_disfluency_tags_to_eval_tags(
new_tags,
words,
representation="disf1")
assert old_tags == tags, "\n " + str(old_tags) + "\n" + str(tags)
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print context_win_backwards(x, 2)
print "indices", indices_from_length(11, 9)
|
PypiClean
|
/hrm_omero-0.4.0-py3-none-any.whl/hrm_omero/tree.py
|
from loguru import logger as log
def gen_obj_dict(obj, id_pfx=""):
"""Create a dict from an OMERO object.
Parameters
----------
obj : omero.gateway._*Wrapper
The OMERO object to process.
id_pfx : str, optional
A string prefix that will be added to the `id` value, by default ''.
Returns
-------
dict
A dictionary with the following structure:
```
{
'children': [],
'id': 'Project:1154',
'label': 'HRM_TESTDATA',
'owner': u'demo01',
'class': 'Project'
}
```
"""
obj_dict = {}
obj_dict["label"] = obj.getName()
obj_dict["class"] = obj.OMERO_CLASS
if obj.OMERO_CLASS == "Experimenter":
obj_dict["owner"] = obj.getId()
obj_dict["label"] = obj.getFullName()
elif obj.OMERO_CLASS == "ExperimenterGroup":
# for some reason getOwner() et al. return nothing on a group, so we
# simply put it to None for group objects:
obj_dict["owner"] = None
else:
obj_dict["owner"] = obj.getOwnerOmeName()
obj_dict["id"] = id_pfx + f"{obj.OMERO_CLASS}:{obj.getId()}"
obj_dict["children"] = []
return obj_dict
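# A minimal sketch of the dict produced by gen_obj_dict(), using a hypothetical
# stand-in object instead of a real omero.gateway wrapper (only the attributes
# and methods accessed above are mocked):
if __name__ == "__main__":
    class _FakeDataset:
        OMERO_CLASS = "Dataset"
        def getName(self):
            return "my-dataset"
        def getId(self):
            return 42
        def getOwnerOmeName(self):
            return "demo01"
    print(gen_obj_dict(_FakeDataset(), id_pfx="G:3:"))
    # {'label': 'my-dataset', 'class': 'Dataset', 'owner': 'demo01',
    #  'id': 'G:3:Dataset:42', 'children': []}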
def gen_children(conn, omero_id):
"""Get the children for a given node.
Parameters
----------
conn : omero.gateway.BlitzGateway
The OMERO connection object.
omero_id : hrm_omero.misc.OmeroId
An object denoting an OMERO target.
Returns
-------
list
A list with children nodes (of type `dict`), having the `load_on_demand`
property set to `True` required by the jqTree JavaScript library (except for
nodes of type `Dataset` as they are the last / lowest level).
"""
if omero_id.obj_type == "BaseTree":
return gen_base_tree(conn)
gid = omero_id.group
obj_type = omero_id.obj_type
oid = omero_id.obj_id
log.debug(f"generating children for: gid={gid} | obj_type={obj_type} | oid={oid}")
conn.SERVICE_OPTS.setOmeroGroup(gid)
obj = conn.getObject(obj_type, oid)
# we need different child-wrappers, depending on the object type:
if obj_type == "Experimenter":
children_wrapper = []
for proj in conn.listProjects(oid):
children_wrapper.append(proj)
# OMERO.web is showing "orphaned" datasets (i.e. that do NOT belong to a
# certain project) at the top level, next to the projects - so we are going to
# add them to the tree at the same hierarchy level:
for dataset in conn.listOrphans("Dataset", eid=oid):
children_wrapper.append(dataset)
elif obj_type == "ExperimenterGroup":
log.warning(
f"{__name__} has been called with omero_id='{str(omero_id)}', but "
"'ExperimenterGroup' trees should be generated via `gen_group_tree()`!",
)
return []
else:
children_wrapper = obj.listChildren()
# now process children:
children = []
for child in children_wrapper:
children.append(gen_obj_dict(child, "G:" + gid + ":"))
children = sorted(children, key=lambda d: d["label"].lower())
# set the on-demand flag unless the children are the last level:
if not obj_type == "Dataset":
for child in children:
child["load_on_demand"] = True
return children
def gen_base_tree(conn):
"""Generate all group trees with their members as the basic tree.
Parameters
----------
conn : omero.gateway.BlitzGateway
The OMERO connection object.
Returns
-------
list
A list of grouptree dicts as generated by `gen_group_tree()`.
"""
log.debug("Generating base tree...")
tree = []
for group in conn.getGroupsMemberOf():
tree.append(gen_group_tree(conn, group))
tree_sorted = sorted(tree, key=lambda d: d["label"].lower())
return tree_sorted
def gen_group_tree(conn, group=None):
"""Create the tree nodes for a group and its members.
Parameters
----------
conn : omero.gateway.BlitzGateway
The OMERO connection object.
group : int or str or omero.gateway._ExperimenterGroupWrapper, optional
The group object (or the group ID as int or str) to generate the tree for, by
default `None` which will result in the group being derived from the current
connection's context.
Returns
-------
dict
A nested dict of the given group (or the default group if not specified
explicitly) and its members as a list of dicts in the `children` item, starting
with the current user as the first entry.
"""
if group is None:
log.debug("Getting group from current context...")
group = conn.getGroupFromContext()
if isinstance(group, (int, str)):
target_gid = int(group)
group = None
for candidate in conn.getGroupsMemberOf():
if int(candidate.getId()) == target_gid:
log.debug(f"Found group object for ID {target_gid}!")
group = candidate
break
if group is None:
msg = f"Unable to identify group with ID {target_gid}!"
log.error(msg)
raise RuntimeError(msg)
gid = str(group.getId())
log.debug(f"Generating tree for group {gid}...")
conn.setGroupForSession(gid)
group_dict = gen_obj_dict(group)
# add the user's own tree first:
user = conn.getUser()
user_dict = gen_obj_dict(user, "G:" + gid + ":")
user_dict["load_on_demand"] = True
group_dict["children"].append(user_dict)
all_user_dicts = []
# then add the trees for other group members
for user in conn.listColleagues():
user_dict = gen_obj_dict(user, "G:" + gid + ":")
user_dict["load_on_demand"] = True
all_user_dicts.append(user_dict)
group_dict["children"] += sorted(all_user_dicts, key=lambda d: d["label"].lower())
return group_dict
|
PypiClean
|
/megaman-0.2.tar.gz/megaman-0.2/doc/_build/Mmani/Mmani/geometry/geometry.py
|
from __future__ import division ## removes integer division
import numpy as np
from scipy import sparse
from scipy.spatial.distance import pdist
import subprocess, os, sys, warnings
from Mmani.geometry.distance import distance_matrix
from Mmani.utils.validation import check_array
sparse_formats = ['csr', 'coo', 'lil', 'bsr', 'dok', 'dia']
distance_methods = ['auto', 'brute', 'cyflann', 'pyflann', 'cython']
laplacian_types = ['symmetricnormalized', 'geometric', 'renormalized', 'unnormalized', 'randomwalk']
def symmetrize_sparse(A):
"""
Symmetrizes a sparse matrix in place (coo and csr formats only)
NOTES:
1. if there are values of 0 or 0.0 in the sparse matrix, this operation will DELETE them.
"""
    if A.getformat() != "csr":
        A = A.tocsr()
A = (A + A.transpose(copy = True))/2
return A
def affinity_matrix(distances, neighbors_radius, symmetrize = True):
if neighbors_radius <= 0.:
raise ValueError('neighbors_radius must be >0.')
A = distances.copy()
if sparse.isspmatrix( A ):
A.data = A.data**2
A.data = A.data/(-neighbors_radius**2)
np.exp( A.data, A.data )
if symmetrize:
A = symmetrize_sparse( A ) # converts to CSR; deletes 0's
else:
pass
with warnings.catch_warnings():
warnings.simplefilter("ignore")
# sparse will complain that this is faster with lil_matrix
A.setdiag(1) # the 0 on the diagonal is a true zero
else:
A **= 2
A /= (-neighbors_radius**2)
np.exp(A, A)
if symmetrize:
A = (A+A.T)/2
A = np.asarray( A, order="C" ) # is this necessary??
else:
pass
return A
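# A minimal usage sketch with a small, made-up dense distance matrix; the
# returned affinities are exp(-d**2 / neighbors_radius**2), so the diagonal
# becomes 1:
if __name__ == "__main__":
    _demo_distances = np.array([[0., 1., 2.],
                                [1., 0., 1.],
                                [2., 1., 0.]])
    print(affinity_matrix(_demo_distances, neighbors_radius=1.0))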
###############################################################################
# Graph laplacian
# Code adapted from the Matlab function laplacian.m of Dominique Perrault-Joncas
def graph_laplacian(csgraph, normed = 'geometric', symmetrize = False,
scaling_epps = 0., renormalization_exponent = 1,
return_diag = False, return_lapsym = False):
"""
Return the Laplacian matrix of an undirected graph.
Computes a consistent estimate of the Laplace-Beltrami operator L
from the similarity matrix A . See "Diffusion Maps" (Coifman and
Lafon, 2006) and "Graph Laplacians and their Convergence on Random
Neighborhood Graphs" (Hein, Audibert, Luxburg, 2007) for more
details.
A is the similarity matrix from the sampled data on the manifold M.
Typically A is obtained from the data X by applying the heat kernel
A_ij = exp(-||X_i-X_j||^2/EPPS). The bandwidth EPPS of the kernel is
    needed to obtain the properly scaled version of L. Following the usual
convention, the laplacian (Laplace-Beltrami operator) is defined as
div(grad(f)) (that is the laplacian is taken to be negative
semi-definite).
Note that the Laplacians defined here are the negative of what is
commonly used in the machine learning literature. This convention is used
so that the Laplacians converge to the standard definition of the
differential operator.
notation: A = csgraph, D=diag(A1) the diagonal matrix of degrees
L = lap = returned object, EPPS = scaling_epps**2
Parameters
----------
csgraph : array_like or sparse matrix, 2 dimensions
compressed-sparse graph, with shape (N, N).
normed : string, optional
if 'renormalized':
compute renormalized Laplacian of Coifman & Lafon
L = D**-alpha A D**-alpha
T = diag(L1)
L = T**-1 L - eye()
if 'symmetricnormalized':
compute normalized Laplacian
L = D**-0.5 A D**-0.5 - eye()
if 'unnormalized': compute unnormalized Laplacian.
L = A-D
        if 'randomwalk': compute stochastic transition matrix
L = D**-1 A
symmetrize: bool, optional
if True symmetrize adjacency matrix (internally) before computing lap
scaling_epps: float, optional
if >0., it should be the same neighbors_radius that was used as kernel
width for computing the affinity. The Laplacian gets the scaled by
4/np.sqrt(scaling_epps) in order to ensure consistency in the limit
of large N
return_diag : bool, optional (kept for compatibility)
If True, then return diagonal as well as laplacian.
return_lapsym : bool, optional
If normed in { 'geometric', 'renormalized' } then a symmetric matrix
lapsym, and a row normalization vector w are also returned. Having
these allows us to compute the laplacian spectral decomposition
as a symmetric matrix, which has much better numerical properties.
Returns
-------
lap : ndarray
The N x N laplacian matrix of graph.
diag : ndarray (obsolete, for compatibiility)
The length-N diagonal of the laplacian matrix.
diag is returned only if return_diag is True.
Notes
-----
There are a few differences from the sklearn.spectral_embedding laplacian
function.
1. normed='unnormalized' and 'symmetricnormalized' correspond respectively
to normed=False and True in the latter. (Note also that normed was changed
    from bool to string.)
2. the signs of this laplacians are changed w.r.t the original
3. the diagonal of lap is no longer set to 0; also there is no checking if
the matrix has zeros on the diagonal. If the degree of a node is 0, this
    is handled gracefully (by not dividing by 0).
4. if csgraph is not symmetric the out-degree is used in the
computation and no warning is raised. However, it is not recommended to
use this function for directed graphs.
"""
if csgraph.ndim != 2 or csgraph.shape[0] != csgraph.shape[1]:
raise ValueError('csgraph must be a square matrix or array')
normed = normed.lower()
if normed not in ('unnormalized', 'geometric', 'randomwalk', 'symmetricnormalized','renormalized' ):
raise ValueError('normed must be one of unnormalized, geometric, randomwalk, symmetricnormalized, renormalized')
if (np.issubdtype(csgraph.dtype, np.int) or np.issubdtype(csgraph.dtype, np.uint)):
csgraph = csgraph.astype(np.float)
if sparse.isspmatrix(csgraph):
return _laplacian_sparse(csgraph, normed = normed, symmetrize = symmetrize,
scaling_epps = scaling_epps,
renormalization_exponent = renormalization_exponent,
return_diag = return_diag, return_lapsym = return_lapsym)
else:
return _laplacian_dense(csgraph, normed = normed, symmetrize = symmetrize,
scaling_epps = scaling_epps,
renormalization_exponent = renormalization_exponent,
return_diag = return_diag, return_lapsym = return_lapsym)
def _laplacian_sparse(csgraph, normed = 'geometric', symmetrize = True,
scaling_epps = 0., renormalization_exponent = 1,
return_diag = False, return_lapsym = False):
n_nodes = csgraph.shape[0]
lap = csgraph.copy()
if symmetrize:
        if lap.format != 'csr':
            lap = lap.tocsr()
        lap = (lap + lap.T)/2.
    if lap.format != 'coo':
        lap = lap.tocoo()
diag_mask = (lap.row == lap.col) # True/False
degrees = np.asarray(lap.sum(axis=1)).squeeze()
if normed == 'symmetricnormalized':
w = np.sqrt(degrees)
w_zeros = (w == 0)
w[w_zeros] = 1
lap.data /= w[lap.row]
lap.data /= w[lap.col]
lap.data[diag_mask] -= 1.
if return_lapsym:
lapsym = lap.copy()
if normed == 'geometric':
        w = degrees.copy() # normalize once symmetrically by d
w_zeros = (w == 0)
w[w_zeros] = 1
lap.data /= w[lap.row]
lap.data /= w[lap.col]
        w = np.asarray(lap.sum(axis=1)).squeeze() # normalize again asymmetrically
if return_lapsym:
lapsym = lap.copy()
lap.data /= w[lap.row]
lap.data[diag_mask] -= 1.
if normed == 'renormalized':
        w = degrees**renormalization_exponent
# same as 'geometric' from here on
w_zeros = (w == 0)
w[w_zeros] = 1
lap.data /= w[lap.row]
lap.data /= w[lap.col]
        w = np.asarray(lap.sum(axis=1)).squeeze() # normalize again asymmetrically
if return_lapsym:
lapsym = lap.copy()
lap.data /= w[lap.row]
lap.data[diag_mask] -= 1.
if normed == 'unnormalized':
lap.data[diag_mask] -= degrees
if return_lapsym:
lapsym = lap.copy()
if normed == 'randomwalk':
w = degrees.copy()
if return_lapsym:
lapsym = lap.copy()
lap.data /= w[lap.row]
lap.data[diag_mask] -= 1.
if scaling_epps > 0.:
lap.data *= 4/(scaling_epps**2)
if return_diag:
if return_lapsym:
return lap, lap.data[diag_mask], lapsym, w
else:
return lap, lap.data[diag_mask]
elif return_lapsym:
return lap, lapsym, w
else:
return lap
def _laplacian_dense(csgraph, normed = 'geometric', symmetrize = True,
scaling_epps = 0., renormalization_exponent = 1,
return_diag = False, return_lapsym = False):
n_nodes = csgraph.shape[0]
if symmetrize:
lap = (csgraph + csgraph.T)/2.
else:
lap = csgraph.copy()
degrees = np.asarray(lap.sum(axis=1)).squeeze()
di = np.diag_indices( lap.shape[0] ) # diagonal indices
if normed == 'symmetricnormalized':
w = np.sqrt(degrees)
w_zeros = (w == 0)
w[w_zeros] = 1
lap /= w
lap /= w[:, np.newaxis]
di = np.diag_indices( lap.shape[0] )
lap[di] -= (1 - w_zeros).astype(lap.dtype)
if return_lapsym:
lapsym = lap.copy()
if normed == 'geometric':
w = degrees.copy() # normalize once symmetrically by d
w_zeros = (w == 0)
w[w_zeros] = 1
lap /= w
lap /= w[:, np.newaxis]
        w = np.asarray(lap.sum(axis=1)).squeeze() # normalize again asymmetrically
if return_lapsym:
lapsym = lap.copy()
lap /= w[:, np.newaxis]
lap[di] -= (1 - w_zeros).astype(lap.dtype)
if normed == 'renormalized':
        w = degrees**renormalization_exponent
# same as 'geometric' from here on
w_zeros = (w == 0)
w[w_zeros] = 1
lap /= w
lap /= w[:, np.newaxis]
        w = np.asarray(lap.sum(axis=1)).squeeze() # normalize again asymmetrically
if return_lapsym:
lapsym = lap.copy()
lap /= w[:, np.newaxis]
lap[di] -= (1 - w_zeros).astype(lap.dtype)
if normed == 'unnormalized':
dum = lap[di]-degrees[np.newaxis,:]
lap[di] = dum[0,:]
if return_lapsym:
lapsym = lap.copy()
if normed == 'randomwalk':
w = degrees.copy()
if return_lapsym:
lapsym = lap.copy()
lap /= w[:,np.newaxis]
lap -= np.eye(lap.shape[0])
if scaling_epps > 0.:
lap *= 4/(scaling_epps**2)
if return_diag:
diag = np.array( lap[di] )
if return_lapsym:
return lap, diag, lapsym, w
else:
return lap, diag
elif return_lapsym:
return lap, lapsym, w
else:
return lap
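# An illustrative call on a toy 3-node path graph given as a dense adjacency
# matrix; with normed='geometric' every row of the returned laplacian sums to
# (approximately) zero:
if __name__ == "__main__":
    _demo_adjacency = np.array([[0., 1., 0.],
                                [1., 0., 1.],
                                [0., 1., 0.]])
    print(graph_laplacian(_demo_adjacency, normed='geometric'))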
class Geometry:
"""
The Geometry class stores the data, distance, affinity and laplacian
matrices used by the various embedding methods and is the primary
object passed to embedding functions.
The Geometry class contains functions to build the aforementioned
matrices and allows for re-computation whenever necessary.
Parameters
----------
X : array_like or sparse array. 2 dimensional. Value depends on input_type.
size: (N_obs, N_dim) if 'data', (N_obs, N_obs) otherwise.
input_type : string, one of: 'data', 'distance', 'affinity'. The values of X.
neighborhood_radius : scalar, passed to distance_matrix. Value such that all
distances beyond neighborhood_radius are considered infinite.
affinity_radius : scalar, passed to affinity_matrix. 'bandwidth' parameter
        used in Gaussian kernel for affinity matrix
distance_method : string, one of 'auto', 'brute', 'cython', 'pyflann', 'cyflann'.
method for computing pairwise radius neighbors graph.
laplacian_type : string, one of: 'symmetricnormalized', 'geometric', 'renormalized',
'unnormalized', 'randomwalk'
type of laplacian to be computed. See graph_laplacian for more information.
path_to_flann : string. full file path location of FLANN if not installed to root or
FLANN_ROOT set to path location. Used for importing pyflann from a different location.
"""
def __init__(self, X, neighborhood_radius = None, affinity_radius = None,
distance_method = 'auto', input_type = 'data',
laplacian_type = None, path_to_flann = None):
self.distance_method = distance_method
self.input_type = input_type
self.path_to_flann = path_to_flann
self.laplacian_type = laplacian_type
if self.distance_method not in distance_methods:
raise ValueError("invalid distance method.")
if neighborhood_radius is None:
self.neighborhood_radius = 1/X.shape[1]
else:
try:
neighborhood_radius = np.float(neighborhood_radius)
self.neighborhood_radius = neighborhood_radius
except ValueError:
raise ValueError("neighborhood_radius must be convertable to float")
if affinity_radius is None:
self.affinity_radius = self.neighborhood_radius
self.default_affinity = True
else:
try:
affinity_radius = np.float(affinity_radius)
self.affinity_radius = affinity_radius
self.default_affinity = False
except ValueError:
raise ValueError("affinity_radius must be convertable to float")
if self.input_type == 'distance':
X = check_array(X, accept_sparse = sparse_formats)
a, b = X.shape
if a != b:
raise ValueError("input_type is distance but input matrix is not square")
self.X = None
self.distance_matrix = X
self.affinity_matrix = None
self.laplacian_matrix = None
elif self.input_type == 'affinity':
X = check_array(X, accept_sparse = sparse_formats)
a, b = X.shape
if a != b:
raise ValueError("input_type is affinity but input matrix is not square")
self.X = None
self.distance_matrix = None
self.affinity_matrix = X
self.laplacian_matrix = None
elif self.input_type == 'data':
X = check_array(X, accept_sparse = sparse_formats)
self.X = X
self.distance_matrix = None
self.affinity_matrix = None
self.laplacian_matrix = None
else:
raise ValueError('input_type must be one of: data, distance, affinity.')
if distance_method == 'cython':
if input_type == 'data':
try:
from Mmani.geometry.cyflann.index import Index
self.cyindex = Index(X)
except ImportError:
raise ValueError("distance_method set to cython but cyflann_index cannot be imported.")
else:
self.cyindex = None
if distance_method == 'pyflann':
if self.path_to_flann is not None:
# FLANN is installed in specific location
sys.path.insert(0, self.path_to_flann)
try:
import pyflann as pyf
self.flindex = pyf.FLANN()
self.flparams = self.flindex.build_index(X, algorithm = 'kmeans',
target_precision = 0.9)
except ImportError:
raise ValueError("distance_method is set to pyflann but pyflann is "
"not available.")
else:
self.flindex = None
self.flparams = None
def get_distance_matrix(self, neighborhood_radius = None, copy = True):
"""
Parameters
----------
neighborhood_radius : scalar, passed to distance_matrix. Value such that all
distances beyond neighborhood_radius are considered infinite.
if this value is not passed the value of self.neighborhood_radius is used
copy : boolean, whether to return a copied version of the distance matrix
Returns
-------
self.distance_matrix : sparse Ndarray (N_obs, N_obs). Non explicit 0.0 values
(e.g. diagonal) should be considered Infinite.
"""
if self.input_type == 'affinity':
raise ValueError("input_method was passed as affinity. "
"Distance matrix cannot be computed.")
if self.distance_matrix is None:
# if there's no existing distance matrix we make one
if ((neighborhood_radius is not None) and (neighborhood_radius != self.neighborhood_radius)):
# a different radius was passed than self.neighborhood_radius
self.neighborhood_radius = neighborhood_radius
self.distance_matrix = distance_matrix(self.X, method = self.distance_method,
flindex = self.flindex,
radius = self.neighborhood_radius,
cyindex = self.cyindex)
else:
# if there is an existing matrix we have to see if we need to overwrite
if ((neighborhood_radius is not None) and (neighborhood_radius != self.neighborhood_radius)):
# if there's a new radius we need to re-calculate
if self.input_type == 'distance':
# but if we were passed distance this is impossible
raise ValueError("input_method was passed as distance."
"requested radius not equal to self.neighborhood_radius."
"distance matrix cannot be re-calculated.")
else:
# if we were passed data then we need to re-calculate
self.neighborhood_radius = neighborhood_radius
self.distance_matrix = distance_matrix(self.X, method = self.distance_method,
flindex = self.flindex,
radius = self.neighborhood_radius,
cyindex = self.cyindex)
if copy:
return self.distance_matrix.copy()
else:
return self.distance_matrix
def get_affinity_matrix(self, affinity_radius = None, copy = True,
symmetrize = True):
"""
Parameters
----------
affinity_radius : scalar, passed to affinity_matrix. 'bandwidth' parameter
            used in Gaussian kernel for affinity matrix
If this value is not passed then the self.affinity_radius value is used.
copy : boolean, whether to return a copied version of the affinity matrix
symmetrize : boolean, whether to explicitly symmetrize the affinity matrix.
if distance_method = 'cython', 'cyflann', or 'pyflann' it is recommended
to set this to True.
Returns
-------
self.affinity_matrix : sparse Ndarray (N_obs, N_obs) contains the pairwise
            affinity values using the Gaussian kernel and bandwidth equal to the
affinity_radius
"""
if self.affinity_matrix is None:
# if there's no existing affinity matrix we make one
if self.distance_matrix is None:
# first check to see if we have the distance matrix
self.distance_matrix = self.get_distance_matrix(copy = False)
if affinity_radius is not None and affinity_radius != self.affinity_radius:
self.affinity_radius = affinity_radius
self.default_affinity = False
self.affinity_matrix = affinity_matrix(self.distance_matrix,
self.affinity_radius, symmetrize)
else:
            # if there is an existing matrix we have to see if we need to overwrite
if (affinity_radius is not None and affinity_radius != self.affinity_radius) or (
affinity_radius is not None and self.default_affinity):
# if there's a new radius we need to re-calculate
# or there's a passed radius and the current radius was set to default
if self.input_type == 'affinity':
# but if we were passed affinity this is impossible
raise ValueError("Input_method was passed as affinity."
"Requested radius was not equal to self.affinity_radius."
"Affinity Matrix cannot be recalculated.")
else:
# if we were passed distance or data we can recalculate:
if self.distance_matrix is None:
# first check to see if we have the distance matrix
self.distance_matrix = self.get_distance_matrix(copy = False)
self.affinity_radius = affinity_radius
self.default_affinity = False
self.affinity_matrix = affinity_matrix(self.distance_matrix,
self.affinity_radius, symmetrize)
if copy:
return self.affinity_matrix.copy()
else:
return self.affinity_matrix
def get_laplacian_matrix(self, laplacian_type=None, symmetrize=False,
scaling_epps=None, renormalization_exponent=1,
copy=True, return_lapsym=False):
"""
Parameters
----------
laplacian_type : string, the type of graph laplacian to compute.
see 'normed' in graph_laplacian for more information
symmetrize : boolean, whether to pre-symmetrize the affinity matrix before
computing the laplacian_matrix
scaling_epps : scalar, the bandwidth/radius parameter used in the affinity matrix
see graph_laplacian for more information
renormalization_exponent : scalar, renormalization exponent for computing Laplacian
see graph_laplacian for more information
copy : boolean, whether to return copied version of the self.laplacian_matrix
return_lapsym : boolean, if True returns additionally the symmetrized version of
the requested laplacian and the re-normalization weights.
Returns
-------
self.laplacian_matrix : sparse Ndarray (N_obs, N_obs). The requested laplacian.
self.laplacian_symmetric : sparse Ndarray (N_obs, N_obs). The symmetric laplacian.
self.w : Ndarray (N_obs). The renormalization weights used to make
laplacian_matrix from laplacian_symmetric
"""
# if scaling_epps is None:
# scaling_epps = self.affinity_radius
# First check if there's an existing Laplacian:
if self.laplacian_matrix is not None:
if (laplacian_type == self.laplacian_type) or (laplacian_type is None):
if copy:
return self.laplacian_matrix.copy()
else:
return self.laplacian_matrix
else:
warnings.warn("current Laplacian matrix is of type " + str(self.laplacian_type) +
" but type " + str(laplacian_type) + " was requested. "
"Existing Laplacian matrix will be overwritten.")
# Next, either there is no Laplacian or we're replacing. Check type:
if self.laplacian_type is None:
if laplacian_type is None:
laplacian_type = 'geometric' # default value
self.laplacian_type = laplacian_type
elif laplacian_type is not None and self.laplacian_type != laplacian_type:
self.laplacian_type = laplacian_type
# next check if we have an affinity matrix:
if self.affinity_matrix is None:
self.affinity_matrix = self.get_affinity_matrix(copy=False)
# results depend on symmetric or not:
if return_lapsym:
(self.laplacian_matrix,
self.laplacian_symmetric,
self.w) = graph_laplacian(self.affinity_matrix, self.laplacian_type,
symmetrize, scaling_epps, renormalization_exponent,
return_diag=False, return_lapsym=True)
else:
self.laplacian_matrix = graph_laplacian(self.affinity_matrix,
self.laplacian_type,
symmetrize, scaling_epps,
renormalization_exponent)
if copy:
return self.laplacian_matrix.copy()
else:
return self.laplacian_matrix
def assign_data_matrix(self, X):
X = check_array(X, accept_sparse = sparse_formats)
self.X = X
def assign_distance_matrix(self, distance_mat, neighborhood_radius = None):
distance_mat = check_array(distance_mat, accept_sparse = sparse_formats)
(a, b) = distance_mat.shape
if a != b:
raise ValueError("distance matrix is not square")
else:
self.distance_matrix = distance_mat
if neighborhood_radius is not None:
self.neighborhood_radius = neighborhood_radius
def assign_affinity_matrix(self, affinity_matrix, affinity_radius = None):
affinity_matrix = check_array(affinity_matrix, accept_sparse = sparse_formats)
(a, b) = affinity_matrix.shape
if a != b:
raise ValueError("affinity matrix is not square")
else:
self.affinity_matrix = affinity_matrix
if affinity_radius is not None:
self.affinity_radius = affinity_radius
self.default_affinity = False
def assign_laplacian_matrix(self, laplacian_matrix, laplacian_type = "unknown"):
laplacian_matrix = check_array(laplacian_matrix, accept_sparse = sparse_formats)
(a, b) = laplacian_matrix.shape
if a != b:
raise ValueError("Laplacian matrix is not square")
else:
self.laplacian_matrix = laplacian_matrix
            self.laplacian_type = laplacian_type
def assign_parameters(self, neighborhood_radius=None, affinity_radius=None,
distance_method=None, laplacian_type=None,
path_to_flann=None):
"""
Note: self.neighborhood_radius, self.affinity_radius,
and self.laplacian_type refer to the CURRENT
version of these matrices.
If you want to re-calculate with a new parameter DO NOT
update these with assign_parameters, instead use
get_distance_matrix(), get_affinity_matrix(), or get_laplacian_matrix()
and pass the desired new parameter. This will automatically update
the self.parameter version.
If you change these values with assign_parameters Geometry will assume
that the existing matrix follows that parameter and so, for example,
calling get_distance_matrix() with a passed radius will *not*
recalculate if the passed radius is equal to self.neighborhood_radius
and there already exists a distance matrix.
"""
if neighborhood_radius is not None:
try:
np.float(neighborhood_radius)
self.neighborhood_radius = neighborhood_radius
except ValueError:
raise ValueError("neighborhood_radius must convertable to float")
if affinity_radius is not None:
try:
np.float(affinity_radius)
self.affinity_radius = affinity_radius
except ValueError:
raise ValueError("neighborhood_radius must convertable to float")
if distance_method is not None:
if distance_method in distance_methods:
self.distance_method = distance_method
else:
raise ValueError("distance_method must be one of: ")
if laplacian_type is not None:
if laplacian_type in laplacian_types:
self.laplacian_type = laplacian_type
else:
raise ValueError("laplacian_type method must be one of: ")
if path_to_flann is not None:
self.path_to_flann = path_to_flann
sys.path.insert(0, self.path_to_flann)
try:
import pyflann as pyf
self.flindex = pyf.FLANN()
                self.flparams = self.flindex.build_index(self.X, algorithm = 'kmeans',
                                                         target_precision = 0.9)
except ImportError:
raise ValueError("distance_method is set to pyflann but pyflann is "
"not available.")
|
PypiClean
|
/jupyter_jsmol-2022.1.0.tar.gz/jupyter_jsmol-2022.1.0/jsmol/j2s/JU/InfBlocks.js
|
Clazz.declarePackage ("JU");
Clazz.load (["JU.InfTree"], "JU.InfBlocks", ["JU.InfCodes"], function () {
c$ = Clazz.decorateAsClass (function () {
this.mode = 0;
this.left = 0;
this.table = 0;
this.index = 0;
this.blens = null;
this.bb = null;
this.tb = null;
this.bl = null;
this.bd = null;
this.tl = null;
this.td = null;
this.tli = null;
this.tdi = null;
this.codes = null;
this.last = 0;
this.bitk = 0;
this.bitb = 0;
this.hufts = null;
this.window = null;
this.end = 0;
this.read = 0;
this.write = 0;
this.check = false;
this.inftree = null;
this.z = null;
Clazz.instantialize (this, arguments);
}, JU, "InfBlocks");
Clazz.prepareFields (c$, function () {
this.bb = Clazz.newIntArray (1, 0);
this.tb = Clazz.newIntArray (1, 0);
this.bl = Clazz.newIntArray (1, 0);
this.bd = Clazz.newIntArray (1, 0);
this.tli = Clazz.newIntArray (1, 0);
this.tdi = Clazz.newIntArray (1, 0);
this.inftree = new JU.InfTree ();
});
Clazz.makeConstructor (c$,
function (z, w) {
this.z = z;
this.codes = new JU.InfCodes (this.z, this);
this.hufts = Clazz.newIntArray (4320, 0);
this.window = Clazz.newByteArray (w, 0);
this.end = w;
this.check = (z.istate.wrap == 0) ? false : true;
this.mode = 0;
{
this.tl = Clazz.newArray(1, null);
this.td = Clazz.newArray(1, null);
}this.reset ();
}, "JU.ZStream,~N");
Clazz.defineMethod (c$, "reset",
function () {
if (this.mode == 6) {
this.codes.free (this.z);
}this.mode = 0;
this.bitk = 0;
this.bitb = 0;
this.read = this.write = 0;
if (this.check) {
this.z.checksum.reset ();
}});
Clazz.defineMethod (c$, "proc",
function (r) {
var t;
var b;
var k;
var p;
var n;
var q;
var m;
{
p = this.z.next_in_index;
n = this.z.avail_in;
b = this.bitb;
k = this.bitk;
}{
q = this.write;
m = (q < this.read ? this.read - q - 1 : this.end - q);
}while (true) {
switch (this.mode) {
case 0:
while (k < (3)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
t = (b & 7);
this.last = t & 1;
switch (t >>> 1) {
case 0:
{
b >>>= (3);
k -= (3);
}t = k & 7;
{
b >>>= (t);
k -= (t);
}this.mode = 1;
break;
case 1:
JU.InfTree.inflate_trees_fixed (this.bl, this.bd, this.tl, this.td, this.z);
this.codes.init (this.bl[0], this.bd[0], this.tl[0], 0, this.td[0], 0);
{
b >>>= (3);
k -= (3);
}this.mode = 6;
break;
case 2:
{
b >>>= (3);
k -= (3);
}this.mode = 3;
break;
case 3:
{
b >>>= (3);
k -= (3);
}this.mode = 9;
this.z.msg = "invalid block type";
r = -3;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}
break;
case 1:
while (k < (32)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
if ((((~b) >>> 16) & 0xffff) != (b & 0xffff)) {
this.mode = 9;
this.z.msg = "invalid stored block lengths";
r = -3;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}this.left = (b & 0xffff);
b = k = 0;
this.mode = this.left != 0 ? 2 : (this.last != 0 ? 7 : 0);
break;
case 2:
if (n == 0) {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}if (m == 0) {
if (q == this.end && this.read != 0) {
q = 0;
m = (q < this.read ? this.read - q - 1 : this.end - q);
}if (m == 0) {
this.write = q;
r = this.inflate_flush (r);
q = this.write;
m = (q < this.read ? this.read - q - 1 : this.end - q);
if (q == this.end && this.read != 0) {
q = 0;
m = (q < this.read ? this.read - q - 1 : this.end - q);
}if (m == 0) {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}}}r = 0;
t = this.left;
if (t > n) t = n;
if (t > m) t = m;
Zystem.arraycopy (this.z.next_in, p, this.window, q, t);
p += t;
n -= t;
q += t;
m -= t;
if ((this.left -= t) != 0) break;
this.mode = this.last != 0 ? 7 : 0;
break;
case 3:
while (k < (14)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
this.table = t = (b & 0x3fff);
if ((t & 0x1f) > 29 || ((t >> 5) & 0x1f) > 29) {
this.mode = 9;
this.z.msg = "too many length or distance symbols";
r = -3;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}t = 258 + (t & 0x1f) + ((t >> 5) & 0x1f);
if (this.blens == null || this.blens.length < t) {
this.blens = Clazz.newIntArray (t, 0);
} else {
for (var i = 0; i < t; i++) {
this.blens[i] = 0;
}
}{
b >>>= (14);
k -= (14);
}this.index = 0;
this.mode = 4;
case 4:
while (this.index < 4 + (this.table >>> 10)) {
while (k < (3)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
this.blens[JU.InfBlocks.border[this.index++]] = b & 7;
{
b >>>= (3);
k -= (3);
}}
while (this.index < 19) {
this.blens[JU.InfBlocks.border[this.index++]] = 0;
}
this.bb[0] = 7;
t = this.inftree.inflate_trees_bits (this.blens, this.bb, this.tb, this.hufts, this.z);
if (t != 0) {
r = t;
if (r == -3) {
this.blens = null;
this.mode = 9;
}this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}this.index = 0;
this.mode = 5;
case 5:
while (true) {
t = this.table;
if (!(this.index < 258 + (t & 0x1f) + ((t >> 5) & 0x1f))) {
break;
}var i;
var j;
var c;
t = this.bb[0];
while (k < (t)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
t = this.hufts[(this.tb[0] + (b & JU.InfBlocks.inflate_mask[t])) * 3 + 1];
c = this.hufts[(this.tb[0] + (b & JU.InfBlocks.inflate_mask[t])) * 3 + 2];
if (c < 16) {
b >>>= (t);
k -= (t);
this.blens[this.index++] = c;
} else {
i = c == 18 ? 7 : c - 14;
j = c == 18 ? 11 : 3;
while (k < (t + i)) {
if (n != 0) {
r = 0;
} else {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}n--;
b |= (this.z.next_in[p++] & 0xff) << k;
k += 8;
}
b >>>= (t);
k -= (t);
j += (b & JU.InfBlocks.inflate_mask[i]);
b >>>= (i);
k -= (i);
i = this.index;
t = this.table;
if (i + j > 258 + (t & 0x1f) + ((t >> 5) & 0x1f) || (c == 16 && i < 1)) {
this.blens = null;
this.mode = 9;
this.z.msg = "invalid bit length repeat";
r = -3;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}c = c == 16 ? this.blens[i - 1] : 0;
do {
this.blens[i++] = c;
} while (--j != 0);
this.index = i;
}}
this.tb[0] = -1;
{
this.bl[0] = 9;
this.bd[0] = 6;
t = this.table;
t = this.inftree.inflate_trees_dynamic (257 + (t & 0x1f), 1 + ((t >> 5) & 0x1f), this.blens, this.bl, this.bd, this.tli, this.tdi, this.hufts, this.z);
if (t != 0) {
if (t == -3) {
this.blens = null;
this.mode = 9;
}r = t;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}this.codes.init (this.bl[0], this.bd[0], this.hufts, this.tli[0], this.hufts, this.tdi[0]);
}this.mode = 6;
case 6:
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
if ((r = this.codes.proc (r)) != 1) {
return this.inflate_flush (r);
}r = 0;
this.codes.free (this.z);
p = this.z.next_in_index;
n = this.z.avail_in;
b = this.bitb;
k = this.bitk;
q = this.write;
m = (q < this.read ? this.read - q - 1 : this.end - q);
if (this.last == 0) {
this.mode = 0;
break;
}this.mode = 7;
case 7:
this.write = q;
r = this.inflate_flush (r);
q = this.write;
m = (q < this.read ? this.read - q - 1 : this.end - q);
if (this.read != this.write) {
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}this.mode = 8;
case 8:
r = 1;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
case 9:
r = -3;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
default:
r = -2;
this.bitb = b;
this.bitk = k;
this.z.avail_in = n;
this.z.total_in += p - this.z.next_in_index;
this.z.next_in_index = p;
this.write = q;
return this.inflate_flush (r);
}
}
}, "~N");
Clazz.defineMethod (c$, "free",
function () {
this.reset ();
this.window = null;
this.hufts = null;
});
Clazz.defineMethod (c$, "set_dictionary",
function (d, start, n) {
Zystem.arraycopy (d, start, this.window, 0, n);
this.read = this.write = n;
}, "~A,~N,~N");
Clazz.defineMethod (c$, "sync_point",
function () {
return this.mode == 1 ? 1 : 0;
});
Clazz.defineMethod (c$, "inflate_flush",
function (r) {
var n;
var p;
var q;
p = this.z.next_out_index;
q = this.read;
n = ((q <= this.write ? this.write : this.end) - q);
if (n > this.z.avail_out) n = this.z.avail_out;
if (n != 0 && r == -5) r = 0;
this.z.avail_out -= n;
this.z.total_out += n;
if (this.check && n > 0) {
this.z.checksum.update (this.window, q, n);
}Zystem.arraycopy (this.window, q, this.z.next_out, p, n);
p += n;
q += n;
if (q == this.end) {
q = 0;
if (this.write == this.end) this.write = 0;
n = this.write - q;
if (n > this.z.avail_out) n = this.z.avail_out;
if (n != 0 && r == -5) r = 0;
this.z.avail_out -= n;
this.z.total_out += n;
if (this.check && n > 0) {
this.z.checksum.update (this.window, q, n);
}Zystem.arraycopy (this.window, q, this.z.next_out, p, n);
p += n;
q += n;
}this.z.next_out_index = p;
this.read = q;
return r;
}, "~N");
Clazz.defineStatics (c$,
"MANY", 1440,
"inflate_mask", Clazz.newIntArray (-1, [0x00000000, 0x00000001, 0x00000003, 0x00000007, 0x0000000f, 0x0000001f, 0x0000003f, 0x0000007f, 0x000000ff, 0x000001ff, 0x000003ff, 0x000007ff, 0x00000fff, 0x00001fff, 0x00003fff, 0x00007fff, 0x0000ffff]),
"border", Clazz.newIntArray (-1, [16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15]),
"Z_OK", 0,
"Z_STREAM_END", 1,
"Z_STREAM_ERROR", -2,
"Z_DATA_ERROR", -3,
"Z_BUF_ERROR", -5,
"TYPE", 0,
"LENS", 1,
"STORED", 2,
"TABLE", 3,
"BTREE", 4,
"DTREE", 5,
"CODES", 6,
"DRY", 7,
"DONE", 8,
"BAD", 9);
});
|
PypiClean
|
/mindspore_gpu-1.10.0-cp39-cp39-manylinux1_x86_64.whl/mindspore/_akg/akg/tvm/relay/frontend/nnvm_common.py
|
"""Utility functions common to NNVM and MxNet conversion."""
from __future__ import absolute_import as _abs
from .. import expr as _expr
from .. import op as _op
from .common import get_relay_op
from .common import infer_type as _infer_type
def _warn_not_used(attr, op='nnvm'):
import warnings
err = "{} is ignored in {}.".format(attr, op)
warnings.warn(err)
def _rename(new_op):
if isinstance(new_op, str):
new_op = get_relay_op(new_op)
# attrs are ignored.
def impl(inputs, _, _dtype='float32'):
return new_op(*inputs)
return impl
def _reshape(inputs, attrs):
shape = attrs.get_int_tuple("shape")
reverse = attrs.get_bool("reverse", False)
if reverse:
return _op.reverse_reshape(inputs[0], newshape=shape)
return _op.reshape(inputs[0], newshape=shape)
def _init_op(new_op):
"""Init ops like zeros/ones"""
def _impl(inputs, attrs):
assert len(inputs) == 0
shape = attrs.get_int_tuple("shape")
dtype = attrs.get_str("dtype", "float32")
return new_op(shape=shape, dtype=dtype)
return _impl
def _softmax_op(new_op):
"""softmax/log_softmax"""
def _impl(inputs, attrs, _dtype='float32'):
assert len(inputs) == 1
axis = attrs.get_int("axis", -1)
return new_op(inputs[0], axis=axis)
return _impl
def _reduce(new_op):
"""Reduction ops like sum/min/max"""
def _impl(inputs, attrs, _dtype='float32'):
assert len(inputs) == 1
axis = attrs.get_int_tuple("axis", [])
keepdims = attrs.get_bool("keepdims", False)
exclude = attrs.get_bool("exclude", False)
# use None for reduce over all axis.
axis = None if len(axis) == 0 else axis
return new_op(inputs[0], axis=axis, keepdims=keepdims, exclude=exclude)
return _impl
def _arg_reduce(new_op):
"""Arg Reduction ops like argmin/argmax"""
def _impl(inputs, attrs):
assert len(inputs) == 1
axis = attrs.get_int("axis", None)
keepdims = attrs.get_bool("keepdims", False)
res = new_op(inputs[0], axis=[axis], keepdims=keepdims)
# cast to dtype.
res = res.astype("float32")
return res
return _impl
def _cast(inputs, attrs):
"""Type cast"""
dtype = attrs.get_str("dtype")
return inputs[0].astype(dtype=dtype)
def _clip(inputs, attrs):
a_min = attrs.get_float("a_min")
a_max = attrs.get_float("a_max")
return _op.clip(inputs[0], a_min=a_min, a_max=a_max)
def _transpose(inputs, attrs):
axes = attrs.get_int_tuple("axes", None)
# translate default case
axes = None if len(axes) == 0 else axes
return _op.transpose(inputs[0], axes=axes)
def _upsampling(inputs, attrs):
scale = attrs.get_int("scale")
return _op.nn.upsampling(inputs[0], scale_h=scale, scale_w=scale)
def _elemwise_sum(inputs, _, _dtype='float32'):
assert len(inputs) > 0
res = inputs[0]
for x in inputs[1:]:
res = _op.add(res, x)
return res
def _binop_scalar(new_op):
def _impl(inputs, attrs, odtype=None):
assert len(inputs) == 1
scalar = attrs.get_float("scalar")
if odtype is None:
odtype = _infer_type(inputs[0]).checked_type.dtype
scalar = _expr.const(scalar, dtype=odtype)
return new_op(inputs[0], scalar)
return _impl
def _rbinop_scalar(new_op):
def _impl(inputs, attrs, odtype=None):
assert len(inputs) == 1
scalar = attrs.get_float("scalar")
if odtype is None:
odtype = _infer_type(inputs[0]).checked_type.dtype
scalar = _expr.const(scalar, dtype=odtype)
return new_op(scalar, inputs[0])
return _impl
def _compare(new_op):
"""Compare ops like greater/less"""
def _impl(inputs, _, odtype='float32'):
assert len(inputs) == 2
return new_op(inputs[0], inputs[1]).astype(odtype)
return _impl
|
PypiClean
|
/iotoutlier1-0.0.1-py3-none-any.whl/iot_outlier_src/legacy/utils/dataloader.py
|
import os
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
def random_sampling_data(X, y, num=100, random_state=42):
"""
:param data:
:param num:
:return:
"""
if num > len(y):
print(f'input data size {len(y)} is less than the sampled number.')
return -1
X, y = shuffle(X, y, n_samples=num, random_state=random_state)
return X, y
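# A minimal usage sketch with made-up arrays: draws 3 of the 5 rows without
# replacement while keeping X and y aligned:
if __name__ == '__main__':
    X_demo = np.arange(10).reshape(5, 2)
    y_demo = np.array([0, 1, 0, 1, 0])
    X_s, y_s = random_sampling_data(X_demo, y_demo, num=3)
    print(X_s.shape, y_s.shape)  # (3, 2) (3,)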
def load_and_split_data(norm_file='input_data/demo_dataset/normal_demo_dataset.txt',
anomaly_file='input_data/demo_dataset/anomaly_demo_dataset.txt',
test_size=0.3, random_state=42):
"""
:param test_size:
:param random_state:
:return:
"""
X_norm, y_norm = load_data_from_txt(input_file=norm_file, sample_type='normal')
X_train, X_norm_test, y_train, y_norm_test = train_test_split(X_norm, y_norm, test_size=test_size,
random_state=random_state)
X_anomaly, y_anomaly = load_data_from_txt(input_file=anomaly_file, sample_type='anomaly')
X_anomaly, y_anomaly = random_sampling_data(X_anomaly, y_anomaly, num=len(y_norm_test), random_state=random_state)
X_test = np.concatenate([X_norm_test, X_anomaly], axis=0) # axis =0 means that concatenate by rows
y_test = np.concatenate([y_norm_test, y_anomaly], axis=0)
return X_train, y_train, X_test, y_test
def load_data_from_txt(input_file, sample_type='normal'):
X = []
y = []
with open(input_file, 'r') as hdl:
line = hdl.readline()
while line != '':
if line.startswith('ts'):
line = hdl.readline()
continue
X.append(line.split(','))
if sample_type == 'normal':
y.append(0) # normal: 0
else:
y.append(1) # anomaly: 1
line = hdl.readline()
return np.asarray(X, dtype=float), np.asarray(y, dtype=int)
def select_features_from_list(input_file='', output_file='', features_lst=[5, 9, 10, 12]):
"""
:param input_file:
:param features_lst:
:return:
"""
X = []
with open(input_file, 'r') as hdl:
line = hdl.readline()
while line != '':
if line.startswith('ts'):
line = hdl.readline()
continue
arr = line.split(',')
line_tmp = ''
for idx in features_lst[:-1]:
line_tmp += str(arr[idx]) + ','
X.append(line_tmp + str(arr[features_lst[-1]]))
line = hdl.readline()
if output_file == '':
output_file = os.path.splitext(input_file)[0] + '_sub.txt'
with open(output_file, 'w') as hdl:
for line in X:
hdl.write(line + '\n')
return output_file
def balance_data(x_norm_train_DT, y_norm_train_DT, x_attack_train_DT, y_attack_train_DT, random_state=42):
min_size = x_norm_train_DT.shape[0]
if min_size > x_attack_train_DT.shape[0]:
min_size = x_attack_train_DT.shape[0]
x_train_DT = np.concatenate([shuffle(x_norm_train_DT, random_state=random_state)[:min_size], x_attack_train_DT])
y_train_DT = np.concatenate([y_norm_train_DT[:min_size], y_attack_train_DT])
else:
x_train_DT = np.concatenate([x_norm_train_DT, shuffle(x_attack_train_DT, random_state=random_state)[:min_size]])
y_train_DT = np.concatenate([y_norm_train_DT, y_attack_train_DT[:min_size]])
print(f'\nWith data balance, x_train.shape: {x_train_DT.shape}')
print(
f' in which, x_norm_train_DT.shape: {x_norm_train_DT[:min_size].shape}, and x_attack_train_DT.shape: {x_attack_train_DT[:min_size].shape}')
return x_train_DT, y_train_DT
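# A toy illustration (arrays invented for the example): the larger class is
# shuffled and truncated so both classes contribute the same number of rows:
if __name__ == '__main__':
    x_norm = np.zeros((4, 3))
    y_norm = np.zeros(4, dtype=int)
    x_attack = np.ones((10, 3))
    y_attack = np.ones(10, dtype=int)
    x_bal, y_bal = balance_data(x_norm, y_norm, x_attack, y_attack)
    print(x_bal.shape, y_bal.shape)  # (8, 3) (8,)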
def discretize_features(features_arr=[]):
# features=[]
# for idx, feat in enumerate(features_arr):
# if idx
features = []
if features_arr[0] == '6': # 6: tcp
features.extend([1, 0]) # one hot: tcp and udp
else: # features_arr[0] == '17': # 17: udp
features.extend([0, 1])
features.extend(features_arr[1:])
return features
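# A small sketch of the one-hot protocol expansion above (input values made
# up): proto '6' (tcp) becomes [1, 0], anything else becomes [0, 1], and the
# remaining feature strings are passed through unchanged:
if __name__ == '__main__':
    print(discretize_features(['6', '0.5', '12', '3']))   # [1, 0, '0.5', '12', '3']
    print(discretize_features(['17', '0.5', '12', '3']))  # [0, 1, '0.5', '12', '3']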
def load_data_from_txt_1(input_file, start=0, end=77989, discretize_flg=True):
"""
features: ts, sip, dip, sport, dport, proto, dura, orig_pks, reply_pks, orig_bytes, reply_bytes, orig_min_pkt_size, orig_max_pkt_size, reply_min_pkt_size, reply_max_pkt_size, orig_min_interval, orig_max_interval, reply_min_interval, reply_max_interval, orig_min_ttl, orig_max_ttl, reply_min_ttl, reply_max_ttl, urg, ack, psh, rst, syn, fin, is_new, state, prev_state
idx : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
:param input_file:
:param start:
:param end:
:param discretize_flg:
:return:
"""
data = []
cnt = 0
with open(input_file, 'r') as hdl:
line = hdl.readline()
while line != '' and cnt < end:
if line.startswith('ts'):
line = hdl.readline()
continue
if cnt >= start:
if discretize_flg:
arr = line.split(',')[5:]
features = discretize_features(arr)
data.append(features) # without : "ts, sip, dip, sport, dport"
else:
data.append(line.split(',')[5:]) # without : "ts, sip, dip, sport, dport"
line = hdl.readline()
cnt += 1
return np.asarray(data, dtype=float)
def dump_model(model, out_file):
"""
save model to disk
:param model:
:param out_file:
:return:
"""
out_dir = os.path.split(out_file)[0]
if not os.path.exists(out_dir):
os.makedirs(out_dir)
with open(out_file, 'wb') as f:
pickle.dump(model, f)
print("Model saved in %s" % out_file)
return out_file
def load_model(input_file):
"""
:param input_file:
:return:
"""
print("Loading model...")
with open(input_file, 'rb') as f:
model = pickle.load(f)
print("Model loaded.")
return model
def get_variable_name(data_var):
"""
Get the name of the caller's variable that refers to data_var (best effort).
Note: the original implementation inspected locals() inside this function, which only
contains this function's own locals, so it could never see the caller's variables;
the caller's frame is inspected instead.
:param data_var: the object whose variable name is wanted
:return: the first matching name in the caller's local scope, or '' if none is found
"""
import inspect
name = ''
caller_locals = inspect.currentframe().f_back.f_locals
for key, val in caller_locals.items():
if val is data_var:
name = key
break
return name
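# Hedged end-to-end sketch (not part of the original file): 'flows.txt' and the use of
# DecisionTreeClassifier are illustrative assumptions, and scikit-learn is assumed to be
# available. It shows how load_data_from_txt_1, dump_model and load_model fit together.
def _example_model_roundtrip():
    from sklearn.tree import DecisionTreeClassifier
    X = load_data_from_txt_1('flows.txt', start=0, end=1000, discretize_flg=True)
    y = [0] * len(X)  # placeholder labels, for illustration only
    model = DecisionTreeClassifier().fit(X, y)
    path = dump_model(model, './models/dt_model.pkl')
    restored = load_model(path)
    assert restored.get_params() == model.get_params()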
|
PypiClean
|
/py_tgcalls-0.9.1-cp310-cp310-macosx_10_15_x86_64.whl/pytgcalls/node_modules/typescript/lib/lib.es2015.core.d.ts
|
interface Array<T> {
/**
* Returns the value of the first element in the array where predicate is true, and undefined
* otherwise.
* @param predicate find calls predicate once for each element of the array, in ascending
* order, until it finds one where predicate returns true. If such an element is found, find
* immediately returns that element value. Otherwise, find returns undefined.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
find<S extends T>(predicate: (this: void, value: T, index: number, obj: T[]) => value is S, thisArg?: any): S | undefined;
find(predicate: (value: T, index: number, obj: T[]) => unknown, thisArg?: any): T | undefined;
/**
* Returns the index of the first element in the array where predicate is true, and -1
* otherwise.
* @param predicate find calls predicate once for each element of the array, in ascending
* order, until it finds one where predicate returns true. If such an element is found,
* findIndex immediately returns that element index. Otherwise, findIndex returns -1.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
findIndex(predicate: (value: T, index: number, obj: T[]) => unknown, thisArg?: any): number;
/**
* Changes all array elements from `start` to `end` index to a static `value` and returns the modified array
* @param value value to fill array section with
* @param start index to start filling the array at. If start is negative, it is treated as
* length+start where length is the length of the array.
* @param end index to stop filling the array at. If end is negative, it is treated as
* length+end.
*/
fill(value: T, start?: number, end?: number): this;
/**
* Returns the this object after copying a section of the array identified by start and end
* to the same array starting at position target
* @param target If target is negative, it is treated as length+target where length is the
* length of the array.
* @param start If start is negative, it is treated as length+start. If end is negative, it
* is treated as length+end.
* @param end If not specified, length of the this object is used as its default value.
*/
copyWithin(target: number, start: number, end?: number): this;
}
interface ArrayConstructor {
/**
* Creates an array from an array-like object.
* @param arrayLike An array-like object to convert to an array.
*/
from<T>(arrayLike: ArrayLike<T>): T[];
/**
* Creates an array from an iterable object.
* @param arrayLike An array-like object to convert to an array.
* @param mapfn A mapping function to call on every element of the array.
* @param thisArg Value of 'this' used to invoke the mapfn.
*/
from<T, U>(arrayLike: ArrayLike<T>, mapfn: (v: T, k: number) => U, thisArg?: any): U[];
/**
* Returns a new array from a set of elements.
* @param items A set of elements to include in the new array object.
*/
of<T>(...items: T[]): T[];
}
interface DateConstructor {
new (value: number | string | Date): Date;
}
interface Function {
/**
* Returns the name of the function. Function names are read-only and can not be changed.
*/
readonly name: string;
}
interface Math {
/**
* Returns the number of leading zero bits in the 32-bit binary representation of a number.
* @param x A numeric expression.
*/
clz32(x: number): number;
/**
* Returns the result of 32-bit multiplication of two numbers.
* @param x First number
* @param y Second number
*/
imul(x: number, y: number): number;
/**
* Returns the sign of x, indicating whether x is positive, negative or zero.
* @param x The numeric expression to test
*/
sign(x: number): number;
/**
* Returns the base 10 logarithm of a number.
* @param x A numeric expression.
*/
log10(x: number): number;
/**
* Returns the base 2 logarithm of a number.
* @param x A numeric expression.
*/
log2(x: number): number;
/**
* Returns the natural logarithm of 1 + x.
* @param x A numeric expression.
*/
log1p(x: number): number;
/**
* Returns the result of (e^x - 1), which is an implementation-dependent approximation to
* subtracting 1 from the exponential function of x (e raised to the power of x, where e
* is the base of the natural logarithms).
* @param x A numeric expression.
*/
expm1(x: number): number;
/**
* Returns the hyperbolic cosine of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
cosh(x: number): number;
/**
* Returns the hyperbolic sine of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
sinh(x: number): number;
/**
* Returns the hyperbolic tangent of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
tanh(x: number): number;
/**
* Returns the inverse hyperbolic cosine of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
acosh(x: number): number;
/**
* Returns the inverse hyperbolic sine of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
asinh(x: number): number;
/**
* Returns the inverse hyperbolic tangent of a number.
* @param x A numeric expression that contains an angle measured in radians.
*/
atanh(x: number): number;
/**
* Returns the square root of the sum of squares of its arguments.
* @param values Values to compute the square root for.
* If no arguments are passed, the result is +0.
* If there is only one argument, the result is the absolute value.
* If any argument is +Infinity or -Infinity, the result is +Infinity.
* If any argument is NaN, the result is NaN.
* If all arguments are either +0 or −0, the result is +0.
*/
hypot(...values: number[]): number;
/**
* Returns the integral part of a numeric expression, x, removing any fractional digits.
* If x is already an integer, the result is x.
* @param x A numeric expression.
*/
trunc(x: number): number;
/**
* Returns the nearest single precision float representation of a number.
* @param x A numeric expression.
*/
fround(x: number): number;
/**
* Returns an implementation-dependent approximation to the cube root of a number.
* @param x A numeric expression.
*/
cbrt(x: number): number;
}
interface NumberConstructor {
/**
* The value of Number.EPSILON is the difference between 1 and the smallest value greater than 1
* that is representable as a Number value, which is approximately:
* 2.2204460492503130808472633361816 x 10−16.
*/
readonly EPSILON: number;
/**
* Returns true if passed value is finite.
* Unlike the global isFinite, Number.isFinite doesn't forcibly convert the parameter to a
* number. Only finite values of the type number, result in true.
* @param number A numeric value.
*/
isFinite(number: unknown): boolean;
/**
* Returns true if the value passed is an integer, false otherwise.
* @param number A numeric value.
*/
isInteger(number: unknown): boolean;
/**
* Returns a Boolean value that indicates whether a value is the reserved value NaN (not a
* number). Unlike the global isNaN(), Number.isNaN() doesn't forcefully convert the parameter
* to a number. Only values of the type number, that are also NaN, result in true.
* @param number A numeric value.
*/
isNaN(number: unknown): boolean;
/**
* Returns true if the value passed is a safe integer.
* @param number A numeric value.
*/
isSafeInteger(number: unknown): boolean;
/**
* The value of the largest integer n such that n and n + 1 are both exactly representable as
* a Number value.
* The value of Number.MAX_SAFE_INTEGER is 9007199254740991 (2^53 − 1).
*/
readonly MAX_SAFE_INTEGER: number;
/**
* The value of the smallest integer n such that n and n − 1 are both exactly representable as
* a Number value.
* The value of Number.MIN_SAFE_INTEGER is −9007199254740991 (−(2^53 − 1)).
*/
readonly MIN_SAFE_INTEGER: number;
/**
* Converts a string to a floating-point number.
* @param string A string that contains a floating-point number.
*/
parseFloat(string: string): number;
/**
* Converts A string to an integer.
* @param string A string to convert into a number.
* @param radix A value between 2 and 36 that specifies the base of the number in `string`.
* If this argument is not supplied, strings with a prefix of '0x' are considered hexadecimal.
* All other strings are considered decimal.
*/
parseInt(string: string, radix?: number): number;
}
interface ObjectConstructor {
/**
* Copy the values of all of the enumerable own properties from one or more source objects to a
* target object. Returns the target object.
* @param target The target object to copy to.
* @param source The source object from which to copy properties.
*/
assign<T extends {}, U>(target: T, source: U): T & U;
/**
* Copy the values of all of the enumerable own properties from one or more source objects to a
* target object. Returns the target object.
* @param target The target object to copy to.
* @param source1 The first source object from which to copy properties.
* @param source2 The second source object from which to copy properties.
*/
assign<T extends {}, U, V>(target: T, source1: U, source2: V): T & U & V;
/**
* Copy the values of all of the enumerable own properties from one or more source objects to a
* target object. Returns the target object.
* @param target The target object to copy to.
* @param source1 The first source object from which to copy properties.
* @param source2 The second source object from which to copy properties.
* @param source3 The third source object from which to copy properties.
*/
assign<T extends {}, U, V, W>(target: T, source1: U, source2: V, source3: W): T & U & V & W;
/**
* Copy the values of all of the enumerable own properties from one or more source objects to a
* target object. Returns the target object.
* @param target The target object to copy to.
* @param sources One or more source objects from which to copy properties
*/
assign(target: object, ...sources: any[]): any;
/**
* Returns an array of all symbol properties found directly on object o.
* @param o Object to retrieve the symbols from.
*/
getOwnPropertySymbols(o: any): symbol[];
/**
* Returns the names of the enumerable string properties and methods of an object.
* @param o Object that contains the properties and methods. This can be an object that you created or an existing Document Object Model (DOM) object.
*/
keys(o: {}): string[];
/**
* Returns true if the values are the same value, false otherwise.
* @param value1 The first value.
* @param value2 The second value.
*/
is(value1: any, value2: any): boolean;
/**
* Sets the prototype of a specified object o to object proto or null. Returns the object o.
* @param o The object to change its prototype.
* @param proto The value of the new prototype or null.
*/
setPrototypeOf(o: any, proto: object | null): any;
}
interface ReadonlyArray<T> {
/**
* Returns the value of the first element in the array where predicate is true, and undefined
* otherwise.
* @param predicate find calls predicate once for each element of the array, in ascending
* order, until it finds one where predicate returns true. If such an element is found, find
* immediately returns that element value. Otherwise, find returns undefined.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
find<S extends T>(predicate: (this: void, value: T, index: number, obj: readonly T[]) => value is S, thisArg?: any): S | undefined;
find(predicate: (value: T, index: number, obj: readonly T[]) => unknown, thisArg?: any): T | undefined;
/**
* Returns the index of the first element in the array where predicate is true, and -1
* otherwise.
* @param predicate find calls predicate once for each element of the array, in ascending
* order, until it finds one where predicate returns true. If such an element is found,
* findIndex immediately returns that element index. Otherwise, findIndex returns -1.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
findIndex(predicate: (value: T, index: number, obj: readonly T[]) => unknown, thisArg?: any): number;
}
interface RegExp {
/**
* Returns a string indicating the flags of the regular expression in question. This field is read-only.
* The characters in this string are sequenced and concatenated in the following order:
*
* - "g" for global
* - "i" for ignoreCase
* - "m" for multiline
* - "u" for unicode
* - "y" for sticky
*
* If no flags are set, the value is the empty string.
*/
readonly flags: string;
/**
* Returns a Boolean value indicating the state of the sticky flag (y) used with a regular
* expression. Default is false. Read-only.
*/
readonly sticky: boolean;
/**
* Returns a Boolean value indicating the state of the Unicode flag (u) used with a regular
* expression. Default is false. Read-only.
*/
readonly unicode: boolean;
}
interface RegExpConstructor {
new (pattern: RegExp | string, flags?: string): RegExp;
(pattern: RegExp | string, flags?: string): RegExp;
}
interface String {
/**
* Returns a nonnegative integer Number less than 1114112 (0x110000) that is the code point
* value of the UTF-16 encoded code point starting at the string element at position pos in
* the String resulting from converting this object to a String.
* If there is no element at that position, the result is undefined.
* If a valid UTF-16 surrogate pair does not begin at pos, the result is the code unit at pos.
*/
codePointAt(pos: number): number | undefined;
/**
* Returns true if searchString appears as a substring of the result of converting this
* object to a String, at one or more positions that are
* greater than or equal to position; otherwise, returns false.
* @param searchString search string
* @param position If position is undefined, 0 is assumed, so as to search all of the String.
*/
includes(searchString: string, position?: number): boolean;
/**
* Returns true if the sequence of elements of searchString converted to a String is the
* same as the corresponding elements of this object (converted to a String) starting at
* endPosition – length(this). Otherwise returns false.
*/
endsWith(searchString: string, endPosition?: number): boolean;
/**
* Returns the String value result of normalizing the string into the normalization form
* named by form as specified in Unicode Standard Annex #15, Unicode Normalization Forms.
* @param form Applicable values: "NFC", "NFD", "NFKC", or "NFKD", If not specified default
* is "NFC"
*/
normalize(form: "NFC" | "NFD" | "NFKC" | "NFKD"): string;
/**
* Returns the String value result of normalizing the string into the normalization form
* named by form as specified in Unicode Standard Annex #15, Unicode Normalization Forms.
* @param form Applicable values: "NFC", "NFD", "NFKC", or "NFKD", If not specified default
* is "NFC"
*/
normalize(form?: string): string;
/**
* Returns a String value that is made from count copies appended together. If count is 0,
* the empty string is returned.
* @param count number of copies to append
*/
repeat(count: number): string;
/**
* Returns true if the sequence of elements of searchString converted to a String is the
* same as the corresponding elements of this object (converted to a String) starting at
* position. Otherwise returns false.
*/
startsWith(searchString: string, position?: number): boolean;
/**
* Returns an `<a>` HTML anchor element and sets the name attribute to the text value
* @deprecated A legacy feature for browser compatibility
* @param name
*/
anchor(name: string): string;
/**
* Returns a `<big>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
big(): string;
/**
* Returns a `<blink>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
blink(): string;
/**
* Returns a `<b>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
bold(): string;
/**
* Returns a `<tt>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
fixed(): string;
/**
* Returns a `<font>` HTML element and sets the color attribute value
* @deprecated A legacy feature for browser compatibility
*/
fontcolor(color: string): string;
/**
* Returns a `<font>` HTML element and sets the size attribute value
* @deprecated A legacy feature for browser compatibility
*/
fontsize(size: number): string;
/**
* Returns a `<font>` HTML element and sets the size attribute value
* @deprecated A legacy feature for browser compatibility
*/
fontsize(size: string): string;
/**
* Returns an `<i>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
italics(): string;
/**
* Returns an `<a>` HTML element and sets the href attribute value
* @deprecated A legacy feature for browser compatibility
*/
link(url: string): string;
/**
* Returns a `<small>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
small(): string;
/**
* Returns a `<strike>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
strike(): string;
/**
* Returns a `<sub>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
sub(): string;
/**
* Returns a `<sup>` HTML element
* @deprecated A legacy feature for browser compatibility
*/
sup(): string;
}
interface StringConstructor {
/**
* Return the String value whose elements are, in order, the elements in the List elements.
* If length is 0, the empty string is returned.
*/
fromCodePoint(...codePoints: number[]): string;
/**
* String.raw is usually used as a tag function of a Tagged Template String. When called as
* such, the first argument will be a well formed template call site object and the rest
* parameter will contain the substitution values. It can also be called directly, for example,
* to interleave strings and values from your own tag function, and in this case the only thing
* it needs from the first argument is the raw property.
* @param template A well-formed template string call site representation.
* @param substitutions A set of substitution values.
*/
raw(template: { raw: readonly string[] | ArrayLike<string>}, ...substitutions: any[]): string;
}
|
PypiClean
|
/GalSim-2.4.11-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl/galsim/roman/__init__.py
|
import os
import numpy as np
from .. import meta_data, Image
gain = 1.0
pixel_scale = 0.11 # arcsec / pixel
diameter = 2.37 # meters
obscuration = 0.32
collecting_area = 3.757e4 # cm^2, from Cycle 7
exptime = 140.25 # s
dark_current = 0.015 # e-/pix/s
nonlinearity_beta = -6.e-7
reciprocity_alpha = 0.0065
read_noise = 8.5 # e-
n_dithers = 6
thermal_backgrounds = {'J129': 0.023, # e-/pix/s
'F184': 0.179,
'Y106': 0.023,
'Z087': 0.023,
'H158': 0.022,
'W149': 0.023}
# Physical pixel size
pixel_scale_mm = 0.01 # mm
# There are actually separate pupil plane files for each SCA, since the view of the pupil
# obscuration is different from different locations on the focal plane. It's also modestly
# wavelength dependent, so there is a different file appropriate for F184, the longest wavelength
# filter. This file is for SCA2, which is near the center and for short wavelengths, so in
# some sense the most typical example of the pupil mask. If anyone needs a generic pupil
# plane file to use, this one should be fine.
pupil_plane_file = os.path.join(meta_data.share_dir, 'roman', 'SCA2_rim_mask.fits.gz')
# The pupil plane files all keep track of their correct pixel scale, but for the exit pupil,
# rather than the input pupil. The scaling to use to get to the entrance pupil, which is what
# we actually want, is in the header as PUPILMAG. The result for the above file is given here.
pupil_plane_scale = 0.00111175097
# Which bands should use the long vs short pupil plane files for the PSF.
# F184
longwave_bands = ['F184']
# Z087, Y106, J129, H158, W149
shortwave_bands = ['Z087', 'Y106', 'J129', 'H158', 'W149']
stray_light_fraction = 0.1
# IPC kernel is unnormalized at first. We will normalize it.
ipc_kernel = np.array([ [0.001269938, 0.015399776, 0.001199862], \
[0.013800177, 1.0, 0.015600367], \
[0.001270391, 0.016129619, 0.001200137] ])
ipc_kernel /= np.sum(ipc_kernel)
ipc_kernel = Image(ipc_kernel)
persistence_coefficients = np.array([0.045707683,0.014959818,0.009115737,0.00656769,0.005135571,0.004217028,0.003577534,0.003106601])/100.
# parameters in the fermi model = [ A, x0, dx, a, r, half_well]
# The following parameters are for H4RG-lo, the conservative model for low influence level x.
# The info and implementation can be found in roman_detectors.applyPersistence() and roman_detectors.fermi_linear().
persistence_fermi_parameters = np.array([0.017, 60000., 50000., 0.045, 1., 50000.])
n_sca = 18
n_pix_tot = 4096
n_pix = 4088
jitter_rms = 0.014
charge_diffusion = 0.1
from .roman_bandpass import getBandpasses
from .roman_backgrounds import getSkyLevel
from .roman_psfs import getPSF
from .roman_wcs import getWCS, findSCA, allowedPos, bestPA, convertCenter
from .roman_detectors import applyNonlinearity, addReciprocityFailure, applyIPC, applyPersistence, allDetectorEffects, NLfunc
from . import roman_config
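# Illustrative sketch (not part of the original module): combine a few of the detector
# constants defined above into a rough per-pixel background estimate for one exposure.
# Only the thermal background, dark current, exposure time and read noise from this file
# are used; a real sky estimate would come from getSkyLevel() instead.
def _example_background_per_pixel(band='H158'):
    background_e = (thermal_backgrounds[band] + dark_current) * exptime  # electrons/pixel
    noise_e = np.sqrt(background_e + read_noise**2)  # shot noise plus read noise, electrons rms
    return background_e, noise_e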
|
PypiClean
|
/mis_modulos-0.1.tar.gz/mis_modulos-0.1/pandas/core/arrays/string_arrow.py
|
from __future__ import annotations
from collections.abc import Callable # noqa: PDF001
import re
from typing import Union
import numpy as np
from pandas._libs import (
lib,
missing as libmissing,
)
from pandas._typing import (
Dtype,
NpDtype,
Scalar,
npt,
)
from pandas.compat import (
pa_version_under1p01,
pa_version_under2p0,
pa_version_under3p0,
pa_version_under4p0,
)
from pandas.core.dtypes.common import (
is_bool_dtype,
is_dtype_equal,
is_integer_dtype,
is_object_dtype,
is_scalar,
is_string_dtype,
pandas_dtype,
)
from pandas.core.dtypes.missing import isna
from pandas.core.arrays.arrow import ArrowExtensionArray
from pandas.core.arrays.boolean import BooleanDtype
from pandas.core.arrays.integer import Int64Dtype
from pandas.core.arrays.numeric import NumericDtype
from pandas.core.arrays.string_ import (
BaseStringArray,
StringDtype,
)
from pandas.core.strings.object_array import ObjectStringArrayMixin
if not pa_version_under1p01:
import pyarrow as pa
import pyarrow.compute as pc
from pandas.core.arrays.arrow._arrow_utils import fallback_performancewarning
ArrowStringScalarOrNAT = Union[str, libmissing.NAType]
def _chk_pyarrow_available() -> None:
if pa_version_under1p01:
msg = "pyarrow>=1.0.0 is required for PyArrow backed ArrowExtensionArray."
raise ImportError(msg)
# TODO: Inherit directly from BaseStringArrayMethods. Currently we inherit from
# ObjectStringArrayMixin because we want to have the object-dtype based methods as
# fallback for the ones that pyarrow doesn't yet support
class ArrowStringArray(ArrowExtensionArray, BaseStringArray, ObjectStringArrayMixin):
"""
Extension array for string data in a ``pyarrow.ChunkedArray``.
.. versionadded:: 1.2.0
.. warning::
ArrowStringArray is considered experimental. The implementation and
parts of the API may change without warning.
Parameters
----------
values : pyarrow.Array or pyarrow.ChunkedArray
The array of data.
Attributes
----------
None
Methods
-------
None
See Also
--------
array
The recommended function for creating a ArrowStringArray.
Series.str
The string methods are available on Series backed by
a ArrowStringArray.
Notes
-----
ArrowStringArray returns a BooleanArray for comparison methods.
Examples
--------
>>> pd.array(['This is', 'some text', None, 'data.'], dtype="string[pyarrow]")
<ArrowStringArray>
['This is', 'some text', <NA>, 'data.']
Length: 4, dtype: string
"""
# error: Incompatible types in assignment (expression has type "StringDtype",
# base class "ArrowExtensionArray" defined the type as "ArrowDtype")
_dtype: StringDtype # type: ignore[assignment]
def __init__(self, values) -> None:
super().__init__(values)
self._dtype = StringDtype(storage="pyarrow")
if not pa.types.is_string(self._data.type):
raise ValueError(
"ArrowStringArray requires a PyArrow (chunked) array of string type"
)
@classmethod
def _from_sequence(cls, scalars, dtype: Dtype | None = None, copy: bool = False):
from pandas.core.arrays.masked import BaseMaskedArray
_chk_pyarrow_available()
if dtype and not (isinstance(dtype, str) and dtype == "string"):
dtype = pandas_dtype(dtype)
assert isinstance(dtype, StringDtype) and dtype.storage == "pyarrow"
if isinstance(scalars, BaseMaskedArray):
# avoid costly conversion to object dtype in ensure_string_array and
# numerical issues with Float32Dtype
na_values = scalars._mask
result = scalars._data
result = lib.ensure_string_array(result, copy=copy, convert_na_value=False)
return cls(pa.array(result, mask=na_values, type=pa.string()))
# convert non-na-likes to str
result = lib.ensure_string_array(scalars, copy=copy)
return cls(pa.array(result, type=pa.string(), from_pandas=True))
@classmethod
def _from_sequence_of_strings(
cls, strings, dtype: Dtype | None = None, copy: bool = False
):
return cls._from_sequence(strings, dtype=dtype, copy=copy)
@property
def dtype(self) -> StringDtype: # type: ignore[override]
"""
An instance of 'string[pyarrow]'.
"""
return self._dtype
def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
"""Correctly construct numpy arrays when passed to `np.asarray()`."""
return self.to_numpy(dtype=dtype)
def to_numpy(
self,
dtype: npt.DTypeLike | None = None,
copy: bool = False,
na_value=lib.no_default,
) -> np.ndarray:
"""
Convert to a NumPy ndarray.
"""
# TODO: copy argument is ignored
result = np.array(self._data, dtype=dtype)
if self._data.null_count > 0:
if na_value is lib.no_default:
if dtype and np.issubdtype(dtype, np.floating):
return result
na_value = self._dtype.na_value
mask = self.isna()
result[mask] = na_value
return result
def insert(self, loc: int, item) -> ArrowStringArray:
if not isinstance(item, str) and item is not libmissing.NA:
raise TypeError("Scalar must be NA or str")
return super().insert(loc, item)
def _maybe_convert_setitem_value(self, value):
"""Maybe convert value to be pyarrow compatible."""
if is_scalar(value):
if isna(value):
value = None
elif not isinstance(value, str):
raise ValueError("Scalar must be NA or str")
else:
value = np.array(value, dtype=object, copy=True)
value[isna(value)] = None
for v in value:
if not (v is None or isinstance(v, str)):
raise ValueError("Scalar must be NA or str")
return value
def isin(self, values) -> npt.NDArray[np.bool_]:
if pa_version_under2p0:
fallback_performancewarning(version="2")
return super().isin(values)
value_set = [
pa_scalar.as_py()
for pa_scalar in [pa.scalar(value, from_pandas=True) for value in values]
if pa_scalar.type in (pa.string(), pa.null())
]
# for an empty value_set pyarrow 3.0.0 segfaults and pyarrow 2.0.0 returns True
# for null values, so we short-circuit to return all False array.
if not len(value_set):
return np.zeros(len(self), dtype=bool)
kwargs = {}
if pa_version_under3p0:
# in pyarrow 2.0.0 skip_null is ignored but is a required keyword and raises
# with unexpected keyword argument in pyarrow 3.0.0+
kwargs["skip_null"] = True
result = pc.is_in(self._data, value_set=pa.array(value_set), **kwargs)
# pyarrow 2.0.0 returned nulls, so we explicitly specify dtype to convert nulls
# to False
return np.array(result, dtype=np.bool_)
def astype(self, dtype, copy: bool = True):
dtype = pandas_dtype(dtype)
if is_dtype_equal(dtype, self.dtype):
if copy:
return self.copy()
return self
elif isinstance(dtype, NumericDtype):
data = self._data.cast(pa.from_numpy_dtype(dtype.numpy_dtype))
return dtype.__from_arrow__(data)
return super().astype(dtype, copy=copy)
# ------------------------------------------------------------------------
# String methods interface
# error: Incompatible types in assignment (expression has type "NAType",
# base class "ObjectStringArrayMixin" defined the type as "float")
_str_na_value = libmissing.NA # type: ignore[assignment]
def _str_map(
self, f, na_value=None, dtype: Dtype | None = None, convert: bool = True
):
# TODO: de-duplicate with StringArray method. This method is more or less
# copy-and-paste.
from pandas.arrays import (
BooleanArray,
IntegerArray,
)
if dtype is None:
dtype = self.dtype
if na_value is None:
na_value = self.dtype.na_value
mask = isna(self)
arr = np.asarray(self)
if is_integer_dtype(dtype) or is_bool_dtype(dtype):
constructor: type[IntegerArray] | type[BooleanArray]
if is_integer_dtype(dtype):
constructor = IntegerArray
else:
constructor = BooleanArray
na_value_is_na = isna(na_value)
if na_value_is_na:
na_value = 1
result = lib.map_infer_mask(
arr,
f,
mask.view("uint8"),
convert=False,
na_value=na_value,
# error: Argument 1 to "dtype" has incompatible type
# "Union[ExtensionDtype, str, dtype[Any], Type[object]]"; expected
# "Type[object]"
dtype=np.dtype(dtype), # type: ignore[arg-type]
)
if not na_value_is_na:
mask[:] = False
return constructor(result, mask)
elif is_string_dtype(dtype) and not is_object_dtype(dtype):
# i.e. StringDtype
result = lib.map_infer_mask(
arr, f, mask.view("uint8"), convert=False, na_value=na_value
)
result = pa.array(result, mask=mask, type=pa.string(), from_pandas=True)
return type(self)(result)
else:
# This is when the result type is object. We reach this when
# -> We know the result type is truly object (e.g. .encode returns bytes
# or .findall returns a list).
# -> We don't know the result type. E.g. `.get` can return anything.
return lib.map_infer_mask(arr, f, mask.view("uint8"))
def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex: bool = True):
if flags:
fallback_performancewarning()
return super()._str_contains(pat, case, flags, na, regex)
if regex:
if pa_version_under4p0 or case is False:
fallback_performancewarning(version="4")
return super()._str_contains(pat, case, flags, na, regex)
else:
result = pc.match_substring_regex(self._data, pat)
else:
if case:
result = pc.match_substring(self._data, pat)
else:
result = pc.match_substring(pc.utf8_upper(self._data), pat.upper())
result = BooleanDtype().__from_arrow__(result)
if not isna(na):
result[isna(result)] = bool(na)
return result
def _str_startswith(self, pat: str, na=None):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_startswith(pat, na)
pat = "^" + re.escape(pat)
return self._str_contains(pat, na=na, regex=True)
def _str_endswith(self, pat: str, na=None):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_endswith(pat, na)
pat = re.escape(pat) + "$"
return self._str_contains(pat, na=na, regex=True)
def _str_replace(
self,
pat: str | re.Pattern,
repl: str | Callable,
n: int = -1,
case: bool = True,
flags: int = 0,
regex: bool = True,
):
if (
pa_version_under4p0
or isinstance(pat, re.Pattern)
or callable(repl)
or not case
or flags
):
fallback_performancewarning(version="4")
return super()._str_replace(pat, repl, n, case, flags, regex)
func = pc.replace_substring_regex if regex else pc.replace_substring
result = func(self._data, pattern=pat, replacement=repl, max_replacements=n)
return type(self)(result)
def _str_match(
self, pat: str, case: bool = True, flags: int = 0, na: Scalar | None = None
):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_match(pat, case, flags, na)
if not pat.startswith("^"):
pat = "^" + pat
return self._str_contains(pat, case, flags, na, regex=True)
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_fullmatch(pat, case, flags, na)
if not pat.endswith("$") or pat.endswith("//$"):
pat = pat + "$"
return self._str_match(pat, case, flags, na)
def _str_isalnum(self):
result = pc.utf8_is_alnum(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isalpha(self):
result = pc.utf8_is_alpha(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isdecimal(self):
result = pc.utf8_is_decimal(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isdigit(self):
result = pc.utf8_is_digit(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_islower(self):
result = pc.utf8_is_lower(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isnumeric(self):
result = pc.utf8_is_numeric(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isspace(self):
if pa_version_under2p0:
fallback_performancewarning(version="2")
return super()._str_isspace()
result = pc.utf8_is_space(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_istitle(self):
result = pc.utf8_is_title(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_isupper(self):
result = pc.utf8_is_upper(self._data)
return BooleanDtype().__from_arrow__(result)
def _str_len(self):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_len()
result = pc.utf8_length(self._data)
return Int64Dtype().__from_arrow__(result)
def _str_lower(self):
return type(self)(pc.utf8_lower(self._data))
def _str_upper(self):
return type(self)(pc.utf8_upper(self._data))
def _str_strip(self, to_strip=None):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_strip(to_strip)
if to_strip is None:
result = pc.utf8_trim_whitespace(self._data)
else:
result = pc.utf8_trim(self._data, characters=to_strip)
return type(self)(result)
def _str_lstrip(self, to_strip=None):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_lstrip(to_strip)
if to_strip is None:
result = pc.utf8_ltrim_whitespace(self._data)
else:
result = pc.utf8_ltrim(self._data, characters=to_strip)
return type(self)(result)
def _str_rstrip(self, to_strip=None):
if pa_version_under4p0:
fallback_performancewarning(version="4")
return super()._str_rstrip(to_strip)
if to_strip is None:
result = pc.utf8_rtrim_whitespace(self._data)
else:
result = pc.utf8_rtrim(self._data, characters=to_strip)
return type(self)(result)
|
PypiClean
|
/jupyter-annotator-1.3.0.tar.gz/jupyter-annotator-1.3.0/README.md
|
# jupyter json annotator
This package provides an annotation UI for arbitrary datasets in JSON format.
## Install
```
pip install jupyter-annotator
```
## Usage
### 1. Normal usage
```python
from jupyter_annotator import Annotator
problems = [{
"id": 2,
"problem": "Where would I not want a fox? (a problem from coommonsenseQA)",
"options": {
"a": "hen house", "b": "england", "c": "mountains", "d": "english hunt", "e": "california"
},
"answer": "a",
"filtered": "xxxxxxxxxx"
}]
anno = Annotator(problems)
anno.start()
```

### 2. Custom fields + skip + filter
+ **Custom fields**: add custom fields in the format (field_name, type, max_length)
+ **Skip fields**: fields that will not appear in the form but remain visible in the preview, so they cannot be edited.
+ **Filter fields**: fields that appear neither in the form nor in the preview
```python
problems = [{
"id": 2,
"problem": "Where would I not want a fox? (a problem from coommonsenseQA)",
"options": {
"a": "hen house", "b": "england", "c": "mountains", "d": "english hunt", "e": "california"
},
"answer": "a",
"filtered": "xxxxxxxxxx"
}]
custom_fields = [("rationale", str, 100)]
skip_fields = ['id']
filter_fields = ["xxx"]
annotator = Annotator(problems, custom_fields=custom_fields, skip_fields=skip_fields, filter_fields=filter_fields)
annotator.start()
```

## References
+ [Jupyter Widgets - Using Interact](https://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html)
+ [jupyter-innotater](https://github.com/ideonate/jupyter-innotater)
|
PypiClean
|
/pyscard-2.0.7-cp39-cp39-macosx_10_9_universal2.whl/smartcard/ReaderMonitoring.py
|
from threading import Thread, Event
from time import sleep
import traceback
import smartcard.System
from smartcard.Observer import Observer
from smartcard.Observer import Observable
from smartcard.Synchronization import *
# ReaderObserver interface
class ReaderObserver(Observer):
"""
ReaderObserver is a base abstract class for objects that are to be notified
upon smartcard reader insertion/removal.
"""
def __init__(self):
pass
def update(self, observable, handlers):
"""Called upon reader insertion/removal.
@param observable:
@param handlers:
- addedreaders: list of added readers causing notification
- removedreaders: list of removed readers causing notification
"""
pass
class ReaderMonitor(Observable):
"""Class that monitors reader insertion/removal.
and notify observers
note: a reader monitoring thread will be running
as long as the reader monitor has observers, or ReaderMonitor.stop()
is called.
It implements the shared state design pattern, where objects
of the same type all share the same state, in our case essentially
the ReaderMonitoring Thread. Thanks to Frank Aune for implementing
the shared state pattern logics.
"""
__shared_state = {}
def __init__(self, startOnDemand=True, readerProc=smartcard.System.readers,
period=1):
self.__dict__ = self.__shared_state
Observable.__init__(self)
self.startOnDemand = startOnDemand
self.readerProc = readerProc
self.period = period
if self.startOnDemand:
self.rmthread = None
else:
self.rmthread = ReaderMonitoringThread(self, self.readerProc,
self.period)
self.rmthread.start()
def addObserver(self, observer):
"""Add an observer."""
Observable.addObserver(self, observer)
# If self.startOnDemand is True, the reader monitoring
# thread only runs when there are observers.
if self.startOnDemand:
if 0 < self.countObservers():
if not self.rmthread:
self.rmthread = ReaderMonitoringThread(
self,
self.readerProc, self.period)
# start reader monitoring thread in another thread to
# avoid a deadlock; addObserver and notifyObservers called
# in the ReaderMonitoringThread run() method are
# synchronized
try:
# Python 3.x
import _thread
_thread.start_new_thread(self.rmthread.start, ())
except:
# Python 2.x
import thread
thread.start_new_thread(self.rmthread.start, ())
else:
observer.update(self, (self.rmthread.readers, []))
def deleteObserver(self, observer):
"""Remove an observer."""
Observable.deleteObserver(self, observer)
# If self.startOnDemand is True, the reader monitoring
# thread is stopped when there are no more observers.
if self.startOnDemand:
if 0 == self.countObservers():
self.rmthread.stop()
del self.rmthread
self.rmthread = None
def __str__(self):
return self.__class__.__name__
synchronize(ReaderMonitor,
"addObserver deleteObserver deleteObservers " +
"setChanged clearChanged hasChanged " +
"countObservers")
class ReaderMonitoringThread(Thread):
"""Reader insertion thread.
This thread polls for pcsc reader insertion, since no
reader insertion event is available in pcsc.
"""
__shared_state = {}
def __init__(self, observable, readerProc, period):
self.__dict__ = self.__shared_state
Thread.__init__(self)
self.observable = observable
self.stopEvent = Event()
self.stopEvent.clear()
self.readers = []
self.setDaemon(True)
self.setName('smartcard.ReaderMonitoringThread')
self.readerProc = readerProc
self.period = period
def run(self):
"""Runs until stopEvent is notified, and notify
observers of all reader insertion/removal.
"""
while not self.stopEvent.isSet():
try:
# no need to monitor if no observers
if 0 < self.observable.countObservers():
currentReaders = self.readerProc()
addedReaders = []
removedReaders = []
if currentReaders != self.readers:
for reader in currentReaders:
if reader not in self.readers:
addedReaders.append(reader)
for reader in self.readers:
if reader not in currentReaders:
removedReaders.append(reader)
if addedReaders or removedReaders:
# Notify observers
self.readers = []
for r in currentReaders:
self.readers.append(r)
self.observable.setChanged()
self.observable.notifyObservers((addedReaders,
removedReaders))
# wait every second on stopEvent
self.stopEvent.wait(self.period)
except Exception:
# FIXME Tighten the exceptions caught by this block
traceback.print_exc()
# Most likely raised during interpreter shutdown due
# to unclean exit which failed to remove all observers.
# To solve this, we set the stop event and pass the
# exception to let the thread finish gracefully.
self.stopEvent.set()
def stop(self):
self.stopEvent.set()
self.join()
if __name__ == "__main__":
print('insert or remove readers in the next 20 seconds')
# a simple reader observer that prints added/removed readers
class printobserver(ReaderObserver):
def __init__(self, obsindex):
self.obsindex = obsindex
def update(self, observable, handlers):
addedreaders, removedreaders = handlers
print("%d - added: " % self.obsindex, addedreaders)
print("%d - removed: " % self.obsindex, removedreaders)
class testthread(Thread):
def __init__(self, obsindex):
Thread.__init__(self)
self.readermonitor = ReaderMonitor()
self.obsindex = obsindex
self.observer = None
def run(self):
# create and register observer
self.observer = printobserver(self.obsindex)
self.readermonitor.addObserver(self.observer)
sleep(20)
self.readermonitor.deleteObserver(self.observer)
t1 = testthread(1)
t2 = testthread(2)
t1.start()
t2.start()
t1.join()
t2.join()
|
PypiClean
|
/git-deps-1.1.0.zip/git-deps-1.1.0/git_deps/html/node_modules/jquery/src/core/ready.js
|
define( [
"../core",
"../var/document",
"../core/readyException",
"../deferred"
], function( jQuery, document ) {
"use strict";
// The deferred used on DOM ready
var readyList = jQuery.Deferred();
jQuery.fn.ready = function( fn ) {
readyList
.then( fn )
// Wrap jQuery.readyException in a function so that the lookup
// happens at the time of error handling instead of callback
// registration.
.catch( function( error ) {
jQuery.readyException( error );
} );
return this;
};
jQuery.extend( {
// Is the DOM ready to be used? Set to true once it occurs.
isReady: false,
// A counter to track how many items to wait for before
// the ready event fires. See #6781
readyWait: 1,
// Handle when the DOM is ready
ready: function( wait ) {
// Abort if there are pending holds or we're already ready
if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) {
return;
}
// Remember that the DOM is ready
jQuery.isReady = true;
// If a normal DOM Ready event fired, decrement, and wait if need be
if ( wait !== true && --jQuery.readyWait > 0 ) {
return;
}
// If there are functions bound, to execute
readyList.resolveWith( document, [ jQuery ] );
}
} );
jQuery.ready.then = readyList.then;
// The ready event handler and self cleanup method
function completed() {
document.removeEventListener( "DOMContentLoaded", completed );
window.removeEventListener( "load", completed );
jQuery.ready();
}
// Catch cases where $(document).ready() is called
// after the browser event has already occurred.
// Support: IE <=9 - 10 only
// Older IE sometimes signals "interactive" too soon
if ( document.readyState === "complete" ||
( document.readyState !== "loading" && !document.documentElement.doScroll ) ) {
// Handle it asynchronously to allow scripts the opportunity to delay ready
window.setTimeout( jQuery.ready );
} else {
// Use the handy event callback
document.addEventListener( "DOMContentLoaded", completed );
// A fallback to window.onload, that will always work
window.addEventListener( "load", completed );
}
} );
|
PypiClean
|
/python-binance-chain-0.1.20.tar.gz/python-binance-chain-0.1.20/binance_chain/websockets.py
|
import asyncio
import ujson as json
import logging
from random import random
from typing import Dict, Callable, Awaitable, Optional, List
import websockets as ws
from binance_chain.environment import BinanceEnvironment
from binance_chain.constants import KlineInterval
class ReconnectingWebsocket:
MAX_RECONNECTS: int = 5
MAX_RECONNECT_SECONDS: int = 60
MIN_RECONNECT_WAIT = 0.1
TIMEOUT: int = 10
PROTOCOL_VERSION: str = '1.0.0'
def __init__(self, loop, coro, env: BinanceEnvironment):
self._loop = loop
self._log = logging.getLogger(__name__)
self._coro = coro
self._reconnect_attempts: int = 0
self._conn = None
self._env = env
self._connect_id: int = None
self._ping_timeout = 60
self._socket: Optional[ws.client.WebSocketClientProtocol] = None
self._connect()
def _connect(self):
self._conn = asyncio.ensure_future(self._run())
def _get_ws_endpoint_url(self):
return f"{self._env.wss_url}ws"
async def _run(self):
keep_waiting: bool = True
logging.info(f"connecting to {self._get_ws_endpoint_url()}")
try:
async with ws.connect(self._get_ws_endpoint_url(), loop=self._loop) as socket:
self._on_connect(socket)
try:
while keep_waiting:
try:
evt = await asyncio.wait_for(self._socket.recv(), timeout=self._ping_timeout)
except asyncio.TimeoutError:
self._log.debug("no message in {} seconds".format(self._ping_timeout))
await self.send_keepalive()
except asyncio.CancelledError:
self._log.debug("cancelled error")
await self.ping()
else:
try:
evt_obj = json.loads(evt)
except ValueError:
pass
else:
await self._coro(evt_obj)
except ws.ConnectionClosed as e:
self._log.debug('conn closed:{}'.format(e))
keep_waiting = False
await self._reconnect()
except Exception as e:
self._log.debug('ws exception:{}'.format(e))
keep_waiting = False
await self._reconnect()
except Exception as e:
logging.info(f"websocket error: {e}")
def _on_connect(self, socket):
self._socket = socket
self._reconnect_attempts = 0
async def _reconnect(self):
await self.cancel()
self._reconnect_attempts += 1
if self._reconnect_attempts < self.MAX_RECONNECTS:
self._log.debug(f"websocket reconnecting {self.MAX_RECONNECTS - self._reconnect_attempts} attempts left")
reconnect_wait = self._get_reconnect_wait(self._reconnect_attempts)
self._log.debug(f' waiting {reconnect_wait}')
await asyncio.sleep(reconnect_wait)
self._connect()
else:
# maybe raise an exception
self._log.error(f"websocket could not reconnect after {self._reconnect_attempts} attempts")
pass
def _get_reconnect_wait(self, attempts: int) -> int:
expo = 2 ** attempts
return round(random() * min(self.MAX_RECONNECT_SECONDS, expo - 1) + 1)
async def send_keepalive(self):
msg = {"method": "keepAlive"}
await self._socket.send(json.dumps(msg, ensure_ascii=False))
async def send_message(self, msg, retry_count=0):
if not self._socket:
if retry_count < 5:
await asyncio.sleep(1)
await self.send_message(msg, retry_count + 1)
else:
logging.info("Unable to send, not connected")
else:
await self._socket.send(json.dumps(msg, ensure_ascii=False))
async def ping(self):
await self._socket.ping()
async def cancel(self):
try:
self._conn.cancel()
except asyncio.CancelledError:
pass
class BinanceChainSocketManagerBase:
def __init__(self, env: BinanceEnvironment):
"""Initialise the BinanceChainSocketManager
"""
self._env = env
self._callback: Callable[[int], Awaitable[str]]
self._conn = None
self._loop = None
self._log = logging.getLogger(__name__)
@classmethod
async def create(cls, loop, callback: Callable[[int], Awaitable[str]], env: Optional[BinanceEnvironment] = None):
"""Create a BinanceChainSocketManager instance
:param loop: asyncio loop
:param callback: async callback function to receive messages
:param env:
:return:
"""
env = env or BinanceEnvironment.get_production_env()
self = BinanceChainSocketManager(env=env)
self._loop = loop
self._callback = callback
self._conn = ReconnectingWebsocket(loop, self._recv, env=env)
return self
async def _recv(self, msg: Dict):
await self._callback(msg)
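# Hedged usage sketch (not part of the original module): shows how the async callback API
# documented in BinanceChainSocketManagerBase.create is typically wired up. The symbol name
# and the choice of the production environment are illustrative assumptions.
async def _example_ticker_stream(loop):
    async def handle_evt(msg):
        print(msg)  # each websocket event arrives here as a dict
    env = BinanceEnvironment.get_production_env()
    bcsm = await BinanceChainSocketManager.create(loop, handle_evt, env=env)
    await bcsm.subscribe_ticker(['BNB_BTC'])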
class BinanceChainSocketManager(BinanceChainSocketManagerBase):
async def subscribe_market_depth(self, symbols: List[str]):
"""Top 20 levels of bids and asks.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#6-book-depth-streams
:param symbols:
:return:
Sample ws response
.. code-block:: python
{
"stream": "marketDepth",
"data": {
"lastUpdateId": 160, // Last update ID
"symbol": "BNB_BTC", // symbol
"bids": [ // Bids to be updated
[
"0.0024", // Price level to be updated
"10" // Quantity
]
],
"asks": [ // Asks to be updated
[
"0.0026", // Price level to be updated
"100" // Quantity
]
]
}
}
"""
req_msg = {
"method": "subscribe",
"topic": "marketDepth",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def subscribe_market_diff(self, symbols: List[str]):
"""Returns individual trade updates.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#5-diff-depth-stream
:param symbols:
:return:
Sample ws response
.. code-block:: python
{
"stream": "marketDiff",
"data": {
"e": "depthUpdate", // Event type
"E": 123456789, // Event time
"s": "BNB_BTC", // Symbol
"b": [ // Bids to be updated
[
"0.0024", // Price level to be updated
"10" // Quantity
]
],
"a": [ // Asks to be updated
[
"0.0026", // Price level to be updated
"100" // Quantity
]
]
}
}
"""
req_msg = {
"method": "subscribe",
"topic": "marketDiff",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def subscribe_trades(self, symbols: List[str]):
"""Returns individual trade updates.
:param symbols:
:return:
Sample ws response
.. code-block:: python
{
"stream": "trades",
"data": [{
"e": "trade", // Event type
"E": 123456789, // Event height
"s": "BNB_BTC", // Symbol
"t": "12345", // Trade ID
"p": "0.001", // Price
"q": "100", // Quantity
"b": "88", // Buyer order ID
"a": "50", // Seller order ID
"T": 123456785, // Trade time
"sa": "bnb1me5u083m2spzt8pw8vunprnctc8syy64hegrcp", // SellerAddress
"ba": "bnb1kdr00ydr8xj3ydcd3a8ej2xxn8lkuja7mdunr5" // BuyerAddress
},
{
"e": "trade", // Event type
"E": 123456795, // Event time
"s": "BNB_BTC", // Symbol
"t": "12348", // Trade ID
"p": "0.001", // Price
"q": "100", // Quantity
"b": "88", // Buyer order ID
"a": "52", // Seller order ID
"T": 123456795, // Trade time
"sa": "bnb1me5u083m2spzt8pw8vunprnctc8syy64hegrcp", // SellerAddress
"ba": "bnb1kdr00ydr8xj3ydcd3a8ej2xxn8lkuja7mdunr5" // BuyerAddress
}]
}
"""
req_msg = {
"method": "subscribe",
"topic": "trades",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def subscribe_ticker(self, symbols: Optional[List[str]]):
"""24hr Ticker statistics for a symbols are pushed every second.
Default is all symbols, otherwise specify a list of symbols to watch
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#8-individual-symbol-ticker-streams
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#9-all-symbols-ticker-streams
:param symbols: optional
:return:
Sample ws response
.. code-block:: python
{
"stream": "ticker",
"data": {
"e": "24hrTicker", // Event type
"E": 123456789, // Event time
"s": "BNBBTC", // Symbol
"p": "0.0015", // Price change
"P": "250.00", // Price change percent
"w": "0.0018", // Weighted average price
"x": "0.0009", // Previous day's close price
"c": "0.0025", // Current day's close price
"Q": "10", // Close trade's quantity
"b": "0.0024", // Best bid price
"B": "10", // Best bid quantity
"a": "0.0026", // Best ask price
"A": "100", // Best ask quantity
"o": "0.0010", // Open price
"h": "0.0025", // High price
"l": "0.0010", // Low price
"v": "10000", // Total traded base asset volume
"q": "18", // Total traded quote asset volume
"O": 0, // Statistics open time
"C": 86400000, // Statistics close time
"F": "0", // First trade ID
"L": "18150", // Last trade Id
"n": 18151 // Total number of trades
}
}
"""
topic = 'ticker'
if not symbols:
topic = 'allTickers'
symbols = ['$all']
req_msg = {
"method": "subscribe",
"topic": topic,
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def subscribe_mini_ticker(self, symbol: Optional[str]):
"""Compact ticker for all or a single symbol
Default is all symbols, otherwise specify a symbol to watch
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#10-individual-symbol-mini-ticker-streams
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#11-all-symbols-mini-ticker-streams
:param symbol: optional
:return:
Sample ws response
.. code-block:: python
{
"stream": "allMiniTickers",
"data": [
{
"e": "24hrMiniTicker", // Event type
"E": 123456789, // Event time
"s": "BNBBTC", // Symbol
"c": "0.0025", // Current day's close price
"o": "0.0010", // Open price
"h": "0.0025", // High price
"l": "0.0010", // Low price
"v": "10000", // Total traded base asset volume
"q": "18", // Total traded quote asset volume
},
{
...
}]
}
"""
if not symbol:
topic = 'allMiniTickers'
symbol_list = ['$all']
else:
topic = 'miniTicker'
symbol_list = [symbol]
req_msg = {
"method": "subscribe",
"topic": topic,
"symbols": symbol_list
}
await self._conn.send_message(req_msg)
async def subscribe_blockheight(self):
"""Streams the latest block height.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#12-blockheight
:return:
Sample ws response
.. code-block:: python
{
"stream": "blockheight",
"data": {
"h": 123456789, // Block height
}
}
"""
req_msg = {
"method": "subscribe",
"topic": 'blockheight',
"symbols": ["$all"]
}
await self._conn.send_message(req_msg)
async def subscribe_orders(self, address: str):
"""Returns individual order updates.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#1-orders
:param address: address to watch
:return:
Sample ws response
.. code-block:: python
{
"stream": "orders",
"data": [{
"e": "executionReport", // Event type
"E": 1499405658658, // Event height
"s": "ETH_BTC", // Symbol
"S": 1, // Side, 1 for Buy; 2 for Sell
"o": 2, // Order type, 2 for LIMIT (only)
"f": 1, // Time in force, 1 for Good Till Expire (GTE); 3 for Immediate Or Cancel (IOC)
"q": "1.00000000", // Order quantity
"p": "0.10264410", // Order price
"x": "NEW", // Current execution type
"X": "Ack", // Current order status, possible values Ack, Canceled, Expired, IocNoFill, PartialFill, FullyFill, FailedBlocking, FailedMatching, Unknown
"i": "91D9...7E18-2317", // Order ID
"l": "0.00000000", // Last executed quantity
"z": "0.00000000", // Cumulative filled quantity
"L": "0.00000000", // Last executed price
"n": "10000BNB", // Commission amount for all user trades within a given block. Fees will be displayed with each order but will be charged once.
// Fee can be composed of a single symbol, ex: "10000BNB"
// or multiple symbols if the available "BNB" balance is not enough to cover the whole fees, ex: "1.00000000BNB;0.00001000BTC;0.00050000ETH"
"T": 1499405658657, // Transaction time
"t": "TRD1", // Trade ID
"O": 1499405658657, // Order creation time
},
{
"e": "executionReport", // Event type
"E": 1499405658658, // Event height
"s": "ETH_BNB", // Symbol
"S": "BUY", // Side
"o": "LIMIT", // Order type
"f": "GTE", // Time in force
"q": "1.00000000", // Order quantity
"p": "0.10264410", // Order price
"x": "NEW", // Current execution type
"X": "Ack", // Current order status
"i": 4293154, // Order ID
"l": "0.00000000", // Last executed quantity
"z": "0.00000000", // Cumulative filled quantity
"L": "0.00000000", // Last executed price
"n": "10000BNB", // Commission amount for all user trades within a given block. Fees will be displayed with each order but will be charged once.
// Fee can be composed of a single symbol, ex: "10000BNB"
// or multiple symbols if the available "BNB" balance is not enough to cover the whole fees, ex: "1.00000000BNB;0.00001000BTC;0.00050000ETH"
"T": 1499405658657, // Transaction time
"t": "TRD2", // Trade ID
"O": 1499405658657, // Order creation time
}]
}
"""
req_msg = {
"method": "subscribe",
"topic": "orders",
"userAddress": address
}
await self._conn.send_message(req_msg)
async def subscribe_account(self, address: str):
"""Return account updates.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#2-account
:param address: address to watch
:return:
Sample ws response
.. code-block:: python
{
"stream": "accounts",
"data": [{
"e": "outboundAccountInfo", // Event type
"E": 1499405658849, // Event height
"B": [ // Balances array
{
"a": "LTC", // Asset
"f": "17366.18538083", // Free amount
"l": "0.00000000", // Locked amount
"r": "0.00000000" // Frozen amount
},
{
"a": "BTC",
"f": "10537.85314051",
"l": "2.19464093",
"r": "0.00000000"
},
{
"a": "ETH",
"f": "17902.35190619",
"l": "0.00000000",
"r": "0.00000000"
}
]
}]
}
"""
req_msg = {
"method": "subscribe",
"topic": "accounts",
"userAddress": address
}
await self._conn.send_message(req_msg)
async def subscribe_transfers(self, address: str):
"""Return transfer updates if userAddress is involved (as sender or receiver) in a transfer.
Multisend is also covered
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#3-transfer
:param address: address to watch
:return:
Sample ws response
.. code-block:: python
{
"stream": "transfers",
"data": {
"e":"outboundTransferInfo", // Event type
"E":12893, // Event height
"H":"0434786487A1F4AE35D49FAE3C6F012A2AAF8DD59EC860DC7E77123B761DD91B", // Transaction hash
"f":"bnb1z220ps26qlwfgz5dew9hdxe8m5malre3qy6zr9", // From addr
"t":
[{
"o":"bnb1xngdalruw8g23eqvpx9klmtttwvnlk2x4lfccu", // To addr
"c":[{ // Coins
"a":"BNB", // Asset
"A":"100.00000000" // Amount
}]
}]
}
}
"""
req_msg = {
"method": "subscribe",
"topic": "transfers",
"userAddress": address
}
await self._conn.send_message(req_msg)
async def subscribe_klines(self, symbols: List[str], interval: KlineInterval = KlineInterval.FIVE_MINUTES):
"""The kline/candlestick stream pushes updates to the current klines/candlestick every second.
https://binance-chain.github.io/api-reference/dex-api/ws-streams.html#7-klinecandlestick-streams
:param symbols:
:param interval:
:return:
Sample ws response
.. code-block:: python
{
"stream": "kline_1m",
"data": {
"e": "kline", // Event type
"E": 123456789, // Event time
"s": "BNBBTC", // Symbol
"k": {
"t": 123400000, // Kline start time
"T": 123460000, // Kline close time
"s": "BNBBTC", // Symbol
"i": "1m", // Interval
"f": "100", // First trade ID
"L": "200", // Last trade ID
"o": "0.0010", // Open price
"c": "0.0020", // Close price
"h": "0.0025", // High price
"l": "0.0015", // Low price
"v": "1000", // Base asset volume
"n": 100, // Number of trades
"x": false, // Is this kline closed?
"q": "1.0000", // Quote asset volume
}
}
}
"""
req_msg = {
"method": "subscribe",
"topic": f"kline_{interval.value}",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_orders(self):
req_msg = {
"method": "unsubscribe",
"topic": "orders"
}
await self._conn.send_message(req_msg)
async def unsubscribe_account(self):
req_msg = {
"method": "unsubscribe",
"topic": "accounts"
}
await self._conn.send_message(req_msg)
async def unsubscribe_transfers(self):
req_msg = {
"method": "unsubscribe",
"topic": "transfers"
}
await self._conn.send_message(req_msg)
async def unsubscribe_market_depth(self, symbols: List[str]):
"""
:param symbols: List of symbols to unsubscribe from
:return:
"""
req_msg = {
"method": "unsubscribe",
"topic": "marketDepth",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_market_diff(self, symbols: List[str]):
"""
:param symbols: List of symbols to unsubscribe from
:return:
"""
req_msg = {
"method": "unsubscribe",
"topic": "marketDiff",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_trades(self, symbols: List[str]):
"""
:param symbols: List of symbols to unsubscribe from
:return:
"""
req_msg = {
"method": "unsubscribe",
"topic": "trades",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_klines(self, symbols: List[str], interval: KlineInterval):
"""
:param symbols: List of symbols to unsubscribe from
:param interval:
:return:
"""
req_msg = {
"method": "unsubscribe",
"topic": f"kline_{interval.value}",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_ticker(self, symbols: Optional[List[str]]):
if not symbols:
req_msg = {
"method": "unsubscribe",
"topic": "allTickers"
}
await self._conn.send_message(req_msg)
return  # all-tickers unsubscribe sent; no per-symbol topic to send
req_msg = {
"method": "unsubscribe",
"topic": "ticker",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_mini_ticker(self, symbols: Optional[List[str]]):
if not symbols:
req_msg = {
"method": "unsubscribe",
"topic": "allMiniTickers"
}
await self._conn.send_message(req_msg)
return  # all-mini-tickers unsubscribe sent; no per-symbol topic to send
req_msg = {
"method": "unsubscribe",
"topic": "miniTicker",
"symbols": symbols
}
await self._conn.send_message(req_msg)
async def unsubscribe_blockheight(self):
req_msg = {
"method": "unsubscribe",
"topic": "blockheight",
"symbols": ["$all"]
}
await self._conn.send_message(req_msg)
async def close_connection(self):
req_msg = {
"method": "close"
}
await self._conn.send_message(req_msg)
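# Illustrative demo (not part of the original module): it sends one raw subscription
# message with the same payload shape the methods above build. The endpoint URL and
# the use of the third-party `websockets` package are assumptions, not requirements
# of this module.
if __name__ == '__main__':
    import asyncio
    import json
    import websockets
    async def _demo():
        async with websockets.connect("wss://dex.binance.org/api/ws") as ws:
            # same shape as the req_msg dicts built by the subscribe_* methods above
            await ws.send(json.dumps({
                "method": "subscribe",
                "topic": "allTickers",
                "symbols": ["$all"]
            }))
            print(await ws.recv())
    asyncio.run(_demo())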
|
PypiClean
|
/pytest-libiio-0.0.13.tar.gz/pytest-libiio-0.0.13/README.rst
|
=============
pytest-libiio
=============
.. image:: https://img.shields.io/pypi/v/pytest-libiio.svg
:target: https://pypi.org/project/pytest-libiio
:alt: PyPI version
.. image:: https://img.shields.io/pypi/pyversions/pytest-libiio.svg
:target: https://pypi.org/project/pytest-libiio
:alt: Python versions
.. image:: https://travis-ci.org/tfcollins/pytest-libiio.svg?branch=master
:target: https://travis-ci.org/tfcollins/pytest-libiio
:alt: See Build Status on Travis CI
.. image:: https://coveralls.io/repos/github/tfcollins/pytest-libiio/badge.svg?branch=master
:target: https://coveralls.io/github/tfcollins/pytest-libiio?branch=master
:alt: See Coverage Status on Coveralls
.. image:: https://readthedocs.org/projects/pytest-libiio/badge/?version=latest
:target: https://pytest-libiio.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
A pytest plugin to manage interfacing with libiio contexts
----
pytest-libiio is a pytest plugin for managing interfacing with libiio contexts. The plugin is handy for leveraging the (new) zeroconf features of libiio to find, filter, and map libiio contexts to tests. It was created for `pyadi-iio <https://pypi.org/project/pyadi-iio/>`_ testing but is also used in other applications that need an organized way to handle libiio contexts without hardcoding URIs or adding lots of boilerplate code.
Requirements
------------
* libiio and pylibiio
* pytest
* pyyaml
For development the following are also needed:
* tox
* pytest-mock
* pre-commit
* isort
* flake8
* codespell
* black
Installation
------------
You can install "pytest-libiio" via `pip`_ from `PyPI`_::
$ pip install pytest-libiio
Usage
-----
This plugin makes accessing libiio contexts easier and provides a unified API through fixtures.
Accessing contexts
^^^^^^^^^^^^^^^^^^
Get a list of context descriptions of all found contexts
.. code-block:: python
import pytest
import iio
def test_libiio_device(context_desc):
hardware = ["pluto", "adrv9361", "fmcomms2"]
ctx = None
for ctx_desc in context_desc:
if ctx_desc["hw"] in hardware:
ctx = iio.Context(ctx_desc["uri"])
if not ctx:
pytest.skip("No required hardware found")
Require certain hardware through marks
.. code-block:: python
import pytest
import iio
@pytest.mark.iio_hardware("adrv9361")
def test_libiio_device(context_desc):
for ctx_desc in context_desc:
ctx = iio.Context(ctx_desc["uri"])
...
Future ideas
------------
Mock testing is common with libiio's Python library, since hardware is otherwise required. In future releases we hope to extend pytest-mock's features through this plugin to make mocking libiio more streamlined; a rough interim sketch follows.
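A plain ``pytest-mock`` based test might look like this today (the fixture and attribute names below are illustrative and not part of this plugin):
.. code-block:: python
    import iio

    def test_scan_without_hardware(mocker):
        # replace the libiio context class with a mock so no hardware is needed
        fake_ctx = mocker.patch("iio.Context").return_value
        fake_ctx.devices = []
        ctx = iio.Context("ip:analog.local")
        assert ctx.devices == []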
Contributing
------------
Contributions are very welcome. Tests can be run with `tox`_; please ensure
the coverage at least stays the same before you submit a pull request.
License
-------
Distributed under the terms of the `BSD-3`_ license, "pytest-libiio" is free and open source software.
Issues
------
If you encounter any problems, please `file an issue`_ along with a detailed description.
.. _`Cookiecutter`: https://github.com/audreyr/cookiecutter
.. _`@hackebrot`: https://github.com/hackebrot
.. _`MIT`: http://opensource.org/licenses/MIT
.. _`BSD-3`: http://opensource.org/licenses/BSD-3-Clause
.. _`GNU GPL v3.0`: http://www.gnu.org/licenses/gpl-3.0.txt
.. _`Apache Software License 2.0`: http://www.apache.org/licenses/LICENSE-2.0
.. _`cookiecutter-pytest-plugin`: https://github.com/pytest-dev/cookiecutter-pytest-plugin
.. _`file an issue`: https://github.com/tfcollins/pytest-libiio/issues
.. _`pytest`: https://github.com/pytest-dev/pytest
.. _`tox`: https://tox.readthedocs.io/en/latest/
.. _`pip`: https://pypi.org/project/pip/
.. _`PyPI`: https://pypi.org/project
|
PypiClean
|
/pulumi_gcp_native-0.0.2a1617829075.tar.gz/pulumi_gcp_native-0.0.2a1617829075/pulumi_gcp_native/cloudchannel/v1/account_customer_entitlement.py
|
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
from ._inputs import *
__all__ = ['AccountCustomerEntitlement']
class AccountCustomerEntitlement(pulumi.CustomResource):
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
accounts_id: Optional[pulumi.Input[str]] = None,
association_info: Optional[pulumi.Input[pulumi.InputType['GoogleCloudChannelV1AssociationInfoArgs']]] = None,
commitment_settings: Optional[pulumi.Input[pulumi.InputType['GoogleCloudChannelV1CommitmentSettingsArgs']]] = None,
customers_id: Optional[pulumi.Input[str]] = None,
entitlements_id: Optional[pulumi.Input[str]] = None,
offer: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['GoogleCloudChannelV1ParameterArgs']]]]] = None,
purchase_order_id: Optional[pulumi.Input[str]] = None,
request_id: Optional[pulumi.Input[str]] = None,
__props__=None,
__name__=None,
__opts__=None):
"""
Creates an entitlement for a customer. Possible error codes: * PERMISSION_DENIED: The customer doesn't belong to the reseller. * INVALID_ARGUMENT: * Required request parameters are missing or invalid. * There is already a customer entitlement for a SKU from the same product family. * INVALID_VALUE: Make sure the OfferId is valid. If it is, contact Google Channel support for further troubleshooting. * NOT_FOUND: The customer or offer resource was not found. * ALREADY_EXISTS: * The SKU was already purchased for the customer. * The customer's primary email already exists. Retry after changing the customer's primary contact email. * CONDITION_NOT_MET or FAILED_PRECONDITION: * The domain required for purchasing a SKU has not been verified. * A pre-requisite SKU required to purchase an Add-On SKU is missing. For example, Google Workspace Business Starter is required to purchase Vault or Drive. * (Developer accounts only) Reseller and resold domain must meet the following naming requirements: * Domain names must start with goog-test. * Domain names must include the reseller domain. * INTERNAL: Any non-user error related to a technical issue in the backend. Contact Cloud Channel support. * UNKNOWN: Any non-user error related to a technical issue in the backend. Contact Cloud Channel support. Return value: The ID of a long-running operation. To get the results of the operation, call the GetOperation method of CloudChannelOperationsService. The Operation metadata will contain an instance of OperationMetadata.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['GoogleCloudChannelV1AssociationInfoArgs']] association_info: Association information to other entitlements.
:param pulumi.Input[pulumi.InputType['GoogleCloudChannelV1CommitmentSettingsArgs']] commitment_settings: Commitment settings for a commitment-based Offer. Required for commitment based offers.
:param pulumi.Input[str] offer: Required. The offer resource name for which the entitlement is to be created. Takes the form: accounts/{account_id}/offers/{offer_id}.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['GoogleCloudChannelV1ParameterArgs']]]] parameters: Extended entitlement parameters. When creating an entitlement, valid parameters' names and values are defined in the offer's parameter definitions.
:param pulumi.Input[str] purchase_order_id: Optional. This purchase order (PO) information is for resellers to use for their company tracking usage. If a purchaseOrderId value is given, it appears in the API responses and shows up in the invoice. The property accepts up to 80 plain text characters.
:param pulumi.Input[str] request_id: Optional. You can specify an optional unique request ID, and if you need to retry your request, the server will know to ignore the request if it's complete. For example, you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if it received the original operation with the same request ID. If it did, it will ignore the second request. The request ID must be a valid [UUID](https://tools.ietf.org/html/rfc4122) with the exception that zero UUID is not supported (`00000000-0000-0000-0000-000000000000`).
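Example (an illustrative sketch only: the identifiers below are placeholders and the import path assumes the usual package re-export; neither comes from this file):
.. code-block:: python
    import pulumi_gcp_native as gcp_native

    entitlement = gcp_native.cloudchannel.v1.AccountCustomerEntitlement(
        "example-entitlement",
        accounts_id="C0123456",
        customers_id="S0123456",
        entitlements_id="E0123456",
        offer="accounts/C0123456/offers/OF-0001")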
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
if accounts_id is None and not opts.urn:
raise TypeError("Missing required property 'accounts_id'")
__props__['accounts_id'] = accounts_id
__props__['association_info'] = association_info
__props__['commitment_settings'] = commitment_settings
if customers_id is None and not opts.urn:
raise TypeError("Missing required property 'customers_id'")
__props__['customers_id'] = customers_id
if entitlements_id is None and not opts.urn:
raise TypeError("Missing required property 'entitlements_id'")
__props__['entitlements_id'] = entitlements_id
__props__['offer'] = offer
__props__['parameters'] = parameters
__props__['purchase_order_id'] = purchase_order_id
__props__['request_id'] = request_id
__props__['create_time'] = None
__props__['name'] = None
__props__['provisioned_service'] = None
__props__['provisioning_state'] = None
__props__['suspension_reasons'] = None
__props__['trial_settings'] = None
__props__['update_time'] = None
super(AccountCustomerEntitlement, __self__).__init__(
'gcp-native:cloudchannel/v1:AccountCustomerEntitlement',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'AccountCustomerEntitlement':
"""
Get an existing AccountCustomerEntitlement resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["association_info"] = None
__props__["commitment_settings"] = None
__props__["create_time"] = None
__props__["name"] = None
__props__["offer"] = None
__props__["parameters"] = None
__props__["provisioned_service"] = None
__props__["provisioning_state"] = None
__props__["purchase_order_id"] = None
__props__["suspension_reasons"] = None
__props__["trial_settings"] = None
__props__["update_time"] = None
return AccountCustomerEntitlement(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="associationInfo")
def association_info(self) -> pulumi.Output['outputs.GoogleCloudChannelV1AssociationInfoResponse']:
"""
Association information to other entitlements.
"""
return pulumi.get(self, "association_info")
@property
@pulumi.getter(name="commitmentSettings")
def commitment_settings(self) -> pulumi.Output['outputs.GoogleCloudChannelV1CommitmentSettingsResponse']:
"""
Commitment settings for a commitment-based Offer. Required for commitment based offers.
"""
return pulumi.get(self, "commitment_settings")
@property
@pulumi.getter(name="createTime")
def create_time(self) -> pulumi.Output[str]:
"""
The time at which the entitlement is created.
"""
return pulumi.get(self, "create_time")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Resource name of an entitlement in the form: accounts/{account_id}/customers/{customer_id}/entitlements/{entitlement_id}.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def offer(self) -> pulumi.Output[str]:
"""
Required. The offer resource name for which the entitlement is to be created. Takes the form: accounts/{account_id}/offers/{offer_id}.
"""
return pulumi.get(self, "offer")
@property
@pulumi.getter
def parameters(self) -> pulumi.Output[Sequence['outputs.GoogleCloudChannelV1ParameterResponse']]:
"""
Extended entitlement parameters. When creating an entitlement, valid parameters' names and values are defined in the offer's parameter definitions.
"""
return pulumi.get(self, "parameters")
@property
@pulumi.getter(name="provisionedService")
def provisioned_service(self) -> pulumi.Output['outputs.GoogleCloudChannelV1ProvisionedServiceResponse']:
"""
Service provisioning details for the entitlement.
"""
return pulumi.get(self, "provisioned_service")
@property
@pulumi.getter(name="provisioningState")
def provisioning_state(self) -> pulumi.Output[str]:
"""
Current provisioning state of the entitlement.
"""
return pulumi.get(self, "provisioning_state")
@property
@pulumi.getter(name="purchaseOrderId")
def purchase_order_id(self) -> pulumi.Output[str]:
"""
Optional. This purchase order (PO) information is for resellers to use for their company tracking usage. If a purchaseOrderId value is given, it appears in the API responses and shows up in the invoice. The property accepts up to 80 plain text characters.
"""
return pulumi.get(self, "purchase_order_id")
@property
@pulumi.getter(name="suspensionReasons")
def suspension_reasons(self) -> pulumi.Output[Sequence[str]]:
"""
Enumerable of all current suspension reasons for an entitlement.
"""
return pulumi.get(self, "suspension_reasons")
@property
@pulumi.getter(name="trialSettings")
def trial_settings(self) -> pulumi.Output['outputs.GoogleCloudChannelV1TrialSettingsResponse']:
"""
Settings for trial offers.
"""
return pulumi.get(self, "trial_settings")
@property
@pulumi.getter(name="updateTime")
def update_time(self) -> pulumi.Output[str]:
"""
The time at which the entitlement is updated.
"""
return pulumi.get(self, "update_time")
def translate_output_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return _tables.SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
|
PypiClean
|
/Pyomo-6.6.2-cp39-cp39-win_amd64.whl/pyomo/gdp/plugins/hull.py
|
import logging
import pyomo.common.config as cfg
from pyomo.common import deprecated
from pyomo.common.collections import ComponentMap, ComponentSet
from pyomo.common.modeling import unique_component_name
from pyomo.core.expr.numvalue import ZeroConstant
import pyomo.core.expr as EXPR
from pyomo.core.base import TransformationFactory, Reference
from pyomo.core import (
Block,
BooleanVar,
Connector,
Constraint,
Param,
Set,
SetOf,
Suffix,
Var,
Expression,
SortComponents,
TraversalStrategy,
Any,
RangeSet,
Reals,
value,
NonNegativeIntegers,
Binary,
)
from pyomo.gdp import Disjunct, Disjunction, GDP_Error
from pyomo.gdp.plugins.gdp_to_mip_transformation import GDP_to_MIP_Transformation
from pyomo.gdp.transformed_disjunct import _TransformedDisjunct
from pyomo.gdp.util import (
clone_without_expression_components,
is_child_of,
_warn_for_active_disjunct,
)
from pyomo.core.util import target_list
from weakref import ref as weakref_ref
logger = logging.getLogger('pyomo.gdp.hull')
@TransformationFactory.register(
'gdp.hull', doc="Relax disjunctive model by forming the hull reformulation."
)
class Hull_Reformulation(GDP_to_MIP_Transformation):
"""Relax disjunctive model by forming the hull reformulation.
Relaxes a disjunctive model into an algebraic model by forming the
hull reformulation of each disjunction.
This transformation accepts the following keyword arguments:
Parameters
----------
perspective_function : str
The perspective function used for the disaggregated variables.
Must be one of 'FurmanSawayaGrossmann' (default),
'LeeGrossmann', or 'GrossmannLee'
EPS : float
The value to use for epsilon [default: 1e-4]
targets : (block, disjunction, or list of those types)
The targets to transform. This can be a block, disjunction, or a
list of blocks and Disjunctions [default: the instance]
The transformation will create a new Block with a unique
name beginning "_pyomo_gdp_hull_reformulation".
The block will have a dictionary "_disaggregatedVarMap":
'srcVar': ComponentMap(<src var>:<disaggregated var>),
'disaggregatedVar': ComponentMap(<disaggregated var>:<src var>)
It will also have a ComponentMap "_bigMConstraintMap":
<disaggregated var>:<bounds constraint>
Last, it will contain an indexed Block named "relaxedDisjuncts",
which will hold the relaxed disjuncts. This block is indexed by
an integer indicating the order in which the disjuncts were relaxed.
Each block has a dictionary "_constraintMap":
'srcConstraints': ComponentMap(<transformed constraint>:
<src constraint>),
'transformedConstraints':
ComponentMap(<src constraint container> :
<transformed constraint container>,
<src constraintData> : [<transformed constraintDatas>])
All transformed Disjuncts will have a pointer to the block their transformed
constraints are on, and all transformed Disjunctions will have a
pointer to the corresponding OR or XOR constraint.
The _pyomo_gdp_hull_reformulation block will have a ComponentMap
"_disaggregationConstraintMap":
<src var>:ComponentMap(<srcDisjunction>: <disaggregation constraint>)
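A minimal usage sketch (the model object below is a placeholder; 'gdp.hull' is the factory name registered by this module):
.. code-block:: python
    from pyomo.environ import TransformationFactory

    # keyword options such as EPS or targets may also be passed to apply_to
    TransformationFactory('gdp.hull').apply_to(model)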
"""
CONFIG = cfg.ConfigDict('gdp.hull')
CONFIG.declare(
'targets',
cfg.ConfigValue(
default=None,
domain=target_list,
description="target or list of targets that will be relaxed",
doc="""
This specifies the target or list of targets to relax as either a
component or a list of components. If None (default), the entire model
is transformed. Note that if the transformation is done out of place,
the list of targets should be attached to the model before it is cloned,
and the list will specify the targets on the cloned instance.""",
),
)
CONFIG.declare(
'perspective function',
cfg.ConfigValue(
default='FurmanSawayaGrossmann',
domain=cfg.In(['FurmanSawayaGrossmann', 'LeeGrossmann', 'GrossmannLee']),
description='perspective function used for variable disaggregation',
doc="""
The perspective function used for variable disaggregation
"LeeGrossmann" is the original NL convex hull from Lee &
Grossmann (2000) [1]_, which substitutes nonlinear constraints
h_ik(x) <= 0
with
x_k = sum( nu_ik )
y_ik * h_ik( nu_ik/y_ik ) <= 0
"GrossmannLee" is an updated formulation from Grossmann &
Lee (2003) [2]_, which avoids divide-by-0 errors by using:
x_k = sum( nu_ik )
(y_ik + eps) * h_ik( nu_ik/(y_ik + eps) ) <= 0
"FurmanSawayaGrossmann" (default) is an improved relaxation [3]_
that is exact at 0 and 1 while avoiding numerical issues from
the Lee & Grossmann formulation by using:
x_k = sum( nu_ik )
((1-eps)*y_ik + eps) * h_ik( nu_ik/((1-eps)*y_ik + eps) ) \
- eps * h_ik(0) * ( 1-y_ik ) <= 0
References
----------
.. [1] Lee, S., & Grossmann, I. E. (2000). New algorithms for
nonlinear generalized disjunctive programming. Computers and
Chemical Engineering, 24, 2125-2141
.. [2] Grossmann, I. E., & Lee, S. (2003). Generalized disjunctive
programming: Nonlinear convex hull relaxation and algorithms.
Computational Optimization and Applications, 26, 83-100.
.. [3] Furman, K., Sawaya, N., and Grossmann, I. A computationally
useful algebraic representation of nonlinear disjunctive convex
sets using the perspective function. Optimization Online
(2016). http://www.optimization-online.org/DB_HTML/2016/07/5544.html.
""",
),
)
CONFIG.declare(
'EPS',
cfg.ConfigValue(
default=1e-4,
domain=cfg.PositiveFloat,
description="Epsilon value to use in perspective function",
),
)
CONFIG.declare(
'assume_fixed_vars_permanent',
cfg.ConfigValue(
default=False,
domain=bool,
description="Boolean indicating whether or not to transform so that "
"the transformed model will still be valid when fixed Vars are "
"unfixed.",
doc="""
If True, the transformation will not disaggregate fixed variables.
This means that if a fixed variable is unfixed after transformation,
the transformed model is no longer valid. By default, the transformation
will disaggregate fixed variables so that any later fixing and unfixing
will be valid in the transformed model.
""",
),
)
transformation_name = 'hull'
def __init__(self):
super().__init__(logger)
self._targets = set()
def _add_local_vars(self, block, local_var_dict):
localVars = block.component('LocalVars')
if type(localVars) is Suffix:
for disj, var_list in localVars.items():
if local_var_dict.get(disj) is None:
local_var_dict[disj] = ComponentSet(var_list)
else:
local_var_dict[disj].update(var_list)
def _get_local_var_suffixes(self, block, local_var_dict):
# You can specify suffixes on any block (disjuncts included). This
# method starts from a Disjunct (presumably) and checks for LocalVars
# suffixes going both up and down the tree, adding them into the
# dictionary that is the second argument.
# first look beneath where we are (there could be Blocks on this
# disjunct)
for b in block.component_data_objects(
Block, descend_into=(Block), active=True, sort=SortComponents.deterministic
):
self._add_local_vars(b, local_var_dict)
# now traverse upwards and get what's above
while block is not None:
self._add_local_vars(block, local_var_dict)
block = block.parent_block()
return local_var_dict
def _apply_to(self, instance, **kwds):
try:
self._apply_to_impl(instance, **kwds)
finally:
self._restore_state()
self._transformation_blocks.clear()
self._algebraic_constraints.clear()
self._targets_set = set()
def _apply_to_impl(self, instance, **kwds):
self._process_arguments(instance, **kwds)
# filter out inactive targets and handle case where targets aren't
# specified.
targets = self._filter_targets(instance)
# transform logical constraints based on targets
self._transform_logical_constraints(instance, targets)
# Preprocess in order to find what disjunctive components need
# transformation
gdp_tree = self._get_gdp_tree_from_targets(instance, targets)
preprocessed_targets = gdp_tree.topological_sort()
self._targets_set = set(preprocessed_targets)
for t in preprocessed_targets:
if t.ctype is Disjunction:
self._transform_disjunctionData(
t,
t.index(),
parent_disjunct=gdp_tree.parent(t),
root_disjunct=gdp_tree.root_disjunct(t),
)
# We skip disjuncts now, because we need information from the
# disjunctions to transform them (which variables to disaggregate),
# so for hull's purposes, they need not be in the tree.
def _add_transformation_block(self, to_block):
transBlock, new_block = super()._add_transformation_block(to_block)
if not new_block:
return transBlock, new_block
transBlock.lbub = Set(initialize=['lb', 'ub', 'eq'])
# Map between disaggregated variables and their
# originals
transBlock._disaggregatedVarMap = {
'srcVar': ComponentMap(),
'disaggregatedVar': ComponentMap(),
}
# Map between disaggregated variables and their lb*indicator <= var <=
# ub*indicator constraints
transBlock._bigMConstraintMap = ComponentMap()
# We will store all of the disaggregation constraints for any
# Disjunctions we transform onto this block here.
transBlock.disaggregationConstraints = Constraint(NonNegativeIntegers)
# This will map from srcVar to a map of srcDisjunction to the
# disaggregation constraint corresponding to srcDisjunction
transBlock._disaggregationConstraintMap = ComponentMap()
# we are going to store some of the disaggregated vars directly here
# when we have vars that don't appear in every disjunct
transBlock._disaggregatedVars = Var(NonNegativeIntegers, dense=False)
transBlock._boundsConstraints = Constraint(NonNegativeIntegers, transBlock.lbub)
return transBlock, True
def _transform_disjunctionData(
self, obj, index, parent_disjunct=None, root_disjunct=None
):
# Hull reformulation doesn't work if this is an OR constraint. So if
# xor is false, give up
if not obj.xor:
raise GDP_Error(
"Cannot do hull reformulation for "
"Disjunction '%s' with OR constraint. "
"Must be an XOR!" % obj.name
)
transBlock, xorConstraint = self._setup_transform_disjunctionData(
obj, root_disjunct
)
disaggregationConstraint = transBlock.disaggregationConstraints
disaggregationConstraintMap = transBlock._disaggregationConstraintMap
disaggregatedVars = transBlock._disaggregatedVars
disaggregated_var_bounds = transBlock._boundsConstraints
# We first go through and collect all the variables that we
# are going to disaggregate.
varOrder_set = ComponentSet()
varOrder = []
varsByDisjunct = ComponentMap()
localVarsByDisjunct = ComponentMap()
include_fixed_vars = not self._config.assume_fixed_vars_permanent
for disjunct in obj.disjuncts:
if not disjunct.active:
continue
disjunctVars = varsByDisjunct[disjunct] = ComponentSet()
# create the key for each disjunct now
transBlock._disaggregatedVarMap['disaggregatedVar'][
disjunct
] = ComponentMap()
for cons in disjunct.component_data_objects(
Constraint,
active=True,
sort=SortComponents.deterministic,
descend_into=(Block, Disjunct),
):
# [ESJ 02/14/2020] By default, we disaggregate fixed variables
# on the philosophy that fixing is not a promise for the future
# and we are mathematically wrong if we don't transform these
# correctly and someone later unfixes them and keeps playing
# with their transformed model. However, the user may have set
# assume_fixed_vars_permanent to True in which case we will skip
# them
for var in EXPR.identify_variables(
cons.body, include_fixed=include_fixed_vars
):
# Note the use of a list so that we will
# eventually disaggregate the vars in a
# deterministic order (the order that we found
# them)
disjunctVars.add(var)
if not var in varOrder_set:
varOrder.append(var)
varOrder_set.add(var)
# check for LocalVars Suffix
localVarsByDisjunct = self._get_local_var_suffixes(
disjunct, localVarsByDisjunct
)
# We will disaggregate all variables that are not explicitly declared as
# being local. Since we transform from leaf to root, we are implicitly
# treating our own disaggregated variables as local, so they will not be
# re-disaggregated.
varSet = {disj: [] for disj in obj.disjuncts}
# Note that variables are local with respect to a Disjunct. We deal with
# them here to do some error checking (if something is obviously not
# local since it is used in multiple Disjuncts in this Disjunction) and
# also to get a deterministic order in which to process them when we
# transform the Disjuncts: Values of localVarsByDisjunct are
# ComponentSets, so we need this for determinism (we iterate through the
# localVars of a Disjunct later)
localVars = ComponentMap()
varsToDisaggregate = []
disjunctsVarAppearsIn = ComponentMap()
for var in varOrder:
disjuncts = disjunctsVarAppearsIn[var] = [
d for d in varsByDisjunct if var in varsByDisjunct[d]
]
# clearly not local if used in more than one disjunct
if len(disjuncts) > 1:
if self._generate_debug_messages:
logger.debug(
"Assuming '%s' is not a local var since it is"
"used in multiple disjuncts."
% var.getname(fully_qualified=True)
)
for disj in disjuncts:
varSet[disj].append(var)
varsToDisaggregate.append(var)
# disjuncts is a list of length 1
elif localVarsByDisjunct.get(disjuncts[0]) is not None:
if var in localVarsByDisjunct[disjuncts[0]]:
localVars_thisDisjunct = localVars.get(disjuncts[0])
if localVars_thisDisjunct is not None:
localVars[disjuncts[0]].append(var)
else:
localVars[disjuncts[0]] = [var]
else:
# It's not local to this Disjunct
varSet[disjuncts[0]].append(var)
varsToDisaggregate.append(var)
else:
# We don't even have any local vars for this Disjunct.
varSet[disjuncts[0]].append(var)
varsToDisaggregate.append(var)
# Now that we know who we need to disaggregate, we will do it
# while we also transform the disjuncts.
local_var_set = self._get_local_var_set(obj)
or_expr = 0
for disjunct in obj.disjuncts:
or_expr += disjunct.indicator_var.get_associated_binary()
self._transform_disjunct(
disjunct,
transBlock,
varSet[disjunct],
localVars.get(disjunct, []),
local_var_set,
)
rhs = 1 if parent_disjunct is None else parent_disjunct.binary_indicator_var
xorConstraint.add(index, (or_expr, rhs))
# map the DisjunctionData to its XOR constraint to mark it as
# transformed
obj._algebraic_constraint = weakref_ref(xorConstraint[index])
# add the reaggregation constraints
for i, var in enumerate(varsToDisaggregate):
# There are two cases here: Either the var appeared in every
# disjunct in the disjunction, or it didn't. If it did, there's
# nothing special to do: All of the disaggregated variables have
# been created, and we can just proceed and make this constraint. If
# it didn't, we need one more disaggregated variable, correctly
# defined. And then we can make the constraint.
if len(disjunctsVarAppearsIn[var]) < len(obj.disjuncts):
# create one more disaggregated var
idx = len(disaggregatedVars)
disaggregated_var = disaggregatedVars[idx]
# mark this as local because we won't re-disaggregate if this is
# a nested disjunction
if local_var_set is not None:
local_var_set.append(disaggregated_var)
var_free = 1 - sum(
disj.indicator_var.get_associated_binary()
for disj in disjunctsVarAppearsIn[var]
)
self._declare_disaggregated_var_bounds(
var,
disaggregated_var,
obj,
disaggregated_var_bounds,
(idx, 'lb'),
(idx, 'ub'),
var_free,
)
# maintain the mappings
for disj in obj.disjuncts:
# Because we called _transform_disjunct above, we know that
# if this isn't transformed it is because it was cleanly
# deactivated, and we can just skip it.
if (
disj._transformation_block is not None
and disj not in disjunctsVarAppearsIn[var]
):
relaxationBlock = disj._transformation_block().parent_block()
relaxationBlock._bigMConstraintMap[
disaggregated_var
] = Reference(disaggregated_var_bounds[idx, :])
relaxationBlock._disaggregatedVarMap['srcVar'][
disaggregated_var
] = var
relaxationBlock._disaggregatedVarMap['disaggregatedVar'][disj][
var
] = disaggregated_var
disaggregatedExpr = disaggregated_var
else:
disaggregatedExpr = 0
for disjunct in disjunctsVarAppearsIn[var]:
if disjunct._transformation_block is None:
# Because we called _transform_disjunct above, we know that
# if this isn't transformed it is because it was cleanly
# deactivated, and we can just skip it.
continue
disaggregatedVar = (
disjunct._transformation_block()
.parent_block()
._disaggregatedVarMap['disaggregatedVar'][disjunct][var]
)
disaggregatedExpr += disaggregatedVar
# We equate the sum of the disaggregated vars to var (the original)
# if parent_disjunct is None, else it needs to be the disaggregated
# var corresponding to var on the parent disjunct. This is the
# reason we transform from root to leaf: This constraint is now
# correct regardless of how nested something may have been.
parent_var = (
var
if parent_disjunct is None
else self.get_disaggregated_var(var, parent_disjunct)
)
cons_idx = len(disaggregationConstraint)
disaggregationConstraint.add(cons_idx, parent_var == disaggregatedExpr)
# and update the map so that we can find this later. We index by
# variable and the particular disjunction because there is a
# different one for each disjunction
if disaggregationConstraintMap.get(var) is not None:
disaggregationConstraintMap[var][obj] = disaggregationConstraint[
cons_idx
]
else:
thismap = disaggregationConstraintMap[var] = ComponentMap()
thismap[obj] = disaggregationConstraint[cons_idx]
# deactivate for the writers
obj.deactivate()
def _transform_disjunct(self, obj, transBlock, varSet, localVars, local_var_set):
# We're not using the preprocessed list here, so this could be
# inactive. We've already done the error checking in preprocessing, so
# we just skip it here.
if not obj.active:
return
relaxationBlock = self._get_disjunct_transformation_block(obj, transBlock)
# Put the disaggregated variables all on their own block so that we can
# isolate the name collisions and still have complete control over the
# names on this block.
relaxationBlock.disaggregatedVars = Block()
# add the disaggregated variables and their bigm constraints
# to the relaxationBlock
for var in varSet:
disaggregatedVar = Var(within=Reals, initialize=var.value)
# naming conflicts are possible here since this is a bunch
# of variables from different blocks coming together, so we
# get a unique name
disaggregatedVarName = unique_component_name(
relaxationBlock.disaggregatedVars, var.getname(fully_qualified=True)
)
relaxationBlock.disaggregatedVars.add_component(
disaggregatedVarName, disaggregatedVar
)
# mark this as local because we won't re-disaggregate if this is a
# nested disjunction
if local_var_set is not None:
local_var_set.append(disaggregatedVar)
# add the bigm constraint
bigmConstraint = Constraint(transBlock.lbub)
relaxationBlock.add_component(
disaggregatedVarName + "_bounds", bigmConstraint
)
self._declare_disaggregated_var_bounds(
var,
disaggregatedVar,
obj,
bigmConstraint,
'lb',
'ub',
obj.indicator_var.get_associated_binary(),
transBlock,
)
for var in localVars:
# we don't need to disaggregate; we can use this Var directly, but we do
# need to set up its bounds constraints.
# naming conflicts are possible here since this is a bunch
# of variables from different blocks coming together, so we
# get a unique name
conName = unique_component_name(
relaxationBlock, var.getname(fully_qualified=False) + "_bounds"
)
bigmConstraint = Constraint(transBlock.lbub)
relaxationBlock.add_component(conName, bigmConstraint)
self._declare_disaggregated_var_bounds(
var,
var,
obj,
bigmConstraint,
'lb',
'ub',
obj.indicator_var.get_associated_binary(),
transBlock,
)
var_substitute_map = dict(
(id(v), newV)
for v, newV in transBlock._disaggregatedVarMap['disaggregatedVar'][
obj
].items()
)
zero_substitute_map = dict(
(id(v), ZeroConstant)
for v, newV in transBlock._disaggregatedVarMap['disaggregatedVar'][
obj
].items()
)
zero_substitute_map.update((id(v), ZeroConstant) for v in localVars)
# Transform each component within this disjunct
self._transform_block_components(
obj, obj, var_substitute_map, zero_substitute_map
)
# deactivate disjunct so writers can be happy
obj._deactivate_without_fixing_indicator()
def _declare_disaggregated_var_bounds(
self,
original_var,
disaggregatedVar,
disjunct,
bigmConstraint,
lb_idx,
ub_idx,
var_free_indicator,
transBlock=None,
):
# If transBlock is None then this is a disaggregated variable for
# multiple Disjuncts and we will handle the mappings separately.
lb = original_var.lb
ub = original_var.ub
if lb is None or ub is None:
raise GDP_Error(
"Variables that appear in disjuncts must be "
"bounded in order to use the hull "
"transformation! Missing bound for %s." % (original_var.name)
)
disaggregatedVar.setlb(min(0, lb))
disaggregatedVar.setub(max(0, ub))
if lb:
bigmConstraint.add(lb_idx, var_free_indicator * lb <= disaggregatedVar)
if ub:
bigmConstraint.add(ub_idx, disaggregatedVar <= ub * var_free_indicator)
# store the mappings from variables to their disaggregated selves on
# the transformation block.
if transBlock is not None:
transBlock._disaggregatedVarMap['disaggregatedVar'][disjunct][
original_var
] = disaggregatedVar
transBlock._disaggregatedVarMap['srcVar'][disaggregatedVar] = original_var
transBlock._bigMConstraintMap[disaggregatedVar] = bigmConstraint
def _get_local_var_set(self, disjunction):
# add Suffix to the relaxation block that disaggregated variables are
# local (in case this is nested in another Disjunct)
local_var_set = None
parent_disjunct = disjunction.parent_block()
while parent_disjunct is not None:
if parent_disjunct.ctype is Disjunct:
break
parent_disjunct = parent_disjunct.parent_block()
if parent_disjunct is not None:
# This limits the cases that a user is allowed to name something
# (other than a Suffix) 'LocalVars' on a Disjunct. But I am assuming
# that the Suffix has to be somewhere above the disjunct in the
# tree, so I can't put it on a Block that I own. And if I'm coopting
# something of theirs, it may as well be here.
self._add_local_var_suffix(parent_disjunct)
if parent_disjunct.LocalVars.get(parent_disjunct) is None:
parent_disjunct.LocalVars[parent_disjunct] = []
local_var_set = parent_disjunct.LocalVars[parent_disjunct]
return local_var_set
def _warn_for_active_disjunct(
self, innerdisjunct, outerdisjunct, var_substitute_map, zero_substitute_map
):
# We override the base class method because in hull, it might just be
# that we haven't gotten here yet.
disjuncts = (
innerdisjunct.values() if innerdisjunct.is_indexed() else (innerdisjunct,)
)
for disj in disjuncts:
if disj in self._targets_set:
# We're getting to this, have some patience.
continue
else:
# But if it wasn't in the targets after preprocessing, it
# doesn't belong in an active Disjunction that we are
# transforming and we should be confused.
_warn_for_active_disjunct(innerdisjunct, outerdisjunct)
def _transform_constraint(
self, obj, disjunct, var_substitute_map, zero_substitute_map
):
# we will put a new transformed constraint on the relaxation block.
relaxationBlock = disjunct._transformation_block()
constraintMap = relaxationBlock._constraintMap
# We will make indexes from ({obj.local_name} x obj.index_set() x ['lb',
# 'ub']), but don't bother construct that set here, as taking Cartesian
# products is kind of expensive (and redundant since we have the
# original model)
newConstraint = relaxationBlock.transformedConstraints
for i in sorted(obj.keys()):
c = obj[i]
if not c.active:
continue
unique = len(newConstraint)
name = c.local_name + "_%s" % unique
NL = c.body.polynomial_degree() not in (0, 1)
EPS = self._config.EPS
mode = self._config.perspective_function
# We need to evaluate the expression at the origin *before*
# we substitute the expression variables with the
# disaggregated variables
if not NL or mode == "FurmanSawayaGrossmann":
h_0 = clone_without_expression_components(
c.body, substitute=zero_substitute_map
)
y = disjunct.binary_indicator_var
if NL:
if mode == "LeeGrossmann":
sub_expr = clone_without_expression_components(
c.body,
substitute=dict(
(var, subs / y) for var, subs in var_substitute_map.items()
),
)
expr = sub_expr * y
elif mode == "GrossmannLee":
sub_expr = clone_without_expression_components(
c.body,
substitute=dict(
(var, subs / (y + EPS))
for var, subs in var_substitute_map.items()
),
)
expr = (y + EPS) * sub_expr
elif mode == "FurmanSawayaGrossmann":
sub_expr = clone_without_expression_components(
c.body,
substitute=dict(
(var, subs / ((1 - EPS) * y + EPS))
for var, subs in var_substitute_map.items()
),
)
expr = ((1 - EPS) * y + EPS) * sub_expr - EPS * h_0 * (1 - y)
else:
raise RuntimeError("Unknown NL Hull mode")
else:
expr = clone_without_expression_components(
c.body, substitute=var_substitute_map
)
if c.equality:
if NL:
# ESJ TODO: This can't happen right? This is the only
# obvious case where someone has messed up, but this has to
# be nonconvex, right? Shouldn't we tell them?
newConsExpr = expr == c.lower * y
else:
v = list(EXPR.identify_variables(expr))
if len(v) == 1 and not c.lower:
# Setting a variable to 0 in a disjunct is
# *very* common. We should recognize that in
# that structure, the disaggregated variable
# will also be fixed to 0.
v[0].fix(0)
# ESJ: If you ask where the transformed constraint is,
# the answer is nowhere. Really, it is in the bounds of
# this variable, so I'm going to return
# it. Alternatively we could return an empty list, but I
# think I like this better.
constraintMap['transformedConstraints'][c] = [v[0]]
# Reverse map also (this is strange)
constraintMap['srcConstraints'][v[0]] = c
continue
newConsExpr = expr - (1 - y) * h_0 == c.lower * y
if obj.is_indexed():
newConstraint.add((name, i, 'eq'), newConsExpr)
# map the _ConstraintDatas (we mapped the container above)
constraintMap['transformedConstraints'][c] = [
newConstraint[name, i, 'eq']
]
constraintMap['srcConstraints'][newConstraint[name, i, 'eq']] = c
else:
newConstraint.add((name, 'eq'), newConsExpr)
# map to the _ConstraintData (And yes, for
# ScalarConstraints, this is overwriting the map to the
# container we made above, and that is what I want to
# happen. ScalarConstraints will map to lists. For
# IndexedConstraints, we can map the container to the
# container, but more importantly, we are mapping the
# _ConstraintDatas to each other above)
constraintMap['transformedConstraints'][c] = [
newConstraint[name, 'eq']
]
constraintMap['srcConstraints'][newConstraint[name, 'eq']] = c
continue
if c.lower is not None:
if self._generate_debug_messages:
_name = c.getname(fully_qualified=True)
logger.debug("GDP(Hull): Transforming constraint " + "'%s'", _name)
if NL:
newConsExpr = expr >= c.lower * y
else:
newConsExpr = expr - (1 - y) * h_0 >= c.lower * y
if obj.is_indexed():
newConstraint.add((name, i, 'lb'), newConsExpr)
constraintMap['transformedConstraints'][c] = [
newConstraint[name, i, 'lb']
]
constraintMap['srcConstraints'][newConstraint[name, i, 'lb']] = c
else:
newConstraint.add((name, 'lb'), newConsExpr)
constraintMap['transformedConstraints'][c] = [
newConstraint[name, 'lb']
]
constraintMap['srcConstraints'][newConstraint[name, 'lb']] = c
if c.upper is not None:
if self._generate_debug_messages:
_name = c.getname(fully_qualified=True)
logger.debug("GDP(Hull): Transforming constraint " + "'%s'", _name)
if NL:
newConsExpr = expr <= c.upper * y
else:
newConsExpr = expr - (1 - y) * h_0 <= c.upper * y
if obj.is_indexed():
newConstraint.add((name, i, 'ub'), newConsExpr)
# map (have to account for the fact we might have created a list
# above)
transformed = constraintMap['transformedConstraints'].get(c)
if transformed is not None:
transformed.append(newConstraint[name, i, 'ub'])
else:
constraintMap['transformedConstraints'][c] = [
newConstraint[name, i, 'ub']
]
constraintMap['srcConstraints'][newConstraint[name, i, 'ub']] = c
else:
newConstraint.add((name, 'ub'), newConsExpr)
transformed = constraintMap['transformedConstraints'].get(c)
if transformed is not None:
transformed.append(newConstraint[name, 'ub'])
else:
constraintMap['transformedConstraints'][c] = [
newConstraint[name, 'ub']
]
constraintMap['srcConstraints'][newConstraint[name, 'ub']] = c
# deactivate now that we have transformed
obj.deactivate()
def _add_local_var_suffix(self, disjunct):
# If the Suffix is there, we will borrow it. If not, we make it. If it's
# something else, we complain.
localSuffix = disjunct.component("LocalVars")
if localSuffix is None:
disjunct.LocalVars = Suffix(direction=Suffix.LOCAL)
else:
if localSuffix.ctype is Suffix:
return
raise GDP_Error(
"A component called 'LocalVars' is declared on "
"Disjunct %s, but it is of type %s, not Suffix."
% (disjunct.getname(fully_qualified=True), localSuffix.ctype)
)
def get_disaggregated_var(self, v, disjunct):
"""
Returns the disaggregated variable corresponding to the Var v and the
Disjunct disjunct.
If v is a local variable, this method will return v.
Parameters
----------
v: a Var that appears in a constraint in a transformed Disjunct
disjunct: a transformed Disjunct in which v appears
"""
"""
if disjunct._transformation_block is None:
raise GDP_Error("Disjunct '%s' has not been transformed" % disjunct.name)
transBlock = disjunct._transformation_block().parent_block()
try:
return transBlock._disaggregatedVarMap['disaggregatedVar'][disjunct][v]
except:
logger.error(
"It does not appear '%s' is a "
"variable that appears in disjunct '%s'" % (v.name, disjunct.name)
)
raise
def get_src_var(self, disaggregated_var):
"""
Returns the original model variable to which disaggregated_var
corresponds.
Parameters
----------
disaggregated_var: a Var which was created by the hull
transformation as a disaggregated variable
(and so appears on a transformation block
of some Disjunct)
"""
"""
msg = (
"'%s' does not appear to be a "
"disaggregated variable" % disaggregated_var.name
)
# There are two possibilities: It is declared on a Disjunct
# transformation Block, or it is declared on the parent of a Disjunct
# transformation block (if it is a single variable for multiple
# Disjuncts the original doesn't appear in)
transBlock = disaggregated_var.parent_block()
if not hasattr(transBlock, '_disaggregatedVarMap'):
try:
transBlock = transBlock.parent_block().parent_block()
except:
logger.error(msg)
raise
try:
return transBlock._disaggregatedVarMap['srcVar'][disaggregated_var]
except:
logger.error(msg)
raise
# retrieves the disaggregation constraint for original_var resulting from
# transforming disjunction
def get_disaggregation_constraint(self, original_var, disjunction):
"""
Returns the disaggregation (re-aggregation?) constraint
(which links the disaggregated variables to their original)
corresponding to original_var and the transformation of disjunction.
Parameters
----------
original_var: a Var which was disaggregated in the transformation
of Disjunction disjunction
disjunction: a transformed Disjunction containing original_var
"""
"""
for disjunct in disjunction.disjuncts:
transBlock = disjunct._transformation_block
if transBlock is not None:
break
if transBlock is None:
raise GDP_Error(
"Disjunction '%s' has not been properly "
"transformed:"
" None of its disjuncts are transformed." % disjunction.name
)
try:
return (
transBlock()
.parent_block()
._disaggregationConstraintMap[original_var][disjunction]
)
except:
logger.error(
"It doesn't appear that '%s' is a variable that was "
"disaggregated by Disjunction '%s'"
% (original_var.name, disjunction.name)
)
raise
def get_var_bounds_constraint(self, v):
"""
Returns the IndexedConstraint which sets a disaggregated
variable to be within its bounds when its Disjunct is active and to
be 0 otherwise. (It is always an IndexedConstraint because each
bound becomes a separate constraint.)
Parameters
----------
v: a Var which was created by the hull transformation as a
disaggregated variable (and so appears on a transformation
block of some Disjunct)
"""
"""
msg = (
"Either '%s' is not a disaggregated variable, or "
"the disjunction that disaggregates it has not "
"been properly transformed." % v.name
)
# This can only go well if v is a disaggregated var
transBlock = v.parent_block()
if not hasattr(transBlock, '_bigMConstraintMap'):
try:
transBlock = transBlock.parent_block().parent_block()
except:
logger.error(msg)
raise
try:
return transBlock._bigMConstraintMap[v]
except:
logger.error(msg)
raise
@TransformationFactory.register(
'gdp.chull',
doc="[DEPRECATED] please use 'gdp.hull' to get the Hull transformation.",
)
@deprecated(
"The 'gdp.chull' name is deprecated. "
"Please use the more apt 'gdp.hull' instead.",
logger='pyomo.gdp',
version="5.7",
)
class _Deprecated_Name_Hull(Hull_Reformulation):
def __init__(self):
super(_Deprecated_Name_Hull, self).__init__()
|
PypiClean
|
/trepan2-1.2.8.tar.gz/trepan2-1.2.8/trepan/interfaces/server.py
|
"""Module for Server (i.e. program to communication-device) interaction"""
import atexit
# Our local modules
from trepan import interface as Minterface
from trepan.inout import tcpserver as Mtcpserver, fifoserver as Mfifoserver
from trepan.interfaces import comcodes as Mcomcodes
DEFAULT_INIT_CONNECTION_OPTS = {'IO': 'TCP',
'PORT': 1955}
class ServerInterface(Minterface.DebuggerInterface):
"""Interface for debugging a program but having user control
reside outside of the debugged process, possibly on another
computer."""
def __init__(self, inout=None, out=None, connection_opts={}):
atexit.register(self.finalize)
opts = DEFAULT_INIT_CONNECTION_OPTS.copy()
opts.update(connection_opts)
self.inout = None # initialize in case assignment below fails
if inout:
self.inout = inout
else:
self.server_type = opts['IO']
if 'FIFO' == self.server_type:
self.inout = Mfifoserver.FIFOServer()
else:
self.inout = Mtcpserver.TCPServer(opts=opts)
pass
pass
# For compatibility
self.output = self.inout
self.input = self.inout
self.interactive = True # Or at least so we think initially
self.histfile = None
return
def close(self):
""" Closes both input and output """
if self.inout:
self.inout.close()
return
def confirm(self, prompt, default):
""" Called when a dangerous action is about to be done to make sure
it's okay. `prompt' is printed; user response is returned."""
while True:
try:
self.write_confirm(prompt, default)
reply = self.readline('').strip().lower()
except EOFError:
return default
if reply in ('y', 'yes'):
return True
elif reply in ('n', 'no'):
return False
else:
self.msg("Please answer y or n.")
pass
pass
return default
def errmsg(self, str, prefix="** "):
"""Common routine for reporting debugger error messages.
"""
return self.msg("%s%s" %(prefix, str))
def finalize(self, last_wishes=Mcomcodes.QUIT):
# print exit annotation
if self.is_connected():
self.inout.writeline(last_wishes)
pass
self.close()
return
def is_connected(self):
""" Return True if we are connected """
return 'connected' == self.inout.state
def msg(self, msg):
""" used to write to a debugger that is connected to this
server; the `msg' string written will have a newline added to it
"""
self.inout.writeline(Mcomcodes.PRINT + msg)
return
def msg_nocr(self, msg):
""" used to write to a debugger that is connected to this
server; the `msg' string written will not have a newline added to it
"""
self.inout.write(Mcomcodes.PRINT + msg)
return
def read_command(self, prompt):
return self.readline(prompt)
def read_data(self):
return self.inout.read_data()
def readline(self, prompt, add_to_history=True):
if prompt:
self.write_prompt(prompt)
pass
coded_line = self.inout.read_msg()
self.read_ctrl = coded_line[0]
return coded_line[1:]
def state(self):
""" Return connected """
return self.inout.state
def write_prompt(self, prompt):
return self.inout.writeline(Mcomcodes.PROMPT + prompt)
def write_confirm(self, prompt, default):
if default:
code = Mcomcodes.CONFIRM_TRUE
else:
code = Mcomcodes.CONFIRM_FALSE
pass
return self.inout.writeline(code + prompt)
pass
# Demo
if __name__=='__main__':
connection_opts={'IO': 'TCP', 'PORT': 1954}
intf = ServerInterface(connection_opts=connection_opts)
pass
|
PypiClean
|
/context_logging-1.1.0-py3-none-any.whl/context_logging/context.py
|
import time
from collections import UserDict
from contextvars import ContextVar, Token
from typing import Any, ChainMap, Dict, Optional, Type, cast
from deprecated import deprecated
from .config import config
from .logger import logger
from .utils import (
SyncAsyncContextDecorator,
context_name_with_code_path,
seconds_to_time_string,
)
ROOT_CONTEXT_NAME = 'root'
class ContextFactory(SyncAsyncContextDecorator):
def __init__(
self,
name: Optional[str] = None,
*,
log_execution_time: Optional[bool] = None,
fill_exception_context: Optional[bool] = None,
**kwargs: Any
) -> None:
self.name = name or context_name_with_code_path()
self._context_data = kwargs
if log_execution_time is None:
log_execution_time = config.LOG_EXECUTION_TIME_DEFAULT
self._log_execution_time = log_execution_time
if fill_exception_context is None:
fill_exception_context = config.FILL_EXEPTIONS_DEFAULT
self._fill_exception_context = fill_exception_context
@deprecated
def start(self) -> None:
self.__enter__()
@deprecated
def finish(self) -> None:
self.__exit__(None, None, None)
def __enter__(self) -> 'ContextObject':
context = self.create_context()
context.start()
return context
def __exit__(
self,
exc_type: Optional[Type[Exception]],
exc_value: Optional[Exception],
traceback: Any,
) -> None:
context = _current_context.get()
context.finish(exc_value)
def create_context(self) -> 'ContextObject':
return ContextObject(
name=self.name,
log_execution_time=self._log_execution_time,
fill_exception_context=self._fill_exception_context,
context_data=self._context_data.copy(),
)
class ContextObject(UserDict): # type: ignore
def __init__( # pylint:disable=super-init-not-called
self,
name: str,
log_execution_time: bool,
fill_exception_context: bool,
context_data: Dict[Any, Any],
) -> None:
self.name = name
self._log_execution_time = log_execution_time
self._fill_exception_context = fill_exception_context
self._context_data = context_data
self._parent_context: Optional[ContextObject] = None
self._parent_context_token: Optional[Token[ContextObject]] = None
self._start_time: Optional[float] = None
@property
def data(self) -> ChainMap[Any, Any]: # type: ignore
return ChainMap(self._context_data, self._parent_context or {})
def start(self) -> None:
self._parent_context = _current_context.get()
self._parent_context_token = _current_context.set(self)
self._start_time = time.monotonic()
def finish(self, exc: Optional[Exception] = None) -> None:
if self._log_execution_time:
finish_time = time.monotonic() - cast(float, self._start_time)
logger.info(
'%s: executed in %s',
self.name,
seconds_to_time_string(finish_time),
)
if exc and self._fill_exception_context and current_context:
if not getattr(exc, '__context_logging__', None):
exc.__context_logging__ = True # type: ignore
exc.args += (dict(current_context),)
_current_context.reset(
cast(Token, self._parent_context_token) # type: ignore
)
root_context = ContextFactory(name=ROOT_CONTEXT_NAME).create_context()
_current_context: ContextVar[ContextObject] = ContextVar(
'ctx', default=root_context
)
class CurrentContextProxy(UserDict): # type: ignore
def __init__(self) -> None: # pylint:disable=super-init-not-called
pass
@property
def data(self) -> ContextObject: # type: ignore
return _current_context.get()
current_context = CurrentContextProxy()
Context = ContextFactory # for backward compatibility
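# Illustrative usage sketch (not part of the original module): `Context` above is an
# alias for ContextFactory, so it works as a context manager (or decorator); keyword
# data passed to it becomes visible through `current_context` inside the block.
if __name__ == '__main__':
    with Context('request_handler', request_id='abc-123'):
        current_context['user_id'] = 42
        print(dict(current_context))  # contains request_id and user_id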
|
PypiClean
|
/pulumi_azure_nextgen-0.6.2a1613157620.tar.gz/pulumi_azure_nextgen-0.6.2a1613157620/pulumi_azure_nextgen/operationsmanagement/v20151101preview/get_solution.py
|
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'GetSolutionResult',
'AwaitableGetSolutionResult',
'get_solution',
]
@pulumi.output_type
class GetSolutionResult:
"""
The container for solution.
"""
def __init__(__self__, id=None, location=None, name=None, plan=None, properties=None, tags=None, type=None):
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if location and not isinstance(location, str):
raise TypeError("Expected argument 'location' to be a str")
pulumi.set(__self__, "location", location)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if plan and not isinstance(plan, dict):
raise TypeError("Expected argument 'plan' to be a dict")
pulumi.set(__self__, "plan", plan)
if properties and not isinstance(properties, dict):
raise TypeError("Expected argument 'properties' to be a dict")
pulumi.set(__self__, "properties", properties)
if tags and not isinstance(tags, dict):
raise TypeError("Expected argument 'tags' to be a dict")
pulumi.set(__self__, "tags", tags)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def id(self) -> str:
"""
Resource ID.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def location(self) -> Optional[str]:
"""
Resource location
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> str:
"""
Resource name.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def plan(self) -> Optional['outputs.SolutionPlanResponse']:
"""
Plan for solution object supported by the OperationsManagement resource provider.
"""
return pulumi.get(self, "plan")
@property
@pulumi.getter
def properties(self) -> 'outputs.SolutionPropertiesResponse':
"""
Properties for solution object supported by the OperationsManagement resource provider.
"""
return pulumi.get(self, "properties")
@property
@pulumi.getter
def tags(self) -> Optional[Mapping[str, str]]:
"""
Resource tags
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def type(self) -> str:
"""
Resource type.
"""
return pulumi.get(self, "type")
class AwaitableGetSolutionResult(GetSolutionResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetSolutionResult(
id=self.id,
location=self.location,
name=self.name,
plan=self.plan,
properties=self.properties,
tags=self.tags,
type=self.type)
def get_solution(resource_group_name: Optional[str] = None,
solution_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetSolutionResult:
"""
Use this data source to access information about an existing resource.
:param str resource_group_name: The name of the resource group to get. The name is case insensitive.
:param str solution_name: User Solution Name.
"""
__args__ = dict()
__args__['resourceGroupName'] = resource_group_name
__args__['solutionName'] = solution_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-nextgen:operationsmanagement/v20151101preview:getSolution', __args__, opts=opts, typ=GetSolutionResult).value
return AwaitableGetSolutionResult(
id=__ret__.id,
location=__ret__.location,
name=__ret__.name,
plan=__ret__.plan,
properties=__ret__.properties,
tags=__ret__.tags,
type=__ret__.type)
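A short usage sketch for the data source above (the resource group and solution names are placeholders; this would run inside a Pulumi program with Azure credentials configured)::

    import pulumi
    from pulumi_azure_nextgen.operationsmanagement.v20151101preview import get_solution

    solution = get_solution(resource_group_name="my-rg", solution_name="my-solution")
    pulumi.export("solution_id", solution.id)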
|
/opensesame_plugin_media_player_mpy-0.2.2.tar.gz/opensesame_plugin_media_player_mpy-0.2.2/opensesame_plugins/media_player_mpy/media_player_mpy/locale/fr.ts
|
<?xml version='1.0' encoding='utf-8'?>
<TS version="2.1">
<context>
<name>plugin_media_player_mpy</name>
<message>
<location filename="../../../../translation_tools/translatables.py" line="4" />
<source>Media player based on moviepy</source>
<translation>Lecteur multimédia basé sur moviepy</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="18" />
<source>Play audio</source>
<translation>Jouer l'audio</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="17" />
<source>Custom Python code (See Help for more information)</source>
<translation>Code Python personnalisé (Voir Aide pour plus d'informations)</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="5" />
<source>Duration</source>
<translation>Durée</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="14" />
<source>Fit video to screen</source>
<translation>Adapter la vidéo à l'écran</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="3" />
<source>Restart the video after it ends</source>
<translation>Redémarrer la vidéo après qu'elle se termine</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="9" />
<source>Specify how you would like to handle events like mouse clicks or keypresses. When set, this overrides the Duration attribute</source>
<translation>Précisez comment vous souhaitez gérer des événements tels que les clics de souris ou les pressions de touches. Lorsqu'il est défini, cela remplace l'attribut Durée</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="15" />
<source>Video file</source>
<translation>Fichier vidéo</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="12" />
<source>Maintains the aspect ratio</source>
<translation>Maintient le rapport d'aspect</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="8" />
<source>Call custom Python code</source>
<translation>Appeler du code Python personnalisé</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="11" />
<source>When to call custom event handling code (if any)</source>
<translation>Quand appeler du code de gestion d'événement personnalisé (le cas échéant)</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="19" />
<source>The library to use for sound rendering (recommended: sounddevice)</source>
<translation>La bibliothèque à utiliser pour le rendu sonore (recommandé : sounddevice)</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="10" />
<source><small><b>Media Player OpenSesame Plugin, Copyright (2015-2023) Daniel Schreij</b></small></source>
<translation><small><b>Media Player Plugin OpenSesame, Droits d'auteur (2015-2023) Daniel Schreij</b></small></translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="13" />
<source>Loop</source>
<translation>Boucle</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="6" />
<source>Sound renderer</source>
<translation>Rendu sonore</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="16" />
<source>Visual stimuli</source>
<translation>Stimuli visuels</translation>
</message>
<message>
<location filename="../../../../translation_tools/translatables.py" line="7" />
<source>A value in milliseconds, 'sound', 'mouseclick', or 'keypress'</source>
<translation>Une valeur en millisecondes, 'sound', 'mouseclick', ou 'keypress'</translation>
</message>
</context>
</TS>
|
/my_recommending-0.1.22-py3-none-any.whl/my_recommending/models/baseline.py
|
import io
import pickle
from typing import Literal
import implicit
import numpy as np
import torch
import wandb
from matplotlib import pyplot as plt
from sklearn.decomposition import TruncatedSVD
from my_tools.utils import build_class
from ..interface import (
RecommenderModuleBase,
FitExplicitInterfaceMixin,
UnpopularRecommenderMixin,
)
from ..utils import wandb_plt_figure, Timer
class RandomRecommender(RecommenderModuleBase, FitExplicitInterfaceMixin):
def fit(self):
pass
def forward(self, user_ids, item_ids):
ratings = torch.randn(len(user_ids), len(item_ids))
return ratings
class PopularRecommender(RecommenderModuleBase, FitExplicitInterfaceMixin):
items_count: torch.Tensor
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.register_buffer(name="items_count", tensor=torch.zeros(self.n_items))
def fit(self):
implicit_feedback = self.to_scipy_coo(self.explicit) > 0
self.items_count = torch.from_numpy(implicit_feedback.sum(axis=0).A.squeeze(0))
def forward(self, user_ids, item_ids):
ratings = self.items_count[item_ids].repeat(len(user_ids), 1)
return ratings.to(torch.float32)
class SVDRecommender(RecommenderModuleBase, FitExplicitInterfaceMixin):
def __init__(self, n_components=10, **kwargs):
super().__init__(**kwargs)
self.model = TruncatedSVD(n_components=n_components)
def fit(self):
self.model.fit(self.to_scipy_coo(self.explicit))
self.plot_explained_variance()
def plot_explained_variance(self):
if wandb.run is None:
return
wandb.log(
dict(explained_variance_ratio=self.model.explained_variance_ratio_.sum())
)
with wandb_plt_figure(
title="Explained variance depending on number of components"
):
plt.xlabel("Number of components")
plt.ylabel("Explained cumulative variance ratio")
plt.plot(np.cumsum(self.model.explained_variance_ratio_))
@Timer()
def online_ratings(self, users_explicit):
users_explicit = self.to_scipy_coo(users_explicit)
embedding = self.model.transform(users_explicit)
ratings = self.model.inverse_transform(embedding)
return torch.from_numpy(ratings).to(torch.float32)
def get_extra_state(self):
pickled_bytes = pickle.dumps(self.model)
return pickled_bytes
def set_extra_state(self, pickled_bytes):
self.model = pickle.load(io.BytesIO(pickled_bytes))
class ImplicitRecommenderBase(RecommenderModuleBase, FitExplicitInterfaceMixin):
def __init__(self, *, implicit_model, implicit_kwargs=None, **kwargs):
super().__init__(**kwargs)
self.model = build_class(
module_candidates=[
implicit.nearest_neighbours,
implicit.als,
implicit.bpr,
implicit.lmf,
],
class_name=implicit_model,
**(implicit_kwargs or {}),
)
def fit(self):
self.explicit = self.explicit.to(torch.float32)
self.model.fit(self.to_scipy_coo(self.explicit))
def forward(self, user_ids, item_ids):
explicit_feedback = self.to_scipy_coo(self.explicit).tocsr()
        recommended_item_ids, item_ratings = self.model.recommend(
userid=user_ids.numpy(),
user_items=explicit_feedback[user_ids],
N=self.n_items,
filter_already_liked_items=False,
)
ratings = np.full(
shape=(len(user_ids), self.n_items + 1),
fill_value=np.finfo(np.float32).min,
)
np.put_along_axis(
arr=ratings, indices=recommended_item_ids, values=item_ratings, axis=1
)
ratings = torch.from_numpy(ratings[:, :-1])
return ratings[:, item_ids]
@torch.inference_mode()
def recommend(self, user_ids, n_recommendations=None):
item_ids, item_ratings = self.model.recommend(
userid=user_ids.numpy(),
user_items=self.to_scipy_coo(self.explicit).tocsr()[user_ids],
N=n_recommendations or self.n_items,
filter_already_liked_items=True,
)
item_ids = torch.from_numpy(item_ids).to(torch.int64)
self.check_invalid_recommendations(recommendations=item_ids, warn=False)
return item_ids
def get_extra_state(self):
bytesio = io.BytesIO()
self.model.save(bytesio)
return bytesio
def set_extra_state(self, bytesio):
bytesio.seek(0)
self.model = self.model.load(bytesio)
class ImplicitNearestNeighborsRecommender(ImplicitRecommenderBase):
def __init__(
self,
implicit_model: Literal[
"BM25Recommender", "CosineRecommender", "TFIDFRecommender"
] = "BM25Recommender",
num_neighbors=20,
num_threads=0,
**kwargs,
):
super().__init__(
implicit_model=implicit_model,
implicit_kwargs=dict(K=num_neighbors, num_threads=num_threads),
**kwargs,
)
def online_recommend(
self, users_explicit, n_recommendations=None, **kwargs
) -> torch.IntTensor:
users_explicit = self.to_torch_coo(users_explicit)
item_ids, item_ratings = self.model.recommend(
userid=np.arange(users_explicit.shape[0]),
user_items=self.to_scipy_coo(users_explicit.to(torch.float32)).tocsr(),
N=n_recommendations or self.n_items,
**kwargs,
)
item_ids = torch.from_numpy(item_ids).to(torch.int64)
self.check_invalid_recommendations(recommendations=item_ids, warn=False)
return item_ids
class ImplicitMatrixFactorizationRecommender(ImplicitRecommenderBase):
def __init__(
self,
implicit_model: Literal[
"AlternatingLeastSquares",
"LogisticMatrixFactorization",
"BayesianPersonalizedRanking",
] = "AlternatingLeastSquares",
factors=100,
learning_rate=1e-2,
regularization=1e-2,
num_threads=0,
use_gpu=True,
implicit_kwargs=None,
**kwargs,
):
implicit_kwargs = implicit_kwargs or {}
implicit_kwargs.update(
factors=factors,
learning_rate=learning_rate,
regularization=regularization,
num_threads=num_threads,
use_gpu=use_gpu,
)
if implicit_model == "AlternatingLeastSquares":
implicit_kwargs.pop("learning_rate")
else:
implicit_kwargs["use_gpu"] = False
super().__init__(
implicit_model=implicit_model, implicit_kwargs=implicit_kwargs, **kwargs
)
class UnpopularSVDRecommender(SVDRecommender, UnpopularRecommenderMixin):
def __init__(self, *args, unpopularity_coef=1e-3, **kwargs):
super().__init__(*args, **kwargs)
self.init_unpopular_recommender_mixin(unpopularity_coef=unpopularity_coef)
def fit(self):
self.fit_unpopular_recommender_mixin()
return super().fit()
def online_ratings(self, users_explicit, users_activity=None):
ratings = super().online_ratings(users_explicit=users_explicit)
if users_activity is None:
users_activity = torch.from_numpy(
(self.to_scipy_coo(users_explicit) > 0).mean(1).A.squeeze(1)
)
ratings += self.additive_rating_offset(users_activity=users_activity)
return ratings
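The dense-ratings construction used in ``ImplicitRecommenderBase.forward`` can be illustrated in isolation; the sketch below is standalone numpy (the ids and scores stand in for what implicit's ``recommend`` would return, with -1 marking "no item", absorbed by the extra column and then dropped)::

    import numpy as np

    recommended_item_ids = np.array([[2, 0, 4], [1, 3, -1]])
    item_ratings = np.array([[0.9, 0.5, 0.1], [0.8, 0.2, 0.0]])
    n_items = 5

    # One extra column absorbs the -1 indices; it is sliced off afterwards.
    ratings = np.full((2, n_items + 1), np.finfo(np.float32).min)
    np.put_along_axis(ratings, recommended_item_ids, item_ratings, axis=1)
    dense = ratings[:, :-1]   # shape (2, n_items); unscored items keep the sentinel value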
|
/SmsVk%20Wrapper-1.0.1.tar.gz/SmsVk Wrapper-1.0.1/smsvk/services.py
|
class ServiceModel:
@property
def short_name(self):
return self.__service_short_name
@property
def count(self):
return self.__count_slot
@count.setter
def count(self, value):
self.__count_slot = int(value)
def object_factory(name, base_class, argnames):
def __init__(self, **kwargs):
for key, value in kwargs.items():
if key not in argnames:
raise TypeError('Argument {} not valid for {}'.format(key, self.__class__.__name__))
setattr(self, key, value)
base_class.__init__(self)
newclass = type(name, (base_class,), {'__init__': __init__})
return newclass
class ServiceStorage:
names = {
'VkCom': 'vk',
'Netflix': 'nf',
'Google': 'go',
'Imo': 'im',
'Telegram': 'tg',
'Instagram': 'ig',
'Facebook': 'fb',
'WhatsApp': 'wa',
'Viber': 'vi',
'AliBaba': 'ab',
'KakaoTalk': 'kt',
'Microsoft': 'mm',
'Naver': 'nv',
'ProtonMail': 'dp'
}
class SmsService:
def __init__(self):
for name, short_name in ServiceStorage.names.items():
object = object_factory(
name,
base_class=ServiceModel,
argnames=['__service_short_name', '__count_slot']
)(__service_short_name=short_name, __count_slot=0)
setattr(self, '_' + name, object)
@property
def VkCom(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._VkCom
    @property
    def WhatsApp(self):
        """
        :rtype: smsvk.ServiceModel
        """
        return self._WhatsApp
@property
def Viber(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Viber
@property
def Telegram(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Telegram
@property
def Google(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Google
@property
def Imo(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Imo
    @property
    def Instagram(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Instagram
@property
def KakaoTalk(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._KakaoTalk
@property
def AliBaba(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._AliBaba
    @property
    def Netflix(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Netflix
    @property
    def Facebook(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._Facebook
    @property
    def Microsoft(self):
        """
        :rtype: smsvk.ServiceModel
        """
        return self._Microsoft
    @property
    def Naver(self):
        """
        :rtype: smsvk.ServiceModel
        """
        return self._Naver
    @property
    def ProtonMail(self):
"""
:rtype: smsvk.ServiceModel
"""
return self._ProtonMail
|
/starterkit-1.3.17.tar.gz/starterkit-1.3.17/geosk/static/EDI-NG_client/js/bootstrap-datepicker.js
|
(function($, undefined){
var $window = $(window);
function UTCDate(){
return new Date(Date.UTC.apply(Date, arguments));
}
function UTCToday(){
var today = new Date();
return UTCDate(today.getFullYear(), today.getMonth(), today.getDate());
}
function alias(method){
return function(){
return this[method].apply(this, arguments);
};
}
var DateArray = (function(){
var extras = {
get: function(i){
return this.slice(i)[0];
},
contains: function(d){
// Array.indexOf is not cross-browser;
// $.inArray doesn't work with Dates
var val = d && d.valueOf();
for (var i=0, l=this.length; i < l; i++)
if (this[i].valueOf() === val)
return i;
return -1;
},
remove: function(i){
this.splice(i,1);
},
replace: function(new_array){
if (!new_array)
return;
if (!$.isArray(new_array))
new_array = [new_array];
this.clear();
this.push.apply(this, new_array);
},
clear: function(){
this.splice(0);
},
copy: function(){
var a = new DateArray();
a.replace(this);
return a;
}
};
return function(){
var a = [];
a.push.apply(a, arguments);
$.extend(a, extras);
return a;
};
})();
// Picker object
var Datepicker = function(element, options){
this.dates = new DateArray();
this.viewDate = UTCToday();
this.focusDate = null;
this._process_options(options);
this.element = $(element);
this.isInline = false;
this.isInput = this.element.is('input');
this.component = this.element.is('.date') ? this.element.find('.add-on, .input-group-addon, .btn') : false;
this.hasInput = this.component && this.element.find('input').length;
if (this.component && this.component.length === 0)
this.component = false;
this.picker = $(DPGlobal.template);
this._buildEvents();
this._attachEvents();
if (this.isInline){
this.picker.addClass('datepicker-inline').appendTo(this.element);
}
else {
this.picker.addClass('datepicker-dropdown dropdown-menu');
}
if (this.o.rtl){
this.picker.addClass('datepicker-rtl');
}
this.viewMode = this.o.startView;
if (this.o.calendarWeeks)
this.picker.find('tfoot th.today')
.attr('colspan', function(i, val){
return parseInt(val) + 1;
});
this._allow_update = false;
this.setStartDate(this._o.startDate);
this.setEndDate(this._o.endDate);
this.setDaysOfWeekDisabled(this.o.daysOfWeekDisabled);
this.fillDow();
this.fillMonths();
this._allow_update = true;
this.update();
this.showMode();
if (this.isInline){
this.show();
}
};
Datepicker.prototype = {
constructor: Datepicker,
_process_options: function(opts){
// Store raw options for reference
this._o = $.extend({}, this._o, opts);
// Processed options
var o = this.o = $.extend({}, this._o);
// Check if "de-DE" style date is available, if not language should
// fallback to 2 letter code eg "de"
var lang = o.language;
if (!dates[lang]){
lang = lang.split('-')[0];
if (!dates[lang])
lang = defaults.language;
}
o.language = lang;
switch (o.startView){
case 2:
case 'decade':
o.startView = 2;
break;
case 1:
case 'year':
o.startView = 1;
break;
default:
o.startView = 0;
}
switch (o.minViewMode){
case 1:
case 'months':
o.minViewMode = 1;
break;
case 2:
case 'years':
o.minViewMode = 2;
break;
default:
o.minViewMode = 0;
}
o.startView = Math.max(o.startView, o.minViewMode);
// true, false, or Number > 0
if (o.multidate !== true){
o.multidate = Number(o.multidate) || false;
if (o.multidate !== false)
o.multidate = Math.max(0, o.multidate);
else
o.multidate = 1;
}
o.multidateSeparator = String(o.multidateSeparator);
o.weekStart %= 7;
o.weekEnd = ((o.weekStart + 6) % 7);
var format = DPGlobal.parseFormat(o.format);
if (o.startDate !== -Infinity){
if (!!o.startDate){
if (o.startDate instanceof Date)
o.startDate = this._local_to_utc(this._zero_time(o.startDate));
else
o.startDate = DPGlobal.parseDate(o.startDate, format, o.language);
}
else {
o.startDate = -Infinity;
}
}
if (o.endDate !== Infinity){
if (!!o.endDate){
if (o.endDate instanceof Date)
o.endDate = this._local_to_utc(this._zero_time(o.endDate));
else
o.endDate = DPGlobal.parseDate(o.endDate, format, o.language);
}
else {
o.endDate = Infinity;
}
}
o.daysOfWeekDisabled = o.daysOfWeekDisabled||[];
if (!$.isArray(o.daysOfWeekDisabled))
o.daysOfWeekDisabled = o.daysOfWeekDisabled.split(/[,\s]*/);
o.daysOfWeekDisabled = $.map(o.daysOfWeekDisabled, function(d){
return parseInt(d, 10);
});
var plc = String(o.orientation).toLowerCase().split(/\s+/g),
_plc = o.orientation.toLowerCase();
plc = $.grep(plc, function(word){
return (/^auto|left|right|top|bottom$/).test(word);
});
o.orientation = {x: 'auto', y: 'auto'};
if (!_plc || _plc === 'auto')
; // no action
else if (plc.length === 1){
switch (plc[0]){
case 'top':
case 'bottom':
o.orientation.y = plc[0];
break;
case 'left':
case 'right':
o.orientation.x = plc[0];
break;
}
}
else {
_plc = $.grep(plc, function(word){
return (/^left|right$/).test(word);
});
o.orientation.x = _plc[0] || 'auto';
_plc = $.grep(plc, function(word){
return (/^top|bottom$/).test(word);
});
o.orientation.y = _plc[0] || 'auto';
}
},
_events: [],
_secondaryEvents: [],
_applyEvents: function(evs){
for (var i=0, el, ch, ev; i < evs.length; i++){
el = evs[i][0];
if (evs[i].length === 2){
ch = undefined;
ev = evs[i][1];
}
else if (evs[i].length === 3){
ch = evs[i][1];
ev = evs[i][2];
}
el.on(ev, ch);
}
},
_unapplyEvents: function(evs){
for (var i=0, el, ev, ch; i < evs.length; i++){
el = evs[i][0];
if (evs[i].length === 2){
ch = undefined;
ev = evs[i][1];
}
else if (evs[i].length === 3){
ch = evs[i][1];
ev = evs[i][2];
}
el.off(ev, ch);
}
},
_buildEvents: function(){
if (this.isInput){ // single input
this._events = [
[this.element, {
focus: $.proxy(this.show, this),
keyup: $.proxy(function(e){
if ($.inArray(e.keyCode, [27,37,39,38,40,32,13,9]) === -1)
this.update();
}, this),
keydown: $.proxy(this.keydown, this)
}]
];
}
else if (this.component && this.hasInput){ // component: input + button
this._events = [
// For components that are not readonly, allow keyboard nav
[this.element.find('input'), {
focus: $.proxy(this.show, this),
keyup: $.proxy(function(e){
if ($.inArray(e.keyCode, [27,37,39,38,40,32,13,9]) === -1)
this.update();
}, this),
keydown: $.proxy(this.keydown, this)
}],
[this.component, {
click: $.proxy(this.show, this)
}]
];
}
else if (this.element.is('div')){ // inline datepicker
this.isInline = true;
}
else {
this._events = [
[this.element, {
click: $.proxy(this.show, this)
}]
];
}
this._events.push(
// Component: listen for blur on element descendants
[this.element, '*', {
blur: $.proxy(function(e){
this._focused_from = e.target;
}, this)
}],
// Input: listen for blur on element
[this.element, {
blur: $.proxy(function(e){
this._focused_from = e.target;
}, this)
}]
);
this._secondaryEvents = [
[this.picker, {
click: $.proxy(this.click, this)
}],
[$(window), {
resize: $.proxy(this.place, this)
}],
[$(document), {
'mousedown touchstart': $.proxy(function(e){
// Clicked outside the datepicker, hide it
if (!(
this.element.is(e.target) ||
this.element.find(e.target).length ||
this.picker.is(e.target) ||
this.picker.find(e.target).length
)){
this.hide();
}
}, this)
}]
];
},
_attachEvents: function(){
this._detachEvents();
this._applyEvents(this._events);
},
_detachEvents: function(){
this._unapplyEvents(this._events);
},
_attachSecondaryEvents: function(){
this._detachSecondaryEvents();
this._applyEvents(this._secondaryEvents);
},
_detachSecondaryEvents: function(){
this._unapplyEvents(this._secondaryEvents);
},
_trigger: function(event, altdate){
var date = altdate || this.dates.get(-1),
local_date = this._utc_to_local(date);
this.element.trigger({
type: event,
date: local_date,
dates: $.map(this.dates, this._utc_to_local),
format: $.proxy(function(ix, format){
if (arguments.length === 0){
ix = this.dates.length - 1;
format = this.o.format;
}
else if (typeof ix === 'string'){
format = ix;
ix = this.dates.length - 1;
}
format = format || this.o.format;
var date = this.dates.get(ix);
return DPGlobal.formatDate(date, format, this.o.language);
}, this)
});
},
show: function(){
if (!this.isInline)
this.picker.appendTo('body');
this.picker.show();
this.place();
this._attachSecondaryEvents();
this._trigger('show');
},
hide: function(){
if (this.isInline)
return;
if (!this.picker.is(':visible'))
return;
this.focusDate = null;
this.picker.hide().detach();
this._detachSecondaryEvents();
this.viewMode = this.o.startView;
this.showMode();
if (
this.o.forceParse &&
(
this.isInput && this.element.val() ||
this.hasInput && this.element.find('input').val()
)
)
this.setValue();
this._trigger('hide');
},
remove: function(){
this.hide();
this._detachEvents();
this._detachSecondaryEvents();
this.picker.remove();
delete this.element.data().datepicker;
if (!this.isInput){
delete this.element.data().date;
}
},
_utc_to_local: function(utc){
return utc && new Date(utc.getTime() + (utc.getTimezoneOffset()*60000));
},
_local_to_utc: function(local){
return local && new Date(local.getTime() - (local.getTimezoneOffset()*60000));
},
_zero_time: function(local){
return local && new Date(local.getFullYear(), local.getMonth(), local.getDate());
},
_zero_utc_time: function(utc){
return utc && new Date(Date.UTC(utc.getUTCFullYear(), utc.getUTCMonth(), utc.getUTCDate()));
},
getDates: function(){
return $.map(this.dates, this._utc_to_local);
},
getUTCDates: function(){
return $.map(this.dates, function(d){
return new Date(d);
});
},
getDate: function(){
return this._utc_to_local(this.getUTCDate());
},
getUTCDate: function(){
return new Date(this.dates.get(-1));
},
setDates: function(){
var args = $.isArray(arguments[0]) ? arguments[0] : arguments;
this.update.apply(this, args);
this._trigger('changeDate');
this.setValue();
},
setUTCDates: function(){
var args = $.isArray(arguments[0]) ? arguments[0] : arguments;
this.update.apply(this, $.map(args, this._utc_to_local));
this._trigger('changeDate');
this.setValue();
},
setDate: alias('setDates'),
setUTCDate: alias('setUTCDates'),
setValue: function(){
var formatted = this.getFormattedDate();
if (!this.isInput){
if (this.component){
this.element.find('input').val(formatted).change();
}
}
else {
this.element.val(formatted).change();
}
},
getFormattedDate: function(format){
if (format === undefined)
format = this.o.format;
var lang = this.o.language;
return $.map(this.dates, function(d){
return DPGlobal.formatDate(d, format, lang);
}).join(this.o.multidateSeparator);
},
setStartDate: function(startDate){
this._process_options({startDate: startDate});
this.update();
this.updateNavArrows();
},
setEndDate: function(endDate){
this._process_options({endDate: endDate});
this.update();
this.updateNavArrows();
},
setDaysOfWeekDisabled: function(daysOfWeekDisabled){
this._process_options({daysOfWeekDisabled: daysOfWeekDisabled});
this.update();
this.updateNavArrows();
},
place: function(){
if (this.isInline)
return;
var calendarWidth = this.picker.outerWidth(),
calendarHeight = this.picker.outerHeight(),
visualPadding = 10,
windowWidth = $window.width(),
windowHeight = $window.height(),
scrollTop = $window.scrollTop();
var zIndex = parseInt(this.element.parents().filter(function(){
return $(this).css('z-index') !== 'auto';
}).first().css('z-index'))+10;
var offset = this.component ? this.component.parent().offset() : this.element.offset();
var height = this.component ? this.component.outerHeight(true) : this.element.outerHeight(false);
var width = this.component ? this.component.outerWidth(true) : this.element.outerWidth(false);
var left = offset.left,
top = offset.top;
this.picker.removeClass(
'datepicker-orient-top datepicker-orient-bottom '+
'datepicker-orient-right datepicker-orient-left'
);
if (this.o.orientation.x !== 'auto'){
this.picker.addClass('datepicker-orient-' + this.o.orientation.x);
if (this.o.orientation.x === 'right')
left -= calendarWidth - width;
}
// auto x orientation is best-placement: if it crosses a window
// edge, fudge it sideways
else {
// Default to left
this.picker.addClass('datepicker-orient-left');
if (offset.left < 0)
left -= offset.left - visualPadding;
else if (offset.left + calendarWidth > windowWidth)
left = windowWidth - calendarWidth - visualPadding;
}
// auto y orientation is best-situation: top or bottom, no fudging,
// decision based on which shows more of the calendar
var yorient = this.o.orientation.y,
top_overflow, bottom_overflow;
if (yorient === 'auto'){
top_overflow = -scrollTop + offset.top - calendarHeight;
bottom_overflow = scrollTop + windowHeight - (offset.top + height + calendarHeight);
if (Math.max(top_overflow, bottom_overflow) === bottom_overflow)
yorient = 'top';
else
yorient = 'bottom';
}
this.picker.addClass('datepicker-orient-' + yorient);
if (yorient === 'top')
top += height;
else
top -= calendarHeight + parseInt(this.picker.css('padding-top'));
this.picker.css({
top: top,
left: left,
zIndex: zIndex
});
},
_allow_update: true,
update: function(){
if (!this._allow_update)
return;
var oldDates = this.dates.copy(),
dates = [],
fromArgs = false;
if (arguments.length){
$.each(arguments, $.proxy(function(i, date){
if (date instanceof Date)
date = this._local_to_utc(date);
dates.push(date);
}, this));
fromArgs = true;
}
else {
dates = this.isInput
? this.element.val()
: this.element.data('date') || this.element.find('input').val();
if (dates && this.o.multidate)
dates = dates.split(this.o.multidateSeparator);
else
dates = [dates];
delete this.element.data().date;
}
dates = $.map(dates, $.proxy(function(date){
return DPGlobal.parseDate(date, this.o.format, this.o.language);
}, this));
dates = $.grep(dates, $.proxy(function(date){
return (
date < this.o.startDate ||
date > this.o.endDate ||
!date
);
}, this), true);
this.dates.replace(dates);
if (this.dates.length)
this.viewDate = new Date(this.dates.get(-1));
else if (this.viewDate < this.o.startDate)
this.viewDate = new Date(this.o.startDate);
else if (this.viewDate > this.o.endDate)
this.viewDate = new Date(this.o.endDate);
if (fromArgs){
// setting date by clicking
this.setValue();
}
else if (dates.length){
// setting date by typing
if (String(oldDates) !== String(this.dates))
this._trigger('changeDate');
}
if (!this.dates.length && oldDates.length)
this._trigger('clearDate');
this.fill();
},
fillDow: function(){
var dowCnt = this.o.weekStart,
html = '<tr>';
if (this.o.calendarWeeks){
var cell = '<th class="cw"> </th>';
html += cell;
this.picker.find('.datepicker-days thead tr:first-child').prepend(cell);
}
while (dowCnt < this.o.weekStart + 7){
html += '<th class="dow">'+dates[this.o.language].daysMin[(dowCnt++)%7]+'</th>';
}
html += '</tr>';
this.picker.find('.datepicker-days thead').append(html);
},
fillMonths: function(){
var html = '',
i = 0;
while (i < 12){
html += '<span class="month">'+dates[this.o.language].monthsShort[i++]+'</span>';
}
this.picker.find('.datepicker-months td').html(html);
},
setRange: function(range){
if (!range || !range.length)
delete this.range;
else
this.range = $.map(range, function(d){
return d.valueOf();
});
this.fill();
},
getClassNames: function(date){
var cls = [],
year = this.viewDate.getUTCFullYear(),
month = this.viewDate.getUTCMonth(),
today = new Date();
if (date.getUTCFullYear() < year || (date.getUTCFullYear() === year && date.getUTCMonth() < month)){
cls.push('old');
}
else if (date.getUTCFullYear() > year || (date.getUTCFullYear() === year && date.getUTCMonth() > month)){
cls.push('new');
}
if (this.focusDate && date.valueOf() === this.focusDate.valueOf())
cls.push('focused');
// Compare internal UTC date with local today, not UTC today
if (this.o.todayHighlight &&
date.getUTCFullYear() === today.getFullYear() &&
date.getUTCMonth() === today.getMonth() &&
date.getUTCDate() === today.getDate()){
cls.push('today');
}
if (this.dates.contains(date) !== -1)
cls.push('active');
if (date.valueOf() < this.o.startDate || date.valueOf() > this.o.endDate ||
$.inArray(date.getUTCDay(), this.o.daysOfWeekDisabled) !== -1){
cls.push('disabled');
}
if (this.range){
if (date > this.range[0] && date < this.range[this.range.length-1]){
cls.push('range');
}
if ($.inArray(date.valueOf(), this.range) !== -1){
cls.push('selected');
}
}
return cls;
},
fill: function(){
var d = new Date(this.viewDate),
year = d.getUTCFullYear(),
month = d.getUTCMonth(),
startYear = this.o.startDate !== -Infinity ? this.o.startDate.getUTCFullYear() : -Infinity,
startMonth = this.o.startDate !== -Infinity ? this.o.startDate.getUTCMonth() : -Infinity,
endYear = this.o.endDate !== Infinity ? this.o.endDate.getUTCFullYear() : Infinity,
endMonth = this.o.endDate !== Infinity ? this.o.endDate.getUTCMonth() : Infinity,
todaytxt = dates[this.o.language].today || dates['en'].today || '',
cleartxt = dates[this.o.language].clear || dates['en'].clear || '',
tooltip;
this.picker.find('.datepicker-days thead th.datepicker-switch')
.text(dates[this.o.language].months[month]+' '+year);
this.picker.find('tfoot th.today')
.text(todaytxt)
.toggle(this.o.todayBtn !== false);
this.picker.find('tfoot th.clear')
.text(cleartxt)
.toggle(this.o.clearBtn !== false);
this.updateNavArrows();
this.fillMonths();
var prevMonth = UTCDate(year, month-1, 28),
day = DPGlobal.getDaysInMonth(prevMonth.getUTCFullYear(), prevMonth.getUTCMonth());
prevMonth.setUTCDate(day);
prevMonth.setUTCDate(day - (prevMonth.getUTCDay() - this.o.weekStart + 7)%7);
var nextMonth = new Date(prevMonth);
nextMonth.setUTCDate(nextMonth.getUTCDate() + 42);
nextMonth = nextMonth.valueOf();
var html = [];
var clsName;
while (prevMonth.valueOf() < nextMonth){
if (prevMonth.getUTCDay() === this.o.weekStart){
html.push('<tr>');
if (this.o.calendarWeeks){
// ISO 8601: First week contains first thursday.
// ISO also states week starts on Monday, but we can be more abstract here.
var
// Start of current week: based on weekstart/current date
ws = new Date(+prevMonth + (this.o.weekStart - prevMonth.getUTCDay() - 7) % 7 * 864e5),
// Thursday of this week
th = new Date(Number(ws) + (7 + 4 - ws.getUTCDay()) % 7 * 864e5),
// First Thursday of year, year from thursday
yth = new Date(Number(yth = UTCDate(th.getUTCFullYear(), 0, 1)) + (7 + 4 - yth.getUTCDay())%7*864e5),
// Calendar week: ms between thursdays, div ms per day, div 7 days
calWeek = (th - yth) / 864e5 / 7 + 1;
html.push('<td class="cw">'+ calWeek +'</td>');
}
}
clsName = this.getClassNames(prevMonth);
clsName.push('day');
if (this.o.beforeShowDay !== $.noop){
var before = this.o.beforeShowDay(this._utc_to_local(prevMonth));
if (before === undefined)
before = {};
else if (typeof(before) === 'boolean')
before = {enabled: before};
else if (typeof(before) === 'string')
before = {classes: before};
if (before.enabled === false)
clsName.push('disabled');
if (before.classes)
clsName = clsName.concat(before.classes.split(/\s+/));
if (before.tooltip)
tooltip = before.tooltip;
}
clsName = $.unique(clsName);
html.push('<td class="'+clsName.join(' ')+'"' + (tooltip ? ' title="'+tooltip+'"' : '') + '>'+prevMonth.getUTCDate() + '</td>');
if (prevMonth.getUTCDay() === this.o.weekEnd){
html.push('</tr>');
}
prevMonth.setUTCDate(prevMonth.getUTCDate()+1);
}
this.picker.find('.datepicker-days tbody').empty().append(html.join(''));
var months = this.picker.find('.datepicker-months')
.find('th:eq(1)')
.text(year)
.end()
.find('span').removeClass('active');
$.each(this.dates, function(i, d){
if (d.getUTCFullYear() === year)
months.eq(d.getUTCMonth()).addClass('active');
});
if (year < startYear || year > endYear){
months.addClass('disabled');
}
if (year === startYear){
months.slice(0, startMonth).addClass('disabled');
}
if (year === endYear){
months.slice(endMonth+1).addClass('disabled');
}
html = '';
year = parseInt(year/10, 10) * 10;
var yearCont = this.picker.find('.datepicker-years')
.find('th:eq(1)')
.text(year + '-' + (year + 9))
.end()
.find('td');
year -= 1;
var years = $.map(this.dates, function(d){
return d.getUTCFullYear();
}),
classes;
for (var i = -1; i < 11; i++){
classes = ['year'];
if (i === -1)
classes.push('old');
else if (i === 10)
classes.push('new');
if ($.inArray(year, years) !== -1)
classes.push('active');
if (year < startYear || year > endYear)
classes.push('disabled');
html += '<span class="' + classes.join(' ') + '">'+year+'</span>';
year += 1;
}
yearCont.html(html);
},
updateNavArrows: function(){
if (!this._allow_update)
return;
var d = new Date(this.viewDate),
year = d.getUTCFullYear(),
month = d.getUTCMonth();
switch (this.viewMode){
case 0:
if (this.o.startDate !== -Infinity && year <= this.o.startDate.getUTCFullYear() && month <= this.o.startDate.getUTCMonth()){
this.picker.find('.prev').css({visibility: 'hidden'});
}
else {
this.picker.find('.prev').css({visibility: 'visible'});
}
if (this.o.endDate !== Infinity && year >= this.o.endDate.getUTCFullYear() && month >= this.o.endDate.getUTCMonth()){
this.picker.find('.next').css({visibility: 'hidden'});
}
else {
this.picker.find('.next').css({visibility: 'visible'});
}
break;
case 1:
case 2:
if (this.o.startDate !== -Infinity && year <= this.o.startDate.getUTCFullYear()){
this.picker.find('.prev').css({visibility: 'hidden'});
}
else {
this.picker.find('.prev').css({visibility: 'visible'});
}
if (this.o.endDate !== Infinity && year >= this.o.endDate.getUTCFullYear()){
this.picker.find('.next').css({visibility: 'hidden'});
}
else {
this.picker.find('.next').css({visibility: 'visible'});
}
break;
}
},
click: function(e){
e.preventDefault();
var target = $(e.target).closest('span, td, th'),
year, month, day;
if (target.length === 1){
switch (target[0].nodeName.toLowerCase()){
case 'th':
switch (target[0].className){
case 'datepicker-switch':
this.showMode(1);
break;
case 'prev':
case 'next':
var dir = DPGlobal.modes[this.viewMode].navStep * (target[0].className === 'prev' ? -1 : 1);
switch (this.viewMode){
case 0:
this.viewDate = this.moveMonth(this.viewDate, dir);
this._trigger('changeMonth', this.viewDate);
break;
case 1:
case 2:
this.viewDate = this.moveYear(this.viewDate, dir);
if (this.viewMode === 1)
this._trigger('changeYear', this.viewDate);
break;
}
this.fill();
break;
case 'today':
var date = new Date();
date = UTCDate(date.getFullYear(), date.getMonth(), date.getDate(), 0, 0, 0);
this.showMode(-2);
var which = this.o.todayBtn === 'linked' ? null : 'view';
this._setDate(date, which);
break;
case 'clear':
var element;
if (this.isInput)
element = this.element;
else if (this.component)
element = this.element.find('input');
if (element)
element.val("").change();
this.update();
this._trigger('changeDate');
if (this.o.autoclose)
this.hide();
break;
}
break;
case 'span':
if (!target.is('.disabled')){
this.viewDate.setUTCDate(1);
if (target.is('.month')){
day = 1;
month = target.parent().find('span').index(target);
year = this.viewDate.getUTCFullYear();
this.viewDate.setUTCMonth(month);
this._trigger('changeMonth', this.viewDate);
if (this.o.minViewMode === 1){
this._setDate(UTCDate(year, month, day));
}
}
else {
day = 1;
month = 0;
year = parseInt(target.text(), 10)||0;
this.viewDate.setUTCFullYear(year);
this._trigger('changeYear', this.viewDate);
if (this.o.minViewMode === 2){
this._setDate(UTCDate(year, month, day));
}
}
this.showMode(-1);
this.fill();
}
break;
case 'td':
if (target.is('.day') && !target.is('.disabled')){
day = parseInt(target.text(), 10)||1;
year = this.viewDate.getUTCFullYear();
month = this.viewDate.getUTCMonth();
if (target.is('.old')){
if (month === 0){
month = 11;
year -= 1;
}
else {
month -= 1;
}
}
else if (target.is('.new')){
if (month === 11){
month = 0;
year += 1;
}
else {
month += 1;
}
}
this._setDate(UTCDate(year, month, day));
}
break;
}
}
if (this.picker.is(':visible') && this._focused_from){
$(this._focused_from).focus();
}
delete this._focused_from;
},
_toggle_multidate: function(date){
var ix = this.dates.contains(date);
if (!date){
this.dates.clear();
}
else if (ix !== -1){
this.dates.remove(ix);
}
else {
this.dates.push(date);
}
if (typeof this.o.multidate === 'number')
while (this.dates.length > this.o.multidate)
this.dates.remove(0);
},
_setDate: function(date, which){
if (!which || which === 'date')
this._toggle_multidate(date && new Date(date));
if (!which || which === 'view')
this.viewDate = date && new Date(date);
this.fill();
this.setValue();
this._trigger('changeDate');
var element;
if (this.isInput){
element = this.element;
}
else if (this.component){
element = this.element.find('input');
}
if (element){
element.change();
}
if (this.o.autoclose && (!which || which === 'date')){
this.hide();
}
},
moveMonth: function(date, dir){
if (!date)
return undefined;
if (!dir)
return date;
var new_date = new Date(date.valueOf()),
day = new_date.getUTCDate(),
month = new_date.getUTCMonth(),
mag = Math.abs(dir),
new_month, test;
dir = dir > 0 ? 1 : -1;
if (mag === 1){
test = dir === -1
// If going back one month, make sure month is not current month
// (eg, Mar 31 -> Feb 31 == Feb 28, not Mar 02)
? function(){
return new_date.getUTCMonth() === month;
}
// If going forward one month, make sure month is as expected
// (eg, Jan 31 -> Feb 31 == Feb 28, not Mar 02)
: function(){
return new_date.getUTCMonth() !== new_month;
};
new_month = month + dir;
new_date.setUTCMonth(new_month);
// Dec -> Jan (12) or Jan -> Dec (-1) -- limit expected date to 0-11
if (new_month < 0 || new_month > 11)
new_month = (new_month + 12) % 12;
}
else {
// For magnitudes >1, move one month at a time...
for (var i=0; i < mag; i++)
// ...which might decrease the day (eg, Jan 31 to Feb 28, etc)...
new_date = this.moveMonth(new_date, dir);
// ...then reset the day, keeping it in the new month
new_month = new_date.getUTCMonth();
new_date.setUTCDate(day);
test = function(){
return new_month !== new_date.getUTCMonth();
};
}
// Common date-resetting loop -- if date is beyond end of month, make it
// end of month
while (test()){
new_date.setUTCDate(--day);
new_date.setUTCMonth(new_month);
}
return new_date;
},
moveYear: function(date, dir){
return this.moveMonth(date, dir*12);
},
dateWithinRange: function(date){
return date >= this.o.startDate && date <= this.o.endDate;
},
keydown: function(e){
if (this.picker.is(':not(:visible)')){
if (e.keyCode === 27) // allow escape to hide and re-show picker
this.show();
return;
}
var dateChanged = false,
dir, newDate, newViewDate,
focusDate = this.focusDate || this.viewDate;
switch (e.keyCode){
case 27: // escape
if (this.focusDate){
this.focusDate = null;
this.viewDate = this.dates.get(-1) || this.viewDate;
this.fill();
}
else
this.hide();
e.preventDefault();
break;
case 37: // left
case 39: // right
if (!this.o.keyboardNavigation)
break;
dir = e.keyCode === 37 ? -1 : 1;
if (e.ctrlKey){
newDate = this.moveYear(this.dates.get(-1) || UTCToday(), dir);
newViewDate = this.moveYear(focusDate, dir);
this._trigger('changeYear', this.viewDate);
}
else if (e.shiftKey){
newDate = this.moveMonth(this.dates.get(-1) || UTCToday(), dir);
newViewDate = this.moveMonth(focusDate, dir);
this._trigger('changeMonth', this.viewDate);
}
else {
newDate = new Date(this.dates.get(-1) || UTCToday());
newDate.setUTCDate(newDate.getUTCDate() + dir);
newViewDate = new Date(focusDate);
newViewDate.setUTCDate(focusDate.getUTCDate() + dir);
}
if (this.dateWithinRange(newDate)){
this.focusDate = this.viewDate = newViewDate;
this.setValue();
this.fill();
e.preventDefault();
}
break;
case 38: // up
case 40: // down
if (!this.o.keyboardNavigation)
break;
dir = e.keyCode === 38 ? -1 : 1;
if (e.ctrlKey){
newDate = this.moveYear(this.dates.get(-1) || UTCToday(), dir);
newViewDate = this.moveYear(focusDate, dir);
this._trigger('changeYear', this.viewDate);
}
else if (e.shiftKey){
newDate = this.moveMonth(this.dates.get(-1) || UTCToday(), dir);
newViewDate = this.moveMonth(focusDate, dir);
this._trigger('changeMonth', this.viewDate);
}
else {
newDate = new Date(this.dates.get(-1) || UTCToday());
newDate.setUTCDate(newDate.getUTCDate() + dir * 7);
newViewDate = new Date(focusDate);
newViewDate.setUTCDate(focusDate.getUTCDate() + dir * 7);
}
if (this.dateWithinRange(newDate)){
this.focusDate = this.viewDate = newViewDate;
this.setValue();
this.fill();
e.preventDefault();
}
break;
case 32: // spacebar
// Spacebar is used in manually typing dates in some formats.
// As such, its behavior should not be hijacked.
break;
case 13: // enter
focusDate = this.focusDate || this.dates.get(-1) || this.viewDate;
this._toggle_multidate(focusDate);
dateChanged = true;
this.focusDate = null;
this.viewDate = this.dates.get(-1) || this.viewDate;
this.setValue();
this.fill();
if (this.picker.is(':visible')){
e.preventDefault();
if (this.o.autoclose)
this.hide();
}
break;
case 9: // tab
this.focusDate = null;
this.viewDate = this.dates.get(-1) || this.viewDate;
this.fill();
this.hide();
break;
}
if (dateChanged){
if (this.dates.length)
this._trigger('changeDate');
else
this._trigger('clearDate');
var element;
if (this.isInput){
element = this.element;
}
else if (this.component){
element = this.element.find('input');
}
if (element){
element.change();
}
}
},
showMode: function(dir){
if (dir){
this.viewMode = Math.max(this.o.minViewMode, Math.min(2, this.viewMode + dir));
}
this.picker
.find('>div')
.hide()
.filter('.datepicker-'+DPGlobal.modes[this.viewMode].clsName)
.css('display', 'block');
this.updateNavArrows();
}
};
var DateRangePicker = function(element, options){
this.element = $(element);
this.inputs = $.map(options.inputs, function(i){
return i.jquery ? i[0] : i;
});
delete options.inputs;
$(this.inputs)
.datepicker(options)
.bind('changeDate', $.proxy(this.dateUpdated, this));
this.pickers = $.map(this.inputs, function(i){
return $(i).data('datepicker');
});
this.updateDates();
};
DateRangePicker.prototype = {
updateDates: function(){
this.dates = $.map(this.pickers, function(i){
return i.getUTCDate();
});
this.updateRanges();
},
updateRanges: function(){
var range = $.map(this.dates, function(d){
return d.valueOf();
});
$.each(this.pickers, function(i, p){
p.setRange(range);
});
},
dateUpdated: function(e){
// `this.updating` is a workaround for preventing infinite recursion
// between `changeDate` triggering and `setUTCDate` calling. Until
// there is a better mechanism.
if (this.updating)
return;
this.updating = true;
var dp = $(e.target).data('datepicker'),
new_date = dp.getUTCDate(),
i = $.inArray(e.target, this.inputs),
l = this.inputs.length;
if (i === -1)
return;
$.each(this.pickers, function(i, p){
if (!p.getUTCDate())
p.setUTCDate(new_date);
});
if (new_date < this.dates[i]){
// Date being moved earlier/left
while (i >= 0 && new_date < this.dates[i]){
this.pickers[i--].setUTCDate(new_date);
}
}
else if (new_date > this.dates[i]){
// Date being moved later/right
while (i < l && new_date > this.dates[i]){
this.pickers[i++].setUTCDate(new_date);
}
}
this.updateDates();
delete this.updating;
},
remove: function(){
$.map(this.pickers, function(p){ p.remove(); });
delete this.element.data().datepicker;
}
};
function opts_from_el(el, prefix){
// Derive options from element data-attrs
var data = $(el).data(),
out = {}, inkey,
replace = new RegExp('^' + prefix.toLowerCase() + '([A-Z])');
prefix = new RegExp('^' + prefix.toLowerCase());
function re_lower(_,a){
return a.toLowerCase();
}
for (var key in data)
if (prefix.test(key)){
inkey = key.replace(replace, re_lower);
out[inkey] = data[key];
}
return out;
}
function opts_from_locale(lang){
// Derive options from locale plugins
var out = {};
// Check if "de-DE" style date is available, if not language should
// fallback to 2 letter code eg "de"
if (!dates[lang]){
lang = lang.split('-')[0];
if (!dates[lang])
return;
}
var d = dates[lang];
$.each(locale_opts, function(i,k){
if (k in d)
out[k] = d[k];
});
return out;
}
var old = $.fn.datepicker;
$.fn.datepicker = function(option){
var args = Array.apply(null, arguments);
args.shift();
var internal_return;
this.each(function(){
var $this = $(this),
data = $this.data('datepicker'),
options = typeof option === 'object' && option;
if (!data){
var elopts = opts_from_el(this, 'date'),
				// Preliminary options
xopts = $.extend({}, defaults, elopts, options),
locopts = opts_from_locale(xopts.language),
// Options priority: js args, data-attrs, locales, defaults
opts = $.extend({}, defaults, locopts, elopts, options);
if ($this.is('.input-daterange') || opts.inputs){
var ropts = {
inputs: opts.inputs || $this.find('input').toArray()
};
$this.data('datepicker', (data = new DateRangePicker(this, $.extend(opts, ropts))));
}
else {
$this.data('datepicker', (data = new Datepicker(this, opts)));
}
}
if (typeof option === 'string' && typeof data[option] === 'function'){
internal_return = data[option].apply(data, args);
if (internal_return !== undefined)
return false;
}
});
if (internal_return !== undefined)
return internal_return;
else
return this;
};
var defaults = $.fn.datepicker.defaults = {
autoclose: false,
beforeShowDay: $.noop,
calendarWeeks: false,
clearBtn: false,
daysOfWeekDisabled: [],
endDate: Infinity,
forceParse: true,
format: 'mm/dd/yyyy',
keyboardNavigation: true,
language: 'en',
minViewMode: 0,
multidate: false,
multidateSeparator: ',',
orientation: "auto",
rtl: false,
startDate: -Infinity,
startView: 0,
todayBtn: false,
todayHighlight: false,
weekStart: 0
};
var locale_opts = $.fn.datepicker.locale_opts = [
'format',
'rtl',
'weekStart'
];
$.fn.datepicker.Constructor = Datepicker;
var dates = $.fn.datepicker.dates = {
en: {
days: ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"],
daysShort: ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
daysMin: ["Su", "Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"],
months: ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"],
monthsShort: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
today: "Today",
clear: "Clear"
}
};
var DPGlobal = {
modes: [
{
clsName: 'days',
navFnc: 'Month',
navStep: 1
},
{
clsName: 'months',
navFnc: 'FullYear',
navStep: 1
},
{
clsName: 'years',
navFnc: 'FullYear',
navStep: 10
}],
isLeapYear: function(year){
return (((year % 4 === 0) && (year % 100 !== 0)) || (year % 400 === 0));
},
getDaysInMonth: function(year, month){
return [31, (DPGlobal.isLeapYear(year) ? 29 : 28), 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month];
},
validParts: /dd?|DD?|mm?|MM?|yy(?:yy)?/g,
nonpunctuation: /[^ -\/:-@\[\u3400-\u9fff-`{-~\t\n\r]+/g,
parseFormat: function(format){
// IE treats \0 as a string end in inputs (truncating the value),
// so it's a bad format delimiter, anyway
var separators = format.replace(this.validParts, '\0').split('\0'),
parts = format.match(this.validParts);
if (!separators || !separators.length || !parts || parts.length === 0){
throw new Error("Invalid date format.");
}
return {separators: separators, parts: parts};
},
parseDate: function(date, format, language){
if (!date)
return undefined;
if (date instanceof Date)
return date;
if (typeof format === 'string')
format = DPGlobal.parseFormat(format);
var part_re = /([\-+]\d+)([dmwy])/,
parts = date.match(/([\-+]\d+)([dmwy])/g),
part, dir, i;
if (/^[\-+]\d+[dmwy]([\s,]+[\-+]\d+[dmwy])*$/.test(date)){
date = new Date();
for (i=0; i < parts.length; i++){
part = part_re.exec(parts[i]);
dir = parseInt(part[1]);
switch (part[2]){
case 'd':
date.setUTCDate(date.getUTCDate() + dir);
break;
case 'm':
date = Datepicker.prototype.moveMonth.call(Datepicker.prototype, date, dir);
break;
case 'w':
date.setUTCDate(date.getUTCDate() + dir * 7);
break;
case 'y':
date = Datepicker.prototype.moveYear.call(Datepicker.prototype, date, dir);
break;
}
}
return UTCDate(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate(), 0, 0, 0);
}
parts = date && date.match(this.nonpunctuation) || [];
date = new Date();
var parsed = {},
setters_order = ['yyyy', 'yy', 'M', 'MM', 'm', 'mm', 'd', 'dd'],
setters_map = {
yyyy: function(d,v){
return d.setUTCFullYear(v);
},
yy: function(d,v){
return d.setUTCFullYear(2000+v);
},
m: function(d,v){
if (isNaN(d))
return d;
v -= 1;
while (v < 0) v += 12;
v %= 12;
d.setUTCMonth(v);
while (d.getUTCMonth() !== v)
d.setUTCDate(d.getUTCDate()-1);
return d;
},
d: function(d,v){
return d.setUTCDate(v);
}
},
val, filtered;
setters_map['M'] = setters_map['MM'] = setters_map['mm'] = setters_map['m'];
setters_map['dd'] = setters_map['d'];
date = UTCDate(date.getFullYear(), date.getMonth(), date.getDate(), 0, 0, 0);
var fparts = format.parts.slice();
// Remove noop parts
if (parts.length !== fparts.length){
fparts = $(fparts).filter(function(i,p){
return $.inArray(p, setters_order) !== -1;
}).toArray();
}
// Process remainder
function match_part(){
var m = this.slice(0, parts[i].length),
p = parts[i].slice(0, m.length);
return m === p;
}
if (parts.length === fparts.length){
var cnt;
for (i=0, cnt = fparts.length; i < cnt; i++){
val = parseInt(parts[i], 10);
part = fparts[i];
if (isNaN(val)){
switch (part){
case 'MM':
filtered = $(dates[language].months).filter(match_part);
val = $.inArray(filtered[0], dates[language].months) + 1;
break;
case 'M':
filtered = $(dates[language].monthsShort).filter(match_part);
val = $.inArray(filtered[0], dates[language].monthsShort) + 1;
break;
}
}
parsed[part] = val;
}
var _date, s;
for (i=0; i < setters_order.length; i++){
s = setters_order[i];
if (s in parsed && !isNaN(parsed[s])){
_date = new Date(date);
setters_map[s](_date, parsed[s]);
if (!isNaN(_date))
date = _date;
}
}
}
return date;
},
formatDate: function(date, format, language){
if (!date)
return '';
if (typeof format === 'string')
format = DPGlobal.parseFormat(format);
var val = {
d: date.getUTCDate(),
D: dates[language].daysShort[date.getUTCDay()],
DD: dates[language].days[date.getUTCDay()],
m: date.getUTCMonth() + 1,
M: dates[language].monthsShort[date.getUTCMonth()],
MM: dates[language].months[date.getUTCMonth()],
yy: date.getUTCFullYear().toString().substring(2),
yyyy: date.getUTCFullYear()
};
val.dd = (val.d < 10 ? '0' : '') + val.d;
val.mm = (val.m < 10 ? '0' : '') + val.m;
date = [];
var seps = $.extend([], format.separators);
for (var i=0, cnt = format.parts.length; i <= cnt; i++){
if (seps.length)
date.push(seps.shift());
date.push(val[format.parts[i]]);
}
return date.join('');
},
headTemplate: '<thead>'+
'<tr>'+
'<th class="prev">«</th>'+
'<th colspan="5" class="datepicker-switch"></th>'+
'<th class="next">»</th>'+
'</tr>'+
'</thead>',
contTemplate: '<tbody><tr><td colspan="7"></td></tr></tbody>',
footTemplate: '<tfoot>'+
'<tr>'+
'<th colspan="7" class="today"></th>'+
'</tr>'+
'<tr>'+
'<th colspan="7" class="clear"></th>'+
'</tr>'+
'</tfoot>'
};
DPGlobal.template = '<div class="datepicker">'+
'<div class="datepicker-days">'+
'<table class=" table-condensed">'+
DPGlobal.headTemplate+
'<tbody></tbody>'+
DPGlobal.footTemplate+
'</table>'+
'</div>'+
'<div class="datepicker-months">'+
'<table class="table-condensed">'+
DPGlobal.headTemplate+
DPGlobal.contTemplate+
DPGlobal.footTemplate+
'</table>'+
'</div>'+
'<div class="datepicker-years">'+
'<table class="table-condensed">'+
DPGlobal.headTemplate+
DPGlobal.contTemplate+
DPGlobal.footTemplate+
'</table>'+
'</div>'+
'</div>';
$.fn.datepicker.DPGlobal = DPGlobal;
/* DATEPICKER NO CONFLICT
* =================== */
$.fn.datepicker.noConflict = function(){
$.fn.datepicker = old;
return this;
};
/* DATEPICKER DATA-API
* ================== */
$(document).on(
'focus.datepicker.data-api click.datepicker.data-api',
'[data-provide="datepicker"]',
function(e){
var $this = $(this);
if ($this.data('datepicker'))
return;
e.preventDefault();
// component click requires us to explicitly show it
$this.datepicker('show');
}
);
$(function(){
$('[data-provide="datepicker-inline"]').datepicker();
});
}(window.jQuery));
|
/write_the-0.9.1.tar.gz/write_the-0.9.1/write_the/commands/docs/docs.py
|
import asyncio
import libcst as cst
from black import format_str, FileMode
from write_the.cst import nodes_to_tree
from write_the.cst.docstring_adder import add_docstrings_to_tree
from write_the.cst.function_and_class_collector import get_node_names
from write_the.cst.node_extractor import extract_nodes_from_tree
from write_the.cst.node_batcher import create_batches
from write_the.commands.docs.utils import extract_block
from write_the.llm import LLM
from .prompts import write_docstings_for_nodes_prompt
async def write_the_docs(
tree: cst.Module,
node_names=[],
force=False,
save=False,
context=False,
background=True,
pretty=False,
max_batch_size=False,
) -> str:
"""
Generates docstrings for a given tree of nodes.
Args:
tree (cst.Module): The tree of nodes to write docs for.
        node_names (list): The list of node names to write docs for.
force (bool): Whether to force writing of docs.
save (bool): Whether to save the docs.
        context (bool): Whether to include context nodes.
        background (bool): Whether to include background context when batching nodes.
pretty (bool): Whether to format the code.
max_batch_size (bool): Max number of nodes in each batch.
Returns:
str: The source code with the generated docstrings.
Notes:
If `node_names` is provided, `force` is set to `True` and `context` is set to `False`.
Examples:
>>> write_the_docs("example.py")
"def add(a, b):
\"\"\"Sums 2 numbers.
Args:
a (int): The first number to add.
b (int): The second number to add.
Returns:
int: The sum of `a` and `b`.
\"\"\"
return a + b"
"""
extract_specific_nodes = False
if node_names:
extract_specific_nodes = True
force = True
else:
node_names = get_node_names(tree, force)
if not node_names:
return tree.code
# batch
llm = LLM(write_docstings_for_nodes_prompt)
batches = create_batches(
tree=tree,
node_names=node_names,
max_tokens=llm.max_tokens,
prompt_size=llm.prompt_size,
response_size_per_node=250, # a guess... TODO: smarter
max_batch_size=max_batch_size,
send_background_context=background,
send_node_context=context,
)
promises = []
node_names_list = []
for batch in batches:
node_names = batch.node_names
code = batch.code
promises.append((llm.run(code=code, nodes=node_names)))
node_names_list.append(node_names)
# Can I yield here so batches can be logged?
results = await asyncio.gather(*promises)
docstring_dict = {}
for node_names, result in zip(node_names_list, results):
docstring_dict.update(extract_block(result, node_names))
modified_tree = add_docstrings_to_tree(tree, docstring_dict, force=force)
if not save and extract_specific_nodes:
extracted_nodes = extract_nodes_from_tree(modified_tree, node_names)
modified_tree = nodes_to_tree(extracted_nodes)
if pretty:
return format_str(modified_tree.code, mode=FileMode())
return modified_tree.code
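# --- Usage sketch (editor's addition, not part of the original module) ------
# A minimal, hypothetical way to drive write_the_docs from a script, using
# only the signature shown above: parse a source file with libcst and run
# the coroutine with asyncio. The path "example.py" and the node name "add"
# are placeholders, and running this for real needs whatever credentials the
# LLM class expects.
def _demo_write_the_docs(path: str = "example.py") -> str:
    with open(path, "r", encoding="utf-8") as handle:
        tree = cst.parse_module(handle.read())
    # Only document the named node and format the result with black.
    return asyncio.run(write_the_docs(tree, node_names=["add"], pretty=True))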
|
PypiClean
|
/TDY_PKG_saquibquddus-1.1.1-py3-none-any.whl/tf2_webapp/object_detection/export_tflite_graph_lib_tf2.py
|
"""Library to export TFLite-compatible SavedModel from TF2 detection models."""
import os
import numpy as np
import tensorflow.compat.v1 as tf1
import tensorflow.compat.v2 as tf
from object_detection.builders import model_builder
from object_detection.builders import post_processing_builder
from object_detection.core import box_list
from object_detection.core import standard_fields as fields
_DEFAULT_NUM_CHANNELS = 3
_DEFAULT_NUM_COORD_BOX = 4
_MAX_CLASSES_PER_DETECTION = 1
_DETECTION_POSTPROCESS_FUNC = 'TFLite_Detection_PostProcess'
def get_const_center_size_encoded_anchors(anchors):
"""Exports center-size encoded anchors as a constant tensor.
Args:
anchors: a float32 tensor of shape [num_anchors, 4] containing the anchor
boxes
Returns:
encoded_anchors: a float32 constant tensor of shape [num_anchors, 4]
containing the anchor boxes.
"""
anchor_boxlist = box_list.BoxList(anchors)
y, x, h, w = anchor_boxlist.get_center_coordinates_and_sizes()
num_anchors = y.get_shape().as_list()
with tf1.Session() as sess:
y_out, x_out, h_out, w_out = sess.run([y, x, h, w])
encoded_anchors = tf1.constant(
np.transpose(np.stack((y_out, x_out, h_out, w_out))),
dtype=tf1.float32,
shape=[num_anchors[0], _DEFAULT_NUM_COORD_BOX],
name='anchors')
return num_anchors[0], encoded_anchors
class SSDModule(tf.Module):
"""Inference Module for TFLite-friendly SSD models."""
def __init__(self, pipeline_config, detection_model, max_detections,
use_regular_nms):
"""Initialization.
Args:
pipeline_config: The original pipeline_pb2.TrainEvalPipelineConfig
detection_model: The detection model to use for inference.
max_detections: Max detections desired from the TFLite model.
use_regular_nms: If True, TFLite model uses the (slower) multi-class NMS.
"""
self._process_config(pipeline_config)
self._pipeline_config = pipeline_config
self._model = detection_model
self._max_detections = max_detections
self._use_regular_nms = use_regular_nms
def _process_config(self, pipeline_config):
self._num_classes = pipeline_config.model.ssd.num_classes
self._nms_score_threshold = pipeline_config.model.ssd.post_processing.batch_non_max_suppression.score_threshold
self._nms_iou_threshold = pipeline_config.model.ssd.post_processing.batch_non_max_suppression.iou_threshold
self._scale_values = {}
self._scale_values[
'y_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.y_scale
self._scale_values[
'x_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.x_scale
self._scale_values[
'h_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.height_scale
self._scale_values[
'w_scale'] = pipeline_config.model.ssd.box_coder.faster_rcnn_box_coder.width_scale
image_resizer_config = pipeline_config.model.ssd.image_resizer
image_resizer = image_resizer_config.WhichOneof('image_resizer_oneof')
self._num_channels = _DEFAULT_NUM_CHANNELS
if image_resizer == 'fixed_shape_resizer':
self._height = image_resizer_config.fixed_shape_resizer.height
self._width = image_resizer_config.fixed_shape_resizer.width
if image_resizer_config.fixed_shape_resizer.convert_to_grayscale:
self._num_channels = 1
else:
raise ValueError(
'Only fixed_shape_resizer '
'is supported with tflite. Found {}'.format(
image_resizer_config.WhichOneof('image_resizer_oneof')))
def input_shape(self):
"""Returns shape of TFLite model input."""
return [1, self._height, self._width, self._num_channels]
def postprocess_implements_signature(self):
"""Returns tf.implements signature for MLIR legalization of TFLite NMS."""
implements_signature = [
'name: "%s"' % _DETECTION_POSTPROCESS_FUNC,
'attr { key: "max_detections" value { i: %d } }' % self._max_detections,
'attr { key: "max_classes_per_detection" value { i: %d } }' %
_MAX_CLASSES_PER_DETECTION,
'attr { key: "use_regular_nms" value { b: %s } }' %
str(self._use_regular_nms).lower(),
'attr { key: "nms_score_threshold" value { f: %f } }' %
self._nms_score_threshold,
'attr { key: "nms_iou_threshold" value { f: %f } }' %
self._nms_iou_threshold,
'attr { key: "y_scale" value { f: %f } }' %
self._scale_values['y_scale'],
'attr { key: "x_scale" value { f: %f } }' %
self._scale_values['x_scale'],
'attr { key: "h_scale" value { f: %f } }' %
self._scale_values['h_scale'],
'attr { key: "w_scale" value { f: %f } }' %
self._scale_values['w_scale'],
'attr { key: "num_classes" value { i: %d } }' % self._num_classes
]
implements_signature = ' '.join(implements_signature)
return implements_signature
def _get_postprocess_fn(self, num_anchors, num_classes):
# There is no TF equivalent for TFLite's custom post-processing op.
# So we add an 'empty' composite function here, that is legalized to the
# custom op with MLIR.
@tf.function(
experimental_implements=self.postprocess_implements_signature())
# pylint: disable=g-unused-argument,unused-argument
def dummy_post_processing(box_encodings, class_predictions, anchors):
boxes = tf.constant(0.0, dtype=tf.float32, name='boxes')
scores = tf.constant(0.0, dtype=tf.float32, name='scores')
classes = tf.constant(0.0, dtype=tf.float32, name='classes')
num_detections = tf.constant(0.0, dtype=tf.float32, name='num_detections')
return boxes, classes, scores, num_detections
return dummy_post_processing
@tf.function
def inference_fn(self, image):
"""Encapsulates SSD inference for TFLite conversion.
NOTE: The Args & Returns sections below indicate the TFLite model signature,
and not what the TF graph does (since the latter does not include the custom
NMS op used by TFLite)
Args:
image: a float32 tensor of shape [1, image_height, image_width, channels]
denoting the image pixel values.
Returns:
num_detections: a float32 scalar denoting number of total detections.
classes: a float32 tensor denoting class ID for each detection.
scores: a float32 tensor denoting score for each detection.
boxes: a float32 tensor denoting coordinates of each detected box.
"""
predicted_tensors = self._model.predict(image, true_image_shapes=None)
# The score conversion occurs before the post-processing custom op
_, score_conversion_fn = post_processing_builder.build(
self._pipeline_config.model.ssd.post_processing)
class_predictions = score_conversion_fn(
predicted_tensors['class_predictions_with_background'])
with tf.name_scope('raw_outputs'):
# 'raw_outputs/box_encodings': a float32 tensor of shape
# [1, num_anchors, 4] containing the encoded box predictions. Note that
# these are raw predictions and no Non-Max suppression is applied on
# them and no decode center size boxes is applied to them.
box_encodings = tf.identity(
predicted_tensors['box_encodings'], name='box_encodings')
# 'raw_outputs/class_predictions': a float32 tensor of shape
# [1, num_anchors, num_classes] containing the class scores for each
# anchor after applying score conversion.
class_predictions = tf.identity(
class_predictions, name='class_predictions')
# 'anchors': a float32 tensor of shape
# [4, num_anchors] containing the anchors as a constant node.
num_anchors, anchors = get_const_center_size_encoded_anchors(
predicted_tensors['anchors'])
anchors = tf.identity(anchors, name='anchors')
# @tf.function seems to reverse the order of inputs, so reverse them here.
return self._get_postprocess_fn(num_anchors,
self._num_classes)(box_encodings,
class_predictions,
anchors)[::-1]
class CenterNetModule(tf.Module):
"""Inference Module for TFLite-friendly CenterNet models.
The exported CenterNet model includes the preprocessing and postprocessing
logics so the caller should pass in the raw image pixel values. It supports
both object detection and keypoint estimation task.
"""
def __init__(self, pipeline_config, max_detections, include_keypoints):
"""Initialization.
Args:
pipeline_config: The original pipeline_pb2.TrainEvalPipelineConfig
max_detections: Max detections desired from the TFLite model.
include_keypoints: If set true, the output dictionary will include the
keypoint coordinates and keypoint confidence scores.
"""
self._max_detections = max_detections
self._include_keypoints = include_keypoints
self._process_config(pipeline_config)
self._pipeline_config = pipeline_config
self._model = model_builder.build(
self._pipeline_config.model, is_training=False)
def get_model(self):
return self._model
def _process_config(self, pipeline_config):
self._num_classes = pipeline_config.model.center_net.num_classes
center_net_config = pipeline_config.model.center_net
image_resizer_config = center_net_config.image_resizer
image_resizer = image_resizer_config.WhichOneof('image_resizer_oneof')
self._num_channels = _DEFAULT_NUM_CHANNELS
if image_resizer == 'fixed_shape_resizer':
self._height = image_resizer_config.fixed_shape_resizer.height
self._width = image_resizer_config.fixed_shape_resizer.width
if image_resizer_config.fixed_shape_resizer.convert_to_grayscale:
self._num_channels = 1
else:
raise ValueError(
'Only fixed_shape_resizer '
'is supported with tflite. Found {}'.format(image_resizer))
center_net_config.object_center_params.max_box_predictions = (
self._max_detections)
if not self._include_keypoints:
del center_net_config.keypoint_estimation_task[:]
def input_shape(self):
"""Returns shape of TFLite model input."""
return [1, self._height, self._width, self._num_channels]
@tf.function
def inference_fn(self, image):
"""Encapsulates CenterNet inference for TFLite conversion.
Args:
image: a float32 tensor of shape [1, image_height, image_width, channel]
denoting the image pixel values.
Returns:
A dictionary of predicted tensors:
classes: a float32 tensor with shape [1, max_detections] denoting class
ID for each detection.
scores: a float32 tensor with shape [1, max_detections] denoting score
for each detection.
boxes: a float32 tensor with shape [1, max_detections, 4] denoting
coordinates of each detected box.
keypoints: a float32 with shape [1, max_detections, num_keypoints, 2]
denoting the predicted keypoint coordinates (normalized in between
0-1). Note that [:, :, :, 0] represents the y coordinates and
[:, :, :, 1] represents the x coordinates.
keypoint_scores: a float32 with shape [1, max_detections, num_keypoints]
denoting keypoint confidence scores.
"""
image = tf.cast(image, tf.float32)
image, shapes = self._model.preprocess(image)
prediction_dict = self._model.predict(image, None)
detections = self._model.postprocess(
prediction_dict, true_image_shapes=shapes)
field_names = fields.DetectionResultFields
classes_field = field_names.detection_classes
classes = tf.cast(detections[classes_field], tf.float32)
num_detections = tf.cast(detections[field_names.num_detections], tf.float32)
if self._include_keypoints:
model_outputs = (detections[field_names.detection_boxes], classes,
detections[field_names.detection_scores], num_detections,
detections[field_names.detection_keypoints],
detections[field_names.detection_keypoint_scores])
else:
model_outputs = (detections[field_names.detection_boxes], classes,
detections[field_names.detection_scores], num_detections)
# @tf.function seems to reverse the order of inputs, so reverse them here.
return model_outputs[::-1]
def export_tflite_model(pipeline_config, trained_checkpoint_dir,
output_directory, max_detections, use_regular_nms,
include_keypoints=False):
"""Exports inference SavedModel for TFLite conversion.
NOTE: Only supports SSD and CenterNet meta-architectures for now, and the output model will
have static-shaped, single-batch input.
This function creates `output_directory` if it does not already exist,
which will hold the intermediate SavedModel that can be used with the TFLite
converter.
Args:
pipeline_config: pipeline_pb2.TrainEvalPipelineConfig proto.
trained_checkpoint_dir: Path to the directory containing the trained checkpoint.
output_directory: Path to write outputs.
max_detections: Max detections desired from the TFLite model.
use_regular_nms: If True, TFLite model uses the (slower) multi-class NMS.
Note that this argument is only used by the SSD model.
include_keypoints: Decides whether to also output the keypoint predictions.
Note that this argument is only used by the CenterNet model.
Raises:
ValueError: if pipeline is invalid.
"""
output_saved_model_directory = os.path.join(output_directory, 'saved_model')
# Build the underlying model using pipeline config.
# TODO(b/162842801): Add support for other architectures.
if pipeline_config.model.WhichOneof('model') == 'ssd':
detection_model = model_builder.build(
pipeline_config.model, is_training=False)
ckpt = tf.train.Checkpoint(model=detection_model)
# The module helps build a TF SavedModel appropriate for TFLite conversion.
detection_module = SSDModule(pipeline_config, detection_model,
max_detections, use_regular_nms)
elif pipeline_config.model.WhichOneof('model') == 'center_net':
detection_module = CenterNetModule(
pipeline_config, max_detections, include_keypoints)
ckpt = tf.train.Checkpoint(model=detection_module.get_model())
else:
raise ValueError('Only ssd or center_net models are supported in tflite. '
'Found {} in config'.format(
pipeline_config.model.WhichOneof('model')))
manager = tf.train.CheckpointManager(
ckpt, trained_checkpoint_dir, max_to_keep=1)
status = ckpt.restore(manager.latest_checkpoint).expect_partial()
# Getting the concrete function traces the graph and forces variables to
# be constructed; only after this can we save the saved model.
status.assert_existing_objects_matched()
concrete_function = detection_module.inference_fn.get_concrete_function(
tf.TensorSpec(
shape=detection_module.input_shape(), dtype=tf.float32, name='input'))
status.assert_existing_objects_matched()
# Export SavedModel.
tf.saved_model.save(
detection_module,
output_saved_model_directory,
signatures=concrete_function)
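# --- Usage sketch (editor's addition, not part of the original module) ------
# A minimal, hypothetical call of export_tflite_model(): read a pipeline
# config in the Object Detection API's text proto format and export the
# intermediate SavedModel. The paths and max_detections value are
# placeholders.
def _demo_export(pipeline_config_path, trained_checkpoint_dir, output_directory):
    from google.protobuf import text_format
    from object_detection.protos import pipeline_pb2

    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with tf.io.gfile.GFile(pipeline_config_path, 'r') as f:
        text_format.Merge(f.read(), pipeline_config)
    export_tflite_model(pipeline_config, trained_checkpoint_dir,
                        output_directory, max_detections=10,
                        use_regular_nms=False)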
|
PypiClean
|
/nisystemlink_clients-1.1.0.tar.gz/nisystemlink_clients-1.1.0/nisystemlink/clients/tag/_http/_http_buffered_tag_writer.py
|
import datetime
from collections import OrderedDict
from typing import Any, Dict, Optional
from nisystemlink.clients import tag as tbase
from nisystemlink.clients.core._internal._http_client import HttpClient
from nisystemlink.clients.core._internal._timestamp_utilities import TimestampUtilities
from nisystemlink.clients.tag._core._itime_stamper import ITimeStamper
from nisystemlink.clients.tag._core._manual_reset_timer import ManualResetTimer
from typing_extensions import final
@final
class HttpBufferedTagWriter(tbase.BufferedTagWriter):
def __init_subclass__(cls) -> None:
raise TypeError("type 'HttpBufferedTagWriter' is not an acceptable base type")
def __init__(
self,
client: HttpClient,
stamper: ITimeStamper,
buffer_size: int,
flush_timer: ManualResetTimer,
) -> None:
super().__init__(stamper, buffer_size, flush_timer)
self._api = client.at_uri("/nitag/v2")
self._buffer = OrderedDict() # type: OrderedDict[str, Dict[str, Any]]
def _buffer_value(self, path: str, value: Dict[str, Any]) -> None:
if path not in self._buffer:
self._buffer.setdefault(path, {"path": path, "updates": []})
self._buffer[path]["updates"].append(value)
def _clear_buffer(self) -> None:
self._buffer.clear()
def _copy_buffer(self) -> Dict[str, Dict[str, Any]]:
updates = self._buffer
self._buffer = OrderedDict()
return updates
def _create_item(
self,
path: str,
data_type: tbase.DataType,
value: str,
timestamp: Optional[datetime.datetime] = None,
) -> Dict[str, Any]:
item = {
"value": {"value": value, "type": data_type.api_name}
} # type: Dict[str, Any]
if timestamp is not None:
item["timestamp"] = TimestampUtilities.datetime_to_str(timestamp)
return item
def _send_writes(self, updates: Dict[str, Dict[str, Any]]) -> None:
self._api.post("/update-current-values", data=list(updates.values()))
async def _send_writes_async(self, updates: Dict[str, Any]) -> None:
await self._api.as_async.post(
"/update-current-values", data=list(updates.values())
)
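# --- Illustration (editor's addition, not part of the original module) ------
# Shape of the payload this writer accumulates and posts, pieced together
# from _buffer_value/_create_item/_send_writes above: one entry per tag
# path, each carrying a list of timestamped value updates. The concrete
# path, value, type name and timestamp below are illustrative only.
_EXAMPLE_UPDATE_PAYLOAD = [
    {
        "path": "my.tag.path",
        "updates": [
            {
                "value": {"value": "42", "type": "INT"},
                "timestamp": "2023-01-01T00:00:00.000000Z",
            }
        ],
    }
]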
|
PypiClean
|
/poslocalbll-0.6.tar.gz/poslocalbll-0.6/tools/weather.py
|
from urllib import request
import re
import pinyin.pinyin as pinyin
import json
import time
from lxml import etree
# Mapping from province to its capital city
pc_city = {
"anhui": "hefei",
"beijing": "beijing",
"chongqing": "chongqing",
"fujian": "fuzhou",
"gansu": "lanzhou",
"guangdong": "GuangZhou",
"guangxi": "NanNing",
"guizhou": "GuiYang",
"hainan": "HaiKou",
"hebei": "ShiJiaZhuang",
"heilongjiang": "HaErBin",
"henan": "ZhengZhou",
"xianggang": "xianggang",
"hubei": "WuHan",
"hunan": "ChangSha",
"neimenggu": "HuHeHaoTe",
"jiangsu": "NanJing",
"jiangxi": "NanChang",
"jilin": "ChangChun",
"liaoning": "ShenYang",
"aomen": "aomen",
"ningxia": "YinChuan",
"qinghai": "XiNing",
"shanxi": "XiAn",
"shandong": "JiNan",
"shanghaishi": "shanghai",
"shanx": "TaiYuan",
"sichuan": "ChengDu",
"tianjin": "tianjin",
"xizang": "LaSa",
"xinjiang": "WuLuMuQi",
"yunnan": "KunMing",
"zhejiang": "HangZhou",
"taiwang": "TaiBei"
}
def getweather():
res = request.urlopen('http://pv.sohu.com/cityjson')
city_info = res.read().decode('gbk')
print(city_info)
addr = str(city_info).split('=')[1].split(',')[2].split('"')[3] # extract the address information
py = pinyin.get(addr, format='strip')
provice = py.split('sheng', 1)[0].replace(' ', '') # get the province
city=None
try:
city = py.split('shi')[0].split('sheng')[
1].strip().replace(' ', '') # get the city
except Exception as e:
city=None
# When resolving the province/city from the IP, fall back to the province's capital city if no city was found
if not city:
for k, v in pc_city.items():
if k == provice or k in provice:
city = v
break
url = 'http://qq.ip138.com/weather/%s/%s.htm' % (provice, city)
if city=="shanghai":
url = 'http://qq.ip138.com/weather/%s' % (city)
# From inspecting the URL, the weather URL for a given province/city follows the format above
wea_info = request.urlopen(url).read().decode('gbk')
# Parse the HTML and get today's weather from the weekly forecast
tree = etree.HTML(wea_info)
nodes = tree.xpath("/descendant::table[@class='t12']/tr")
n_nodes = nodes[1:]
weathers = []
for n in range(len(n_nodes)):
items = n_nodes[n].xpath("td")
weathers_items = []
for r in items:
if r.text is None:
tq = r.xpath("img")
qt_str = ''
for i_tq in tq:
if qt_str == '':
qt_str = i_tq.get("alt")
else:
qt_str = qt_str + "转" + i_tq.get("alt")
weathers_items.append(qt_str)
else:
weathers_items.append(r.text)
weathers.append(weathers_items)
# Pick today's weather out of the weekly weather data
# print(time.localtime())
n_time = time.localtime()
n_year = n_time.tm_year
n_mon = n_time.tm_mon
n_day = n_time.tm_mday
todayweather = {
"date": "",
"weather": "",
"temperature": "",
"wind": "",
"addr": "",
"icon": ""}
for i in range(len(weathers[0])):
if weathers[0][i].find(
str(n_year) + "-" + str(n_mon) + "-" + str(n_day)) != -1:
for j in range(len(weathers)):
if j == 0:
todayweather["date"] = weathers[j][i]
elif j == 1:
todayweather["weather"] = weathers[j][i]
if str(weathers[j][i]).find("转") != -1:
n_weather = str(weathers[j][i]).split("转")[-1]
else:
n_weather = str(weathers[j][i])
if str(n_weather).find("雨") >= 0:
todayweather["icon"] = r""
elif str(n_weather).find("雪") >= 0:
todayweather["icon"] = r""
elif str(n_weather).find("晴") >= 0:
todayweather["icon"] = r""
elif str(n_weather).find("阴") >= 0:
todayweather["icon"] = r""
elif str(n_weather).find("多云") >= 0:
todayweather["icon"] = r""
elif str(n_weather).find("雨夹雪") >= 0:
todayweather["icon"] = r""
else:
todayweather["icon"] = r""
elif j == 2:
todayweather["temperature"] = weathers[j][i]
elif j == 3:
todayweather["wind"] = weathers[j][i]
else:
continue
todayweather["addr"] = addr
return todayweather
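# --- Usage sketch (editor's addition, not part of the original module) ------
# Running the module directly would resolve the caller's location from the
# IP lookup above and print today's forecast as a dict with the keys
# date, weather, temperature, wind, addr and icon.
if __name__ == '__main__':
    print(getweather())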
|
PypiClean
|
/henxel-0.2.6.tar.gz/henxel-0.2.6/README.md
|
# Henxel
GUI-editor for Python development. Tested to work with Debian 12, Windows 11 and macOS 12.
# Featuring
* Auto-indent
* Font Chooser
* Color Chooser
* Line numbering
* Tabbed editing
* Inspect object
* Show git-branch
* Run current file
* Search - Replace
* Indent - Unindent
* Comment - Uncomment
* Syntax highlighting
* Click to open errors
* Parenthesis checking
* Persistent configuration
# Lacking
* Auto-completion
* Hinting
# Prerequisites in Linux
Python modules that are required but sometimes not installed with the OS: tkinter. Check in the Python console:
```console
>>> import tkinter
```
If there is no error, it is installed. If it throws an error, you have to install it from the OS repository. In Debian the package is python3-tk:
```console
~$ sudo apt install python3-tk
```
# About virtual environment, optional but highly recommended
Consider creating a virtual environment for your Python projects and installing Python packages like this editor into it. The editor will not save your configuration if it was not launched from a virtual environment. In Debian you first have to install this package: python3-venv:
```console
~$ sudo apt install python3-venv
```
There is a Linux script named 'mkvenv' in /util. Copy it to some place nice, like the bin directory in your home directory, and make it executable if it is not already:
```console
~/bin$ chmod u+x mkvenv
```
Then make a folder for your new project, create a venv there, activate it, show the Python packages currently installed in your new virtual environment, and lastly deactivate (quit) the environment:
```console
~$ mkdir myproject
~$ cd myproject
~/myproject$ mkvenv env
-------------------------------
~/myproject$ source env/bin/activate
(env) ~/myproject$ pip list
-----------------------------------
(env) ~/myproject$ deactivate
~/myproject$
```
To remove the venv, just remove the env directory; you can start from a clean desk and make a new one with mkvenv later. The optional part about virtual environments ends here.
# Prerequisites in Windows and venv-creation
The Python installation should already include tkinter. There currently is no
mkvenv script for Windows in this project, but here is a short guide to
creating a working Python virtual environment in Windows. First open a console,
like PowerShell (in which Ctrl-R searches the command history, most useful) and:
```console
mkdir myproject
cd myproject
myproject> python.exe -m venv env
myproject> .\env\Scripts\activate
what you can get with pressing: (e <tab> s <tab> a <tab> <return>)
First the essential for the env:
(env) myproject> pip install --upgrade pip
(env) myproject> pip install wheel
Then to enable tab-completion in Python-console, most useful:
(env) myproject> pip install pyreadline3
And it is ready to use:
(env) myproject> pip list
(env) myproject> deactivate
```
# Prerequisites in macOS and venv-creation
The Python installation (you may need to install a newer version of Python from python.org)
should already include tkinter. There currently is no mkvenv script for macOS,
but making a venv is much the same as in Linux. I will update this section later.
```console
~$ mkdir myproject
~$ cd myproject
~/myproject$ python -m venv env
-------------------------------
~/myproject$ source env/bin/activate
(env) ~/myproject$ pip list
-----------------------------------
(env) ~/myproject$ deactivate
~/myproject$
```
# Installing
```console
(env) ~/myproject$ pip install henxel
```
or, to install system-wide (not recommended), you first need to install pip from the OS repository:
```console
~/myproject$ pip install henxel
```
# Running from Python-console:
```console
~/myproject$ source env/bin/activate
(env) ~/myproject$ python
--------------------------------------
>>> import henxel
>>> e=henxel.Editor()
```
# Developing
```console
~/myproject$ mkvenv env
~/myproject$ . env/bin/activate
(env) ~/myproject$ git clone https://github.com/SamuelKos/henxel
(env) ~/myproject$ cd henxel
(env) ~/myproject/henxel$ pip install -e .
```
If you currently have no internet access but have previously created a virtual environment which has pip and setuptools, and you have already downloaded the henxel repository:
```console
(env) ~/myproject/henxel$ pip install --no-build-isolation -e .
```
Files are in src/henxel/
# More on virtual environments:
This is now a bit more complex, because we are no longer expecting to have many older versions of the project left around (as packages). But with this lengthy method we can compare against any commit, not just released packages. So this is for you who are packaging a Python project and might want things like a side-by-side live comparison of two different versions, most probably the version you are currently developing and some earlier version. I assume you are the owner of the project so you have the git history, or else you have done git clone. I use henxel as the project example.
First create a development venv for the project, if you haven't already, and install the current version into it in editable mode:
```console
~/myproject/henxel$ mkvenv env
~/myproject/henxel$ . env/bin/activate
(env) ~/myproject/henxel$ pip install -e .
```
Then select the git commit for the reference version. I have interesting commits with messages like "version 0.2.0", so to list all such commits:
```console
~/myproject/henxel$ git log --grep=version
```
For example, to make a new branch from version 0.2.0, copy the first letters of the commit id and:
```console
~/myproject/henxel$ git branch version020 e4f1f4ab3f
~/myproject/henxel$ git switch version020
```
Then create the reference venv in some place that is not version-controlled, like the parent folder, and install version020 of the project into it with pip, again in editable mode, just in case you want to try something out.
```console
~/myproject/henxel$ cd ..
~/myproject$ mkvenv v020
~/myproject$ . v020/bin/activate
(v020) ~/myproject$ cd henxel
(v020) ~/myproject/henxel$ pip list
(v020) ~/myproject/henxel$ pip install -e .
(v020) ~/myproject/henxel$ deactivate
```
Now you are ready to launch both versions of your project and do a side-by-side comparison if that is what you want:
```console
~/myproject/henxel$ . env/bin/activate
(env) ~/myproject/henxel$ pip list
```
From another shell window:
```console
~/myproject$ . v020/bin/activate
(v020) ~/myproject$ pip list
```
# More resources
[Changelog](https://github.com/SamuelKos/henxel/blob/main/CHANGELOG)
# Licence
This project is licensed under the terms of the GNU General Public License v3.0.
|
PypiClean
|
/wmagent-2.2.4rc3.tar.gz/wmagent-2.2.4rc3/src/python/WMCore/WMSpec/StdSpecs/StoreResults.py
|
from Utils.Utilities import makeList, makeNonEmptyList
from WMCore.Lexicon import dataset, block, physicsgroup, cmsname
from WMCore.WMSpec.StdSpecs.StdBase import StdBase
class StoreResultsWorkloadFactory(StdBase):
"""
_StoreResultsWorkloadFactory_
Stamp out StoreResults workloads.
"""
def __call__(self, workloadName, arguments):
"""
_call_
Create a StoreResults workload with the given parameters.
"""
# first of all, we update the merged LFN based on the physics group
arguments['MergedLFNBase'] += "/" + arguments['PhysicsGroup'].lower()
StdBase.__call__(self, workloadName, arguments)
(inputPrimaryDataset, inputProcessedDataset, inputDataTier) = self.inputDataset[1:].split("/")
workload = self.createWorkload()
mergeTask = workload.newTask("StoreResults")
self.addRuntimeMonitors(mergeTask)
mergeTaskCmssw = mergeTask.makeStep("cmsRun1")
mergeTaskCmssw.setStepType("CMSSW")
mergeTaskStageOut = mergeTaskCmssw.addStep("stageOut1")
mergeTaskStageOut.setStepType("StageOut")
mergeTaskLogArch = mergeTaskCmssw.addStep("logArch1")
mergeTaskLogArch.setStepType("LogArchive")
self.addLogCollectTask(mergeTask, taskName="StoreResultsLogCollect")
mergeTask.setTaskType("Merge")
mergeTask.applyTemplates()
mergeTask.addInputDataset(name=self.inputDataset,
primary=inputPrimaryDataset,
processed=inputProcessedDataset,
tier=inputDataTier,
dbsurl=self.dbsUrl,
block_blacklist=self.blockBlacklist,
block_whitelist=self.blockWhitelist,
run_blacklist=self.runBlacklist,
run_whitelist=self.runWhitelist)
splitAlgo = "ParentlessMergeBySize"
mergeTask.setSplittingAlgorithm(splitAlgo,
max_merge_size=self.maxMergeSize,
min_merge_size=self.minMergeSize,
max_merge_events=self.maxMergeEvents)
mergeTaskCmsswHelper = mergeTaskCmssw.getTypeHelper()
mergeTaskCmsswHelper.cmsswSetup(self.frameworkVersion, softwareEnvironment="",
scramArch=self.scramArch)
mergeTaskCmsswHelper.setGlobalTag(self.globalTag)
mergeTaskCmsswHelper.setSkipBadFiles(True)
mergeTaskCmsswHelper.setDataProcessingConfig("do_not_use", "merge")
self.addOutputModule(mergeTask, "Merged",
primaryDataset=inputPrimaryDataset,
dataTier=self.dataTier,
filterName=None,
forceMerged=True)
workload.setLFNBase(self.mergedLFNBase, self.unmergedLFNBase)
workload.setDashboardActivity("StoreResults")
# setting the parameters which need to be set for all the tasks
# sets acquisitionEra, processingVersion, processingString
workload.setTaskPropertiesFromWorkload()
return workload
@staticmethod
def getWorkloadCreateArgs():
baseArgs = StdBase.getWorkloadCreateArgs()
specArgs = {"RequestType": {"default": "StoreResults", "optional": False},
"InputDataset": {"optional": False, "validate": dataset, "null": False},
"ConfigCacheID": {"optional": True, "null": True},
"DataTier": {"default": "USER", "type": str,
"optional": True, "validate": None,
"attr": "dataTier", "null": False},
"PhysicsGroup": {"default": "", "optional": False,
"null": False, "validate": physicsgroup},
"MergedLFNBase": {"default": "/store/results", "type": str,
"optional": True, "validate": None,
"attr": "mergedLFNBase", "null": False},
# site whitelist shouldn't be allowed, but let's make an exception for StoreResults
"SiteWhitelist": {"default": [], "type": makeNonEmptyList, "assign_optional": False,
"validate": lambda x: all([cmsname(y) for y in x])},
"BlockBlacklist": {"default": [], "type": makeList,
"optional": True, "validate": lambda x: all([block(y) for y in x]),
"attr": "blockBlacklist", "null": False},
"BlockWhitelist": {"default": [], "type": makeList,
"optional": True, "validate": lambda x: all([block(y) for y in x]),
"attr": "blockWhitelist", "null": False},
"RunBlacklist": {"default": [], "type": makeList,
"optional": True, "validate": lambda x: all([int(y) > 0 for y in x]),
"attr": "runBlacklist", "null": False},
"RunWhitelist": {"default": [], "type": makeList,
"optional": True, "validate": lambda x: all([int(y) > 0 for y in x]),
"attr": "runWhitelist", "null": False}}
baseArgs.update(specArgs)
StdBase.setDefaultArgumentsProperty(baseArgs)
return baseArgs
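# --- Usage sketch (editor's addition, not part of the original module) ------
# A hypothetical, minimal construction of a StoreResults workload. The
# argument values below are placeholders; a real request must satisfy every
# validator declared in getWorkloadCreateArgs() plus the usual StdBase
# arguments (CMSSWVersion, ScramArch, GlobalTag, ...), which are omitted
# here for brevity.
def _demo_store_results_workload():
    arguments = {
        "RequestType": "StoreResults",
        "InputDataset": "/MinimumBias/UserName-v1/USER",
        "PhysicsGroup": "Tracker POG",
        "SiteWhitelist": ["T2_CH_CERN"],
        "MergedLFNBase": "/store/results",
    }
    factory = StoreResultsWorkloadFactory()
    # factoryWorkloadConstruction() (inherited from StdBase) is, to my
    # understanding, the usual entry point: it validates the arguments and
    # then invokes __call__() above.
    return factory.factoryWorkloadConstruction("StoreResults_demo", arguments)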
|
PypiClean
|
/bots-open-source-edi-translator-3.1.9.tar.gz/bots-3.1.9/bots/pluglib.py
|
import os
import sys
#~ import time
import zipfile
import zipimport
import codecs
import django
from django.core import serializers
from django.utils.translation import ugettext as _
import models
import botslib
import botsglobal
#******************************************
#* read a plugin **************************
#******************************************
@django.db.transaction.commit_on_success #if no exception raised: commit, else rollback.
def read_index(filename):
''' process index file in default location. '''
try:
importedbotsindex,scriptname = botslib.botsimport('index')
pluglist = importedbotsindex.plugins[:]
if 'botsindex' in sys.modules:
del sys.modules['botsindex']
except:
txt = botslib.txtexc()
raise botslib.PluginError(_(u'Error in configuration index file. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_(u'Configuration index file is OK.'))
botsglobal.logger.info(_(u'Start writing to database.'))
#write content of index file to the bots database
try:
read_index2database(pluglist)
except:
txt = botslib.txtexc()
raise botslib.PluginError(_(u'Error writing configuration index to database. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_(u'Writing to database is OK.'))
@django.db.transaction.commit_on_success #if no exception raised: commit, else rollback.
def read_plugin(pathzipfile):
''' process uploaded plugin. '''
#test if valid zipfile
if not zipfile.is_zipfile(pathzipfile):
raise botslib.PluginError(_(u'Plugin is not a valid file.'))
#read index file
try:
myzipimport = zipimport.zipimporter(pathzipfile)
importedbotsindex = myzipimport.load_module('botsindex')
pluglist = importedbotsindex.plugins[:]
if 'botsindex' in sys.modules:
del sys.modules['botsindex']
except:
txt = botslib.txtexc()
raise botslib.PluginError(_(u'Error in plugin. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_(u'Plugin is OK.'))
botsglobal.logger.info(_(u'Start writing to database.'))
#write content of index file to the bots database
try:
read_index2database(pluglist)
except:
txt = botslib.txtexc()
raise botslib.PluginError(_(u'Error writing plugin to database. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_(u'Writing to database is OK.'))
#write files to the file system.
botsglobal.logger.info(_(u'Start writing to files'))
try:
warnrenamed = False #to report in GUI files have been overwritten.
myzip = zipfile.ZipFile(pathzipfile, mode="r")
orgtargetpath = botsglobal.ini.get('directories','botspath')
if (orgtargetpath[-1:] in (os.path.sep, os.path.altsep) and len(os.path.splitdrive(orgtargetpath)[1]) > 1):
orgtargetpath = orgtargetpath[:-1]
for zipfileobject in myzip.infolist():
if zipfileobject.filename not in ['botsindex.py','README','botssys/sqlitedb/botsdb','config/bots.ini'] and os.path.splitext(zipfileobject.filename)[1] not in ['.pyo','.pyc']:
#~ botsglobal.logger.info(u'Filename in zip "%s".',zipfileobject.filename)
if zipfileobject.filename[0] == '/':
targetpath = zipfileobject.filename[1:]
else:
targetpath = zipfileobject.filename
#convert for correct environment: replace botssys, config, usersys in filenames
if targetpath.startswith('usersys'):
targetpath = targetpath.replace('usersys',botsglobal.ini.get('directories','usersysabs'),1)
elif targetpath.startswith('botssys'):
targetpath = targetpath.replace('botssys',botsglobal.ini.get('directories','botssys'),1)
elif targetpath.startswith('config'):
targetpath = targetpath.replace('config',botsglobal.ini.get('directories','config'),1)
targetpath = botslib.join(orgtargetpath, targetpath)
#targetpath is OK now.
botsglobal.logger.info(_(u' Start writing file: "%(targetpath)s".'),{'targetpath':targetpath})
if botslib.dirshouldbethere(os.path.dirname(targetpath)):
botsglobal.logger.info(_(u' Create directory "%(directory)s".'),{'directory':os.path.dirname(targetpath)})
if zipfileobject.filename[-1] == '/': #check if this is a dir; if so continue
continue
if os.path.isfile(targetpath): #check if file already exists
try: #this ***sometimes*** fails. (python25, for static/help/home.html...only there...)
warnrenamed = True
except:
pass
source = myzip.read(zipfileobject.filename)
target = open(targetpath, "wb")
target.write(source)
target.close()
botsglobal.logger.info(_(u' File written: "%(targetpath)s".'),{'targetpath':targetpath})
except:
txt = botslib.txtexc()
myzip.close()
raise botslib.PluginError(_(u'Error writing files to system. Nothing is written to database. Error:\n%(txt)s'),{'txt':txt})
else:
myzip.close()
botsglobal.logger.info(_(u'Writing files to filesystem is OK.'))
return warnrenamed
#PLUGINCOMPARELIST: for filtering and sorting the plugins.
PLUGINCOMPARELIST = ['uniek','persist','mutex','ta','filereport','report','ccodetrigger','ccode', 'channel','partner','chanpar','translate','routes','confirmrule']
def read_index2database(orgpluglist):
#sanity checks on pluglist
if not orgpluglist: #list of plugins is empty: is OK. DO nothing
return
if not isinstance(orgpluglist,list): #has to be a list!!
raise botslib.PluginError(_(u'Plugins should be list of dicts. Nothing is written.'))
for plug in orgpluglist:
if not isinstance(plug,dict):
raise botslib.PluginError(_(u'Plugins should be list of dicts. Nothing is written.'))
for key in plug.keys():
if not isinstance(key,basestring):
raise botslib.PluginError(_(u'Key of dict is not a string: "%(plug)s". Nothing is written.'),{'plug':plug})
if 'plugintype' not in plug:
raise botslib.PluginError(_(u'"Plugintype" missing in: "%(plug)s". Nothing is written.'),{'plug':plug})
#special case: compatibility with bots 1.* plugins.
#in bots 1.*, partnergroup was in a separate table; in bots 2.* partnergroup is in partner
#later on, partnergroup will get filtered
for plug in orgpluglist[:]:
if plug['plugintype'] == 'partnergroup':
for plugpartner in orgpluglist:
if plugpartner['plugintype'] == 'partner' and plugpartner['idpartner'] == plug['idpartner']:
if 'group' in plugpartner:
plugpartner['group'].append(plug['idpartnergroup'])
else:
plugpartner['group'] = [plug['idpartnergroup']]
break
#copy & filter orgpluglist; do plugintype-specific adaptations
pluglist = []
for plug in orgpluglist:
if plug['plugintype'] == 'ccode': #add ccodetrigger. #20101223: this is NOT needed; ccodetrigger should be in plugin.
for seachccodetriggerplug in pluglist:
if seachccodetriggerplug['plugintype'] == 'ccodetrigger' and seachccodetriggerplug['ccodeid'] == plug['ccodeid']:
break
else:
pluglist.append({'plugintype':'ccodetrigger','ccodeid':plug['ccodeid']})
elif plug['plugintype'] == 'translate': #make some fields None instead of '' (translate formpartner, topartner)
if not plug['frompartner']:
plug['frompartner'] = None
if not plug['topartner']:
plug['topartner'] = None
elif plug['plugintype'] == 'routes':
plug['active'] = False
if 'defer' not in plug:
plug['defer'] = False
else:
if plug['defer'] is None:
plug['defer'] = False
elif plug['plugintype'] == 'channel':
#convert for correct environment: path and mpath in channels
if 'path' in plug and plug['path'].startswith('botssys'):
plug['path'] = plug['path'].replace('botssys',botsglobal.ini.get('directories','botssys_org'),1)
if 'testpath' in plug and plug['testpath'].startswith('botssys'):
plug['testpath'] = plug['testpath'].replace('botssys',botsglobal.ini.get('directories','botssys_org'),1)
elif plug['plugintype'] == 'confirmrule':
plug.pop('id', None) #id is an artificial key, delete,
elif plug['plugintype'] not in PLUGINCOMPARELIST: #if not in PLUGINCOMPARELIST: do not use
continue
pluglist.append(plug)
#sort pluglist: this is needed for relationships
pluglist.sort(key=lambda plug: plug.get('isgroup',False),reverse=True) #sort partners on being partnergroup or not
pluglist.sort(key=lambda plug: PLUGINCOMPARELIST.index(plug['plugintype'])) #sort all plugs on plugintype; as partners/partnergroups are already sorted, this will still be true after this new sort (python guarantees a stable sort!)
for plug in pluglist:
botsglobal.logger.info(u' Start write to database for: "%(plug)s".',{'plug':plug})
#correction for reading partnergroups
if plug['plugintype'] == 'partner' and plug['isgroup']:
plug['plugintype'] = 'partnergroep'
#remember the plugintype
plugintype = plug['plugintype']
table = django.db.models.get_model('bots',plugintype)
#delete fields not in model for compatibility; note that 'plugintype' is also removed.
loopdictionary = plug.keys()
for key in loopdictionary:
try:
table._meta.get_field(key)
except django.db.models.fields.FieldDoesNotExist:
del plug[key]
#get key(s), put in dict 'sleutel'
pk = table._meta.pk.name
if pk == 'id': #'id' is the artificial key django makes, if no key is indicated. Note that django has no 'composite keys'.
sleutel = {}
if table._meta.unique_together:
for key in table._meta.unique_together[0]:
sleutel[key] = plug.pop(key)
else:
sleutel = {pk:plug.pop(pk)}
sleutelorg = sleutel.copy() #make a copy of the original sleutel; this is needed later
#now we have:
#- plugintype (is removed from plug)
#- sleutelorg: original key fields
#- sleutel: unique key fields. mind: translate and confirmrule have empty 'sleutel'
#- plug: rest of database fields
#for sleutel and plug: convert names to real database names
#get real column names for fields in plug
loopdictionary = plug.keys()
for fieldname in loopdictionary:
fieldobject = table._meta.get_field_by_name(fieldname)[0]
try:
if fieldobject.column != fieldname: #if name in plug is not the real field name (in database)
plug[fieldobject.column] = plug[fieldname] #add new key in plug
del plug[fieldname] #delete old key in plug
except:
raise botslib.PluginError(_(u'No field column for: "%(fieldname)s".'),{'fieldname':fieldname})
#get real column names for fields in sleutel; basically the same loop but now for sleutel
loopdictionary = sleutel.keys()
for fieldname in loopdictionary:
fieldobject = table._meta.get_field_by_name(fieldname)[0]
try:
if fieldobject.column != fieldname:
sleutel[fieldobject.column] = sleutel[fieldname]
del sleutel[fieldname]
except:
raise botslib.PluginError(_(u'No field column for: "%(fieldname)s".'),{'fieldname':fieldname})
#find existing entry (if exists)
if sleutelorg: #note that translate and confirmrule have an empty 'sleutel'
listexistingentries = table.objects.filter(**sleutelorg)
elif plugintype == 'translate':
listexistingentries = table.objects.filter(fromeditype=plug['fromeditype'],
frommessagetype=plug['frommessagetype'],
alt=plug['alt'],
frompartner=plug['frompartner_id'],
topartner=plug['topartner_id'])
elif plugintype == 'confirmrule':
listexistingentries = table.objects.filter(confirmtype=plug['confirmtype'],
ruletype=plug['ruletype'],
negativerule=plug['negativerule'],
idroute=plug.get('idroute'),
idchannel=plug.get('idchannel_id'),
messagetype=plug.get('messagetype'),
frompartner=plug.get('frompartner_id'),
topartner=plug.get('topartner_id'))
if listexistingentries:
dbobject = listexistingentries[0] #exists, so use existing db-object
else:
dbobject = table(**sleutel) #create db-object
if plugintype == 'partner': #for partners, first the partner needs to be saved before groups can be made
dbobject.save()
for key,value in plug.iteritems(): #update object with attributes from plugin
setattr(dbobject,key,value)
dbobject.save() #and save the updated object.
botsglobal.logger.info(_(u' Write to database is OK.'))
#*********************************************
#* plugout / make a plugin (generate)*********
#*********************************************
def make_index(cleaned_data,filename):
''' generate only the index file of the plugin.
used eg for configuration change management.
'''
plugs = all_database2plug(cleaned_data)
plugsasstring = make_plugs2string(plugs)
filehandler = codecs.open(filename,'w','utf-8')
filehandler.write(plugsasstring)
filehandler.close()
def make_plugin(cleaned_data,filename):
pluginzipfilehandler = zipfile.ZipFile(filename, 'w', zipfile.ZIP_DEFLATED)
plugs = all_database2plug(cleaned_data)
plugsasstring = make_plugs2string(plugs)
pluginzipfilehandler.writestr('botsindex.py',plugsasstring.encode('utf-8')) #write index file to pluginfile
botsglobal.logger.debug(u' Write in index:\n %(index)s',{'index':plugsasstring})
files4plugin = plugout_files(cleaned_data)
for dirname, defaultdirname in files4plugin:
pluginzipfilehandler.write(dirname,defaultdirname)
botsglobal.logger.debug(u' Write file "%(file)s".',{'file':defaultdirname})
pluginzipfilehandler.close()
def all_database2plug(cleaned_data):
''' get all database objects, serialize these (to dict), adapt.'''
plugs = []
if cleaned_data['databaseconfiguration']:
plugs += \
database2plug(models.channel) + \
database2plug(models.partner) + \
database2plug(models.chanpar) + \
database2plug(models.translate) + \
database2plug(models.routes) + \
database2plug(models.confirmrule)
if cleaned_data['umlists']:
plugs += \
database2plug(models.ccodetrigger) + \
database2plug(models.ccode)
if cleaned_data['databasetransactions']:
plugs += \
database2plug(models.uniek) + \
database2plug(models.mutex) + \
database2plug(models.ta) + \
database2plug(models.filereport) + \
database2plug(models.report)
#~ list(models.persist.objects.all()) + \ #should persist objects also be included?
return plugs
def database2plug(db_table):
#serialize database objects
plugs = serializers.serialize("python", db_table.objects.all())
if plugs:
app,tablename = plugs[0]['model'].split('.',1)
table = django.db.models.get_model(app,tablename)
pk = table._meta.pk.name
#adapt plugs
for plug in plugs:
plug['fields']['plugintype'] = tablename
if pk != 'id':
plug['fields'][pk] = plug['pk']
#convert for correct environment: replace botssys in channels[path, mpath]
if tablename == 'channel':
if 'path' in plug['fields'] and plug['fields']['path'].startswith(botsglobal.ini.get('directories','botssys_org')):
plug['fields']['path'] = plug['fields']['path'].replace(botsglobal.ini.get('directories','botssys_org'),'botssys',1)
if 'testpath' in plug['fields'] and plug['fields']['testpath'].startswith(botsglobal.ini.get('directories','botssys_org')):
plug['fields']['testpath'] = plug['fields']['testpath'].replace(botsglobal.ini.get('directories','botssys_org'),'botssys',1)
return plugs
def make_plugs2string(plugs):
''' return plugs (serialized objects) as unicode strings.
'''
lijst = [u'# -*- coding: utf-8 -*-',u'import datetime',"version = '%s'" % (botsglobal.version),'plugins = [']
lijst.extend([plug2string(plug['fields']) for plug in plugs])
lijst.append(u']\n')
return '\n'.join(lijst)
def plug2string(plugdict):
''' like repr() for a dict, but:
- starts with 'plugintype'
- other entries are sorted; this because of predictability
- produce unicode by using str().decode(unicode_escape): bytes->unicode; converts escaped unicode-chrs to correct unicode. repr produces these.
str().decode(): bytes->unicode
str().encode(): unicode->bytes
'''
terug = u"{" + repr('plugintype') + u": " + repr(plugdict.pop('plugintype'))
for key in sorted(plugdict):
terug += u", " + repr(key) + u": " + repr(plugdict[key])
terug += u'},'
return terug
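# --- Illustration (editor's addition, not part of the original module) ------
# A tiny, made-up example of what plug2string() produces for one plug dict:
# 'plugintype' comes first, the remaining keys follow in sorted order, and
# the whole line ends with a comma so it can be pasted into the plugins list
# of botsindex.py. Note that the function pops 'plugintype' from its input.
def _demo_plug2string():
    example = {'plugintype': 'routes', 'idroute': 'myroute', 'seq': 1, 'active': False}
    return plug2string(example)
    # -> u"{'plugintype': 'routes', 'active': False, 'idroute': 'myroute', 'seq': 1},"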
def plugout_files(cleaned_data):
''' gather list of files for the plugin that is generated.
'''
files2return = []
usersys = unicode(botsglobal.ini.get('directories','usersysabs'))
botssys = unicode(botsglobal.ini.get('directories','botssys'))
if cleaned_data['fileconfiguration']: #gather from usersys
files2return.extend(plugout_files_bydir(usersys,u'usersys'))
if not cleaned_data['charset']: #if edifact charsets are not needed: remove them (are included in default bots installation).
charsetdirs = plugout_files_bydir(os.path.join(usersys,u'charsets'),u'usersys/charsets')
for charset in charsetdirs:
try:
index = files2return.index(charset)
files2return.pop(index)
except ValueError:
pass
else:
if cleaned_data['charset']: #if edifact charsets are not needed: remove them (are included in default bots installation).
files2return.extend(plugout_files_bydir(os.path.join(usersys,u'charsets'),u'usersys/charsets'))
if cleaned_data['config']:
config = botsglobal.ini.get('directories','config')
files2return.extend(plugout_files_bydir(config,u'config'))
if cleaned_data['data']:
data = botsglobal.ini.get('directories','data')
files2return.extend(plugout_files_bydir(data,u'botssys/data'))
if cleaned_data['database']:
files2return.extend(plugout_files_bydir(os.path.join(botssys,u'sqlitedb'),u'botssys/sqlitedb.copy')) #yeah...reading a plugin with a new database will cause a crash...do this manually...
if cleaned_data['infiles']:
files2return.extend(plugout_files_bydir(os.path.join(botssys,u'infile'),u'botssys/infile'))
if cleaned_data['logfiles']:
log_file = botsglobal.ini.get('directories','logging')
files2return.extend(plugout_files_bydir(log_file,u'botssys/logging'))
return files2return
def plugout_files_bydir(dirname,defaultdirname):
''' gather all files from directory dirname'''
files2return = []
for root, dirs, files in os.walk(dirname):
head, tail = os.path.split(root)
#convert for correct environment: replace dirname with the default directory name
rootinplugin = root.replace(dirname,defaultdirname,1)
for bestand in files:
ext = os.path.splitext(bestand)[1]
if ext and (ext in ['.pyc','.pyo'] or bestand in ['__init__.py']):
continue
files2return.append([os.path.join(root,bestand),os.path.join(rootinplugin,bestand)])
return files2return
|
PypiClean
|
/sagemath-standard-10.0b0.tar.gz/sagemath-standard-10.0b0/sage/categories/graded_modules.py
|
r"""
Graded modules
"""
# ****************************************************************************
# Copyright (C) 2008 Teresa Gomez-Diaz (CNRS) <[email protected]>
# 2008-2013 Nicolas M. Thiery <nthiery at users.sf.net>
#
# Distributed under the terms of the GNU General Public License (GPL)
# https://www.gnu.org/licenses/
# *****************************************************************************
from sage.categories.category import Category
from sage.categories.category_types import Category_over_base_ring
from sage.categories.covariant_functorial_construction import RegressiveCovariantConstructionCategory
class GradedModulesCategory(RegressiveCovariantConstructionCategory, Category_over_base_ring):
def __init__(self, base_category):
"""
EXAMPLES::
sage: C = GradedAlgebras(QQ)
sage: C
Category of graded algebras over Rational Field
sage: C.base_category()
Category of algebras over Rational Field
sage: sorted(C.super_categories(), key=str)
[Category of filtered algebras over Rational Field,
Category of graded vector spaces over Rational Field]
sage: AlgebrasWithBasis(QQ).Graded().base_ring()
Rational Field
sage: GradedHopfAlgebrasWithBasis(QQ).base_ring()
Rational Field
TESTS::
sage: GradedModules(ZZ)
Category of graded modules over Integer Ring
sage: Modules(ZZ).Graded()
Category of graded modules over Integer Ring
sage: GradedModules(ZZ) is Modules(ZZ).Graded()
True
"""
super().__init__(base_category, base_category.base_ring())
_functor_category = "Graded"
def _repr_object_names(self):
"""
EXAMPLES::
sage: AlgebrasWithBasis(QQ).Graded() # indirect doctest
Category of graded algebras with basis over Rational Field
"""
return "graded {}".format(self.base_category()._repr_object_names())
@classmethod
def default_super_categories(cls, category, *args):
r"""
Return the default super categories of ``category.Graded()``.
Mathematical meaning: every graded object (module, algebra,
etc.) is a filtered object with the (implicit) filtration
defined by `F_i = \bigoplus_{j \leq i} G_j`.
INPUT:
- ``cls`` -- the class ``GradedModulesCategory``
- ``category`` -- a category
OUTPUT: a (join) category
In practice, this returns ``category.Filtered()``, joined
together with the result of the method
:meth:`RegressiveCovariantConstructionCategory.default_super_categories() <sage.categories.covariant_functorial_construction.RegressiveCovariantConstructionCategory.default_super_categories>`
(that is the join of ``category.Filtered()`` and ``cat`` for
each ``cat`` in the super categories of ``category``).
EXAMPLES:
Consider ``category=Algebras()``, which has ``cat=Modules()``
as super category. Then, a grading of an algebra `G`
is also a filtration of `G`::
sage: Algebras(QQ).Graded().super_categories()
[Category of filtered algebras over Rational Field,
Category of graded vector spaces over Rational Field]
This resulted from the following call::
sage: sage.categories.graded_modules.GradedModulesCategory.default_super_categories(Algebras(QQ))
Join of Category of filtered algebras over Rational Field
and Category of graded vector spaces over Rational Field
"""
cat = super().default_super_categories(category, *args)
return Category.join([category.Filtered(), cat])
class GradedModules(GradedModulesCategory):
r"""
The category of graded modules.
We consider every graded module `M = \bigoplus_i M_i` as a
filtered module under the (natural) filtration given by
.. MATH::
F_i = \bigoplus_{j < i} M_j.
EXAMPLES::
sage: GradedModules(ZZ)
Category of graded modules over Integer Ring
sage: GradedModules(ZZ).super_categories()
[Category of filtered modules over Integer Ring]
The category of graded modules defines the graded structure which
shall be preserved by morphisms::
sage: Modules(ZZ).Graded().additional_structure()
Category of graded modules over Integer Ring
TESTS::
sage: TestSuite(GradedModules(ZZ)).run()
"""
class ParentMethods:
pass
class ElementMethods:
pass
|
PypiClean
|
/MGP_SDK-1.1.1.tar.gz/MGP_SDK-1.1.1/src/MGP_SDK/OGC_Spec/wms.py
|
import requests
import MGP_SDK.process as process
from MGP_SDK.auth.auth import Auth
class WMS:
def __init__(self, auth: Auth, endpoint):
self.auth = auth
self.version = auth.version
self.api_version = auth.api_version
self.endpoint = endpoint
if self.endpoint == 'streaming':
self.base_url = f'{self.auth.api_base_url}/streaming/{self.api_version}/ogc/ows'
elif self.endpoint == 'basemaps':
self.base_url = f'{self.auth.api_base_url}/basemaps/{self.api_version}/seamlines/ows'
elif self.endpoint == 'vector':
self.base_url = f'{self.auth.api_base_url}/analytics/{self.api_version}/vector/change-detection/Maxar/ows'
# TODO Handle raster endpoint
self.querystring = self._init_querystring(None)
self.token = self.auth.refresh_token()
self.authorization = {"Authorization": f"Bearer {self.token}"}
def return_image(self, **kwargs):
"""
Function finds the imagery matching a bbox or feature id
Kwargs:
bbox (string) = Bounding box of AOI. Comma delimited set of coordinates. (miny,minx,maxy,maxx)
filter (string) = CQL filter used to refine data of search.
height (int) = The vertical number of pixels to return
width (int) = The horizontal number of pixels to return
layers (string) = The desired layer. Defaults to 'DigitalGlobe:Imagery'
format (string) = The desired format of the response image either jpeg, png or geotiff
featureprofile (string) = The desired stacking profile. Defaults to account Default
Returns:
requests response object of desired image
"""
layers = kwargs['layers'] if 'layers' in kwargs else None
querystring = self._init_querystring(layers)
querystring.update({'format': kwargs['format']})
keys = list(kwargs.keys())
if 'bbox' in keys:
process._validate_bbox(kwargs['bbox'], srsname=kwargs['srsname'])
if kwargs['srsname'] == "EPSG:4326":
bbox_list = kwargs['bbox'].split(',')
kwargs['bbox'] = ",".join([bbox_list[0], bbox_list[1], bbox_list[2], bbox_list[3]])
else:
bbox_list = [i for i in kwargs['bbox'].split(',')]
kwargs['bbox'] = ",".join([bbox_list[1], bbox_list[0], bbox_list[3], bbox_list[2]])
querystring['crs'] = kwargs['srsname']
querystring.update({'bbox': kwargs['bbox']})
else:
raise Exception('Search function must have a BBOX.')
if 'filter' in keys:
# process.cql_checker(kwargs['filter'])
querystring.update({'cql_filter': kwargs['filter']})
del (kwargs['filter'])
if 'request' in keys:
if kwargs['request'] == 'GetCapabilities':
querystring.update({'request': kwargs['request']})
for item in kwargs.keys():
del kwargs[item]
for key, value in kwargs.items():
querystring[key] = value
request = requests.get(self.base_url, headers=self.authorization, params=querystring, verify=self.auth.SSL)
return process._response_handler(request)
def _init_querystring(self, layers):
if layers is None:
if self.endpoint == 'streaming':
layers = 'Maxar:Imagery'
elif self.endpoint == 'basemaps':
layers = 'Maxar:seamline'
# TODO Handle raster / vector
querystring = {'service': 'WMS',
'request': 'GetMap',
'version': '1.3.0',
'transparent': 'true',
'crs': 'EPSG:4326',
'height': '512',
'width': '512',
'layers': layers,
'format': 'image/jpeg',
'tiled': 'true',
'SDKversion': '{}'.format(self.version)
}
return querystring
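# --- Usage sketch (editor's addition, not part of the original module) ------
# A hypothetical GetMap call against the streaming endpoint, mirroring the
# kwargs documented in return_image() above. The Auth object, bounding box
# and output file name are placeholders, and this assumes
# process._response_handler() hands back the underlying requests response.
def _demo_get_map(auth: Auth):
    wms = WMS(auth, 'streaming')
    response = wms.return_image(
        bbox='39.84387,-105.05608,39.95133,-104.94827',
        srsname='EPSG:4326',
        format='image/jpeg',
        height=512,
        width=512,
    )
    with open('demo.jpg', 'wb') as out:
        out.write(response.content)
    return response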
|
PypiClean
|
/gitsome-0.8.4.tar.gz/gitsome-0.8.4/xonsh/inspectors.py
|
import os
import io
import sys
import types
import inspect
import itertools
import linecache
import collections
from xonsh.lazyasd import LazyObject
from xonsh.tokenize import detect_encoding
from xonsh.openpy import read_py_file
from xonsh.tools import cast_unicode, safe_hasattr, indent, print_color, format_color
from xonsh.platform import HAS_PYGMENTS, PYTHON_VERSION_INFO
from xonsh.lazyimps import pygments, pyghooks
from xonsh.style_tools import partial_color_tokenize
# builtin docstrings to ignore
_func_call_docstring = LazyObject(
lambda: types.FunctionType.__call__.__doc__, globals(), "_func_call_docstring"
)
_object_init_docstring = LazyObject(
lambda: object.__init__.__doc__, globals(), "_object_init_docstring"
)
_builtin_type_docstrings = LazyObject(
lambda: {
t.__doc__ for t in (types.ModuleType, types.MethodType, types.FunctionType)
},
globals(),
"_builtin_type_docstrings",
)
_builtin_func_type = LazyObject(lambda: type(all), globals(), "_builtin_func_type")
# Bound methods have the same type as builtin functions
_builtin_meth_type = LazyObject(
lambda: type(str.upper), globals(), "_builtin_meth_type"
)
info_fields = LazyObject(
lambda: [
"type_name",
"base_class",
"string_form",
"namespace",
"length",
"file",
"definition",
"docstring",
"source",
"init_definition",
"class_docstring",
"init_docstring",
"call_def",
"call_docstring",
# These won't be printed but will be used to determine how to
# format the object
"ismagic",
"isalias",
"isclass",
"argspec",
"found",
"name",
],
globals(),
"info_fields",
)
def object_info(**kw):
"""Make an object info dict with all fields present."""
infodict = dict(itertools.zip_longest(info_fields, [None]))
infodict.update(kw)
return infodict
def get_encoding(obj):
"""Get encoding for python source file defining obj
Returns None if obj is not defined in a sourcefile.
"""
ofile = find_file(obj)
# run contents of file through pager starting at line where the object
# is defined, as long as the file isn't binary and is actually on the
# filesystem.
if ofile is None:
return None
elif ofile.endswith((".so", ".dll", ".pyd")):
return None
elif not os.path.isfile(ofile):
return None
else:
# Print only text files, not extension binaries. Note that
# getsourcelines returns lineno with 1-offset and page() uses
# 0-offset, so we must adjust.
with io.open(ofile, "rb") as buf: # Tweaked to use io.open for Python 2
encoding, _ = detect_encoding(buf.readline)
return encoding
def getdoc(obj):
"""Stable wrapper around inspect.getdoc.
This can't crash because of attribute problems.
It also attempts to call a getdoc() method on the given object. This
allows objects which provide their docstrings via non-standard mechanisms
(like Pyro proxies) to still be inspected by ipython's ? system."""
# Allow objects to offer customized documentation via a getdoc method:
try:
ds = obj.getdoc()
except Exception: # pylint:disable=broad-except
pass
else:
# if we get extra info, we add it to the normal docstring.
if isinstance(ds, str):
return inspect.cleandoc(ds)
try:
docstr = inspect.getdoc(obj)
encoding = get_encoding(obj)
return cast_unicode(docstr, encoding=encoding)
except Exception: # pylint:disable=broad-except
# Harden against an inspect failure, which can occur with
# SWIG-wrapped extensions.
raise
def getsource(obj, is_binary=False):
"""Wrapper around inspect.getsource.
This can be modified by other projects to provide customized source
extraction.
Inputs:
- obj: an object whose source code we will attempt to extract.
Optional inputs:
- is_binary: whether the object is known to come from a binary source.
This implementation will skip returning any output for binary objects,
but custom extractors may know how to meaningfully process them."""
if is_binary:
return None
else:
# get source if obj was decorated with @decorator
if hasattr(obj, "__wrapped__"):
obj = obj.__wrapped__
try:
src = inspect.getsource(obj)
except TypeError:
if hasattr(obj, "__class__"):
src = inspect.getsource(obj.__class__)
encoding = get_encoding(obj)
return cast_unicode(src, encoding=encoding)
def is_simple_callable(obj):
    """True if obj is a simple callable: a function, method, or builtin."""
return (
inspect.isfunction(obj)
or inspect.ismethod(obj)
or isinstance(obj, _builtin_func_type)
or isinstance(obj, _builtin_meth_type)
)
def getargspec(obj):
"""Wrapper around :func:`inspect.getfullargspec` on Python 3, and
    :func:`inspect.getargspec` on Python 2.
In addition to functions and methods, this can also handle objects with a
``__call__`` attribute.
"""
if safe_hasattr(obj, "__call__") and not is_simple_callable(obj):
obj = obj.__call__
return inspect.getfullargspec(obj)
def format_argspec(argspec):
    """Format argspec, a convenience wrapper around inspect's formatargspec.
This takes a dict instead of ordered arguments and calls
inspect.format_argspec with the arguments in the necessary order.
"""
return inspect.formatargspec(
argspec["args"], argspec["varargs"], argspec["varkw"], argspec["defaults"]
)
def call_tip(oinfo, format_call=True):
"""Extract call tip data from an oinfo dict.
Parameters
----------
oinfo : dict
format_call : bool, optional
If True, the call line is formatted and returned as a string. If not, a
tuple of (name, argspec) is returned.
Returns
-------
call_info : None, str or (str, dict) tuple.
When format_call is True, the whole call information is formatted as a
single string. Otherwise, the object's name and its argspec dict are
returned. If no call information is available, None is returned.
docstring : str or None
The most relevant docstring for calling purposes is returned, if
available. The priority is: call docstring for callable instances, then
constructor docstring for classes, then main object's docstring otherwise
(regular functions).
"""
# Get call definition
argspec = oinfo.get("argspec")
if argspec is None:
call_line = None
else:
# Callable objects will have 'self' as their first argument, prune
# it out if it's there for clarity (since users do *not* pass an
# extra first argument explicitly).
try:
has_self = argspec["args"][0] == "self"
except (KeyError, IndexError):
pass
else:
if has_self:
argspec["args"] = argspec["args"][1:]
call_line = oinfo["name"] + format_argspec(argspec)
# Now get docstring.
# The priority is: call docstring, constructor docstring, main one.
doc = oinfo.get("call_docstring")
if doc is None:
doc = oinfo.get("init_docstring")
if doc is None:
doc = oinfo.get("docstring", "")
return call_line, doc
def find_file(obj):
"""Find the absolute path to the file where an object was defined.
This is essentially a robust wrapper around `inspect.getabsfile`.
Returns None if no file can be found.
Parameters
----------
obj : any Python object
Returns
-------
fname : str
The absolute path to the file where the object was defined.
"""
# get source if obj was decorated with @decorator
if safe_hasattr(obj, "__wrapped__"):
obj = obj.__wrapped__
fname = None
try:
fname = inspect.getabsfile(obj)
except TypeError:
# For an instance, the file that matters is where its class was
# declared.
if hasattr(obj, "__class__"):
try:
fname = inspect.getabsfile(obj.__class__)
except TypeError:
# Can happen for builtins
pass
except: # pylint:disable=bare-except
pass
return cast_unicode(fname)
def find_source_lines(obj):
"""Find the line number in a file where an object was defined.
This is essentially a robust wrapper around `inspect.getsourcelines`.
Returns None if no file can be found.
Parameters
----------
obj : any Python object
Returns
-------
lineno : int
The line number where the object definition starts.
"""
# get source if obj was decorated with @decorator
if safe_hasattr(obj, "__wrapped__"):
obj = obj.__wrapped__
try:
try:
lineno = inspect.getsourcelines(obj)[1]
except TypeError:
# For instances, try the class object like getsource() does
if hasattr(obj, "__class__"):
lineno = inspect.getsourcelines(obj.__class__)[1]
else:
lineno = None
except: # pylint:disable=bare-except
return None
return lineno
if PYTHON_VERSION_INFO < (3, 5, 0):
FrameInfo = collections.namedtuple(
"FrameInfo",
["frame", "filename", "lineno", "function", "code_context", "index"],
)
def getouterframes(frame, context=1):
"""Wrapper for getouterframes so that it acts like the Python v3.5 version."""
return [FrameInfo(*f) for f in inspect.getouterframes(frame, context=context)]
else:
getouterframes = inspect.getouterframes
class Inspector(object):
"""Inspects objects."""
def __init__(self, str_detail_level=0):
self.str_detail_level = str_detail_level
def _getdef(self, obj, oname=""):
"""Return the call signature for any callable object.
If any exception is generated, None is returned instead and the
exception is suppressed.
"""
try:
            hdef = oname + inspect.formatargspec(*getargspec(obj))
return cast_unicode(hdef)
except: # pylint:disable=bare-except
return None
def noinfo(self, msg, oname):
"""Generic message when no information is found."""
print("No %s found" % msg, end=" ")
if oname:
print("for %s" % oname)
else:
print()
def pdef(self, obj, oname=""):
"""Print the call signature for any callable object.
If the object is a class, print the constructor information.
"""
if not callable(obj):
print("Object is not callable.")
return
header = ""
if inspect.isclass(obj):
header = self.__head("Class constructor information:\n")
obj = obj.__init__
output = self._getdef(obj, oname)
if output is None:
self.noinfo("definition header", oname)
else:
print(header, output, end=" ", file=sys.stdout)
def pdoc(self, obj, oname=""):
"""Print the docstring for any object.
Optional
-formatter: a function to run the docstring through for specially
formatted docstrings.
"""
head = self.__head # For convenience
lines = []
ds = getdoc(obj)
if ds:
lines.append(head("Class docstring:"))
lines.append(indent(ds))
if inspect.isclass(obj) and hasattr(obj, "__init__"):
init_ds = getdoc(obj.__init__)
if init_ds is not None:
lines.append(head("Init docstring:"))
lines.append(indent(init_ds))
elif hasattr(obj, "__call__"):
call_ds = getdoc(obj.__call__)
if call_ds:
lines.append(head("Call docstring:"))
lines.append(indent(call_ds))
if not lines:
self.noinfo("documentation", oname)
else:
print("\n".join(lines))
def psource(self, obj, oname=""):
"""Print the source code for an object."""
# Flush the source cache because inspect can return out-of-date source
linecache.checkcache()
try:
src = getsource(obj)
except: # pylint:disable=bare-except
self.noinfo("source", oname)
else:
print(src)
def pfile(self, obj, oname=""):
"""Show the whole file where an object was defined."""
lineno = find_source_lines(obj)
if lineno is None:
self.noinfo("file", oname)
return
ofile = find_file(obj)
# run contents of file through pager starting at line where the object
# is defined, as long as the file isn't binary and is actually on the
# filesystem.
if ofile.endswith((".so", ".dll", ".pyd")):
print("File %r is binary, not printing." % ofile)
elif not os.path.isfile(ofile):
print("File %r does not exist, not printing." % ofile)
else:
# Print only text files, not extension binaries. Note that
# getsourcelines returns lineno with 1-offset and page() uses
# 0-offset, so we must adjust.
o = read_py_file(ofile, skip_encoding_cookie=False)
print(o, lineno - 1)
def _format_fields_str(self, fields, title_width=0):
"""Formats a list of fields for display using color strings.
Parameters
----------
fields : list
A list of 2-tuples: (field_title, field_content)
title_width : int
How many characters to pad titles to. Default to longest title.
"""
out = []
if title_width == 0:
title_width = max(len(title) + 2 for title, _ in fields)
for title, content in fields:
title_len = len(title)
title = "{BOLD_RED}" + title + ":{NO_COLOR}"
if len(content.splitlines()) > 1:
title += "\n"
else:
title += " ".ljust(title_width - title_len)
out.append(cast_unicode(title) + cast_unicode(content))
return format_color("\n".join(out) + "\n")
def _format_fields_tokens(self, fields, title_width=0):
"""Formats a list of fields for display using color tokens from
pygments.
Parameters
----------
fields : list
A list of 2-tuples: (field_title, field_content)
title_width : int
How many characters to pad titles to. Default to longest title.
"""
out = []
if title_width == 0:
title_width = max(len(title) + 2 for title, _ in fields)
for title, content in fields:
title_len = len(title)
title = "{BOLD_RED}" + title + ":{NO_COLOR}"
if not isinstance(content, str) or len(content.splitlines()) > 1:
title += "\n"
else:
title += " ".ljust(title_width - title_len)
out += partial_color_tokenize(title)
if isinstance(content, str):
out[-1] = (out[-1][0], out[-1][1] + content + "\n")
else:
out += content
out[-1] = (out[-1][0], out[-1][1] + "\n")
out[-1] = (out[-1][0], out[-1][1] + "\n")
return out
def _format_fields(self, fields, title_width=0):
"""Formats a list of fields for display using color tokens from
pygments.
Parameters
----------
fields : list
A list of 2-tuples: (field_title, field_content)
title_width : int
How many characters to pad titles to. Default to longest title.
"""
if HAS_PYGMENTS:
rtn = self._format_fields_tokens(fields, title_width=title_width)
else:
rtn = self._format_fields_str(fields, title_width=title_width)
return rtn
# The fields to be displayed by pinfo: (fancy_name, key_in_info_dict)
pinfo_fields1 = [("Type", "type_name")]
pinfo_fields2 = [("String form", "string_form")]
pinfo_fields3 = [
("Length", "length"),
("File", "file"),
("Definition", "definition"),
]
pinfo_fields_obj = [
("Class docstring", "class_docstring"),
("Init docstring", "init_docstring"),
("Call def", "call_def"),
("Call docstring", "call_docstring"),
]
def pinfo(self, obj, oname="", info=None, detail_level=0):
"""Show detailed information about an object.
Parameters
----------
obj : object
oname : str, optional
name of the variable pointing to the object.
info : dict, optional
a structure with some information fields which may have been
precomputed already.
detail_level : int, optional
if set to 1, more information is given.
"""
info = self.info(obj, oname=oname, info=info, detail_level=detail_level)
displayfields = []
def add_fields(fields):
for title, key in fields:
field = info[key]
if field is not None:
displayfields.append((title, field.rstrip()))
add_fields(self.pinfo_fields1)
add_fields(self.pinfo_fields2)
# Namespace
if info["namespace"] is not None and info["namespace"] != "Interactive":
displayfields.append(("Namespace", info["namespace"].rstrip()))
add_fields(self.pinfo_fields3)
if info["isclass"] and info["init_definition"]:
displayfields.append(("Init definition", info["init_definition"].rstrip()))
# Source or docstring, depending on detail level and whether
# source found.
if detail_level > 0 and info["source"] is not None:
displayfields.append(("Source", cast_unicode(info["source"])))
elif info["docstring"] is not None:
displayfields.append(("Docstring", info["docstring"]))
# Constructor info for classes
if info["isclass"]:
if info["init_docstring"] is not None:
displayfields.append(("Init docstring", info["init_docstring"]))
# Info for objects:
else:
add_fields(self.pinfo_fields_obj)
# Finally send to printer/pager:
if displayfields:
print_color(self._format_fields(displayfields))
def info(self, obj, oname="", info=None, detail_level=0):
"""Compute a dict with detailed information about an object.
Optional arguments:
- oname: name of the variable pointing to the object.
- info: a structure with some information fields which may have been
precomputed already.
- detail_level: if set to 1, more information is given.
"""
obj_type = type(obj)
if info is None:
ismagic = 0
isalias = 0
ospace = ""
else:
ismagic = info.ismagic
isalias = info.isalias
ospace = info.namespace
# Get docstring, special-casing aliases:
if isalias:
if not callable(obj):
if len(obj) >= 2 and isinstance(obj[1], str):
ds = "Alias to the system command:\n {0}".format(obj[1])
else: # pylint:disable=bare-except
ds = "Alias: " + str(obj)
else:
ds = "Alias to " + str(obj)
if obj.__doc__:
ds += "\nDocstring:\n" + obj.__doc__
else:
ds = getdoc(obj)
if ds is None:
ds = "<no docstring>"
# store output in a dict, we initialize it here and fill it as we go
out = dict(name=oname, found=True, isalias=isalias, ismagic=ismagic)
string_max = 200 # max size of strings to show (snipped if longer)
shalf = int((string_max - 5) / 2)
if ismagic:
obj_type_name = "Magic function"
elif isalias:
obj_type_name = "System alias"
else:
obj_type_name = obj_type.__name__
out["type_name"] = obj_type_name
try:
bclass = obj.__class__
out["base_class"] = str(bclass)
except: # pylint:disable=bare-except
pass
# String form, but snip if too long in ? form (full in ??)
if detail_level >= self.str_detail_level:
try:
ostr = str(obj)
str_head = "string_form"
if not detail_level and len(ostr) > string_max:
ostr = ostr[:shalf] + " <...> " + ostr[-shalf:]
ostr = ("\n" + " " * len(str_head.expandtabs())).join(
q.strip() for q in ostr.split("\n")
)
out[str_head] = ostr
except: # pylint:disable=bare-except
pass
if ospace:
out["namespace"] = ospace
# Length (for strings and lists)
try:
out["length"] = str(len(obj))
except: # pylint:disable=bare-except
pass
# Filename where object was defined
binary_file = False
fname = find_file(obj)
if fname is None:
# if anything goes wrong, we don't want to show source, so it's as
# if the file was binary
binary_file = True
else:
if fname.endswith((".so", ".dll", ".pyd")):
binary_file = True
elif fname.endswith("<string>"):
fname = "Dynamically generated function. " "No source code available."
out["file"] = fname
# Docstrings only in detail 0 mode, since source contains them (we
# avoid repetitions). If source fails, we add them back, see below.
if ds and detail_level == 0:
out["docstring"] = ds
# Original source code for any callable
if detail_level:
# Flush the source cache because inspect can return out-of-date
# source
linecache.checkcache()
source = None
try:
try:
source = getsource(obj, binary_file)
except TypeError:
if hasattr(obj, "__class__"):
source = getsource(obj.__class__, binary_file)
if source is not None:
source = source.rstrip()
if HAS_PYGMENTS:
lexer = pyghooks.XonshLexer()
source = list(pygments.lex(source, lexer=lexer))
out["source"] = source
except Exception: # pylint:disable=broad-except
pass
if ds and source is None:
out["docstring"] = ds
# Constructor docstring for classes
if inspect.isclass(obj):
out["isclass"] = True
# reconstruct the function definition and print it:
try:
obj_init = obj.__init__
except AttributeError:
init_def = init_ds = None
else:
init_def = self._getdef(obj_init, oname)
init_ds = getdoc(obj_init)
# Skip Python's auto-generated docstrings
if init_ds == _object_init_docstring:
init_ds = None
if init_def or init_ds:
if init_def:
out["init_definition"] = init_def
if init_ds:
out["init_docstring"] = init_ds
# and class docstring for instances:
else:
# reconstruct the function definition and print it:
defln = self._getdef(obj, oname)
if defln:
out["definition"] = defln
# First, check whether the instance docstring is identical to the
# class one, and print it separately if they don't coincide. In
# most cases they will, but it's nice to print all the info for
# objects which use instance-customized docstrings.
if ds:
try:
cls = getattr(obj, "__class__")
except: # pylint:disable=bare-except
class_ds = None
else:
class_ds = getdoc(cls)
# Skip Python's auto-generated docstrings
if class_ds in _builtin_type_docstrings:
class_ds = None
if class_ds and ds != class_ds:
out["class_docstring"] = class_ds
# Next, try to show constructor docstrings
try:
init_ds = getdoc(obj.__init__)
# Skip Python's auto-generated docstrings
if init_ds == _object_init_docstring:
init_ds = None
except AttributeError:
init_ds = None
if init_ds:
out["init_docstring"] = init_ds
# Call form docstring for callable instances
if safe_hasattr(obj, "__call__") and not is_simple_callable(obj):
call_def = self._getdef(obj.__call__, oname)
if call_def:
call_def = call_def
# it may never be the case that call def and definition
# differ, but don't include the same signature twice
if call_def != out.get("definition"):
out["call_def"] = call_def
call_ds = getdoc(obj.__call__)
# Skip Python's auto-generated docstrings
if call_ds == _func_call_docstring:
call_ds = None
if call_ds:
out["call_docstring"] = call_ds
# Compute the object's argspec as a callable. The key is to decide
# whether to pull it from the object itself, from its __init__ or
# from its __call__ method.
if inspect.isclass(obj):
# Old-style classes need not have an __init__
callable_obj = getattr(obj, "__init__", None)
elif callable(obj):
callable_obj = obj
else:
callable_obj = None
if callable_obj:
try:
argspec = getargspec(callable_obj)
except (TypeError, AttributeError):
# For extensions/builtins we can't retrieve the argspec
pass
else:
                # named tuples' _asdict() method returns an OrderedDict,
                # but we want a normal dict
out["argspec"] = argspec_dict = dict(argspec._asdict())
# We called this varkw before argspec became a named tuple.
# With getfullargspec it's also called varkw.
if "varkw" not in argspec_dict:
argspec_dict["varkw"] = argspec_dict.pop("keywords")
return object_info(**out)
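# ---------------------------------------------------------------------------
# Hypothetical usage sketch added for illustration; it is not part of the
# original module. It drives the Inspector defined above directly, roughly the
# way xonsh's `?` / `??` help operators do; colored output may require an
# initialized xonsh/gitsome session.
def _example_inspect(target=len, name="len"):
    """Print type, signature and docstring details for `target`."""
    ins = Inspector()
    ins.pinfo(target, oname=name)                    # summary view (like `obj?`)
    ins.pinfo(target, oname=name, detail_level=1)    # with source when available
    return ins.info(target, oname=name)              # raw info dict, see object_info()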
|
PypiClean
|
/pyxmpp-last-1.1.1.tar.gz/pyxmpp-last-1.1.1/pyxmpp/xmppstringprep.py
|
__revision__="$Id: xmppstringprep.py,v 1.16 2004/10/07 22:28:04 jajcus Exp $"
__docformat__="restructuredtext en"
import stringprep
import unicodedata
from pyxmpp.exceptions import StringprepError
class LookupFunction:
"""Class for looking up RFC 3454 tables using function.
:Ivariables:
- `lookup`: the lookup function."""
def __init__(self,function):
"""Initialize `LookupFunction` object.
:Parameters:
            - `function`: function taking a character code as input and returning
              a `bool` value or the mapped value for that code."""
self.lookup=function
class LookupTable:
"""Class for looking up RFC 3454 tables using a dictionary and/or list of ranges."""
def __init__(self,singles,ranges):
"""Initialize `LookupTable` object.
:Parameters:
- `singles`: dictionary mapping Unicode characters into other Unicode characters.
- `ranges`: list of ``((start,end),value)`` tuples mapping codes in range (start,end)
to the value."""
self.singles=singles
self.ranges=ranges
def lookup(self,c):
"""Do Unicode character lookup.
:Parameters:
- `c`: Unicode character to look up.
:return: the mapped value."""
if self.singles.has_key(c):
return self.singles[c]
c=ord(c)
for (start,end),value in self.ranges:
if c<start:
return None
if c<=end:
return value
return None
A_1=LookupFunction(stringprep.in_table_a1)
def b1_mapping(uc):
"""Do RFC 3454 B.1 table mapping.
:Parameters:
- `uc`: Unicode character to map.
    :returns: u"" if `uc` is in the table, `None` otherwise."""
if stringprep.in_table_b1(uc):
return u""
else:
return None
B_1=LookupFunction(b1_mapping)
B_2=LookupFunction(stringprep.map_table_b2)
B_3=LookupFunction(stringprep.map_table_b3)
C_1_1=LookupFunction(stringprep.in_table_c11)
C_1_2=LookupFunction(stringprep.in_table_c12)
C_2_1=LookupFunction(stringprep.in_table_c21)
C_2_2=LookupFunction(stringprep.in_table_c22)
C_3=LookupFunction(stringprep.in_table_c3)
C_4=LookupFunction(stringprep.in_table_c4)
C_5=LookupFunction(stringprep.in_table_c5)
C_6=LookupFunction(stringprep.in_table_c6)
C_7=LookupFunction(stringprep.in_table_c7)
C_8=LookupFunction(stringprep.in_table_c8)
C_9=LookupFunction(stringprep.in_table_c9)
D_1=LookupFunction(stringprep.in_table_d1)
D_2=LookupFunction(stringprep.in_table_d2)
def nfkc(data):
"""Do NFKC normalization of Unicode data.
:Parameters:
- `data`: list of Unicode characters or Unicode string.
:return: normalized Unicode string."""
if type(data) is list:
data=u"".join(data)
return unicodedata.normalize("NFKC",data)
class Profile:
"""Base class for stringprep profiles."""
cache_items=[]
def __init__(self,unassigned,mapping,normalization,prohibited,bidi=1):
"""Initialize Profile object.
:Parameters:
- `unassigned`: the lookup table with unassigned codes
- `mapping`: the lookup table with character mappings
- `normalization`: the normalization function
- `prohibited`: the lookup table with prohibited characters
- `bidi`: if True then bidirectional checks should be done
"""
self.unassigned=unassigned
self.mapping=mapping
self.normalization=normalization
self.prohibited=prohibited
self.bidi=bidi
self.cache={}
def prepare(self,data):
"""Complete string preparation procedure for 'stored' strings.
(includes checks for unassigned codes)
:Parameters:
- `data`: Unicode string to prepare.
:return: prepared string
:raise StringprepError: if the preparation fails
"""
r=self.cache.get(data)
if r is not None:
return r
s=self.map(data)
if self.normalization:
s=self.normalization(s)
s=self.prohibit(s)
s=self.check_unassigned(s)
if self.bidi:
s=self.check_bidi(s)
if type(s) is list:
            s=u"".join(s)
if len(self.cache_items)>=stringprep_cache_size:
remove=self.cache_items[:-stringprep_cache_size/2]
for profile,key in remove:
try:
del profile.cache[key]
except KeyError:
pass
self.cache_items[:]=self.cache_items[-stringprep_cache_size/2:]
self.cache_items.append((self,data))
self.cache[data]=s
return s
def prepare_query(self,s):
"""Complete string preparation procedure for 'query' strings.
(without checks for unassigned codes)
:Parameters:
- `s`: Unicode string to prepare.
:return: prepared string
:raise StringprepError: if the preparation fails
"""
s=self.map(s)
if self.normalization:
s=self.normalization(s)
s=self.prohibit(s)
if self.bidi:
s=self.check_bidi(s)
if type(s) is list:
            s=u"".join(s)
return s
def map(self,s):
"""Mapping part of string preparation."""
r=[]
for c in s:
rc=None
for t in self.mapping:
rc=t.lookup(c)
if rc is not None:
break
if rc is not None:
r.append(rc)
else:
r.append(c)
return r
def prohibit(self,s):
"""Checks for prohibited characters."""
for c in s:
for t in self.prohibited:
if t.lookup(c):
raise StringprepError,"Prohibited character: %r" % (c,)
return s
def check_unassigned(self,s):
"""Checks for unassigned character codes."""
for c in s:
for t in self.unassigned:
if t.lookup(c):
raise StringprepError,"Unassigned character: %r" % (c,)
return s
    def check_bidi(self,s):
        """Checks if string is valid for bidirectional printing."""
has_l=0
has_ral=0
for c in s:
if D_1.lookup(c):
has_ral=1
elif D_2.lookup(c):
has_l=1
if has_l and has_ral:
raise StringprepError,"Both RandALCat and LCat characters present"
        if has_ral and (D_1.lookup(s[0]) is None or D_1.lookup(s[-1]) is None):
raise StringprepError,"The first and the last character must be RandALCat"
return s
nodeprep=Profile(
unassigned=(A_1,),
mapping=(B_1,B_2),
normalization=nfkc,
prohibited=(C_1_1,C_1_2,C_2_1,C_2_2,C_3,C_4,C_5,C_6,C_7,C_8,C_9,
LookupTable({u'"':True,u'&':True,u"'":True,u"/":True,
u":":True,u"<":True,u">":True,u"@":True},()) ),
bidi=1)
resourceprep=Profile(
unassigned=(A_1,),
mapping=(B_1,),
normalization=nfkc,
prohibited=(C_1_2,C_2_1,C_2_2,C_3,C_4,C_5,C_6,C_7,C_8,C_9),
bidi=1)
stringprep_cache_size=1000
def set_stringprep_cache_size(size):
"""Modify stringprep cache size.
:Parameters:
- `size`: new cache size"""
global stringprep_cache_size
stringprep_cache_size=size
if len(Profile.cache_items)>size:
remove=Profile.cache_items[:-size]
for profile,key in remove:
try:
del profile.cache[key]
except KeyError:
pass
Profile.cache_items=Profile.cache_items[-size:]
# vi: sts=4 et sw=4
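# ---------------------------------------------------------------------------
# Hypothetical usage sketch added for illustration; it is not part of the
# original module. The profiles above implement the XMPP nodeprep and
# resourceprep stringprep profiles (Python 2 era code), roughly used as:
#
#     from pyxmpp.xmppstringprep import nodeprep, resourceprep
#     node = nodeprep.prepare(u"Alice")        # case-folded / NFKC-normalized node part
#     res = resourceprep.prepare(u"Home PC")   # resource part keeps its case
#     # prepare() raises StringprepError for prohibited or unassigned characters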
|
PypiClean
|
/geezlibs1-2.0.0-py3-none-any.whl/geezlibs/types/inline_mode/inline_query_result_cached_audio.py
|
from typing import Optional, List
import geezlibs
from geezlibs import raw, types, utils, enums
from .inline_query_result import InlineQueryResult
from ...file_id import FileId
class InlineQueryResultCachedAudio(InlineQueryResult):
"""A link to an MP3 audio file stored on the Telegram servers
By default, this audio file will be sent by the user. Alternatively, you can use *input_message_content* to send a
message with the specified content instead of the audio.
Parameters:
audio_file_id (``str``):
A valid file identifier for the audio file.
id (``str``, *optional*):
Unique identifier for this result, 1-64 bytes.
Defaults to a randomly generated UUID4.
caption (``str``, *optional*):
            Caption of the audio to be sent, 0-1024 characters.
parse_mode (:obj:`~geezlibs.enums.ParseMode`, *optional*):
By default, texts are parsed using both Markdown and HTML styles.
You can combine both syntaxes together.
caption_entities (List of :obj:`~geezlibs.types.MessageEntity`):
List of special entities that appear in the caption, which can be specified instead of *parse_mode*.
reply_markup (:obj:`~geezlibs.types.InlineKeyboardMarkup`, *optional*):
An InlineKeyboardMarkup object.
input_message_content (:obj:`~geezlibs.types.InputMessageContent`):
            Content of the message to be sent instead of the audio.
"""
def __init__(
self,
audio_file_id: str,
id: str = None,
caption: str = "",
parse_mode: Optional["enums.ParseMode"] = None,
caption_entities: List["types.MessageEntity"] = None,
reply_markup: "types.InlineKeyboardMarkup" = None,
input_message_content: "types.InputMessageContent" = None
):
super().__init__("audio", id, input_message_content, reply_markup)
self.audio_file_id = audio_file_id
self.caption = caption
self.parse_mode = parse_mode
self.caption_entities = caption_entities
self.reply_markup = reply_markup
self.input_message_content = input_message_content
async def write(self, client: "geezlibs.Client"):
message, entities = (await utils.parse_text_entities(
client, self.caption, self.parse_mode, self.caption_entities
)).values()
file_id = FileId.decode(self.audio_file_id)
return raw.types.InputBotInlineResultDocument(
id=self.id,
type=self.type,
document=raw.types.InputDocument(
id=file_id.media_id,
access_hash=file_id.access_hash,
file_reference=file_id.file_reference,
),
send_message=(
await self.input_message_content.write(client, self.reply_markup)
if self.input_message_content
else raw.types.InputBotInlineMessageMediaAuto(
reply_markup=await self.reply_markup.write(client) if self.reply_markup else None,
message=message,
entities=entities
)
)
)
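# ---------------------------------------------------------------------------
# Hypothetical usage sketch added for illustration; it is not part of the
# original module. It assumes the handler and answer APIs mirror Pyrogram's
# (geezlibs is a Pyrogram-style client) and that AUDIO_FILE_ID is a valid
# cached audio file identifier.
#
#     from geezlibs.types import InlineQueryResultCachedAudio
#
#     @app.on_inline_query()
#     async def answer_inline(client, inline_query):
#         await inline_query.answer([
#             InlineQueryResultCachedAudio(
#                 audio_file_id=AUDIO_FILE_ID,
#                 caption="My favourite track",
#             )
#         ])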
|
PypiClean
|
/seaplane.api-0.0.1.tar.gz/seaplane.api-0.0.1/seaplane/api/paths/endpoints_endpoint_request/post.py
|
from dataclasses import dataclass
import typing_extensions
import urllib3
from urllib3._collections import HTTPHeaderDict
from seaplane.api import api_client, exceptions
from datetime import date, datetime # noqa: F401
import decimal # noqa: F401
import functools # noqa: F401
import io # noqa: F401
import re # noqa: F401
import typing # noqa: F401
import typing_extensions # noqa: F401
import uuid # noqa: F401
import frozendict # noqa: F401
from seaplane.api import schemas # noqa: F401
from seaplane.api.model.error import Error
from . import path
# Header params
class XMetaDataSchema(schemas.DictSchema):
class MetaOapg:
class additional_properties(schemas.ListSchema):
class MetaOapg:
items = schemas.StrSchema
def __new__(
cls,
_arg: typing.Union[
typing.Tuple[
typing.Union[
MetaOapg.items,
str,
]
],
typing.List[
typing.Union[
MetaOapg.items,
str,
]
],
],
_configuration: typing.Optional[schemas.Configuration] = None,
) -> "additional_properties":
return super().__new__(
cls,
_arg,
_configuration=_configuration,
)
def __getitem__(self, i: int) -> MetaOapg.items:
return super().__getitem__(i)
def __getitem__(self, name: typing.Union[str,]) -> MetaOapg.additional_properties:
# dict_instance[name] accessor
return super().__getitem__(name)
def get_item_oapg(self, name: typing.Union[str,]) -> MetaOapg.additional_properties:
return super().get_item_oapg(name)
def __new__(
cls,
*_args: typing.Union[
dict,
frozendict.frozendict,
],
_configuration: typing.Optional[schemas.Configuration] = None,
**kwargs: typing.Union[
MetaOapg.additional_properties,
list,
tuple,
],
) -> "XMetaDataSchema":
return super().__new__(
cls,
*_args,
_configuration=_configuration,
**kwargs,
)
RequestRequiredHeaderParams = typing_extensions.TypedDict(
"RequestRequiredHeaderParams", {}
)
RequestOptionalHeaderParams = typing_extensions.TypedDict(
"RequestOptionalHeaderParams",
{
"X-Meta-Data": typing.Union[
XMetaDataSchema,
dict,
frozendict.frozendict,
],
},
total=False,
)
class RequestHeaderParams(RequestRequiredHeaderParams, RequestOptionalHeaderParams):
pass
request_header_x_meta_data = api_client.HeaderParameter(
name="X-Meta-Data",
style=api_client.ParameterStyle.SIMPLE,
schema=XMetaDataSchema,
explode=True,
)
# Path params
class EndpointSchema(schemas.StrSchema):
class MetaOapg:
max_length = 31
min_length = 1
regex = [
{
"pattern": r"^[a-z0-9]+(-[a-z0-9]+)*$", # noqa: E501
}
]
RequestRequiredPathParams = typing_extensions.TypedDict(
"RequestRequiredPathParams",
{
"endpoint": typing.Union[
EndpointSchema,
str,
],
},
)
RequestOptionalPathParams = typing_extensions.TypedDict(
"RequestOptionalPathParams", {}, total=False
)
class RequestPathParams(RequestRequiredPathParams, RequestOptionalPathParams):
pass
request_path_endpoint = api_client.PathParameter(
name="endpoint",
style=api_client.ParameterStyle.SIMPLE,
schema=EndpointSchema,
required=True,
)
# body param
SchemaForRequestBodyApplicationOctetStream = schemas.BinarySchema
request_body_body = api_client.RequestBody(
content={
"application/octet-stream": api_client.MediaType(
schema=SchemaForRequestBodyApplicationOctetStream
),
},
)
_auth = [
"BasicAuth",
]
class SchemaFor201ResponseBodyApplicationJson(schemas.DictSchema):
class MetaOapg:
class properties:
request_id = schemas.StrSchema
__annotations__ = {
"request_id": request_id,
}
@typing.overload
def __getitem__(
self, name: typing_extensions.Literal["request_id"]
) -> MetaOapg.properties.request_id:
...
@typing.overload
def __getitem__(self, name: str) -> schemas.UnsetAnyTypeSchema:
...
def __getitem__(
self, name: typing.Union[typing_extensions.Literal["request_id",], str]
):
# dict_instance[name] accessor
return super().__getitem__(name)
@typing.overload
def get_item_oapg(
self, name: typing_extensions.Literal["request_id"]
) -> typing.Union[MetaOapg.properties.request_id, schemas.Unset]:
...
@typing.overload
def get_item_oapg(
self, name: str
) -> typing.Union[schemas.UnsetAnyTypeSchema, schemas.Unset]:
...
def get_item_oapg(
self, name: typing.Union[typing_extensions.Literal["request_id",], str]
):
return super().get_item_oapg(name)
def __new__(
cls,
*_args: typing.Union[
dict,
frozendict.frozendict,
],
request_id: typing.Union[
MetaOapg.properties.request_id, str, schemas.Unset
] = schemas.unset,
_configuration: typing.Optional[schemas.Configuration] = None,
**kwargs: typing.Union[
schemas.AnyTypeSchema,
dict,
frozendict.frozendict,
str,
date,
datetime,
uuid.UUID,
int,
float,
decimal.Decimal,
None,
list,
tuple,
bytes,
],
) -> "SchemaFor201ResponseBodyApplicationJson":
return super().__new__(
cls,
*_args,
request_id=request_id,
_configuration=_configuration,
**kwargs,
)
@dataclass
class ApiResponseFor201(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor201ResponseBodyApplicationJson,]
headers: schemas.Unset = schemas.unset
_response_for_201 = api_client.OpenApiResponse(
response_cls=ApiResponseFor201,
content={
"application/json": api_client.MediaType(
schema=SchemaFor201ResponseBodyApplicationJson
),
},
)
SchemaFor400ResponseBodyApplicationProblemjson = Error
@dataclass
class ApiResponseFor400(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor400ResponseBodyApplicationProblemjson,]
headers: schemas.Unset = schemas.unset
_response_for_400 = api_client.OpenApiResponse(
response_cls=ApiResponseFor400,
content={
"application/problem+json": api_client.MediaType(
schema=SchemaFor400ResponseBodyApplicationProblemjson
),
},
)
SchemaFor401ResponseBodyApplicationProblemjson = Error
@dataclass
class ApiResponseFor401(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor401ResponseBodyApplicationProblemjson,]
headers: schemas.Unset = schemas.unset
_response_for_401 = api_client.OpenApiResponse(
response_cls=ApiResponseFor401,
content={
"application/problem+json": api_client.MediaType(
schema=SchemaFor401ResponseBodyApplicationProblemjson
),
},
)
SchemaFor404ResponseBodyApplicationProblemjson = Error
@dataclass
class ApiResponseFor404(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor404ResponseBodyApplicationProblemjson,]
headers: schemas.Unset = schemas.unset
_response_for_404 = api_client.OpenApiResponse(
response_cls=ApiResponseFor404,
content={
"application/problem+json": api_client.MediaType(
schema=SchemaFor404ResponseBodyApplicationProblemjson
),
},
)
SchemaFor503ResponseBodyApplicationProblemjson = Error
@dataclass
class ApiResponseFor503(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor503ResponseBodyApplicationProblemjson,]
headers: schemas.Unset = schemas.unset
_response_for_503 = api_client.OpenApiResponse(
response_cls=ApiResponseFor503,
content={
"application/problem+json": api_client.MediaType(
schema=SchemaFor503ResponseBodyApplicationProblemjson
),
},
)
SchemaFor5XXResponseBodyApplicationProblemjson = Error
@dataclass
class ApiResponseFor5XX(api_client.ApiResponse):
response: urllib3.HTTPResponse
body: typing.Union[SchemaFor5XXResponseBodyApplicationProblemjson,]
headers: schemas.Unset = schemas.unset
_response_for_5XX = api_client.OpenApiResponse(
response_cls=ApiResponseFor5XX,
content={
"application/problem+json": api_client.MediaType(
schema=SchemaFor5XXResponseBodyApplicationProblemjson
),
},
)
_status_code_to_response = {
"201": _response_for_201,
"400": _response_for_400,
"401": _response_for_401,
"404": _response_for_404,
"503": _response_for_503,
"5XX": _response_for_5XX,
}
_all_accept_content_types = (
"application/json",
"application/problem+json",
)
class BaseApi(api_client.Api):
@typing.overload
def _submit_to_endpoint_oapg(
self,
content_type: typing_extensions.Literal["application/octet-stream"] = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def _submit_to_endpoint_oapg(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def _submit_to_endpoint_oapg(
self,
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization:
...
@typing.overload
def _submit_to_endpoint_oapg(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[ApiResponseFor201, api_client.ApiResponseWithoutDeserialization,]:
...
def _submit_to_endpoint_oapg(
self,
content_type: str = "application/octet-stream",
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
"""
        Create an endpoint request (i.e. ingress)
:param skip_deserialization: If true then api_response.response will be set but
api_response.body and api_response.headers will not be deserialized into schema
class instances
"""
self._verify_typed_dict_inputs_oapg(RequestHeaderParams, header_params)
self._verify_typed_dict_inputs_oapg(RequestPathParams, path_params)
used_path = path.value
_path_params = {}
for parameter in (request_path_endpoint,):
parameter_data = path_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_path_params.update(serialized_data)
for k, v in _path_params.items():
used_path = used_path.replace("{%s}" % k, v)
_headers = HTTPHeaderDict()
for parameter in (request_header_x_meta_data,):
parameter_data = header_params.get(parameter.name, schemas.unset)
if parameter_data is schemas.unset:
continue
serialized_data = parameter.serialize(parameter_data)
_headers.extend(serialized_data)
# TODO add cookie handling
if accept_content_types:
for accept_content_type in accept_content_types:
_headers.add("Accept", accept_content_type)
_fields = None
_body = None
if body is not schemas.unset:
serialized_data = request_body_body.serialize(body, content_type)
_headers.add("Content-Type", content_type)
if "fields" in serialized_data:
_fields = serialized_data["fields"]
elif "body" in serialized_data:
_body = serialized_data["body"]
response = self.api_client.call_api(
resource_path=used_path,
method="post".upper(),
headers=_headers,
fields=_fields,
body=_body,
auth_settings=_auth,
stream=stream,
timeout=timeout,
)
if skip_deserialization:
api_response = api_client.ApiResponseWithoutDeserialization(
response=response
)
else:
response_for_status = _status_code_to_response.get(str(response.status))
if response_for_status:
api_response = response_for_status.deserialize(
response, self.api_client.configuration
)
else:
api_response = api_client.ApiResponseWithoutDeserialization(
response=response
)
if not 200 <= response.status <= 299:
raise exceptions.ApiException(
status=response.status,
reason=response.reason,
api_response=api_response,
)
return api_response
class SubmitToEndpoint(BaseApi):
# this class is used by api classes that refer to endpoints with operationId fn names
@typing.overload
def submit_to_endpoint(
self,
content_type: typing_extensions.Literal["application/octet-stream"] = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def submit_to_endpoint(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def submit_to_endpoint(
self,
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization:
...
@typing.overload
def submit_to_endpoint(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[ApiResponseFor201, api_client.ApiResponseWithoutDeserialization,]:
...
def submit_to_endpoint(
self,
content_type: str = "application/octet-stream",
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._submit_to_endpoint_oapg(
body=body,
header_params=header_params,
path_params=path_params,
content_type=content_type,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization,
)
class ApiForpost(BaseApi):
# this class is used by api classes that refer to endpoints by path and http method names
@typing.overload
def post(
self,
content_type: typing_extensions.Literal["application/octet-stream"] = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def post(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: typing_extensions.Literal[False] = ...,
) -> typing.Union[ApiResponseFor201,]:
...
@typing.overload
def post(
self,
skip_deserialization: typing_extensions.Literal[True],
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
) -> api_client.ApiResponseWithoutDeserialization:
...
@typing.overload
def post(
self,
content_type: str = ...,
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = ...,
) -> typing.Union[ApiResponseFor201, api_client.ApiResponseWithoutDeserialization,]:
...
def post(
self,
content_type: str = "application/octet-stream",
body: typing.Union[
SchemaForRequestBodyApplicationOctetStream,
bytes,
io.FileIO,
io.BufferedReader,
schemas.Unset,
] = schemas.unset,
header_params: RequestHeaderParams = frozendict.frozendict(),
path_params: RequestPathParams = frozendict.frozendict(),
accept_content_types: typing.Tuple[str] = _all_accept_content_types,
stream: bool = False,
timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
skip_deserialization: bool = False,
):
return self._submit_to_endpoint_oapg(
body=body,
header_params=header_params,
path_params=path_params,
content_type=content_type,
accept_content_types=accept_content_types,
stream=stream,
timeout=timeout,
skip_deserialization=skip_deserialization,
)
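# ---------------------------------------------------------------------------
# Hypothetical usage sketch added for illustration; it is not part of the
# generated module. Client construction is assumed to follow the usual
# openapi-generator pattern for this package; the configuration object and
# credential wiring shown here are illustrative only.
#
#     api = SubmitToEndpoint(api_client.ApiClient(configuration))
#     response = api.submit_to_endpoint(
#         body=b"hello, world",
#         path_params={"endpoint": "my-endpoint"},
#         header_params={"X-Meta-Data": {"trace": ["abc123"]}},
#     )
#     print(response.body["request_id"])   # 201 responses carry a request_id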
|
PypiClean
|
/python_jsonschema_objects-0.4.4.tar.gz/python_jsonschema_objects-0.4.4/python_jsonschema_objects/descriptors.py
|
from . import validators, util, wrapper_types
from .classbuilder import ProtocolBase, TypeProxy, TypeRef
class AttributeDescriptor(object):
"""Provides property access for constructed class properties"""
def __init__(self, prop, info, desc=""):
self.prop = prop
self.info = info
self.desc = desc
def __doc__(self):
return self.desc
def __get__(self, obj, owner=None):
if obj is None and owner is not None:
return self
try:
return obj._properties[self.prop]
except KeyError:
raise AttributeError("No such attribute")
def __set__(self, obj, val):
info = self.info
if isinstance(info["type"], (list, tuple)):
ok = False
errors = []
type_checks = []
for typ in info["type"]:
if not isinstance(typ, dict):
type_checks.append(typ)
continue
typ = next(
t for n, t in validators.SCHEMA_TYPE_MAPPING if typ["type"] == n
)
if typ is None:
typ = type(None)
if isinstance(typ, (list, tuple)):
type_checks.extend(typ)
else:
type_checks.append(typ)
for typ in type_checks:
if not isinstance(typ, TypeProxy) and isinstance(val, typ):
ok = True
break
elif hasattr(typ, "isLiteralClass"):
try:
validator = typ(val)
validator.validate()
except Exception as e:
errors.append("Failed to coerce to '{0}': {1}".format(typ, e))
pass
else:
ok = True
break
elif util.safe_issubclass(typ, ProtocolBase):
# force conversion- thus the val rather than validator assignment
try:
val = typ(**util.coerce_for_expansion(val))
val.validate()
except Exception as e:
errors.append("Failed to coerce to '{0}': {1}".format(typ, e))
pass
else:
ok = True
break
elif util.safe_issubclass(typ, wrapper_types.ArrayWrapper):
try:
val = typ(val)
val.validate()
except Exception as e:
errors.append("Failed to coerce to '{0}': {1}".format(typ, e))
pass
else:
ok = True
break
elif isinstance(typ, TypeProxy):
try:
# handle keyword expansion according to expected types
# using keywords like oneOf, value can be an object, array or literal
val = util.coerce_for_expansion(val)
if isinstance(val, dict):
val = typ(**val)
else:
val = typ(val)
val.validate()
except Exception as e:
errors.append("Failed to coerce to '{0}': {1}".format(typ, e))
pass
else:
ok = True
break
if not ok:
errstr = "\n".join(errors)
raise validators.ValidationError(
"Object must be one of {0}: \n{1}".format(info["type"], errstr)
)
elif info["type"] == "array":
val = info["validator"](val)
val.validate()
elif util.safe_issubclass(info["type"], wrapper_types.ArrayWrapper):
# An array type may have already been converted into an ArrayValidator
val = info["type"](val)
val.validate()
elif getattr(info["type"], "isLiteralClass", False) is True:
if not isinstance(val, info["type"]):
validator = info["type"](val)
validator.validate()
if validator._value is not None:
# This allows setting of default Literal values
val = validator
elif util.safe_issubclass(info["type"], ProtocolBase):
if not isinstance(val, info["type"]):
val = info["type"](**util.coerce_for_expansion(val))
val.validate()
elif isinstance(info["type"], TypeProxy):
val = util.coerce_for_expansion(val)
if isinstance(val, dict):
val = info["type"](**val)
else:
val = info["type"](val)
elif isinstance(info["type"], TypeRef):
if not isinstance(val, info["type"].ref_class):
val = info["type"](**val)
val.validate()
elif info["type"] is None:
# This is the null value
if val is not None:
raise validators.ValidationError("None is only valid value for null")
else:
raise TypeError("Unknown object type: '{0}'".format(info["type"]))
obj._properties[self.prop] = val
def __delete__(self, obj):
prop = self.prop
if prop in obj.__required__:
raise AttributeError("'%s' is required" % prop)
else:
obj._properties[prop] = None
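# ---------------------------------------------------------------------------
# Hypothetical usage sketch added for illustration; it is not part of the
# original module. Classes built by python_jsonschema_objects attach an
# AttributeDescriptor per schema property, so attribute reads and writes go
# through the __get__/__set__ methods above:
#
#     import python_jsonschema_objects as pjs
#
#     builder = pjs.ObjectBuilder({
#         "title": "Example",
#         "type": "object",
#         "properties": {"name": {"type": "string"}},
#     })
#     ns = builder.build_classes()
#     obj = ns.Example()
#     obj.name = "widget"    # validated in AttributeDescriptor.__set__
#     print(obj.name)        # returned by AttributeDescriptor.__get__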
|
PypiClean
|
/Sympathy-4.0.1-py3-none-any.whl/sylib/nodes/sympathy/imageprocessing/node_layers.py
|
from sympathy.api import node
from sympathy.api.nodeconfig import Port, Ports, Tag, Tags
import numpy as np
from sylib.imageprocessing.image import Image
from sylib.imageprocessing.algorithm_selector import ImageFiltering_abstract
class OverlayImagesAbstract(ImageFiltering_abstract, node.Node):
author = 'Mathias Broxvall'
version = '0.1'
icon = 'image_overlay.svg'
    description = (
        'Combines two images by layering the first (top port) image on top '
        'of the other (bottom port) image, with a choice of combining '
        'operator.\nImages must have the same number of channels.')
tags = Tags(Tag.ImageProcessing.Layers)
default_parameters = {
'use alpha channel': (
'Use last channel of source images as alpha channel'),
'alpha': 'Alpha value used when no alpha channel is given'
}
algorithms = {
'additive': dict({
'description': 'Adds the two images together where they overlap.',
'unity': 0.0,
'expr': lambda result, im: result + im,
'single_pass': False
}, **default_parameters),
'multiplicative': dict({
'description': 'Multiplies images where they overlap.',
'unity': 1.0,
'expr': lambda result, im: result * im,
'single_pass': False,
}, **default_parameters),
'divide': dict({
'description': (
'Divides bottom image by all other images, one at a time.'),
'unity': 1.0,
'expr': lambda result, im: result / im,
'single_pass': False,
}, **default_parameters),
'subtract': dict({
'description': 'Subtracts top image from bottom.',
'unity': 0.0,
'expr': lambda result, im: result - im,
'single_pass': False,
}, **default_parameters),
'max': dict({
'description': 'Takes the maximum value of the images.',
'unity': 0.0,
'expr': lambda result, im: np.maximum(result, im),
'single_pass': False,
}, **default_parameters),
'min': dict({
'description': 'Takes the minimum value of the images.',
'unity': 0.0,
'expr': lambda result, im: np.minimum(result, im),
'single_pass': False,
}, **default_parameters),
'median': dict({
'description': 'Takes the median value of the images.',
'unity': 0.0,
'expr': lambda images: np.median(np.array(images), axis=0),
'single_pass': True,
}, **default_parameters),
'layer': dict({
'description': (
                'Layers one image on top of the other; the alpha channel (if any) '
                'determines transparency, otherwise the alpha value below is used.'),
'unity': 0.0,
'expr': lambda result, im: im,
'single_pass': False,
}, **default_parameters),
}
options_list = [
'use alpha channel', 'alpha'
]
options_types = {
'use alpha channel': bool, 'alpha': float
}
options_default = {
'use alpha channel': False, 'alpha': 1.0
}
parameters = node.parameters()
parameters.set_string(
'algorithm', value=next(iter(algorithms)), description='',
label='Algorithm')
ImageFiltering_abstract.generate_parameters(
parameters, options_types, options_default)
def execute(self, node_context):
images = self.get_input_images(node_context)
# images = [
# obj.get_image() for obj in node_context.input.group('images')]
params = node_context.parameters
alg_name = params['algorithm'].value
use_alpha = params['use alpha channel'].value
# Reshape so all images guaranteed to have 3 dimensions
images = [
(im.reshape(im.shape + (1,)) if len(im.shape) < 3 else im)
for im in images]
# Compute max size
max_y = max([im.shape[0] for im in images])
max_x = max([im.shape[1] for im in images])
max_c = max([im.shape[2] for im in images])
if alg_name == 'multiplicative':
unity = 1.0
elif alg_name == 'divide':
unity = 1.0
else:
unity = 0.0
result = np.full((max_y, max_x, max_c), unity)
if any([im.dtype.kind == 'c' for im in images]):
            result = result.astype(np.complex128)
if len(images) == 0:
# Return early for all empty inputs
node_context.output['result'].set_image(result)
return
bot = images[-1]
result[:bot.shape[0], :bot.shape[1], :bot.shape[2]] = bot
rest = images[:-1]
alpha = params['alpha'].value
if OverlayImages.algorithms[alg_name]['single_pass']:
self.set_progress(50)
expr = OverlayImages.algorithms[alg_name]['expr']
result = expr(images)
else:
for i, im in enumerate(rest[::-1]):
self.set_progress(50 + (i*50) / len(rest))
for c in range(im.shape[2]):
expr = OverlayImages.algorithms[alg_name]['expr']
y, x, _ = im.shape
if use_alpha:
result[:y, :x, c] = (
result[:y, :x, c] * (1-im[:, :, -1]) +
im[:, :, -1] * expr(
result[:y, :x, c], im[:, :, c]))
else:
result[:y, :x, c] = (
result[:y, :x, c] * (1-alpha) +
alpha * expr(result[:y, :x, c], im[:, :, c]))
node_context.output['result'].set_image(result)
class OverlayImages(OverlayImagesAbstract):
name = 'Overlay Images'
nodeid = 'syip.overlay'
inputs = Ports([
Port.Custom('image', 'Input images', name='images', n=(2,))
])
outputs = Ports([
Image('result after filtering', name='result'),
])
__doc__ = ImageFiltering_abstract.generate_docstring(
OverlayImagesAbstract.description,
OverlayImagesAbstract.algorithms,
OverlayImagesAbstract.options_list,
inputs, outputs)
def get_input_images(self, node_context):
return [obj.get_image() for obj in node_context.input.group('images')]
class OverlayImagesList(OverlayImagesAbstract):
name = 'Overlay Images List'
nodeid = 'syip.overlay_list'
inputs = Ports([
Port.Custom('[image]', 'Input images', name='images')
])
outputs = Ports([
Image('result after filtering', name='result'),
])
__doc__ = ImageFiltering_abstract.generate_docstring(
OverlayImagesAbstract.description,
OverlayImagesAbstract.algorithms,
OverlayImagesAbstract.options_list,
inputs, outputs)
def get_input_images(self, node_context):
images = []
for i, obj in enumerate(node_context.input['images']):
images.append(obj.get_image())
self.set_progress((50*i)/len(node_context.input['images']))
return images
|
PypiClean
|
/open_aea_cosmpy-0.6.5.tar.gz/open_aea_cosmpy-0.6.5/cosmpy/aerial/wallet.py
|
from abc import ABC, abstractmethod
from collections import UserString
from typing import Optional
from bip_utils import Bip39SeedGenerator, Bip44, Bip44Coins # type: ignore
from cosmpy.crypto.address import Address
from cosmpy.crypto.hashfuncs import sha256
from cosmpy.crypto.interface import Signer
from cosmpy.crypto.keypairs import PrivateKey, PublicKey
class Wallet(ABC, UserString):
"""Wallet Generation.
:param ABC: ABC abstract method
:param UserString: user string
"""
@abstractmethod
def address(self) -> Address:
"""Get the address of the wallet.
:return: the wallet address
"""
@abstractmethod
def public_key(self) -> PublicKey:
"""Get the public key of the wallet.
:return: the wallet public key
"""
@abstractmethod
def signer(self) -> Signer:
"""Get the signer of the wallet.
:return: the signer used to sign transactions
"""
@property
def data(self):
"""Get the address of the wallet.
:return: Address
"""
return self.address()
def __json__(self):
"""
Return the address in string format.
:return: address in string format
"""
return str(self.address())
class LocalWallet(Wallet):
"""Generate local wallet.
:param Wallet: wallet
"""
@staticmethod
def generate(prefix: Optional[str] = None) -> "LocalWallet":
"""generate the local wallet.
:param prefix: prefix, defaults to None
:return: local wallet
"""
return LocalWallet(PrivateKey(), prefix=prefix)
@staticmethod
def from_mnemonic(mnemonic: str, prefix: Optional[str] = None) -> "LocalWallet":
"""Generate local wallet from mnemonic.
:param mnemonic: mnemonic
:param prefix: prefix, defaults to None
:return: local wallet
"""
seed_bytes = Bip39SeedGenerator(mnemonic).Generate()
bip44_def_ctx = Bip44.FromSeed(
seed_bytes, Bip44Coins.COSMOS
).DeriveDefaultPath()
return LocalWallet(
PrivateKey(bip44_def_ctx.PrivateKey().Raw().ToBytes()), prefix=prefix
)
@staticmethod
def from_unsafe_seed(
text: str, index: Optional[int] = None, prefix: Optional[str] = None
) -> "LocalWallet":
"""Generate local wallet from unsafe seed.
:param text: text
:param index: index, defaults to None
:param prefix: prefix, defaults to None
:return: Local wallet
"""
private_key_bytes = sha256(text.encode())
if index is not None:
private_key_bytes = sha256(
private_key_bytes + index.to_bytes(4, byteorder="big")
)
return LocalWallet(PrivateKey(private_key_bytes), prefix=prefix)
def __init__(self, private_key: PrivateKey, prefix: Optional[str] = None):
"""Init wallet with.
:param private_key: private key of the wallet
:param prefix: prefix, defaults to None
"""
self._private_key = private_key
self._prefix = prefix
def address(self) -> Address:
"""Get the wallet address.
:return: Wallet address.
"""
return Address(self._private_key, self._prefix)
def public_key(self) -> PublicKey:
"""Get the public key of the wallet.
:return: public key
"""
return self._private_key
def signer(self) -> PrivateKey:
"""Get the signer of the wallet.
:return: signer
"""
return self._private_key
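# --- Illustrative usage sketch (not part of the library) -------------------
# A minimal example of creating wallets with the classes above, assuming the
# cosmpy package and its dependencies (bip_utils) are installed. The mnemonic
# is the well-known BIP-39 test vector and must never hold real funds.
_DEMO_MNEMONIC = (
    "abandon abandon abandon abandon abandon abandon "
    "abandon abandon abandon abandon abandon about"
)

if __name__ == "__main__":
    # Fresh wallet with a random private key.
    wallet = LocalWallet.generate(prefix="fetch")
    print("generated address:", wallet.address())
    # Deterministic wallet restored from a mnemonic phrase.
    restored = LocalWallet.from_mnemonic(_DEMO_MNEMONIC, prefix="fetch")
    print("mnemonic-derived address:", restored.address())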
|
PypiClean
|
/cdktf_cdktf_provider_opentelekomcloud-9.0.0-py3-none-any.whl/cdktf_cdktf_provider_opentelekomcloud/data_opentelekomcloud_css_flavor_v1/__init__.py
|
import abc
import builtins
import datetime
import enum
import typing
import jsii
import publication
import typing_extensions
from typeguard import check_type
from .._jsii import *
import cdktf as _cdktf_9a9027ec
import constructs as _constructs_77d1e7e8
class DataOpentelekomcloudCssFlavorV1(
_cdktf_9a9027ec.TerraformDataSource,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-opentelekomcloud.dataOpentelekomcloudCssFlavorV1.DataOpentelekomcloudCssFlavorV1",
):
'''Represents a {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1 opentelekomcloud_css_flavor_v1}.'''
def __init__(
self,
scope: _constructs_77d1e7e8.Construct,
id_: builtins.str,
*,
disk_range: typing.Optional[typing.Union["DataOpentelekomcloudCssFlavorV1DiskRange", typing.Dict[builtins.str, typing.Any]]] = None,
id: typing.Optional[builtins.str] = None,
min_cpu: typing.Optional[jsii.Number] = None,
min_ram: typing.Optional[jsii.Number] = None,
name: typing.Optional[builtins.str] = None,
type: typing.Optional[builtins.str] = None,
version: typing.Optional[builtins.str] = None,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
) -> None:
'''Create a new {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1 opentelekomcloud_css_flavor_v1} Data Source.
:param scope: The scope in which to define this construct.
:param id_: The scoped construct ID. Must be unique amongst siblings in the same scope
:param disk_range: disk_range block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#disk_range DataOpentelekomcloudCssFlavorV1#disk_range}
:param id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#id DataOpentelekomcloudCssFlavorV1#id}. Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
:param min_cpu: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_cpu DataOpentelekomcloudCssFlavorV1#min_cpu}.
:param min_ram: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_ram DataOpentelekomcloudCssFlavorV1#min_ram}.
:param name: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#name DataOpentelekomcloudCssFlavorV1#name}.
:param type: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#type DataOpentelekomcloudCssFlavorV1#type}.
:param version: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#version DataOpentelekomcloudCssFlavorV1#version}.
:param connection:
:param count:
:param depends_on:
:param for_each:
:param lifecycle:
:param provider:
:param provisioners:
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__d584523472665d61c0a56a003409423eea0b05d75c5762b4c1c36dd061b646ed)
check_type(argname="argument scope", value=scope, expected_type=type_hints["scope"])
check_type(argname="argument id_", value=id_, expected_type=type_hints["id_"])
config = DataOpentelekomcloudCssFlavorV1Config(
disk_range=disk_range,
id=id,
min_cpu=min_cpu,
min_ram=min_ram,
name=name,
type=type,
version=version,
connection=connection,
count=count,
depends_on=depends_on,
for_each=for_each,
lifecycle=lifecycle,
provider=provider,
provisioners=provisioners,
)
jsii.create(self.__class__, self, [scope, id_, config])
@jsii.member(jsii_name="putDiskRange")
def put_disk_range(
self,
*,
min_from: typing.Optional[jsii.Number] = None,
min_to: typing.Optional[jsii.Number] = None,
) -> None:
'''
:param min_from: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_from DataOpentelekomcloudCssFlavorV1#min_from}.
:param min_to: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_to DataOpentelekomcloudCssFlavorV1#min_to}.
'''
value = DataOpentelekomcloudCssFlavorV1DiskRange(
min_from=min_from, min_to=min_to
)
return typing.cast(None, jsii.invoke(self, "putDiskRange", [value]))
@jsii.member(jsii_name="resetDiskRange")
def reset_disk_range(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetDiskRange", []))
@jsii.member(jsii_name="resetId")
def reset_id(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetId", []))
@jsii.member(jsii_name="resetMinCpu")
def reset_min_cpu(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetMinCpu", []))
@jsii.member(jsii_name="resetMinRam")
def reset_min_ram(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetMinRam", []))
@jsii.member(jsii_name="resetName")
def reset_name(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetName", []))
@jsii.member(jsii_name="resetType")
def reset_type(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetType", []))
@jsii.member(jsii_name="resetVersion")
def reset_version(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetVersion", []))
@jsii.member(jsii_name="synthesizeAttributes")
def _synthesize_attributes(self) -> typing.Mapping[builtins.str, typing.Any]:
return typing.cast(typing.Mapping[builtins.str, typing.Any], jsii.invoke(self, "synthesizeAttributes", []))
@jsii.python.classproperty
@jsii.member(jsii_name="tfResourceType")
def TF_RESOURCE_TYPE(cls) -> builtins.str:
return typing.cast(builtins.str, jsii.sget(cls, "tfResourceType"))
@builtins.property
@jsii.member(jsii_name="cpu")
def cpu(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "cpu"))
@builtins.property
@jsii.member(jsii_name="diskRange")
def disk_range(self) -> "DataOpentelekomcloudCssFlavorV1DiskRangeOutputReference":
return typing.cast("DataOpentelekomcloudCssFlavorV1DiskRangeOutputReference", jsii.get(self, "diskRange"))
@builtins.property
@jsii.member(jsii_name="ram")
def ram(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "ram"))
@builtins.property
@jsii.member(jsii_name="region")
def region(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "region"))
@builtins.property
@jsii.member(jsii_name="diskRangeInput")
def disk_range_input(
self,
) -> typing.Optional["DataOpentelekomcloudCssFlavorV1DiskRange"]:
return typing.cast(typing.Optional["DataOpentelekomcloudCssFlavorV1DiskRange"], jsii.get(self, "diskRangeInput"))
@builtins.property
@jsii.member(jsii_name="idInput")
def id_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "idInput"))
@builtins.property
@jsii.member(jsii_name="minCpuInput")
def min_cpu_input(self) -> typing.Optional[jsii.Number]:
return typing.cast(typing.Optional[jsii.Number], jsii.get(self, "minCpuInput"))
@builtins.property
@jsii.member(jsii_name="minRamInput")
def min_ram_input(self) -> typing.Optional[jsii.Number]:
return typing.cast(typing.Optional[jsii.Number], jsii.get(self, "minRamInput"))
@builtins.property
@jsii.member(jsii_name="nameInput")
def name_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "nameInput"))
@builtins.property
@jsii.member(jsii_name="typeInput")
def type_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "typeInput"))
@builtins.property
@jsii.member(jsii_name="versionInput")
def version_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "versionInput"))
@builtins.property
@jsii.member(jsii_name="id")
def id(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "id"))
@id.setter
def id(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__a06ef430ce1df160ed29eea99b5738f705feeb9eba34703682f9ddd1207cd010)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "id", value)
@builtins.property
@jsii.member(jsii_name="minCpu")
def min_cpu(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "minCpu"))
@min_cpu.setter
def min_cpu(self, value: jsii.Number) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__c46f2155fa727d635ab889260f5e0444256e95cc50ba99c373940731c443905b)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "minCpu", value)
@builtins.property
@jsii.member(jsii_name="minRam")
def min_ram(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "minRam"))
@min_ram.setter
def min_ram(self, value: jsii.Number) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__3d0bb3ab70925eeceffd84fde08710863f2a66ff73c6cbf14dfe2910d6a62700)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "minRam", value)
@builtins.property
@jsii.member(jsii_name="name")
def name(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "name"))
@name.setter
def name(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__c1e0d5a19a42d76de53da05e49df5fd8b579eca74ef65ad0bbf37544cead41b6)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "name", value)
@builtins.property
@jsii.member(jsii_name="type")
def type(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "type"))
@type.setter
def type(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__e83c70ca9fb60ac00e13c9c0a1a1106c82905e495dc0107ec2fb5005ff66f053)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "type", value)
@builtins.property
@jsii.member(jsii_name="version")
def version(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "version"))
@version.setter
def version(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__a94d507da940d7a10c16205e0a7bda090580f1326d02ec9d5ca03e1b49a69b74)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "version", value)
@jsii.data_type(
jsii_type="@cdktf/provider-opentelekomcloud.dataOpentelekomcloudCssFlavorV1.DataOpentelekomcloudCssFlavorV1Config",
jsii_struct_bases=[_cdktf_9a9027ec.TerraformMetaArguments],
name_mapping={
"connection": "connection",
"count": "count",
"depends_on": "dependsOn",
"for_each": "forEach",
"lifecycle": "lifecycle",
"provider": "provider",
"provisioners": "provisioners",
"disk_range": "diskRange",
"id": "id",
"min_cpu": "minCpu",
"min_ram": "minRam",
"name": "name",
"type": "type",
"version": "version",
},
)
class DataOpentelekomcloudCssFlavorV1Config(_cdktf_9a9027ec.TerraformMetaArguments):
def __init__(
self,
*,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
disk_range: typing.Optional[typing.Union["DataOpentelekomcloudCssFlavorV1DiskRange", typing.Dict[builtins.str, typing.Any]]] = None,
id: typing.Optional[builtins.str] = None,
min_cpu: typing.Optional[jsii.Number] = None,
min_ram: typing.Optional[jsii.Number] = None,
name: typing.Optional[builtins.str] = None,
type: typing.Optional[builtins.str] = None,
version: typing.Optional[builtins.str] = None,
) -> None:
'''
:param connection:
:param count:
:param depends_on:
:param for_each:
:param lifecycle:
:param provider:
:param provisioners:
:param disk_range: disk_range block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#disk_range DataOpentelekomcloudCssFlavorV1#disk_range}
:param id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#id DataOpentelekomcloudCssFlavorV1#id}. Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
:param min_cpu: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_cpu DataOpentelekomcloudCssFlavorV1#min_cpu}.
:param min_ram: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_ram DataOpentelekomcloudCssFlavorV1#min_ram}.
:param name: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#name DataOpentelekomcloudCssFlavorV1#name}.
:param type: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#type DataOpentelekomcloudCssFlavorV1#type}.
:param version: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#version DataOpentelekomcloudCssFlavorV1#version}.
'''
if isinstance(lifecycle, dict):
lifecycle = _cdktf_9a9027ec.TerraformResourceLifecycle(**lifecycle)
if isinstance(disk_range, dict):
disk_range = DataOpentelekomcloudCssFlavorV1DiskRange(**disk_range)
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__9ab3dadae4fbb7e1d0ea6a2acead80979ae970e0b1c7dec3571ccd153cbbd60e)
check_type(argname="argument connection", value=connection, expected_type=type_hints["connection"])
check_type(argname="argument count", value=count, expected_type=type_hints["count"])
check_type(argname="argument depends_on", value=depends_on, expected_type=type_hints["depends_on"])
check_type(argname="argument for_each", value=for_each, expected_type=type_hints["for_each"])
check_type(argname="argument lifecycle", value=lifecycle, expected_type=type_hints["lifecycle"])
check_type(argname="argument provider", value=provider, expected_type=type_hints["provider"])
check_type(argname="argument provisioners", value=provisioners, expected_type=type_hints["provisioners"])
check_type(argname="argument disk_range", value=disk_range, expected_type=type_hints["disk_range"])
check_type(argname="argument id", value=id, expected_type=type_hints["id"])
check_type(argname="argument min_cpu", value=min_cpu, expected_type=type_hints["min_cpu"])
check_type(argname="argument min_ram", value=min_ram, expected_type=type_hints["min_ram"])
check_type(argname="argument name", value=name, expected_type=type_hints["name"])
check_type(argname="argument type", value=type, expected_type=type_hints["type"])
check_type(argname="argument version", value=version, expected_type=type_hints["version"])
self._values: typing.Dict[builtins.str, typing.Any] = {}
if connection is not None:
self._values["connection"] = connection
if count is not None:
self._values["count"] = count
if depends_on is not None:
self._values["depends_on"] = depends_on
if for_each is not None:
self._values["for_each"] = for_each
if lifecycle is not None:
self._values["lifecycle"] = lifecycle
if provider is not None:
self._values["provider"] = provider
if provisioners is not None:
self._values["provisioners"] = provisioners
if disk_range is not None:
self._values["disk_range"] = disk_range
if id is not None:
self._values["id"] = id
if min_cpu is not None:
self._values["min_cpu"] = min_cpu
if min_ram is not None:
self._values["min_ram"] = min_ram
if name is not None:
self._values["name"] = name
if type is not None:
self._values["type"] = type
if version is not None:
self._values["version"] = version
@builtins.property
def connection(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, _cdktf_9a9027ec.WinrmProvisionerConnection]]:
'''
:stability: experimental
'''
result = self._values.get("connection")
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, _cdktf_9a9027ec.WinrmProvisionerConnection]], result)
@builtins.property
def count(
self,
) -> typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]]:
'''
:stability: experimental
'''
result = self._values.get("count")
return typing.cast(typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]], result)
@builtins.property
def depends_on(
self,
) -> typing.Optional[typing.List[_cdktf_9a9027ec.ITerraformDependable]]:
'''
:stability: experimental
'''
result = self._values.get("depends_on")
return typing.cast(typing.Optional[typing.List[_cdktf_9a9027ec.ITerraformDependable]], result)
@builtins.property
def for_each(self) -> typing.Optional[_cdktf_9a9027ec.ITerraformIterator]:
'''
:stability: experimental
'''
result = self._values.get("for_each")
return typing.cast(typing.Optional[_cdktf_9a9027ec.ITerraformIterator], result)
@builtins.property
def lifecycle(self) -> typing.Optional[_cdktf_9a9027ec.TerraformResourceLifecycle]:
'''
:stability: experimental
'''
result = self._values.get("lifecycle")
return typing.cast(typing.Optional[_cdktf_9a9027ec.TerraformResourceLifecycle], result)
@builtins.property
def provider(self) -> typing.Optional[_cdktf_9a9027ec.TerraformProvider]:
'''
:stability: experimental
'''
result = self._values.get("provider")
return typing.cast(typing.Optional[_cdktf_9a9027ec.TerraformProvider], result)
@builtins.property
def provisioners(
self,
) -> typing.Optional[typing.List[typing.Union[_cdktf_9a9027ec.FileProvisioner, _cdktf_9a9027ec.LocalExecProvisioner, _cdktf_9a9027ec.RemoteExecProvisioner]]]:
'''
:stability: experimental
'''
result = self._values.get("provisioners")
return typing.cast(typing.Optional[typing.List[typing.Union[_cdktf_9a9027ec.FileProvisioner, _cdktf_9a9027ec.LocalExecProvisioner, _cdktf_9a9027ec.RemoteExecProvisioner]]], result)
@builtins.property
def disk_range(self) -> typing.Optional["DataOpentelekomcloudCssFlavorV1DiskRange"]:
'''disk_range block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#disk_range DataOpentelekomcloudCssFlavorV1#disk_range}
'''
result = self._values.get("disk_range")
return typing.cast(typing.Optional["DataOpentelekomcloudCssFlavorV1DiskRange"], result)
@builtins.property
def id(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#id DataOpentelekomcloudCssFlavorV1#id}.
Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2.
If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
'''
result = self._values.get("id")
return typing.cast(typing.Optional[builtins.str], result)
@builtins.property
def min_cpu(self) -> typing.Optional[jsii.Number]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_cpu DataOpentelekomcloudCssFlavorV1#min_cpu}.'''
result = self._values.get("min_cpu")
return typing.cast(typing.Optional[jsii.Number], result)
@builtins.property
def min_ram(self) -> typing.Optional[jsii.Number]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_ram DataOpentelekomcloudCssFlavorV1#min_ram}.'''
result = self._values.get("min_ram")
return typing.cast(typing.Optional[jsii.Number], result)
@builtins.property
def name(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#name DataOpentelekomcloudCssFlavorV1#name}.'''
result = self._values.get("name")
return typing.cast(typing.Optional[builtins.str], result)
@builtins.property
def type(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#type DataOpentelekomcloudCssFlavorV1#type}.'''
result = self._values.get("type")
return typing.cast(typing.Optional[builtins.str], result)
@builtins.property
def version(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#version DataOpentelekomcloudCssFlavorV1#version}.'''
result = self._values.get("version")
return typing.cast(typing.Optional[builtins.str], result)
def __eq__(self, rhs: typing.Any) -> builtins.bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs: typing.Any) -> builtins.bool:
return not (rhs == self)
def __repr__(self) -> str:
return "DataOpentelekomcloudCssFlavorV1Config(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@cdktf/provider-opentelekomcloud.dataOpentelekomcloudCssFlavorV1.DataOpentelekomcloudCssFlavorV1DiskRange",
jsii_struct_bases=[],
name_mapping={"min_from": "minFrom", "min_to": "minTo"},
)
class DataOpentelekomcloudCssFlavorV1DiskRange:
def __init__(
self,
*,
min_from: typing.Optional[jsii.Number] = None,
min_to: typing.Optional[jsii.Number] = None,
) -> None:
'''
:param min_from: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_from DataOpentelekomcloudCssFlavorV1#min_from}.
:param min_to: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_to DataOpentelekomcloudCssFlavorV1#min_to}.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__65c2de23cd503c02e07fffc3fd1d28550173e2328aba25284234a3a80566e6e6)
check_type(argname="argument min_from", value=min_from, expected_type=type_hints["min_from"])
check_type(argname="argument min_to", value=min_to, expected_type=type_hints["min_to"])
self._values: typing.Dict[builtins.str, typing.Any] = {}
if min_from is not None:
self._values["min_from"] = min_from
if min_to is not None:
self._values["min_to"] = min_to
@builtins.property
def min_from(self) -> typing.Optional[jsii.Number]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_from DataOpentelekomcloudCssFlavorV1#min_from}.'''
result = self._values.get("min_from")
return typing.cast(typing.Optional[jsii.Number], result)
@builtins.property
def min_to(self) -> typing.Optional[jsii.Number]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/1.35.6/docs/data-sources/css_flavor_v1#min_to DataOpentelekomcloudCssFlavorV1#min_to}.'''
result = self._values.get("min_to")
return typing.cast(typing.Optional[jsii.Number], result)
def __eq__(self, rhs: typing.Any) -> builtins.bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs: typing.Any) -> builtins.bool:
return not (rhs == self)
def __repr__(self) -> str:
return "DataOpentelekomcloudCssFlavorV1DiskRange(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
class DataOpentelekomcloudCssFlavorV1DiskRangeOutputReference(
_cdktf_9a9027ec.ComplexObject,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-opentelekomcloud.dataOpentelekomcloudCssFlavorV1.DataOpentelekomcloudCssFlavorV1DiskRangeOutputReference",
):
def __init__(
self,
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
) -> None:
'''
:param terraform_resource: The parent resource.
:param terraform_attribute: The attribute on the parent resource this class is referencing.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__95249e808e324d7e2531b906289bde693f09eff5c1526f71c1358a5d3f0114a0)
check_type(argname="argument terraform_resource", value=terraform_resource, expected_type=type_hints["terraform_resource"])
check_type(argname="argument terraform_attribute", value=terraform_attribute, expected_type=type_hints["terraform_attribute"])
jsii.create(self.__class__, self, [terraform_resource, terraform_attribute])
@jsii.member(jsii_name="resetMinFrom")
def reset_min_from(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetMinFrom", []))
@jsii.member(jsii_name="resetMinTo")
def reset_min_to(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetMinTo", []))
@builtins.property
@jsii.member(jsii_name="from")
def from_(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "from"))
@builtins.property
@jsii.member(jsii_name="to")
def to(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "to"))
@builtins.property
@jsii.member(jsii_name="minFromInput")
def min_from_input(self) -> typing.Optional[jsii.Number]:
return typing.cast(typing.Optional[jsii.Number], jsii.get(self, "minFromInput"))
@builtins.property
@jsii.member(jsii_name="minToInput")
def min_to_input(self) -> typing.Optional[jsii.Number]:
return typing.cast(typing.Optional[jsii.Number], jsii.get(self, "minToInput"))
@builtins.property
@jsii.member(jsii_name="minFrom")
def min_from(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "minFrom"))
@min_from.setter
def min_from(self, value: jsii.Number) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__177da4e633770cdb4e5eb9022ef08a6e76f543a4a9ff1356e93f43141409a9f7)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "minFrom", value)
@builtins.property
@jsii.member(jsii_name="minTo")
def min_to(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "minTo"))
@min_to.setter
def min_to(self, value: jsii.Number) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__bd6023babf9dadee1c8665d7f980474e39d5a24f52f7ce04350ab75b2ef205fc)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "minTo", value)
@builtins.property
@jsii.member(jsii_name="internalValue")
def internal_value(
self,
) -> typing.Optional[DataOpentelekomcloudCssFlavorV1DiskRange]:
return typing.cast(typing.Optional[DataOpentelekomcloudCssFlavorV1DiskRange], jsii.get(self, "internalValue"))
@internal_value.setter
def internal_value(
self,
value: typing.Optional[DataOpentelekomcloudCssFlavorV1DiskRange],
) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__ccc7ac08b29bc2a6ea5ec790ee31c4461b1ef8e25065fa25ffd9baa24cab80fa)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "internalValue", value)
__all__ = [
"DataOpentelekomcloudCssFlavorV1",
"DataOpentelekomcloudCssFlavorV1Config",
"DataOpentelekomcloudCssFlavorV1DiskRange",
"DataOpentelekomcloudCssFlavorV1DiskRangeOutputReference",
]
publication.publish()
def _typecheckingstub__d584523472665d61c0a56a003409423eea0b05d75c5762b4c1c36dd061b646ed(
scope: _constructs_77d1e7e8.Construct,
id_: builtins.str,
*,
disk_range: typing.Optional[typing.Union[DataOpentelekomcloudCssFlavorV1DiskRange, typing.Dict[builtins.str, typing.Any]]] = None,
id: typing.Optional[builtins.str] = None,
min_cpu: typing.Optional[jsii.Number] = None,
min_ram: typing.Optional[jsii.Number] = None,
name: typing.Optional[builtins.str] = None,
type: typing.Optional[builtins.str] = None,
version: typing.Optional[builtins.str] = None,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__a06ef430ce1df160ed29eea99b5738f705feeb9eba34703682f9ddd1207cd010(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__c46f2155fa727d635ab889260f5e0444256e95cc50ba99c373940731c443905b(
value: jsii.Number,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__3d0bb3ab70925eeceffd84fde08710863f2a66ff73c6cbf14dfe2910d6a62700(
value: jsii.Number,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__c1e0d5a19a42d76de53da05e49df5fd8b579eca74ef65ad0bbf37544cead41b6(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__e83c70ca9fb60ac00e13c9c0a1a1106c82905e495dc0107ec2fb5005ff66f053(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__a94d507da940d7a10c16205e0a7bda090580f1326d02ec9d5ca03e1b49a69b74(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__9ab3dadae4fbb7e1d0ea6a2acead80979ae970e0b1c7dec3571ccd153cbbd60e(
*,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
disk_range: typing.Optional[typing.Union[DataOpentelekomcloudCssFlavorV1DiskRange, typing.Dict[builtins.str, typing.Any]]] = None,
id: typing.Optional[builtins.str] = None,
min_cpu: typing.Optional[jsii.Number] = None,
min_ram: typing.Optional[jsii.Number] = None,
name: typing.Optional[builtins.str] = None,
type: typing.Optional[builtins.str] = None,
version: typing.Optional[builtins.str] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__65c2de23cd503c02e07fffc3fd1d28550173e2328aba25284234a3a80566e6e6(
*,
min_from: typing.Optional[jsii.Number] = None,
min_to: typing.Optional[jsii.Number] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__95249e808e324d7e2531b906289bde693f09eff5c1526f71c1358a5d3f0114a0(
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__177da4e633770cdb4e5eb9022ef08a6e76f543a4a9ff1356e93f43141409a9f7(
value: jsii.Number,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__bd6023babf9dadee1c8665d7f980474e39d5a24f52f7ce04350ab75b2ef205fc(
value: jsii.Number,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__ccc7ac08b29bc2a6ea5ec790ee31c4461b1ef8e25065fa25ffd9baa24cab80fa(
value: typing.Optional[DataOpentelekomcloudCssFlavorV1DiskRange],
) -> None:
"""Type checking stubs"""
pass
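# --- Illustrative usage sketch (not part of the generated bindings) --------
# A minimal cdktf stack that looks up a CSS flavor by CPU, RAM and disk range
# using the data source defined above. Assumes the cdktf and constructs
# packages are installed; a matching OpenTelekomCloud provider block (omitted
# here) would also be needed before a real `cdktf synth`.
import cdktf
import constructs


class FlavorLookupStackDemo(cdktf.TerraformStack):
    def __init__(self, scope: constructs.Construct, id: str) -> None:
        super().__init__(scope, id)
        DataOpentelekomcloudCssFlavorV1(
            self,
            "css_flavor",
            min_cpu=4,
            min_ram=32,
            disk_range=DataOpentelekomcloudCssFlavorV1DiskRange(
                min_from=40, min_to=800
            ),
        )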
|
PypiClean
|
/py-pure-client-1.38.0.tar.gz/py-pure-client-1.38.0/pypureclient/flasharray/FA_2_8/api/remote_protection_group_snapshots_api.py
|
from __future__ import absolute_import
import re
# python 2 and python 3 compatibility library
import six
from typing import List, Optional
from .. import models
class RemoteProtectionGroupSnapshotsApi(object):
def __init__(self, api_client):
self.api_client = api_client
def api28_remote_protection_group_snapshots_delete_with_http_info(
self,
authorization=None, # type: str
x_request_id=None, # type: str
names=None, # type: List[str]
on=None, # type: str
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> None
"""Delete a remote protection group snapshot
Deletes a remote protection group snapshot that has been destroyed and is pending eradication. Eradicated remote protection group snapshots cannot be recovered. Remote protection group snapshots are destroyed using the `PATCH` method. The `names` parameter represents the name of the protection group snapshot. The `on` parameter represents the name of the offload target. The `names` and `on` parameters are required and must be used together.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api28_remote_protection_group_snapshots_delete_with_http_info(async_req=True)
>>> result = thread.get()
:param str authorization: Access token (in JWT format) required to use any API endpoint (except `/oauth2`, `/login`, and `/logout`)
:param str x_request_id: Supplied by client during request or generated by server.
:param list[str] names: Performs the operation on the unique name specified. Enter multiple names in comma-separated format. For example, `name01,name02`.
:param str on: Performs the operation on the target name specified. For example, `targetName01`.
:param bool async_req: Request runs in separate thread and method returns multiprocessing.pool.ApplyResult.
:param bool _return_http_data_only: Returns only data field.
:param bool _preload_content: Response is converted into objects.
:param int _request_timeout: Total request timeout in seconds.
It can also be a tuple of (connection time, read time) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
if names is not None:
if not isinstance(names, list):
names = [names]
params = {k: v for k, v in six.iteritems(locals()) if v is not None}
# Convert the filter into a string
if params.get('filter'):
params['filter'] = str(params['filter'])
if params.get('sort'):
params['sort'] = [str(_x) for _x in params['sort']]
collection_formats = {}
path_params = {}
query_params = []
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'on' in params:
query_params.append(('on', params['on']))
header_params = {}
if 'authorization' in params:
header_params['Authorization'] = params['authorization']
if 'x_request_id' in params:
header_params['X-Request-ID'] = params['x_request_id']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type(
['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(
'/api/2.8/remote-protection-group-snapshots', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
collection_formats=collection_formats,
)
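    # Illustrative call sketch (an assumption, not generated code): given an
    # `api_client` configured and authenticated elsewhere, the low-level
    # delete helper above could be invoked as
    #
    #     api = RemoteProtectionGroupSnapshotsApi(api_client)
    #     api.api28_remote_protection_group_snapshots_delete_with_http_info(
    #         names=["pgroup1.snap1"], on="offload-target-1")
    #
    # As documented above, `names` and `on` must be supplied together.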
def api28_remote_protection_group_snapshots_get_with_http_info(
self,
authorization=None, # type: str
x_request_id=None, # type: str
destroyed=None, # type: bool
filter=None, # type: str
limit=None, # type: int
names=None, # type: List[str]
offset=None, # type: int
on=None, # type: List[str]
sort=None, # type: List[str]
source_names=None, # type: List[str]
total_item_count=None, # type: bool
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.RemoteProtectionGroupSnapshotGetResponse
"""List remote protection group snapshots
Displays a list of remote protection group snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api28_remote_protection_group_snapshots_get_with_http_info(async_req=True)
>>> result = thread.get()
:param str authorization: Access token (in JWT format) required to use any API endpoint (except `/oauth2`, `/login`, and `/logout`)
:param str x_request_id: Supplied by client during request or generated by server.
:param bool destroyed: If set to `true`, lists only destroyed objects that are in the eradication pending state. If set to `false`, lists only objects that are not destroyed. For destroyed objects, the time remaining is displayed in milliseconds.
:param str filter: Narrows down the results to only the response objects that satisfy the filter criteria.
:param int limit: Limits the size of the response to the specified number of objects on each page. To return the total number of resources, set `limit=0`. The total number of resources is returned as a `total_item_count` value. If the page size requested is larger than the system maximum limit, the server returns the maximum limit, disregarding the requested page size.
:param list[str] names: Performs the operation on the unique name specified. Enter multiple names in comma-separated format. For example, `name01,name02`.
:param int offset: The starting position based on the results of the query in relation to the full set of response objects returned.
:param list[str] on: Performs the operation on the target name specified. Enter multiple target names in comma-separated format. For example, `targetName01,targetName02`.
:param list[str] sort: Returns the response objects in the order specified. Set `sort` to the name in the response by which to sort. Sorting can be performed on any of the names in the response, and the objects can be sorted in ascending or descending order. By default, the response objects are sorted in ascending order. To sort in descending order, append the minus sign (`-`) to the name. A single request can be sorted on multiple objects. For example, you can sort all volumes from largest to smallest volume size, and then sort volumes of the same size in ascending order by volume name. To sort on multiple names, list the names as comma-separated values.
:param list[str] source_names: Performs the operation on the source name specified. Enter multiple source names in comma-separated format. For example, `name01,name02`.
:param bool total_item_count: If set to `true`, the `total_item_count` matching the specified query parameters is calculated and returned in the response. If set to `false`, the `total_item_count` is `null` in the response. This may speed up queries where the `total_item_count` is large. If not specified, defaults to `false`.
:param bool async_req: Request runs in separate thread and method returns multiprocessing.pool.ApplyResult.
:param bool _return_http_data_only: Returns only data field.
:param bool _preload_content: Response is converted into objects.
:param int _request_timeout: Total request timeout in seconds.
It can also be a tuple of (connection time, read time) timeouts.
:return: RemoteProtectionGroupSnapshotGetResponse
If the method is called asynchronously,
returns the request thread.
"""
if names is not None:
if not isinstance(names, list):
names = [names]
if on is not None:
if not isinstance(on, list):
on = [on]
if sort is not None:
if not isinstance(sort, list):
sort = [sort]
if source_names is not None:
if not isinstance(source_names, list):
source_names = [source_names]
params = {k: v for k, v in six.iteritems(locals()) if v is not None}
# Convert the filter into a string
if params.get('filter'):
params['filter'] = str(params['filter'])
if params.get('sort'):
params['sort'] = [str(_x) for _x in params['sort']]
if 'limit' in params and params['limit'] < 1:
raise ValueError("Invalid value for parameter `limit` when calling `api28_remote_protection_group_snapshots_get`, must be a value greater than or equal to `1`")
if 'offset' in params and params['offset'] < 0:
raise ValueError("Invalid value for parameter `offset` when calling `api28_remote_protection_group_snapshots_get`, must be a value greater than or equal to `0`")
collection_formats = {}
path_params = {}
query_params = []
if 'destroyed' in params:
query_params.append(('destroyed', params['destroyed']))
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'offset' in params:
query_params.append(('offset', params['offset']))
if 'on' in params:
query_params.append(('on', params['on']))
collection_formats['on'] = 'csv'
if 'sort' in params:
query_params.append(('sort', params['sort']))
collection_formats['sort'] = 'csv'
if 'source_names' in params:
query_params.append(('source_names', params['source_names']))
collection_formats['source_names'] = 'csv'
if 'total_item_count' in params:
query_params.append(('total_item_count', params['total_item_count']))
header_params = {}
if 'authorization' in params:
header_params['Authorization'] = params['authorization']
if 'x_request_id' in params:
header_params['X-Request-ID'] = params['x_request_id']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type(
['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(
'/api/2.8/remote-protection-group-snapshots', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RemoteProtectionGroupSnapshotGetResponse',
auth_settings=auth_settings,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
collection_formats=collection_formats,
)
def api28_remote_protection_group_snapshots_patch_with_http_info(
self,
remote_protection_group_snapshot=None, # type: models.DestroyedPatchPost
authorization=None, # type: str
x_request_id=None, # type: str
names=None, # type: List[str]
on=None, # type: str
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.RemoteProtectionGroupSnapshotResponse
"""Modify a remote protection group snapshot
Modifies a remote protection group snapshot, removing it from the offload target and destroying the snapshot. The `on` parameter represents the name of the offload target. The `ids` or `names` parameter and the `on` parameter are required and must be used together.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api28_remote_protection_group_snapshots_patch_with_http_info(remote_protection_group_snapshot, async_req=True)
>>> result = thread.get()
:param DestroyedPatchPost remote_protection_group_snapshot: (required)
:param str authorization: Access token (in JWT format) required to use any API endpoint (except `/oauth2`, `/login`, and `/logout`)
:param str x_request_id: Supplied by client during request or generated by server.
:param list[str] names: Performs the operation on the unique name specified. Enter multiple names in comma-separated format. For example, `name01,name02`.
:param str on: Performs the operation on the target name specified. For example, `targetName01`.
:param bool async_req: Request runs in separate thread and method returns multiprocessing.pool.ApplyResult.
:param bool _return_http_data_only: Returns only data field.
:param bool _preload_content: Response is converted into objects.
:param int _request_timeout: Total request timeout in seconds.
It can also be a tuple of (connection time, read time) timeouts.
:return: RemoteProtectionGroupSnapshotResponse
If the method is called asynchronously,
returns the request thread.
"""
if names is not None:
if not isinstance(names, list):
names = [names]
params = {k: v for k, v in six.iteritems(locals()) if v is not None}
# Convert the filter into a string
if params.get('filter'):
params['filter'] = str(params['filter'])
if params.get('sort'):
params['sort'] = [str(_x) for _x in params['sort']]
# verify the required parameter 'remote_protection_group_snapshot' is set
if remote_protection_group_snapshot is None:
raise TypeError("Missing the required parameter `remote_protection_group_snapshot` when calling `api28_remote_protection_group_snapshots_patch`")
collection_formats = {}
path_params = {}
query_params = []
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'on' in params:
query_params.append(('on', params['on']))
header_params = {}
if 'authorization' in params:
header_params['Authorization'] = params['authorization']
if 'x_request_id' in params:
header_params['X-Request-ID'] = params['x_request_id']
form_params = []
local_var_files = {}
body_params = None
if 'remote_protection_group_snapshot' in params:
body_params = params['remote_protection_group_snapshot']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type(
['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(
'/api/2.8/remote-protection-group-snapshots', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RemoteProtectionGroupSnapshotResponse',
auth_settings=auth_settings,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
collection_formats=collection_formats,
)
def api28_remote_protection_group_snapshots_post_with_http_info(
self,
authorization=None, # type: str
x_request_id=None, # type: str
apply_retention=None, # type: bool
convert_source_to_baseline=None, # type: bool
for_replication=None, # type: bool
names=None, # type: List[str]
replicate=None, # type: bool
replicate_now=None, # type: bool
source_names=None, # type: List[str]
on=None, # type: str
remote_protection_group_snapshot=None, # type: models.RemoteProtectionGroupSnapshotPost
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.RemoteProtectionGroupSnapshotResponse
"""Create remote protection group snapshot
Creates remote protection group snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api28_remote_protection_group_snapshots_post_with_http_info(async_req=True)
>>> result = thread.get()
:param str authorization: Access token (in JWT format) required to use any API endpoint (except `/oauth2`, `/login`, and `/logout`)
:param str x_request_id: Supplied by client during request or generated by server.
:param bool apply_retention: If `true`, applies the local and remote retention policy to the snapshots.
:param bool convert_source_to_baseline: Set to `true` to have the snapshot be eradicated when it is no longer baseline on source.
:param bool for_replication: If `true`, destroys and eradicates the snapshot after 1 hour.
:param list[str] names: Performs the operation on the unique name specified. Enter multiple names in comma-separated format. For example, `name01,name02`.
:param bool replicate: If set to `true`, queues up and begins replicating to each allowed target after all earlier replication sessions for the same protection group have been completed to that target. The `replicate` and `replicate_now` parameters cannot be used together.
:param bool replicate_now: If set to `true`, replicates the snapshots to each allowed target. The `replicate` and `replicate_now` parameters cannot be used together.
:param list[str] source_names: Performs the operation on the source name specified. Enter multiple source names in comma-separated format. For example, `name01,name02`.
:param str on: Performs the operation on the target name specified. For example, `targetName01`.
:param RemoteProtectionGroupSnapshotPost remote_protection_group_snapshot:
:param bool async_req: Request runs in separate thread and method returns multiprocessing.pool.ApplyResult.
:param bool _return_http_data_only: Returns only data field.
:param bool _preload_content: Response is converted into objects.
:param int _request_timeout: Total request timeout in seconds.
It can also be a tuple of (connection time, read time) timeouts.
:return: RemoteProtectionGroupSnapshotResponse
If the method is called asynchronously,
returns the request thread.
"""
if names is not None:
if not isinstance(names, list):
names = [names]
if source_names is not None:
if not isinstance(source_names, list):
source_names = [source_names]
params = {k: v for k, v in six.iteritems(locals()) if v is not None}
# Convert the filter into a string
if params.get('filter'):
params['filter'] = str(params['filter'])
if params.get('sort'):
params['sort'] = [str(_x) for _x in params['sort']]
collection_formats = {}
path_params = {}
query_params = []
if 'apply_retention' in params:
query_params.append(('apply_retention', params['apply_retention']))
if 'convert_source_to_baseline' in params:
query_params.append(('convert_source_to_baseline', params['convert_source_to_baseline']))
if 'for_replication' in params:
query_params.append(('for_replication', params['for_replication']))
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'replicate' in params:
query_params.append(('replicate', params['replicate']))
if 'replicate_now' in params:
query_params.append(('replicate_now', params['replicate_now']))
if 'source_names' in params:
query_params.append(('source_names', params['source_names']))
collection_formats['source_names'] = 'csv'
if 'on' in params:
query_params.append(('on', params['on']))
header_params = {}
if 'authorization' in params:
header_params['Authorization'] = params['authorization']
if 'x_request_id' in params:
header_params['X-Request-ID'] = params['x_request_id']
form_params = []
local_var_files = {}
body_params = None
if 'remote_protection_group_snapshot' in params:
body_params = params['remote_protection_group_snapshot']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type(
['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(
'/api/2.8/remote-protection-group-snapshots', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RemoteProtectionGroupSnapshotResponse',
auth_settings=auth_settings,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
collection_formats=collection_formats,
)
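# Usage sketch (illustrative only): one way the POST helper above might be
# called to create remote protection group snapshots and replicate them right
# away. The client construction and the names "pg01" / "targetName01" are
# assumptions for the example, not values defined in this module.
#
#     client = flasharray_client  # hypothetical, already-authenticated client exposing this API
#     response = client.api28_remote_protection_group_snapshots_post_with_http_info(
#         source_names=["pg01"],      # protection group(s) to snapshot
#         on="targetName01",          # replication target
#         replicate_now=True,         # replicate immediately (mutually exclusive with `replicate`)
#         apply_retention=True,       # honor local and remote retention policies
#     )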
def api28_remote_protection_group_snapshots_transfer_get_with_http_info(
self,
authorization=None, # type: str
x_request_id=None, # type: str
destroyed=None, # type: bool
filter=None, # type: str
limit=None, # type: int
offset=None, # type: int
on=None, # type: List[str]
sort=None, # type: List[str]
source_names=None, # type: List[str]
total_item_count=None, # type: bool
total_only=None, # type: bool
names=None, # type: List[str]
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.RemoteProtectionGroupSnapshotTransferGetResponse
"""List remote protection groups with transfer statistics
Returns a list of remote protection groups and their transfer statistics.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api28_remote_protection_group_snapshots_transfer_get_with_http_info(async_req=True)
>>> result = thread.get()
:param str authorization: Access token (in JWT format) required to use any API endpoint (except `/oauth2`, `/login`, and `/logout`)
:param str x_request_id: Supplied by client during request or generated by server.
:param bool destroyed: If set to `true`, lists only destroyed objects that are in the eradication pending state. If set to `false`, lists only objects that are not destroyed. For destroyed objects, the time remaining is displayed in milliseconds.
:param str filter: Narrows down the results to only the response objects that satisfy the filter criteria.
:param int limit: Limits the size of the response to the specified number of objects on each page. To return the total number of resources, set `limit=0`. The total number of resources is returned as a `total_item_count` value. If the page size requested is larger than the system maximum limit, the server returns the maximum limit, disregarding the requested page size.
:param int offset: The starting position based on the results of the query in relation to the full set of response objects returned.
:param list[str] on: Performs the operation on the target name specified. Enter multiple target names in comma-separated format. For example, `targetName01,targetName02`.
:param list[str] sort: Returns the response objects in the order specified. Set `sort` to the name in the response by which to sort. Sorting can be performed on any of the names in the response, and the objects can be sorted in ascending or descending order. By default, the response objects are sorted in ascending order. To sort in descending order, append the minus sign (`-`) to the name. A single request can be sorted on multiple objects. For example, you can sort all volumes from largest to smallest volume size, and then sort volumes of the same size in ascending order by volume name. To sort on multiple names, list the names as comma-separated values.
:param list[str] source_names: Performs the operation on the source name specified. Enter multiple source names in comma-separated format. For example, `name01,name02`.
:param bool total_item_count: If set to `true`, the `total_item_count` matching the specified query parameters is calculated and returned in the response. If set to `false`, the `total_item_count` is `null` in the response. This may speed up queries where the `total_item_count` is large. If not specified, defaults to `false`.
:param bool total_only: If set to `true`, returns the aggregate value of all items after filtering. Where it makes more sense, the average value is displayed instead. The values are displayed for each name where meaningful. If `total_only=true`, the `items` list will be empty.
:param list[str] names: Performs the operation on the unique name specified. Enter multiple names in comma-separated format. For example, `name01,name02`.
:param bool async_req: Request runs in separate thread and method returns multiprocessing.pool.ApplyResult.
:param bool _return_http_data_only: Returns only data field.
:param bool _preload_content: Response is converted into objects.
:param int _request_timeout: Total request timeout in seconds.
It can also be a tuple of (connection time, read time) timeouts.
:return: RemoteProtectionGroupSnapshotTransferGetResponse
If the method is called asynchronously,
returns the request thread.
"""
if on is not None:
if not isinstance(on, list):
on = [on]
if sort is not None:
if not isinstance(sort, list):
sort = [sort]
if source_names is not None:
if not isinstance(source_names, list):
source_names = [source_names]
if names is not None:
if not isinstance(names, list):
names = [names]
params = {k: v for k, v in six.iteritems(locals()) if v is not None}
# Convert the filter into a string
if params.get('filter'):
params['filter'] = str(params['filter'])
if params.get('sort'):
params['sort'] = [str(_x) for _x in params['sort']]
if 'limit' in params and params['limit'] < 1:
raise ValueError("Invalid value for parameter `limit` when calling `api28_remote_protection_group_snapshots_transfer_get`, must be a value greater than or equal to `1`")
if 'offset' in params and params['offset'] < 0:
raise ValueError("Invalid value for parameter `offset` when calling `api28_remote_protection_group_snapshots_transfer_get`, must be a value greater than or equal to `0`")
collection_formats = {}
path_params = {}
query_params = []
if 'destroyed' in params:
query_params.append(('destroyed', params['destroyed']))
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'offset' in params:
query_params.append(('offset', params['offset']))
if 'on' in params:
query_params.append(('on', params['on']))
collection_formats['on'] = 'csv'
if 'sort' in params:
query_params.append(('sort', params['sort']))
collection_formats['sort'] = 'csv'
if 'source_names' in params:
query_params.append(('source_names', params['source_names']))
collection_formats['source_names'] = 'csv'
if 'total_item_count' in params:
query_params.append(('total_item_count', params['total_item_count']))
if 'total_only' in params:
query_params.append(('total_only', params['total_only']))
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
header_params = {}
if 'authorization' in params:
header_params['Authorization'] = params['authorization']
if 'x_request_id' in params:
header_params['X-Request-ID'] = params['x_request_id']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type(
['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(
'/api/2.8/remote-protection-group-snapshots/transfer', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RemoteProtectionGroupSnapshotTransferGetResponse',
auth_settings=auth_settings,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
collection_formats=collection_formats,
)
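# Usage sketch (illustrative only): querying transfer statistics for remote
# protection group snapshots with paging and sorting. The sort key "started-"
# is a hypothetical field name used only to show the descending-sort
# convention described in the docstring; it is not taken from this module.
#
#     response = client.api28_remote_protection_group_snapshots_transfer_get_with_http_info(
#         source_names=["pg01"],
#         sort=["started-"],       # trailing '-' requests descending order
#         limit=10,
#         total_item_count=True,
#     )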
# === sendbird_platform_sdk/api/user_api.py (from sendbird_platform_sdk-0.0.16) ===
import re # noqa: F401
import sys # noqa: F401
from sendbird_platform_sdk.api_client import ApiClient, Endpoint as _Endpoint
from sendbird_platform_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from sendbird_platform_sdk.model.add_registration_or_device_token_data import AddRegistrationOrDeviceTokenData
from sendbird_platform_sdk.model.add_registration_or_device_token_response import AddRegistrationOrDeviceTokenResponse
from sendbird_platform_sdk.model.choose_push_notification_content_template_response import ChoosePushNotificationContentTemplateResponse
from sendbird_platform_sdk.model.create_user_data import CreateUserData
from sendbird_platform_sdk.model.create_user_token_data import CreateUserTokenData
from sendbird_platform_sdk.model.create_user_token_response import CreateUserTokenResponse
from sendbird_platform_sdk.model.leave_my_group_channels_data import LeaveMyGroupChannelsData
from sendbird_platform_sdk.model.list_my_group_channels_response import ListMyGroupChannelsResponse
from sendbird_platform_sdk.model.list_registration_or_device_tokens_response import ListRegistrationOrDeviceTokensResponse
from sendbird_platform_sdk.model.list_users_response import ListUsersResponse
from sendbird_platform_sdk.model.mark_all_messages_as_read_data import MarkAllMessagesAsReadData
from sendbird_platform_sdk.model.register_as_operator_to_channels_with_custom_channel_types_data import RegisterAsOperatorToChannelsWithCustomChannelTypesData
from sendbird_platform_sdk.model.remove_registration_or_device_token_by_token_response import RemoveRegistrationOrDeviceTokenByTokenResponse
from sendbird_platform_sdk.model.remove_registration_or_device_token_from_owner_by_token_response import RemoveRegistrationOrDeviceTokenFromOwnerByTokenResponse
from sendbird_platform_sdk.model.remove_registration_or_device_token_response import RemoveRegistrationOrDeviceTokenResponse
from sendbird_platform_sdk.model.reset_push_preferences_response import ResetPushPreferencesResponse
from sendbird_platform_sdk.model.send_bird_user import SendBirdUser
from sendbird_platform_sdk.model.update_channel_invitation_preference_data import UpdateChannelInvitationPreferenceData
from sendbird_platform_sdk.model.update_channel_invitation_preference_response import UpdateChannelInvitationPreferenceResponse
from sendbird_platform_sdk.model.update_count_preference_of_channel_by_url_data import UpdateCountPreferenceOfChannelByUrlData
from sendbird_platform_sdk.model.update_count_preference_of_channel_by_url_response import UpdateCountPreferenceOfChannelByUrlResponse
from sendbird_platform_sdk.model.update_push_preferences_data import UpdatePushPreferencesData
from sendbird_platform_sdk.model.update_push_preferences_for_channel_by_url_data import UpdatePushPreferencesForChannelByUrlData
from sendbird_platform_sdk.model.update_push_preferences_for_channel_by_url_response import UpdatePushPreferencesForChannelByUrlResponse
from sendbird_platform_sdk.model.update_push_preferences_response import UpdatePushPreferencesResponse
from sendbird_platform_sdk.model.update_user_by_id_data import UpdateUserByIdData
from sendbird_platform_sdk.model.view_channel_invitation_preference_response import ViewChannelInvitationPreferenceResponse
from sendbird_platform_sdk.model.view_count_preference_of_channel_by_url_response import ViewCountPreferenceOfChannelByUrlResponse
from sendbird_platform_sdk.model.view_number_of_channels_by_join_status_response import ViewNumberOfChannelsByJoinStatusResponse
from sendbird_platform_sdk.model.view_number_of_channels_with_unread_messages_response import ViewNumberOfChannelsWithUnreadMessagesResponse
from sendbird_platform_sdk.model.view_number_of_unread_items_response import ViewNumberOfUnreadItemsResponse
from sendbird_platform_sdk.model.view_number_of_unread_messages_response import ViewNumberOfUnreadMessagesResponse
from sendbird_platform_sdk.model.view_push_preferences_for_channel_by_url_response import ViewPushPreferencesForChannelByUrlResponse
from sendbird_platform_sdk.model.view_push_preferences_response import ViewPushPreferencesResponse
from sendbird_platform_sdk.model.view_who_owns_registration_or_device_token_by_token_response import ViewWhoOwnsRegistrationOrDeviceTokenByTokenResponse
class UserApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.add_registration_or_device_token_endpoint = _Endpoint(
settings={
'response_type': (AddRegistrationOrDeviceTokenResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push/{token_type}',
'operation_id': 'add_registration_or_device_token',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'token_type',
'add_registration_or_device_token_data',
],
'required': [
'api_token',
'user_id',
'token_type',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'token_type':
(str,),
'add_registration_or_device_token_data':
(AddRegistrationOrDeviceTokenData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'token_type': 'token_type',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'token_type': 'path',
'add_registration_or_device_token_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.choose_push_notification_content_template_endpoint = _Endpoint(
settings={
'response_type': (ChoosePushNotificationContentTemplateResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push/template',
'operation_id': 'choose_push_notification_content_template',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'body',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'body':
({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.create_user_endpoint = _Endpoint(
settings={
'response_type': (SendBirdUser,),
'auth': [],
'endpoint_path': '/v3/users',
'operation_id': 'create_user',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'api_token',
'create_user_data',
],
'required': [
'api_token',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'create_user_data':
(CreateUserData,),
},
'attribute_map': {
'api_token': 'Api-Token',
},
'location_map': {
'api_token': 'header',
'create_user_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.create_user_token_endpoint = _Endpoint(
settings={
'response_type': (CreateUserTokenResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/token',
'operation_id': 'create_user_token',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'create_user_token_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'create_user_token_data':
(CreateUserTokenData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'create_user_token_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.delete_user_by_id_endpoint = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [],
'endpoint_path': '/v3/users/{user_id}',
'operation_id': 'delete_user_by_id',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.leave_my_group_channels_endpoint = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/my_group_channels/leave',
'operation_id': 'leave_my_group_channels',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'leave_my_group_channels_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'leave_my_group_channels_data':
(LeaveMyGroupChannelsData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'leave_my_group_channels_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.list_my_group_channels_endpoint = _Endpoint(
settings={
'response_type': (ListMyGroupChannelsResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/my_group_channels',
'operation_id': 'list_my_group_channels',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'token',
'limit',
'distinct_mode',
'public_mode',
'super_mode',
'hidden_mode',
'member_state_filter',
'unread_filter',
'created_after',
'created_before',
'show_empty',
'show_frozen',
'show_member',
'show_delivery_receipt',
'show_read_receipt',
'order',
'metadata_order_key',
'custom_types',
'custom_type_startswith',
'channel_urls',
'name',
'name_contains',
'name_startswith',
'members_exactly_in',
'members_include_in',
'query_type',
'members_nickname',
'members_nickname_contains',
'search_query',
'search_fields',
'metadata_key',
'metadata_values',
'metadata_value_startswith',
'metacounter_key',
'metacounter_values',
'metacounter_value_gt',
'metacounter_value_gte',
'metacounter_value_lt',
'metacounter_value_lte',
'custom_type',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'token':
(str,),
'limit':
(int,),
'distinct_mode':
(str,),
'public_mode':
(str,),
'super_mode':
(str,),
'hidden_mode':
(str,),
'member_state_filter':
(str,),
'unread_filter':
(str,),
'created_after':
(int,),
'created_before':
(int,),
'show_empty':
(bool,),
'show_frozen':
(bool,),
'show_member':
(bool,),
'show_delivery_receipt':
(bool,),
'show_read_receipt':
(bool,),
'order':
(str,),
'metadata_order_key':
(str,),
'custom_types':
(str,),
'custom_type_startswith':
(str,),
'channel_urls':
(str,),
'name':
(str,),
'name_contains':
(str,),
'name_startswith':
(str,),
'members_exactly_in':
(str,),
'members_include_in':
(str,),
'query_type':
(str,),
'members_nickname':
(str,),
'members_nickname_contains':
(str,),
'search_query':
(str,),
'search_fields':
(str,),
'metadata_key':
(str,),
'metadata_values':
(str,),
'metadata_value_startswith':
(str,),
'metacounter_key':
(str,),
'metacounter_values':
(str,),
'metacounter_value_gt':
(str,),
'metacounter_value_gte':
(str,),
'metacounter_value_lt':
(str,),
'metacounter_value_lte':
(str,),
'custom_type':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'token': 'token',
'limit': 'limit',
'distinct_mode': 'distinct_mode',
'public_mode': 'public_mode',
'super_mode': 'super_mode',
'hidden_mode': 'hidden_mode',
'member_state_filter': 'member_state_filter',
'unread_filter': 'unread_filter',
'created_after': 'created_after',
'created_before': 'created_before',
'show_empty': 'show_empty',
'show_frozen': 'show_frozen',
'show_member': 'show_member',
'show_delivery_receipt': 'show_delivery_receipt',
'show_read_receipt': 'show_read_receipt',
'order': 'order',
'metadata_order_key': 'metadata_order_key',
'custom_types': 'custom_types',
'custom_type_startswith': 'custom_type_startswith',
'channel_urls': 'channel_urls',
'name': 'name',
'name_contains': 'name_contains',
'name_startswith': 'name_startswith',
'members_exactly_in': 'members_exactly_in',
'members_include_in': 'members_include_in',
'query_type': 'query_type',
'members_nickname': 'members_nickname',
'members_nickname_contains': 'members_nickname_contains',
'search_query': 'search_query',
'search_fields': 'search_fields',
'metadata_key': 'metadata_key',
'metadata_values': 'metadata_values',
'metadata_value_startswith': 'metadata_value_startswith',
'metacounter_key': 'metacounter_key',
'metacounter_values': 'metacounter_values',
'metacounter_value_gt': 'metacounter_value_gt',
'metacounter_value_gte': 'metacounter_value_gte',
'metacounter_value_lt': 'metacounter_value_lt',
'metacounter_value_lte': 'metacounter_value_lte',
'custom_type': 'custom_type',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'token': 'query',
'limit': 'query',
'distinct_mode': 'query',
'public_mode': 'query',
'super_mode': 'query',
'hidden_mode': 'query',
'member_state_filter': 'query',
'unread_filter': 'query',
'created_after': 'query',
'created_before': 'query',
'show_empty': 'query',
'show_frozen': 'query',
'show_member': 'query',
'show_delivery_receipt': 'query',
'show_read_receipt': 'query',
'order': 'query',
'metadata_order_key': 'query',
'custom_types': 'query',
'custom_type_startswith': 'query',
'channel_urls': 'query',
'name': 'query',
'name_contains': 'query',
'name_startswith': 'query',
'members_exactly_in': 'query',
'members_include_in': 'query',
'query_type': 'query',
'members_nickname': 'query',
'members_nickname_contains': 'query',
'search_query': 'query',
'search_fields': 'query',
'metadata_key': 'query',
'metadata_values': 'query',
'metadata_value_startswith': 'query',
'metacounter_key': 'query',
'metacounter_values': 'query',
'metacounter_value_gt': 'query',
'metacounter_value_gte': 'query',
'metacounter_value_lt': 'query',
'metacounter_value_lte': 'query',
'custom_type': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_registration_or_device_tokens_endpoint = _Endpoint(
settings={
'response_type': (ListRegistrationOrDeviceTokensResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push/{token_type}',
'operation_id': 'list_registration_or_device_tokens',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'token_type',
],
'required': [
'api_token',
'user_id',
'token_type',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'token_type':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'token_type': 'token_type',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'token_type': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_users_endpoint = _Endpoint(
settings={
'response_type': (ListUsersResponse,),
'auth': [],
'endpoint_path': '/v3/users',
'operation_id': 'list_users',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'token',
'limit',
'active_mode',
'show_bot',
'user_ids',
'nickname',
'nickname_startswith',
'metadatakey',
'metadatavalues_in',
],
'required': [
'api_token',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'token':
(str,),
'limit':
(int,),
'active_mode':
(str,),
'show_bot':
(bool,),
'user_ids':
(str,),
'nickname':
(str,),
'nickname_startswith':
(str,),
'metadatakey':
(str,),
'metadatavalues_in':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'token': 'token',
'limit': 'limit',
'active_mode': 'active_mode',
'show_bot': 'show_bot',
'user_ids': 'user_ids',
'nickname': 'nickname',
'nickname_startswith': 'nickname_startswith',
'metadatakey': 'metadatakey',
'metadatavalues_in': 'metadatavalues_in',
},
'location_map': {
'api_token': 'header',
'token': 'query',
'limit': 'query',
'active_mode': 'query',
'show_bot': 'query',
'user_ids': 'query',
'nickname': 'query',
'nickname_startswith': 'query',
'metadatakey': 'query',
'metadatavalues_in': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.mark_all_messages_as_read_endpoint = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/mark_as_read_all',
'operation_id': 'mark_all_messages_as_read',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'mark_all_messages_as_read_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'mark_all_messages_as_read_data':
(MarkAllMessagesAsReadData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'mark_all_messages_as_read_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.register_as_operator_to_channels_with_custom_channel_types_endpoint = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/operating_channel_custom_types',
'operation_id': 'register_as_operator_to_channels_with_custom_channel_types',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'register_as_operator_to_channels_with_custom_channel_types_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'register_as_operator_to_channels_with_custom_channel_types_data':
(RegisterAsOperatorToChannelsWithCustomChannelTypesData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'register_as_operator_to_channels_with_custom_channel_types_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.remove_registration_or_device_token_endpoint = _Endpoint(
settings={
'response_type': (RemoveRegistrationOrDeviceTokenResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push',
'operation_id': 'remove_registration_or_device_token',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.remove_registration_or_device_token_by_token_endpoint = _Endpoint(
settings={
'response_type': (RemoveRegistrationOrDeviceTokenByTokenResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push/{token_type}/{token}',
'operation_id': 'remove_registration_or_device_token_by_token',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'token_type',
'token',
],
'required': [
'api_token',
'user_id',
'token_type',
'token',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'token_type':
(str,),
'token':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'token_type': 'token_type',
'token': 'token',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'token_type': 'path',
'token': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.remove_registration_or_device_token_from_owner_by_token_endpoint = _Endpoint(
settings={
'response_type': (RemoveRegistrationOrDeviceTokenFromOwnerByTokenResponse,),
'auth': [],
'endpoint_path': '/v3/push/device_tokens/{token_type}/{token}',
'operation_id': 'remove_registration_or_device_token_from_owner_by_token',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'api_token',
'token_type',
'token',
],
'required': [
'api_token',
'token_type',
'token',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'token_type':
(str,),
'token':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'token_type': 'token_type',
'token': 'token',
},
'location_map': {
'api_token': 'header',
'token_type': 'path',
'token': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.reset_push_preferences_endpoint = _Endpoint(
settings={
'response_type': (ResetPushPreferencesResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push_preference',
'operation_id': 'reset_push_preferences',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.update_channel_invitation_preference_endpoint = _Endpoint(
settings={
'response_type': (UpdateChannelInvitationPreferenceResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/channel_invitation_preference',
'operation_id': 'update_channel_invitation_preference',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'update_channel_invitation_preference_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'update_channel_invitation_preference_data':
(UpdateChannelInvitationPreferenceData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'update_channel_invitation_preference_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_count_preference_of_channel_by_url_endpoint = _Endpoint(
settings={
'response_type': (UpdateCountPreferenceOfChannelByUrlResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/count_preference/{channel_url}',
'operation_id': 'update_count_preference_of_channel_by_url',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'channel_url',
'update_count_preference_of_channel_by_url_data',
],
'required': [
'api_token',
'user_id',
'channel_url',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'channel_url':
(str,),
'update_count_preference_of_channel_by_url_data':
(UpdateCountPreferenceOfChannelByUrlData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'channel_url': 'channel_url',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'channel_url': 'path',
'update_count_preference_of_channel_by_url_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_push_preferences_endpoint = _Endpoint(
settings={
'response_type': (UpdatePushPreferencesResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push_preference',
'operation_id': 'update_push_preferences',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'update_push_preferences_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'update_push_preferences_data':
(UpdatePushPreferencesData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'update_push_preferences_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_push_preferences_for_channel_by_url_endpoint = _Endpoint(
settings={
'response_type': (UpdatePushPreferencesForChannelByUrlResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push_preference/{channel_url}',
'operation_id': 'update_push_preferences_for_channel_by_url',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'channel_url',
'update_push_preferences_for_channel_by_url_data',
],
'required': [
'api_token',
'user_id',
'channel_url',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'channel_url':
(str,),
'update_push_preferences_for_channel_by_url_data':
(UpdatePushPreferencesForChannelByUrlData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'channel_url': 'channel_url',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'channel_url': 'path',
'update_push_preferences_for_channel_by_url_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_user_by_id_endpoint = _Endpoint(
settings={
'response_type': (SendBirdUser,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}',
'operation_id': 'update_user_by_id',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'update_user_by_id_data',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'update_user_by_id_data':
(UpdateUserByIdData,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'update_user_by_id_data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.view_channel_invitation_preference_endpoint = _Endpoint(
settings={
'response_type': (ViewChannelInvitationPreferenceResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/channel_invitation_preference',
'operation_id': 'view_channel_invitation_preference',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_count_preference_of_channel_by_url_endpoint = _Endpoint(
settings={
'response_type': (ViewCountPreferenceOfChannelByUrlResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/count_preference/{channel_url}',
'operation_id': 'view_count_preference_of_channel_by_url',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'channel_url',
],
'required': [
'api_token',
'user_id',
'channel_url',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'channel_url':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'channel_url': 'channel_url',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'channel_url': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_number_of_channels_by_join_status_endpoint = _Endpoint(
settings={
'response_type': (ViewNumberOfChannelsByJoinStatusResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/group_channel_count',
'operation_id': 'view_number_of_channels_by_join_status',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'state',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'state':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'state': 'state',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'state': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_number_of_channels_with_unread_messages_endpoint = _Endpoint(
settings={
'response_type': (ViewNumberOfChannelsWithUnreadMessagesResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/unread_channel_count',
'operation_id': 'view_number_of_channels_with_unread_messages',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'custom_types',
'super_mode',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'custom_types':
([str],),
'super_mode':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'custom_types': 'custom_types',
'super_mode': 'super_mode',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'custom_types': 'query',
'super_mode': 'query',
},
'collection_format_map': {
'custom_types': 'multi',
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_number_of_unread_items_endpoint = _Endpoint(
settings={
'response_type': (ViewNumberOfUnreadItemsResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/unread_item_count',
'operation_id': 'view_number_of_unread_items',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'custom_type',
'item_keys',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'custom_type':
(str,),
'item_keys':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'custom_type': 'custom_type',
'item_keys': 'item_keys',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'custom_type': 'query',
'item_keys': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_number_of_unread_messages_endpoint = _Endpoint(
settings={
'response_type': (ViewNumberOfUnreadMessagesResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/unread_message_count',
'operation_id': 'view_number_of_unread_messages',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'custom_types',
'super_mode',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'custom_types':
(str,),
'super_mode':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'custom_types': 'custom_types',
'super_mode': 'super_mode',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'custom_types': 'query',
'super_mode': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_push_preferences_endpoint = _Endpoint(
settings={
'response_type': (ViewPushPreferencesResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push_preference',
'operation_id': 'view_push_preferences',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_push_preferences_for_channel_by_url_endpoint = _Endpoint(
settings={
'response_type': (ViewPushPreferencesForChannelByUrlResponse,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}/push_preference/{channel_url}',
'operation_id': 'view_push_preferences_for_channel_by_url',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'channel_url',
],
'required': [
'api_token',
'user_id',
'channel_url',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'channel_url':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'channel_url': 'channel_url',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'channel_url': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_user_by_id_endpoint = _Endpoint(
settings={
'response_type': (SendBirdUser,),
'auth': [],
'endpoint_path': '/v3/users/{user_id}',
'operation_id': 'view_user_by_id',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'user_id',
'include_unread_count',
'custom_types',
'super_mode',
],
'required': [
'api_token',
'user_id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'user_id':
(str,),
'include_unread_count':
(bool,),
'custom_types':
(str,),
'super_mode':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'user_id': 'user_id',
'include_unread_count': 'include_unread_count',
'custom_types': 'custom_types',
'super_mode': 'super_mode',
},
'location_map': {
'api_token': 'header',
'user_id': 'path',
'include_unread_count': 'query',
'custom_types': 'query',
'super_mode': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.view_who_owns_registration_or_device_token_by_token_endpoint = _Endpoint(
settings={
'response_type': (ViewWhoOwnsRegistrationOrDeviceTokenByTokenResponse,),
'auth': [],
'endpoint_path': '/v3/push/device_tokens/{token_type}/{token}',
'operation_id': 'view_who_owns_registration_or_device_token_by_token',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'api_token',
'token_type',
'token',
],
'required': [
'api_token',
'token_type',
'token',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'api_token':
(str,),
'token_type':
(str,),
'token':
(str,),
},
'attribute_map': {
'api_token': 'Api-Token',
'token_type': 'token_type',
'token': 'token',
},
'location_map': {
'api_token': 'header',
'token_type': 'path',
'token': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
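# Usage sketch (illustrative only): constructing this API class. The
# Configuration / ApiClient pattern below follows the usual layout of
# OpenAPI-generated Python clients and is assumed rather than verified against
# this SDK version; the host value is a placeholder.
#
#     import sendbird_platform_sdk
#     from sendbird_platform_sdk.api import user_api
#
#     configuration = sendbird_platform_sdk.Configuration(
#         host="https://api-APPLICATION_ID.sendbird.com"  # placeholder host
#     )
#     with sendbird_platform_sdk.ApiClient(configuration) as api_client:
#         api_instance = user_api.UserApi(api_client)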
def add_registration_or_device_token(
self,
api_token,
user_id,
token_type,
**kwargs
):
"""Add a registration or device token # noqa: E501
## Add a registration or device token > __Note__: A user can have up to 20 FCM registration tokens, 20 HMS device tokens, and 20 APNs device tokens each. The oldest token will be deleted before a new token is added for a user who already has 20 registration or device tokens. Only the 20 newest tokens will be maintained for users who already have more than 20 of each token type. To send notification requests to push notification services on behalf of your server, add a specific user's FCM registration token, HMS device token, or APNs device token to the Sendbird server. Depending on which push service you are using, you can pass one of three values in `token_type`: `gcm`, `huawei`, or `apns`. An FCM registration token and an APNs device token allow identification of each client app instance on each device, and are generated and registered by Android and iOS apps through the corresponding SDKs. Use this method if you need to register a token via your own server. > __Note__: For more information on the registration token and device token, visit Google's [FCM](https://firebase.google.com/docs/auth/admin/verify-id-tokens) page, Huawei's [Push kit](https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/service-introduction-0000001050040060) page, and Apple's [APNs](https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html) page. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-add-a-registration-or-device-token ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_registration_or_device_token(api_token, user_id, token_type, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
token_type (str):
Keyword Args:
add_registration_or_device_token_data (AddRegistrationOrDeviceTokenData): [optional]
_return_http_data_only (bool): response data only, without the HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
AddRegistrationOrDeviceTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['token_type'] = \
token_type
return self.add_registration_or_device_token_endpoint.call_with_http_info(**kwargs)
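# Usage sketch (illustrative only): registering an FCM token for a user with
# the wrapper above. The api token, user id, and token value are placeholders;
# the fields accepted by AddRegistrationOrDeviceTokenData depend on that model
# definition and are not spelled out here.
#
#     data = AddRegistrationOrDeviceTokenData(...)  # fill in per the model definition
#     result = api_instance.add_registration_or_device_token(
#         "YOUR_API_TOKEN",
#         "user_1",
#         "gcm",  # one of: gcm, huawei, apns
#         add_registration_or_device_token_data=data,
#     )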
def choose_push_notification_content_template(
self,
api_token,
user_id,
**kwargs
):
"""Choose a push notification content template # noqa: E501
## Choose a push notification content template Chooses a push notification content template of a user's own. The push notifications feature is only available for group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-choose-a-push-notification-content-template ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.choose_push_notification_content_template(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
body ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}): [optional]
_return_http_data_only (bool): response data only, without the HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ChoosePushNotificationContentTemplateResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.choose_push_notification_content_template_endpoint.call_with_http_info(**kwargs)
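# Usage sketch (illustrative only): choosing a push notification content
# template for a user. The body payload {"name": "default"} is an assumption
# about what the endpoint expects, not something defined in this module.
#
#     result = api_instance.choose_push_notification_content_template(
#         "YOUR_API_TOKEN",
#         "user_1",
#         body={"name": "default"},
#     )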
def create_user(
self,
api_token,
**kwargs
):
"""Create a user # noqa: E501
## Create a user Creates a new user in the application. A user is identified by its unique user ID, and can additionally have a changeable nickname, profile image, and so on. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-create-a-user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_user(api_token, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
Keyword Args:
create_user_data (CreateUserData): [optional]
_return_http_data_only (bool): response data only, without the HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
SendBirdUser
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
return self.create_user_endpoint.call_with_http_info(**kwargs)
def create_user_token(
self,
api_token,
user_id,
**kwargs
):
"""Create user token # noqa: E501
## Create user token # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_user_token(api_token, user_id, async_req=True)
>>> result = thread.get()
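A sketch passing the documented optional body (CreateUserTokenData is the documented
body type; the import path and the expires_at field name are assumptions):
>>> from sendbird_platform_sdk.model.create_user_token_data import CreateUserTokenData  # import path assumed
>>> body = CreateUserTokenData(expires_at=0)  # field name assumed
>>> resp = api.create_user_token(api_token, user_id, create_user_token_data=body)
>>> print(resp.token)  # `token` attribute assumed from the session-token response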
Args:
api_token (str):
user_id (str):
Keyword Args:
create_user_token_data (CreateUserTokenData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
CreateUserTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.create_user_token_endpoint.call_with_http_info(**kwargs)
def delete_user_by_id(
self,
api_token,
user_id,
**kwargs
):
"""Delete a user # noqa: E501
## Delete a user Deletes a user. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-delete-a-user ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_user_by_id(api_token, user_id, async_req=True)
>>> result = thread.get()
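A sketch of handling a failed deletion (ApiException is the usual exception type in
openapi-generator clients; the import path is an assumption):
>>> from sendbird_platform_sdk.exceptions import ApiException  # import path assumed
>>> try:
...     api.delete_user_by_id(api_token, user_id)
... except ApiException as e:
...     print("delete failed: %s" % e)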
Args:
api_token (str):
user_id (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.delete_user_by_id_endpoint.call_with_http_info(**kwargs)
def leave_my_group_channels(
self,
api_token,
user_id,
**kwargs
):
"""Leave my group channels # noqa: E501
## Leave my group channels Makes a user leave all joined group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-leave-my-group-channels ---------------------------- `user_id` Type: string Description: Specifies the unique ID of the user to leave all joined group channels. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.leave_my_group_channels(api_token, user_id, async_req=True)
>>> result = thread.get()
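To restrict the action to channels of one custom type, pass the documented optional
body (LeaveMyGroupChannelsData is the documented type; the import path and the
custom_type field name are assumptions from the SendBird guide):
>>> from sendbird_platform_sdk.model.leave_my_group_channels_data import LeaveMyGroupChannelsData  # import path assumed
>>> body = LeaveMyGroupChannelsData(custom_type="support")  # field name assumed
>>> api.leave_my_group_channels(api_token, user_id, leave_my_group_channels_data=body)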
Args:
api_token (str):
user_id (str):
Keyword Args:
leave_my_group_channels_data (LeaveMyGroupChannelsData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.leave_my_group_channels_endpoint.call_with_http_info(**kwargs)
def list_my_group_channels(
self,
api_token,
user_id,
**kwargs
):
"""List my group channels # noqa: E501
## List my group channels Retrieves all group channels that the user has joined. You can create a request based on various query parameters. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-list-my-group-channels ---------------------------- `user_id` Type: string Description: Specifies the unique ID of the target user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_my_group_channels(api_token, user_id, async_req=True)
>>> result = thread.get()
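Results are paginated, so a typical loop combines the documented `limit` and `token`
keyword arguments (the `channels` and `next` attributes on the response are
assumptions based on SendBird's token-based pagination):
>>> page = api.list_my_group_channels(api_token, user_id, limit=100)
>>> while True:
...     for channel in page.channels:
...         print(channel.channel_url)
...     if not page.next:
...         break
...     page = api.list_my_group_channels(api_token, user_id, limit=100, token=page.next)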
Args:
api_token (str):
user_id (str):
Keyword Args:
token (str): [optional]
limit (int): [optional]
distinct_mode (str): [optional]
public_mode (str): [optional]
super_mode (str): [optional]
hidden_mode (str): [optional]
member_state_filter (str): [optional]
unread_filter (str): [optional]
created_after (int): [optional]
created_before (int): [optional]
show_empty (bool): [optional]
show_frozen (bool): [optional]
show_member (bool): [optional]
show_delivery_receipt (bool): [optional]
show_read_receipt (bool): [optional]
order (str): [optional]
metadata_order_key (str): [optional]
custom_types (str): [optional]
custom_type_startswith (str): [optional]
channel_urls (str): [optional]
name (str): [optional]
name_contains (str): [optional]
name_startswith (str): [optional]
members_exactly_in (str): [optional]
members_include_in (str): [optional]
query_type (str): [optional]
members_nickname (str): [optional]
members_nickname_contains (str): [optional]
search_query (str): [optional]
search_fields (str): [optional]
metadata_key (str): [optional]
metadata_values (str): [optional]
metadata_value_startswith (str): [optional]
metacounter_key (str): [optional]
metacounter_values (str): [optional]
metacounter_value_gt (str): [optional]
metacounter_value_gte (str): [optional]
metacounter_value_lt (str): [optional]
metacounter_value_lte (str): [optional]
custom_type (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ListMyGroupChannelsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.list_my_group_channels_endpoint.call_with_http_info(**kwargs)
def list_registration_or_device_tokens(
self,
api_token,
user_id,
token_type,
**kwargs
):
"""List registration or device tokens # noqa: E501
## List registration or device tokens Retrieves a list of a specific user's FCM registration tokens, HMS device tokens, or APNs device tokens. You can specify either `gcm`, `huawei`, or `apns` in the `token_type` parameter, depending on which push notification service you are using. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-list-registration-or-device-tokens ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_registration_or_device_tokens(api_token, user_id, token_type, async_req=True)
>>> result = thread.get()
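For example, to fetch a user's FCM registration tokens (the `token_type` values come
from the description above; the `token` attribute on the response is an assumption
based on the documented response body):
>>> resp = api.list_registration_or_device_tokens(api_token, user_id, "gcm")
>>> print(resp.token)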
Args:
api_token (str):
user_id (str):
token_type (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ListRegistrationOrDeviceTokensResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['token_type'] = \
token_type
return self.list_registration_or_device_tokens_endpoint.call_with_http_info(**kwargs)
def list_users(
self,
api_token,
**kwargs
):
"""List users # noqa: E501
## List users Retrieves a list of users in your application. You can query the list using various parameters. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-list-users ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_users(api_token, async_req=True)
>>> result = thread.get()
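A sketch of a filtered listing using a few of the documented query parameters (the
`users` attribute on the response is an assumption based on the SendBird list-users
response body):
>>> resp = api.list_users(api_token, limit=10, nickname_startswith="jane")
>>> for user in resp.users:
...     print(user.user_id, user.nickname)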
Args:
api_token (str):
Keyword Args:
token (str): [optional]
limit (int): [optional]
active_mode (str): [optional]
show_bot (bool): [optional]
user_ids (str): [optional]
nickname (str): [optional]
nickname_startswith (str): [optional]
metadatakey (str): [optional]
metadatavalues_in (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ListUsersResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
return self.list_users_endpoint.call_with_http_info(**kwargs)
def mark_all_messages_as_read(
self,
api_token,
user_id,
**kwargs
):
"""Mark all messages as read # noqa: E501
## Mark all messages as read Marks all of a user's unread messages as read in the joined group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-mark-all-messages-as-read ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.mark_all_messages_as_read(api_token, user_id, async_req=True)
>>> result = thread.get()
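To limit the action to specific channels, pass the documented optional body
(MarkAllMessagesAsReadData is the documented type; the import path and the
channel_urls field name are assumptions from the SendBird guide):
>>> from sendbird_platform_sdk.model.mark_all_messages_as_read_data import MarkAllMessagesAsReadData  # import path assumed
>>> body = MarkAllMessagesAsReadData(channel_urls=["sendbird_group_channel_1"])  # field name assumed
>>> api.mark_all_messages_as_read(api_token, user_id, mark_all_messages_as_read_data=body)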
Args:
api_token (str):
user_id (str):
Keyword Args:
mark_all_messages_as_read_data (MarkAllMessagesAsReadData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.mark_all_messages_as_read_endpoint.call_with_http_info(**kwargs)
def register_as_operator_to_channels_with_custom_channel_types(
self,
api_token,
user_id,
**kwargs
):
"""Register as an operator to channels with custom channel types # noqa: E501
## Register as an operator to channels with custom channel types Registers a user as an operator to channels with particular custom channel types. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-register-as-an-operator-to-channels-with-custom-channel-types ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.register_as_operator_to_channels_with_custom_channel_types(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
register_as_operator_to_channels_with_custom_channel_types_data (RegisterAsOperatorToChannelsWithCustomChannelTypesData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.register_as_operator_to_channels_with_custom_channel_types_endpoint.call_with_http_info(**kwargs)
def remove_registration_or_device_token(
self,
api_token,
user_id,
**kwargs
):
"""Remove a registration or device token - When unregistering all device tokens # noqa: E501
## Remove a registration or device token Removes one or more of a specific user's FCM registration tokens, HMS device tokens, or APNs device tokens. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-remove-a-registration-or-device-token ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.remove_registration_or_device_token(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
RemoveRegistrationOrDeviceTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.remove_registration_or_device_token_endpoint.call_with_http_info(**kwargs)
def remove_registration_or_device_token_by_token(
self,
api_token,
user_id,
token_type,
token,
**kwargs
):
"""Remove a registration or device token - When unregistering a specific token # noqa: E501
## Remove a registration or device token Removes one or more of a specific user's FCM registration tokens, HMS device tokens, or APNs device tokens. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-remove-a-registration-or-device-token ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.remove_registration_or_device_token_by_token(api_token, user_id, token_type, token, async_req=True)
>>> result = thread.get()
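For example, removing a single FCM registration token (the literal token value is a
placeholder):
>>> api.remove_registration_or_device_token_by_token(
...     api_token, user_id, "gcm", "old-registration-token")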
Args:
api_token (str):
user_id (str):
token_type (str):
token (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
RemoveRegistrationOrDeviceTokenByTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['token_type'] = \
token_type
kwargs['token'] = \
token
return self.remove_registration_or_device_token_by_token_endpoint.call_with_http_info(**kwargs)
def remove_registration_or_device_token_from_owner_by_token(
self,
api_token,
token_type,
token,
**kwargs
):
"""Remove a registration or device token from an owner # noqa: E501
## Remove a registration or device token from an owner Removes a registration or device token from a user who owns it. You can pass one of three values in `token_type`: `gcm`, `huawei`, or `apns`, depending on which push service you are using. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-remove-a-registration-or-device-token-from-an-owner ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.remove_registration_or_device_token_from_owner_by_token(api_token, token_type, token, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
token_type (str):
token (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
RemoveRegistrationOrDeviceTokenFromOwnerByTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['token_type'] = \
token_type
kwargs['token'] = \
token
return self.remove_registration_or_device_token_from_owner_by_token_endpoint.call_with_http_info(**kwargs)
def reset_push_preferences(
self,
api_token,
user_id,
**kwargs
):
"""Reset push preferences # noqa: E501
## Reset push preferences Resets a user's push preferences. After performing this action, `do_not_disturb` and `snooze_enabled` are set to false. The values of the parameters associated with the time frame are all set to 0. `timezone` is reset to `UTC`. `push_sound` is reset to `default`. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-reset-push-preferences ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.reset_push_preferences(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ResetPushPreferencesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.reset_push_preferences_endpoint.call_with_http_info(**kwargs)
def update_channel_invitation_preference(
self,
api_token,
user_id,
**kwargs
):
"""Update channel invitation preference # noqa: E501
## Update channel invitation preference Updates the channel invitation preference for a user's [private](https://sendbird.com/docs/chat/v3/platform-api/guides/group-channel#-3-private-vs-public) group channels. > __Note__: Using the [update default channel invitation preference](https://sendbird.com/docs/chat/v3/platform-api/guides/application#2-update-default-channel-invitation-preference) action, you can update the value of channel invitation preference which is globally applied to all users within the application. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-update-channel-invitation-preference # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_channel_invitation_preference(api_token, user_id, async_req=True)
>>> result = thread.get()
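A sketch that turns automatic acceptance of invitations off for this user
(UpdateChannelInvitationPreferenceData is the documented body type; the import path
and the auto_accept field name are assumptions from the SendBird guide):
>>> from sendbird_platform_sdk.model.update_channel_invitation_preference_data import UpdateChannelInvitationPreferenceData  # import path assumed
>>> body = UpdateChannelInvitationPreferenceData(auto_accept=False)  # field name assumed
>>> api.update_channel_invitation_preference(api_token, user_id, update_channel_invitation_preference_data=body)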
Args:
api_token (str):
user_id (str):
Keyword Args:
update_channel_invitation_preference_data (UpdateChannelInvitationPreferenceData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
UpdateChannelInvitationPreferenceResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.update_channel_invitation_preference_endpoint.call_with_http_info(**kwargs)
def update_count_preference_of_channel_by_url(
self,
api_token,
user_id,
channel_url,
**kwargs
):
"""Update count preference of a channel # noqa: E501
## Update count preference of a channel Updates count preference of a specific group channel of a user. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-update-count-preference-of-a-channel ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_count_preference_of_channel_by_url(api_token, user_id, channel_url, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
channel_url (str):
Keyword Args:
update_count_preference_of_channel_by_url_data (UpdateCountPreferenceOfChannelByUrlData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
UpdateCountPreferenceOfChannelByUrlResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['channel_url'] = \
channel_url
return self.update_count_preference_of_channel_by_url_endpoint.call_with_http_info(**kwargs)
def update_push_preferences(
self,
api_token,
user_id,
**kwargs
):
"""Update push preferences # noqa: E501
## Update push preferences Updates a user's push preferences. Through this action, you can set `do_not_disturb` for a user, and update the time frame in which the setting applies. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-update-push-preferences ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_push_preferences(api_token, user_id, async_req=True)
>>> result = thread.get()
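A sketch that enables do_not_disturb for a nightly window (UpdatePushPreferencesData
is the documented body type; the import path and the time-frame field names are
assumptions that mirror the parameters mentioned above):
>>> from sendbird_platform_sdk.model.update_push_preferences_data import UpdatePushPreferencesData  # import path assumed
>>> body = UpdatePushPreferencesData(do_not_disturb=True, start_hour=22, start_min=0,
...     end_hour=7, end_min=0, timezone="UTC")  # field names assumed
>>> api.update_push_preferences(api_token, user_id, update_push_preferences_data=body)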
Args:
api_token (str):
user_id (str):
Keyword Args:
update_push_preferences_data (UpdatePushPreferencesData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
UpdatePushPreferencesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.update_push_preferences_endpoint.call_with_http_info(**kwargs)
def update_push_preferences_for_channel_by_url(
self,
api_token,
user_id,
channel_url,
**kwargs
):
"""Update push preferences for a channel # noqa: E501
## Update push preferences for a channel Updates push preferences for a user's specific group channel. The push notifications feature is only available for group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-update-push-preferences-for-a-channel ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_push_preferences_for_channel_by_url(api_token, user_id, channel_url, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
channel_url (str):
Keyword Args:
update_push_preferences_for_channel_by_url_data (UpdatePushPreferencesForChannelByUrlData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
UpdatePushPreferencesForChannelByUrlResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['channel_url'] = \
channel_url
return self.update_push_preferences_for_channel_by_url_endpoint.call_with_http_info(**kwargs)
def update_user_by_id(
self,
api_token,
user_id,
**kwargs
):
"""Update a user # noqa: E501
## Update a user Updates information on a user. In addition to changing a user's nickname or profile image, you can issue a new access token for the user. The new access token replaces the previous one as the necessary token for authentication. You can also deactivate or reactivate a user. If the `leave_all_when_deactivated` is true (which it is by default), a user leaves all joined group channels when deactivated. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-update-a-user ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_user_by_id(api_token, user_id, async_req=True)
>>> result = thread.get()
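A sketch that renames a user and issues a new access token (UpdateUserByIdData is
the documented body type; the import path and the nickname/issue_access_token field
names follow the description above but are assumptions):
>>> from sendbird_platform_sdk.model.update_user_by_id_data import UpdateUserByIdData  # import path assumed
>>> body = UpdateUserByIdData(nickname="Jane D.", issue_access_token=True)  # field names assumed
>>> user = api.update_user_by_id(api_token, user_id, update_user_by_id_data=body)
>>> print(user.access_token)  # attribute assumed when a new token is issued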
Args:
api_token (str):
user_id (str):
Keyword Args:
update_user_by_id_data (UpdateUserByIdData): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
SendBirdUser
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.update_user_by_id_endpoint.call_with_http_info(**kwargs)
def view_channel_invitation_preference(
self,
api_token,
user_id,
**kwargs
):
"""View channel invitation preference # noqa: E501
## View channel invitation preference Retrieves channel invitation preference for a user's [private](https://sendbird.com/docs/chat/v3/platform-api/guides/group-channel#-3-private-vs-public) group channels. > __Note__: Using the [view default channel invitation preference](https://sendbird.com/docs/chat/v3/platform-api/guides/application#2-view-default-channel-invitation-preference) action, you can retrieve the value of channel invitation preference which is globally applied to all users within the application. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-channel-invitation-preference # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_channel_invitation_preference(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewChannelInvitationPreferenceResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_channel_invitation_preference_endpoint.call_with_http_info(**kwargs)
def view_count_preference_of_channel_by_url(
self,
api_token,
user_id,
channel_url,
**kwargs
):
"""View count preference of a channel # noqa: E501
## View count preference of a channel Retrieves count preference of a specific group channel of a user. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-count-preference-of-a-channel ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_count_preference_of_channel_by_url(api_token, user_id, channel_url, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
channel_url (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewCountPreferenceOfChannelByUrlResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['channel_url'] = \
channel_url
return self.view_count_preference_of_channel_by_url_endpoint.call_with_http_info(**kwargs)
def view_number_of_channels_by_join_status(
self,
api_token,
user_id,
**kwargs
):
"""View number of channels by join status # noqa: E501
## View number of channels by join status Retrieves the number of a user's group channels by their join status. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-number-of-channels-by-join-status ---------------------------- `user_id` Type: string Description: Specifies the unique ID of the user to retrieve the number. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_number_of_channels_by_join_status(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
state (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewNumberOfChannelsByJoinStatusResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_number_of_channels_by_join_status_endpoint.call_with_http_info(**kwargs)
def view_number_of_channels_with_unread_messages(
self,
api_token,
user_id,
**kwargs
):
"""View number of channels with unread messages # noqa: E501
## View number of channels with unread messages Retrieves the total number of a user's group channels with unread messages. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-number-of-channels-with-unread-messages ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_number_of_channels_with_unread_messages(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
custom_types ([str]): [optional]
super_mode (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewNumberOfChannelsWithUnreadMessagesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_number_of_channels_with_unread_messages_endpoint.call_with_http_info(**kwargs)
def view_number_of_unread_items(
self,
api_token,
user_id,
**kwargs
):
"""View number of unread items # noqa: E501
## View number of unread items Retrieves a set of total numbers of a user's unread messages, unread mentioned messages, or received invitations in either super or non-super group channels. This action is only available for the group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-number-of-unread-items ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_number_of_unread_items(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
custom_type (str): [optional]
item_keys (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewNumberOfUnreadItemsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_number_of_unread_items_endpoint.call_with_http_info(**kwargs)
def view_number_of_unread_messages(
self,
api_token,
user_id,
**kwargs
):
"""View number of unread messages # noqa: E501
## View number of unread messages Retrieves the total number of a user's currently unread messages in the group channels. The unread counts feature is only available for the group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-number-of-unread-messages ---------------------------- `user_id` Type: string Description: Specifies the unique ID of the user to retrieve the number. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_number_of_unread_messages(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
custom_types (str): [optional]
super_mode (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewNumberOfUnreadMessagesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_number_of_unread_messages_endpoint.call_with_http_info(**kwargs)
def view_push_preferences(
self,
api_token,
user_id,
**kwargs
):
"""View push preferences # noqa: E501
## View push preferences Retrieves a user's push preferences about whether the user has set `do_not_disturb` to pause notifications for a certain period of time, and the time frame in which the user has applied the setting. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-push-preferences ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_push_preferences(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewPushPreferencesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_push_preferences_endpoint.call_with_http_info(**kwargs)
def view_push_preferences_for_channel_by_url(
self,
api_token,
user_id,
channel_url,
**kwargs
):
"""View push preferences for a channel # noqa: E501
## View push preferences for a channel Retrieves whether a user has turned on notification messages for a specific channel. The push notifications feature is only available for group channels. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-push-preferences-for-a-channel ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_push_preferences_for_channel_by_url(api_token, user_id, channel_url, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
channel_url (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewPushPreferencesForChannelByUrlResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
kwargs['channel_url'] = \
channel_url
return self.view_push_preferences_for_channel_by_url_endpoint.call_with_http_info(**kwargs)
def view_user_by_id(
self,
api_token,
user_id,
**kwargs
):
"""View a user # noqa: E501
## View a user Retrieves information on a user. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-a-user ---------------------------- `user_id` Type: string Description: Specifies the unique ID of the user to retrieve. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_user_by_id(api_token, user_id, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
user_id (str):
Keyword Args:
include_unread_count (bool): [optional]
custom_types (str): [optional]
super_mode (str): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
SendBirdUser
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['user_id'] = \
user_id
return self.view_user_by_id_endpoint.call_with_http_info(**kwargs)
def view_who_owns_registration_or_device_token_by_token(
self,
api_token,
token_type,
token,
**kwargs
):
"""View who owns a registration or device token # noqa: E501
        ## View who owns a registration or device token Retrieves a user who owns an FCM registration token, HMS device token, or APNs device token. You can pass one of three values in `token_type`: `gcm`, `huawei`, or `apns`, depending on which push service you are using. https://sendbird.com/docs/chat/v3/platform-api/guides/user#2-view-who-owns-a-registration-or-device-token ---------------------------- # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_who_owns_registration_or_device_token_by_token(api_token, token_type, token, async_req=True)
>>> result = thread.get()
Args:
api_token (str):
token_type (str):
token (str):
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
ViewWhoOwnsRegistrationOrDeviceTokenByTokenResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['api_token'] = \
api_token
kwargs['token_type'] = \
token_type
kwargs['token'] = \
token
return self.view_who_owns_registration_or_device_token_by_token_endpoint.call_with_http_info(**kwargs)
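    # Usage sketch (illustrative, not part of the generated client): assuming `api` is an
    # instance of this user API class created from a configured ApiClient, each method can be
    # called synchronously (returning the response model directly) or asynchronously with
    # async_req=True (returning a thread whose .get() yields the same result):
    #
    #   user = api.view_user_by_id(api_token, user_id)                   # SendBirdUser
    #   thread = api.view_user_by_id(api_token, user_id, async_req=True)
    #   user = thread.get()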
|
PypiClean
|
/copr-common-0.20.tar.gz/copr-common-0.20/copr_common/enums.py
|
import random
import string
from six import with_metaclass
# We don't know how to define the enums without `class`.
# pylint: disable=too-few-public-methods
class EnumType(type):
def _wrap(cls, attr=None):
if attr is None:
raise NotImplementedError
if isinstance(attr, int):
for k, v in cls.vals.items():
if v == attr:
return k
raise KeyError("num {0} is not mapped".format(attr))
return cls.vals[attr]
def __call__(cls, attr):
return cls._wrap(attr)
def __getattr__(cls, attr):
return cls._wrap(attr)
class ActionTypeEnum(with_metaclass(EnumType, object)):
vals = {
"delete": 0,
"rename": 1,
"legal-flag": 2,
"createrepo": 3,
"update_comps": 4,
"gen_gpg_key": 5,
"rawhide_to_release": 6,
"fork": 7,
"update_module_md": 8,
"build_module": 9,
"cancel_build": 10,
"remove_dirs": 11,
}
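# Illustrative usage (not part of the original module): the EnumType metaclass lets each enum
# class translate in both directions between names and numbers, e.g.
#
#   ActionTypeEnum("delete")      # -> 0, name to number (via __call__ / _wrap)
#   ActionTypeEnum(0)             # -> "delete", number back to name
#   ActionTypeEnum.createrepo     # -> 3, attribute access handled by __getattr__
#   ActionTypeEnum("not-a-key")   # raises KeyError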
class ActionResult(with_metaclass(EnumType, object)):
vals = {
'WAITING': 0,
'SUCCESS': 1,
'FAILURE': 2,
}
class DefaultActionPriorityEnum(with_metaclass(EnumType, object)):
"""
The higher the 'priority' is, the later the task is taken.
Keep actions priority in range -100 to 100
"""
vals = {
"gen_gpg_key": -70,
"cancel_build": -10,
"createrepo": 0,
"fork": 0,
"build_module": 0,
"update_comps": 0,
"delete": 60,
"rawhide_to_release": 70,
}
class ActionPriorityEnum(with_metaclass(EnumType, object)):
"""
Naming/assigning the values is a little bit tricky because
how the current implementation works (i.e. it is inverted).
However, from the most abstract point of view,
"highest priority" means "do this as soon as possible"
"""
vals = {"highest": -99, "lowest": 99}
class BackendResultEnum(with_metaclass(EnumType, object)):
vals = {"waiting": 0, "success": 1, "failure": 2}
class RoleEnum(with_metaclass(EnumType, object)):
vals = {"user": 0, "admin": 1}
class StatusEnum(with_metaclass(EnumType, object)):
vals = {
"failed": 0, # build failed
"succeeded": 1, # build succeeded
"canceled": 2, # build was canceled
"running": 3, # SRPM or RPM build is running
"pending": 4, # build(-chroot) is waiting to be picked
"skipped": 5, # if there was this package built already
"starting": 6, # build was picked by worker but no VM initialized yet
"importing": 7, # SRPM is being imported into dist-git
"forked": 8, # build(-chroot) was forked
"waiting": 9, # build(-chroot) is waiting for something else to finish
"unknown": 1000, # undefined
}
def _filtered_status_enum(keys):
new_values = {}
for key, value in StatusEnum.vals.items():
if key in keys:
new_values[key] = value
return new_values
class ModuleStatusEnum(StatusEnum):
vals = _filtered_status_enum(["canceled", "running", "starting", "pending",
"failed", "succeeded", "waiting", "unknown"])
class BuildSourceEnum(with_metaclass(EnumType, object)):
vals = {"unset": 0,
"link": 1, # url
"upload": 2, # pkg, tmp, url
"pypi": 5, # package_name, version, python_versions
"rubygems": 6, # gem_name
"scm": 8, # type, clone_url, committish, subdirectory, spec, srpm_build_method
"custom": 9, # user-provided script to build sources
"distgit": 10, # distgit_instance, package_name, committish
}
class FailTypeEnum(with_metaclass(EnumType, object)):
vals = {"unset": 0,
# General errors mixed with errors for SRPM URL/upload:
"unknown_error": 1,
"build_error": 2,
"srpm_import_failed": 3,
"srpm_download_failed": 4,
"srpm_query_failed": 5,
"import_timeout_exceeded": 6,
"git_clone_failed": 31,
"git_wrong_directory": 32,
"git_checkout_error": 33,
"srpm_build_error": 34,
}
|
PypiClean
|
/ir_datasets-0.5.5-py3-none-any.whl/ir_datasets/indices/zpickle_docstore.py
|
import os
import shutil
import json
import zlib
import pickle
from contextlib import contextmanager
from .indexed_tsv_docstore import NumpyPosIndex
import ir_datasets
_logger = ir_datasets.log.easy()
class ZPickleKeyValueStore:
def __init__(self, path, id_idx, doc_cls):
self._path = path
self._id_idx = id_idx
self._doc_cls = doc_cls
self._idx = None
self._bin = None
def built(self):
return len(self) > 0
def idx(self):
if self._idx is None:
self._idx = NumpyPosIndex(os.path.join(self._path, 'idx'))
return self._idx
def bin(self):
if self._bin is None:
self._bin = open(os.path.join(self._path, 'bin'), 'rb')
return self._bin
def purge(self):
if self._idx:
self._idx.close()
self._idx = None
if self._bin:
self._bin.close()
self._bin = None
@contextmanager
def transaction(self):
os.makedirs(self._path, exist_ok=True)
with ZPickleDocStoreTransaction(self) as trans:
yield trans
def __getitem__(self, value):
if isinstance(value, tuple) and len(value) == 2:
key, field = value
else:
# assume key and all fields
key, field = value, Ellipsis
binf = self.bin()
binf.seek(self.idx().get(key))
content_length = int.from_bytes(binf.read(4), 'little')
content = binf.read(content_length)
content = zlib.decompress(content)
content = pickle.loads(content)
if content[self._id_idx][1] != key:
raise KeyError(f'key={key} not found')
if field is Ellipsis:
content = dict(content)
return self._doc_cls(*(content.get(f) for f in self._doc_cls._fields))
for f, val in content:
if field == f:
return val
raise KeyError(f'field={field} not found for key={key}')
def path(self, force=True):
return self._path
def __iter__(self):
# iterates documents
binf = self.bin()
binf.seek(0)
while binf.read(1): # peek
binf.seek(-1, 1) # un-peek
content_length = int.from_bytes(binf.read(4), 'little')
content = binf.read(content_length)
content = zlib.decompress(content)
content = pickle.loads(content)
content = dict(content)
yield self._doc_cls(*(content.get(f) for f in self._doc_cls._fields))
def __len__(self):
# number of keys
return len(self.idx())
class ZPickleDocStoreTransaction:
def __init__(self, docstore):
self.docstore = docstore
self.path = self.docstore.path()
self.idx = NumpyPosIndex(os.path.join(self.path, 'idx'))
self.bin = open(os.path.join(self.path, 'bin'), 'wb')
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if not exc_val:
self.commit()
else:
self.discard()
def commit(self):
self.idx.commit()
self.bin.flush()
self.bin.close()
def discard(self):
shutil.rmtree(self.path)
def add(self, key, fields):
self.idx.add(key, self.bin.tell())
content = tuple(zip(type(fields)._fields, fields))
content = pickle.dumps(content)
content = zlib.compress(content)
content_length = len(content)
self.bin.write(content_length.to_bytes(4, 'little'))
self.bin.write(content)
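# On-disk layout sketch (derived from add() above and __getitem__ in ZPickleKeyValueStore):
# each record written to the 'bin' file is a 4-byte little-endian length followed by a
# zlib-compressed pickle of ((field_name, value), ...) pairs, while the NumpyPosIndex in 'idx'
# maps each document id to the byte offset of its record. For a doc with fields (doc_id, text):
#
#   content = pickle.dumps((('doc_id', 'd1'), ('text', 'hello')))
#   payload = zlib.compress(content)
#   record  = len(payload).to_bytes(4, 'little') + payload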
class ZPickleDocStore:
file_ext = 'zpkl'
def __init__(self, path, doc_cls, id_field='doc_id'):
self._path = path
self._doc_cls = doc_cls
self._id_field = id_field
self._id_field_idx = doc_cls._fields.index(id_field)
self._store = ZPickleKeyValueStore(path, self._id_field_idx, self._doc_cls)
def built(self):
return os.path.exists(self._path)
def purge(self):
self._store.purge()
def build(self, documents):
with self._store.transaction() as trans:
for doc in documents:
trans.add(doc[self._id_field_idx], doc)
def get(self, did, field=None):
if field is not None:
return self._store[did, field]
return self._store[did]
def get_many(self, dids, field=None):
result = {}
for did in dids:
try:
result[did] = self.get(did, field)
            except (KeyError, ValueError):  # __getitem__ raises KeyError for missing ids
pass
return result
def num_docs(self):
return len(self._store)
def docids(self):
return iter(self._store.idx())
def __iter__(self):
return iter(self._store)
def path(self, force=True):
return self._path
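# Usage sketch (illustrative; the namedtuple document class and paths are placeholders):
#
#   from collections import namedtuple
#   Doc = namedtuple('Doc', ['doc_id', 'text'])
#   store = ZPickleDocStore('/tmp/docs.zpkl', Doc, id_field='doc_id')
#   if not store.built():
#       store.build([Doc('d1', 'hello'), Doc('d2', 'world')])
#   store.get('d1')           # -> Doc(doc_id='d1', text='hello')
#   store.get('d2', 'text')   # -> 'world'
#   store.num_docs()          # -> 2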
|
PypiClean
|
/djdgcore-0.8.15.1.tar.gz/djdgcore-0.8.15.1/djdg_core/memberbao/models.py
|
from __future__ import unicode_literals, absolute_import
from django.db import models
class PayResult(models.Model):
"""
    Payment result information
"""
notify_time = models.DateTimeField('通知时间', auto_now_add=True)
notify_type = models.CharField(max_length=128, verbose_name='通知类型', null=True)
notify_id = models.IntegerField(verbose_name='通知id')
order_id = models.IntegerField('订单id')
total_fee = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='订单总额', default=0)
cash_fee = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='实际支付金额', default=0)
coupon_fee = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='代金券或者优惠金额', default=0)
coupon_count = models.IntegerField('代金券或立减优惠使用数量', default=0)
pay_time = models.DateTimeField('支付时间', auto_now_add=True)
pay_way = models.IntegerField('支付类型', null=True)
app_id = models.CharField(max_length=64, verbose_name='app id', null=True)
pay_type = models.CharField(max_length=64, verbose_name='支付类型', null=True, blank=True)
create_time = models.DateTimeField('记录创建时间', auto_now_add=True)
class AllocateRecord(models.Model):
"""
    Memberbao allocation (profit-sharing) record
"""
request_no = models.CharField(max_length=128, verbose_name='请求id')
notify_id = models.IntegerField(verbose_name='通知id')
app_id = models.CharField(max_length=128, verbose_name='appid 对应 merchantId')
operator_id = models.PositiveSmallIntegerField('操作id')
user_id = models.IntegerField('用户id,对应userIn')
amount = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='金额')
desc = models.TextField('描述')
create_time = models.DateTimeField('记录创建时间', auto_now_add=True)
class RefundRecord(models.Model):
"""
    Memberbao refund record
"""
notify_id = models.IntegerField(verbose_name='通知id')
amount = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='计划退回金额')
confirm_amount = models.DecimalField(max_digits=11, decimal_places=2, verbose_name='实际退回金额')
order_id = models.IntegerField('订单id')
refund_id = models.CharField(max_length=64,verbose_name='退款id,由会员宝返回')
user_id = models.IntegerField('退款用户id,即common api的用户id')
create_time = models.DateTimeField('记录创建时间', auto_now_add=True)
|
PypiClean
|
/levo-0.4.9-py3-none-any.whl/levocli/callbacks.py
|
from contextlib import contextmanager
from typing import Dict, Generator, Tuple
from urllib.parse import urlparse
import click
from . import utils
from .docker_utils import is_docker, map_hostpath_to_container
from .logger import get_logger
from .utils import health_check_http_url
LOCALHOST = "localhost"
LOCALHOST_IP = "127.0.0.1"
DOCKER_HOST_PREFIX = "host.docker"
log = get_logger(__name__)
def validate_url(
ctx: click.core.Context, param: click.core.Parameter, raw_value: str
) -> str:
url_type = (
"'Schema URL'" if (param and (param.name == "schema")) else "'Target URL'"
)
if not raw_value:
raise click.BadParameter("{} cannot be empty.".format(url_type))
if is_docker() and (LOCALHOST in raw_value.lower() or LOCALHOST_IP in raw_value):
raise click.BadArgumentUsage(
"{} cannot be localhost/127.0.0.1 when running in Docker."
" Please use host.docker.internal instead.".format(url_type)
)
if not is_docker() and DOCKER_HOST_PREFIX in raw_value.lower():
click.secho(
"You are running the CLI outside Docker but the {} is a Docker host. Please double check.".format(
url_type
),
fg="yellow",
)
# Before we parse the URL, prefix the URL with http:// if it's a localhost or host.docker.internal
target_url = (
f"http://{raw_value}"
if raw_value.lower().startswith(LOCALHOST)
or raw_value.startswith(LOCALHOST_IP)
or raw_value.lower().startswith(DOCKER_HOST_PREFIX)
else raw_value
)
try:
result = urlparse(target_url)
log.debug(f"Parsed URL: {result}")
if not result.scheme or not result.netloc:
raise click.BadParameter(
"{} should have a scheme and host.".format(url_type)
)
except ValueError as exc:
raise click.BadParameter(
"Please provide a valid URL (e.g. https://api.example.com)"
) from exc
status_code = health_check_http_url(target_url)
if param and (param.name == "schema"):
# For schema URLs we need to be able to pull the schema successfully.
# Which means, we need a 2XX status code
if (status_code < 200) or (status_code >= 300):
raise click.BadArgumentUsage(
"(HTTP code:{}) Cannot load {}: {}".format(
status_code, url_type, raw_value
)
)
# End of execution here
else:
# For target URLs, we just need a non 5XX status, as the target URL is a base URL,
# and may not have any well defined response.
if status_code >= 500:
raise click.BadArgumentUsage(
"Cannot reach {}: {}".format(url_type, raw_value)
)
# End of execution here
return target_url # Target is healthy
def validate_schema(
ctx: click.core.Context, param: click.core.Parameter, raw_value: str
) -> str:
if "app" not in ctx.params:
try:
netloc = urlparse(raw_value).netloc
except ValueError as exc:
raise click.UsageError(
"Invalid schema, must be a valid URL or file path."
) from exc
if not netloc:
mapped_path: str = map_hostpath_to_container(raw_value)
if not utils.file_exists(mapped_path):
raise click.UsageError(_get_env_specific_schema_file_usage_error())
# Click ends execution here
return mapped_path
else:
validate_url(ctx, param, raw_value)
return raw_value
def _get_env_specific_schema_file_usage_error() -> str:
"""Return an appropriate message based on the env - Docker or no Docker"""
if is_docker():
return "Cannot access schema file. \nPlease ensure the file exists, and the path provided is accessible by the Levo CLI container."
else:
return "Cannot access schema file."
@contextmanager
def reraise_format_error(raw_value: str) -> Generator[None, None, None]:
try:
yield
except ValueError as exc:
raise click.BadParameter(
f"Should be in KEY:VALUE format. Got: {raw_value}"
) from exc
def validate_headers(
ctx: click.core.Context, param: click.core.Parameter, raw_value: Tuple[str, ...]
) -> Dict[str, str]:
headers = {}
for header in raw_value:
with reraise_format_error(header):
key, value = header.split(":", maxsplit=1)
value = value.lstrip()
key = key.strip()
if not key:
raise click.BadParameter("Header name should not be empty")
if not utils.is_latin_1_encodable(key):
raise click.BadParameter("Header name should be latin-1 encodable")
if not utils.is_latin_1_encodable(value):
raise click.BadParameter("Header value should be latin-1 encodable")
if utils.has_invalid_characters(key, value):
raise click.BadParameter(
"Invalid return character or leading space in header"
)
headers[key] = value
return headers
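# Example wiring (illustrative; the option names are placeholders): these validators are meant
# to be attached to click options as callbacks, e.g.
#
#   @click.option("--target-url", callback=validate_url)
#   @click.option("--schema", callback=validate_schema)
#   @click.option("--header", "-H", multiple=True, callback=validate_headers)
#
# With that wiring, validate_headers(ctx, param, ("Authorization: Bearer <token>",)) returns
# {"Authorization": "Bearer <token>"} and raises click.BadParameter on malformed input.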
|
PypiClean
|
/convertapi-1.7.0.tar.gz/convertapi-1.7.0/README.md
|
# ConvertAPI Python Client
[](https://badge.fury.io/py/convertapi)
[](https://github.com/ConvertAPI/convertapi-python/actions)
[](https://opensource.org/licenses/MIT)
## Convert your files with our online file conversion API
ConvertAPI helps you convert between various file formats: it creates PDFs and images from sources such as Word, Excel, PowerPoint, images, web pages, or raw HTML, and it can merge, encrypt, split, repair, and decrypt PDF files, among many other manipulations. You can integrate it into your application in just a few minutes and use it with ease.
## Installation
Install with [pip](https://pypi.org/project/pip/):
```
pip install --upgrade convertapi
```
Install from source with:
```
python setup.py install
```
### Requirements
* Python 3.3+
## Usage
### Configuration
You can get your secret at https://www.convertapi.com/a
```python
import convertapi
convertapi.api_secret = 'your-api-secret'
```
#### Proxy configuration
If you need to use a proxy, you can specify it using `HTTPS_PROXY` environment variable when running your script.
Example:
```
CONVERT_API_SECRET=secret HTTPS_PROXY=https://user:[email protected]:9000/ python convert_word_to_pdf_and_png.py
```
### File conversion
Convert a file to PDF example. All supported file formats and options can be found
[here](https://www.convertapi.com/conversions).
```python
result = convertapi.convert('pdf', { 'File': '/path/to/my_file.docx' })
# save to file
result.file.save('/path/to/save/file.pdf')
```
Other result operations:
```python
# save all result files to folder
result.save_files('/path/to/save/files')
# get conversion cost
conversion_cost = result.conversion_cost
```
#### Convert file url
```python
result = convertapi.convert('pdf', { 'File': 'https://website/my_file.docx' })
```
#### Specifying from format
```python
result = convertapi.convert(
'pdf',
{ 'File': '/path/to/my_file' },
from_format = 'docx'
)
```
#### Additional conversion parameters
ConvertAPI accepts additional conversion parameters depending on selected formats. All conversion
parameters and explanations can be found [here](https://www.convertapi.com/conversions).
```python
result = convertapi.convert(
'pdf',
{
'File': '/path/to/my_file.docx',
'PageRange': '1-10',
'PdfResolution': '150',
}
)
```
### User information
You can always check your remaining seconds amount programmatically by fetching [user information](https://www.convertapi.com/doc/user).
```python
user_info = convertapi.user()
print(user_info['SecondsLeft'])
```
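For example, a script could check the remaining seconds before running a batch of conversions (illustrative only; the input file names are placeholders):
```python
import convertapi

convertapi.api_secret = 'your-api-secret'

files = ['report1.docx', 'report2.docx'] # placeholder input files

if convertapi.user()['SecondsLeft'] > 0:
    for path in files:
        result = convertapi.convert('pdf', { 'File': path })
        result.file.save(path.replace('.docx', '.pdf'))
```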
### Alternative domain
Set `base_uri` parameter to use other service domains. Dedicated to the region [domain list](https://www.convertapi.com/doc/servers-location).
```python
convertapi.base_uri = 'https://eu-v2.convertapi.com/'
```
### More examples
Find more advanced examples in the [/examples](https://github.com/ConvertAPI/convertapi-python/tree/master/examples) folder.
## Development
Execute `CONVERT_API_SECRET=your_secret nosetests --nocapture` to run the tests.
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/ConvertAPI/convertapi-python. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
## License
The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
|
PypiClean
|
/downward_ch-0.0.10-py3-none-manylinux1_x86_64.whl/downward_ch-0.0.10.data/purelib/downward_ch/misc/autodoc/autodoc.py
|
import argparse
import logging
import os
from os.path import dirname, join
import re
import subprocess
import sys
import time
import xmlrpc.client as xmlrpclib
import markup
# How many seconds to wait after a failed request. Will be doubled after each failed request.
# Don't lower this below ~5, or we may get locked out for an hour.
sleep_time = 10
BOT_USERNAME = "XmlRpcBot"
PASSWORD_FILE = ".downward-xmlrpc.secret" # relative to this source file or in the home directory
WIKI_URL = "http://www.fast-downward.org"
DOC_PREFIX = "Doc/"
# a list of characters allowed to be used in doc titles
TITLE_WHITE_LIST = r"[\w\+-]" # match 'word characters' (including '_'), '+', and '-'
SCRIPT_DIR = os.path.abspath(os.path.dirname(__file__))
REPO_ROOT_DIR = os.path.dirname(os.path.dirname(SCRIPT_DIR))
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--build", default="release")
parser.add_argument("--dry-run", action="store_true")
return parser.parse_args()
def read_password():
path = join(dirname(__file__), PASSWORD_FILE)
if not os.path.exists(path):
path = os.path.expanduser(join('~', PASSWORD_FILE))
try:
with open(path) as password_file:
return password_file.read().strip()
except OSError:
logging.critical("Could not find password file %s!\nIs it present?"
% PASSWORD_FILE)
sys.exit(1)
def connect():
wiki = xmlrpclib.ServerProxy(WIKI_URL + "?action=xmlrpc2", allow_none=True)
auth_token = wiki.getAuthToken(BOT_USERNAME, read_password())
multi_call = xmlrpclib.MultiCall(wiki)
multi_call.applyAuthToken(auth_token)
return multi_call
def get_all_titles_from_wiki():
multi_call = connect()
multi_call.getAllPages()
response = list(multi_call())
assert(response[0] == 'SUCCESS' and len(response) == 2)
return response[1]
def get_pages_from_wiki(titles):
multi_call = connect()
for title in titles:
multi_call.getPage(title)
response = list(multi_call())
assert(response[0] == 'SUCCESS')
return dict(zip(titles, response[1:]))
def send_pages(pages):
multi_call = connect()
for page_name, page_text in pages:
multi_call.putPage(page_name, page_text)
return multi_call()
def attempt(func, *args):
global sleep_time
try:
result = func(*args)
except xmlrpclib.Fault as error:
# This usually means the page content did not change.
logging.exception("Error: %s\nShould not happen anymore." % error)
sys.exit(1)
except xmlrpclib.ProtocolError as err:
logging.warning("Error: %s\n"
"Will retry after %s seconds." % (err.errcode, sleep_time))
# Retry after sleeping.
time.sleep(sleep_time)
sleep_time *= 2
return attempt(func, *args)
except Exception:
logging.exception("Unexpected error: %s" % sys.exc_info()[0])
sys.exit(1)
else:
for entry in result:
logging.info(repr(entry))
logging.info("Call to %s successful." % func.__name__)
return result
def insert_wiki_links(text, titles):
def make_link(m, prefix=''):
anchor = m.group('anchor') or ''
link_name = m.group('link')
target = prefix + link_name
if anchor:
target += '#' + anchor
link_name = anchor
link_name = link_name.replace("_", " ")
# Leave out the prefix in the link name.
result = m.group('before') + "[[" + target + "|" + link_name + "]]" + m.group('after')
return result
def make_doc_link(m):
return make_link(m, prefix=DOC_PREFIX)
re_link = r"(?P<before>\W)(?P<link>%s)(#(?P<anchor>" + TITLE_WHITE_LIST + r"+))?(?P<after>\W)"
doctitles = [title[4:] for title in titles if title.startswith(DOC_PREFIX)]
for key in doctitles:
text = re.sub(re_link % key, make_doc_link, text)
othertitles = [title for title in titles
if not title.startswith(DOC_PREFIX) and title not in doctitles]
for key in othertitles:
text = re.sub(re_link % key, make_link, text)
return text
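# Illustrative example (hypothetical page titles): with
#   titles = ["Doc/Evaluator", "ReleaseNotes"]
# the call
#   insert_wiki_links("see Evaluator#lmcut and ReleaseNotes.", titles)
# returns
#   "see [[Doc/Evaluator#lmcut|lmcut]] and [[ReleaseNotes|ReleaseNotes]]."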
def build_planner(build):
subprocess.check_call(["./build.py", build, "downward"], cwd=REPO_ROOT_DIR)
def get_pages_from_planner(build):
out = subprocess.check_output(
["./fast-downward.py", "--build", build, "--search", "--", "--help", "--txt2tags"],
cwd=REPO_ROOT_DIR).decode("utf-8")
# Split the output into tuples (title, markup_text).
pagesplitter = re.compile(r'>>>>CATEGORY: ([\w\s]+?)<<<<(.+?)>>>>CATEGORYEND<<<<', re.DOTALL)
pages = dict()
for title, markup_text in pagesplitter.findall(out):
document = markup.Document(date='')
document.add_text("<<TableOfContents>>")
document.add_text(markup_text)
rendered_text = document.render("moin").strip()
pages[DOC_PREFIX + title] = rendered_text
return pages
def get_changed_pages(old_doc_pages, new_doc_pages, all_titles):
def add_page(title, text):
# Check if this page is new or changed.
if old_doc_pages.get(title, '') != text:
print(title, "changed")
changed_pages.append([title, text])
else:
print(title, "unchanged")
changed_pages = []
overview_lines = []
for title, text in sorted(new_doc_pages.items()):
overview_lines.append(" * [[" + title + "]]")
text = insert_wiki_links(text, all_titles)
add_page(title, text)
overview_title = DOC_PREFIX + "Overview"
overview_text = "\n".join(overview_lines)
add_page(overview_title, overview_text)
return changed_pages
if __name__ == '__main__':
args = parse_args()
logging.info("building planner...")
build_planner(args.build)
logging.info("getting new pages from planner...")
new_doc_pages = get_pages_from_planner(args.build)
if args.dry_run:
for title, content in sorted(new_doc_pages.items()):
print("=" * 25, title, "=" * 25)
print(content)
print()
print()
sys.exit()
logging.info("getting existing page titles from wiki...")
old_titles = attempt(get_all_titles_from_wiki)
old_doc_titles = [title for title in old_titles if title.startswith(DOC_PREFIX)]
all_titles = set(old_titles) | set(new_doc_pages.keys())
logging.info("getting existing doc page contents from wiki...")
old_doc_pages = attempt(get_pages_from_wiki, old_doc_titles)
logging.info("looking for changed pages...")
changed_pages = get_changed_pages(old_doc_pages, new_doc_pages, all_titles)
if changed_pages:
logging.info("sending changed pages...")
attempt(send_pages, changed_pages)
else:
logging.info("no changes found")
missing_titles = set(old_doc_titles) - set(new_doc_pages.keys()) - {DOC_PREFIX + "Overview"}
if missing_titles:
sys.exit(
"There are pages in the wiki documentation "
"that are not created by Fast Downward:\n" +
"\n".join(sorted(missing_titles)))
print("Done")
|
PypiClean
|
/science_optimization-9.0.2-cp310-cp310-manylinux_2_35_x86_64.whl/science_optimization/problems/separable_resource_allocation.py
|
from science_optimization.builder import BuilderOptimizationProblem
from science_optimization.builder import Objective
from science_optimization.builder import Variable
from science_optimization.builder import Constraint
from science_optimization.function import FunctionsComposite
class SeparableResourceAllocation(BuilderOptimizationProblem):
"""Concrete builder implementation.
This class builds a dual decomposition optimization problem.
"""
# objective function(s)
_f_i = None
# equality constraint function(s)
_coupling_eq_constraints = None
# inequality constraint function(s)
_coupling_ineq_constraints = None
# the variables' bounds
_x_bounds = None
def __init__(self, f_i, coupling_eq_constraints, coupling_ineq_constraints, x_bounds):
"""Constructor of a Dual Decomposition problem builder.
Args:
f_i : Objective functions composition with i individual functions.
coupling_eq_constraints : Composition with functions in equality coupling.
coupling_ineq_constraints: Composition with functions in inequality coupling.
x_bounds : Lower bound and upper bounds.
"""
self.f_i = f_i
self.coupling_eq_constraints = coupling_eq_constraints
self.coupling_ineq_constraints = coupling_ineq_constraints
self.x_bounds = x_bounds
# gets
@property
def f_i(self):
return self._f_i
@property
def coupling_eq_constraints(self):
return self._coupling_eq_constraints
@property
def coupling_ineq_constraints(self):
return self._coupling_ineq_constraints
@property
def x_bounds(self):
return self._x_bounds
    # sets
    @f_i.setter
    def f_i(self, value):
        self._f_i = value
@coupling_eq_constraints.setter
def coupling_eq_constraints(self, value):
self._coupling_eq_constraints = value
@coupling_ineq_constraints.setter
def coupling_ineq_constraints(self, value):
self._coupling_ineq_constraints = value
@x_bounds.setter
def x_bounds(self, value):
self._x_bounds = value
# methods
def build_objectives(self):
# instantiate composition
obj_fun = FunctionsComposite()
for f in self.f_i:
obj_fun.add(f)
objective = Objective(objective=obj_fun)
return objective
def build_constraints(self):
# instantiate composition
eq_cons = FunctionsComposite()
ineq_cons = FunctionsComposite()
for eq_g in self.coupling_eq_constraints:
eq_cons.add(eq_g)
for ineq_g in self.coupling_ineq_constraints:
ineq_cons.add(ineq_g)
constraints = Constraint(eq_cons=eq_cons, ineq_cons=ineq_cons)
return constraints
def build_variables(self):
# variables
variables = Variable(x_min=self.x_bounds[:, 0].reshape(-1, 1),
x_max=self.x_bounds[:, 1].reshape(-1, 1))
return variables
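# A minimal usage sketch (the objects below are assumptions, not part of this
# module): f_i, eq_cons and ineq_cons are iterables of function objects from
# science_optimization.function, and x_bounds is an n-by-2 numpy array of
# variable bounds.
#
#     builder = SeparableResourceAllocation(f_i, eq_cons, ineq_cons, x_bounds)
#     objective = builder.build_objectives()
#     constraints = builder.build_constraints()
#     variables = builder.build_variables()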
|
PypiClean
|
/h2o_autodoc-1.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl/h2o_autodoc/templates/templates/sections/Experiment Overview/DAI Reproducibility Settings.md
|
User-defined settings control the Driverless AI experiment framework. {% if experiment.parameters.seed == True %} In the *{{experiment.description }}* experiment, the reproducibility setting was turned on. The following settings can be used to reproduce the *{{experiment.description}}* experiment’s results given the same environment. {% else %} The *{{experiment.description}}* experiment did not turn on the reproducibility setting, which means the results may not be exactly the same even if the same experiment settings are used.
{% endif %}
|
PypiClean
|
/NovalIDE-1.1.8-py3-none-any.whl/noval/python/parser/scope.py
|
import config,nodeast
from utils import CmpMember,py_sorted
import intellisence
import noval.util.utils as logger_utils
class Scope(object):
def __init__(self,line_start,line_end,parent=None):
self._line_start = line_start
self._line_end = line_end
self._parent = parent
self._child_scopes = []
if self._parent != None:
self.Parent.AppendChildScope(self)
def __str__(self):
return 'scope line start %d,end %d' % (self._line_start,self._line_end)
@property
def Parent(self):
return self._parent
@property
def LineStart(self):
return self._line_start
@property
def LineEnd(self):
return self._line_end
@LineEnd.setter
def LineEnd(self,line):
self._line_end = line
@property
def ChildScopes(self):
return self._child_scopes
def HasNoChild(self):
return 0 == len(self._child_scopes)
def AppendChildScope(self,scope):
if isinstance(scope,list):
self._child_scopes.extend(scope)
else:
self._child_scopes.append(scope)
def IslocateInScope(self,line):
if self.LineStart <= line and self.LineEnd >= line:
return True
return False
def CompareScopeLine(self,x,y):
if x.LineEnd > y.LineEnd:
return 1
return -1
def RouteChildScopes(self):
        # sort the child scopes by line number
self._child_scopes = py_sorted(self._child_scopes,self.CompareScopeLine)
last_scope = None
for child_scope in self.ChildScopes:
            ##exclude child scopes which are imported from other modules
if child_scope.Root.Module.Path != self.Root.Module.Path:
continue
if child_scope.Node.Type == config.NODE_FUNCDEF_TYPE:
child_scope.RouteChildScopes()
elif child_scope.Node.Type == config.NODE_CLASSDEF_TYPE:
child_scope.RouteChildScopes()
if last_scope is not None:
if child_scope.LineStart > last_scope.LineEnd:
last_scope.LineEnd = child_scope.LineStart -1
last_scope.Parent.LineEnd = last_scope.LineEnd
last_scope = child_scope
        # use the last child scope's end line as this scope's end line
if last_scope is not None:
last_scope.Parent.LineEnd = last_scope.LineEnd
def FindScope(self,line):
for child_scope in self.ChildScopes:
if child_scope.IslocateInScope(line):
if self.IsRoutetoEnd(child_scope):
return child_scope
else:
return child_scope.FindScope(line)
def FindScopeInChildScopes(self,name):
child_scopes = []
for child_scope in self.ChildScopes:
if child_scope.EqualName(name):
child_scopes.append(child_scope)
return child_scopes
def IsRoutetoEnd(self,scope):
for child_scope in scope.ChildScopes:
if not child_scope.HasNoChild():
return False
return True
def FindScopeInScope(self,name):
found_scopes = []
parent = self
while parent is not None:
child_scopes = parent.FindScopeInChildScopes(name)
if child_scopes:
found_scopes.extend(child_scopes)
parent = parent.Parent
return found_scopes
def FindTopScope(self,names):
find_scopes= []
i = len(names)
find_name = ""
while True:
if i <= 0:
break
find_name = ".".join(names[0:i])
scopes = self.FindScopeInScope(find_name)
if scopes:
find_scopes.extend(scopes)
i -= 1
return find_scopes
def FindScopes(self,names):
        ##TODO: note that the list of found scopes may contain this scope itself
scopes = self.FindTopScope(names)
        # finally, search for __builtin__ members
if not scopes and len(names) == 1:
            # get the builtin module name from the module scope: __builtin__ in python2, builtins in python3
scopes = self.FindTopScope([self.Root._builtin_module_name] + names)
return scopes
def GetDefinitions(self,name):
if not name.strip():
return []
names = name.split('.')
find_scopes = self.FindScopes(names)
if not find_scopes:
return []
definitions = []
for find_scope in find_scopes:
members = find_scope.GetMember(name)
for member in members:
                # there may be duplicate definitions; keep only one
if member not in definitions:
definitions.append(member)
return definitions
def FindNameScopes(self,name):
if not name.strip():
return []
names = name.split('.')
        # for names like self. or cls., route to the parent class scope
if (names[0] == 'self' and self.IsMethodScope()) or (names[0] == 'cls' and self.IsClassMethodScope()):
if len(names) == 1:
return [self.Parent]
else:
return self.FindScopes(names[1:])
else:
return self.FindScopes(names)
def IsMethodScope(self):
return False
def IsClassMethodScope(self):
return False
def MakeBeautyDoc(self,alltext):
"""Returns the formatted calltip string for the document.
"""
if alltext is None:
return None
# split the text into natural paragraphs (a blank line separated)
paratext = alltext.split("\n\n")
# add text by paragraph until text limit or all paragraphs
textlimit = 800
if len(paratext[0]) < textlimit:
numpara = len(paratext)
calltiptext = paratext[0]
ii = 1
while ii < numpara and \
(len(calltiptext) + len(paratext[ii])) < textlimit:
calltiptext = calltiptext + "\n\n" + paratext[ii]
ii = ii + 1
# if not all texts are added, add "[...]"
if ii < numpara:
calltiptext = calltiptext + "\n[...]"
# present the function signature only (first newline)
else:
calltiptext = alltext.split("\n")[0]
return calltiptext
class ModuleScope(Scope):
MAX_CHILD_SCOPE = 100
def __init__(self,module,line_count):
super(ModuleScope,self).__init__(0,line_count)
self._module = module
self._builtin_module_name = "__builtin__"
@property
def Module(self):
return self._module
def MakeModuleScopes(self):
self.MakeScopes(self.Module,self)
def MakeImportScope(self,from_import_scope,parent_scope):
from_import_name = from_import_scope.Node.Name
member_names = []
for child_scope in from_import_scope.ChildScopes:
#get all import members
if child_scope.Node.Name == "*":
member_names.extend(intellisence.IntellisenceManager().GetModuleMembers(from_import_name,""))
break
#get one import member
else:
member_names.append(child_scope.Node.Name)
        #TODO: reduce the number of child scopes so that this finishes in a short time
        #TODO: this code takes a lot of time to finish and should be optimised later
for member_name in member_names[0:self.MAX_CHILD_SCOPE]:
member_scope = intellisence.IntellisenceManager().GetModuleMember(from_import_name,member_name)
if member_scope is not None:
parent_scope.AppendChildScope(member_scope)
def MakeScopes(self,node,parent_scope):
for child in node.Childs:
            # a class property may also be a function, but its type is NODE_CLASS_PROPERTY
if child.Type == config.NODE_FUNCDEF_TYPE or type(child) == nodeast.FuncDef:
func_def_scope = FuncDefScope(child,parent_scope,self)
for arg in child.Args:
ArgScope(arg,func_def_scope,self)
self.MakeScopes(child,func_def_scope)
elif child.Type == config.NODE_CLASSDEF_TYPE:
class_def_scope = ClassDefScope(child,parent_scope,self)
self.MakeScopes(child,class_def_scope)
elif child.Type == config.NODE_CLASS_PROPERTY or\
child.Type == config.NODE_ASSIGN_TYPE:
NameScope(child,parent_scope,self)
elif child.Type == config.NODE_IMPORT_TYPE:
ImportScope(child,parent_scope,self)
#from xx import x
if child.Parent.Type == config.NODE_FROMIMPORT_TYPE:
self.MakeImportScope(parent_scope,parent_scope.Parent)
elif child.Type == config.NODE_BUILTIN_IMPORT_TYPE:
ImportScope(child,parent_scope,self)
                # get the current interpreter's builtin module name
self._builtin_module_name = child.Name
elif child.Type == config.NODE_FROMIMPORT_TYPE:
from_import_scope = FromImportScope(child,parent_scope,self)
self.MakeScopes(child,from_import_scope)
elif child.Type == config.NODE_MAIN_FUNCTION_TYPE:
MainFunctionScope(child,parent_scope,self)
elif child.Type == config.NODE_RETURN_TYPE:
ReturnScope(child,parent_scope,self)
elif child.Type == config.NODE_UNKNOWN_TYPE:
UnknownScope(child,parent_scope,self)
def FindScope(self,line):
find_scope = Scope.FindScope(self,line)
if find_scope == None:
return self
return find_scope
def GetMemberList(self):
return intellisence.IntellisenceManager().GetModuleMembers(self.Module.Name,"")
@property
def Root(self):
return self
def EqualName(self,name):
return self.Module.Name == name
def GetMembers(self):
return self.Module.GetMemberList()
def GetDoc(self):
return self.MakeBeautyDoc(self.Module.Doc)
class NodeScope(Scope):
NAME_SELF_KEYWARD = "self"
def __init__(self,node,parent,root):
super(NodeScope,self).__init__(node.Line,node.Line,parent)
self._node= node
self._root = root
@property
def Node(self):
return self._node
def EqualName(self,name):
if self.Hasself() and name.find(self.NAME_SELF_KEYWARD) != -1:
return (self.NAME_SELF_KEYWARD + "." + self.Node.Name) == name
return self.Node.Name == name
def GetMemberList(self):
return self.Node.GetMemberList()
def __eq__(self, other):
if other is None:
return False
return self.Node.Name == other.Node.Name and self.Node.Line == other.Node.Line and self.Node.Col == other.Node.Col
@property
def Root(self):
return self._root
def GetMember(self,name):
if name == "" or self.EqualName(name):
return [self]
else:
return []
def MakeFixName(self,name):
if self.Hasself() and name.find(self.NAME_SELF_KEYWARD) != -1:
node_name = self.NAME_SELF_KEYWARD + "." + self.Node.Name
else:
node_name = self.Node.Name
        # must only replace once
fix_name = name.replace(node_name,"",1)
if fix_name.startswith("."):
fix_name = fix_name[1:]
return fix_name
def GetDoc(self):
return self.MakeBeautyDoc(self.Node.Doc)
def GetArgTip(self):
return ''
def __str__(self):
return Scope.__str__(self) + ",name %s,type %s" % (self._node.Name,self._node.__class__.__name__)
def Hasself(self):
return self.IsMethodScope()
class ArgScope(NodeScope):
def __init__(self,arg_node,parent,root):
super(ArgScope,self).__init__(arg_node,parent,root)
def GetArgName(self):
if self.Node.IsKeyWord:
return "**" + self.Node.Name
elif self.Node.IsVar:
return "*" + self.Node.Name
elif self.Node.IsDefault:
return self.Node.Name
else:
return self.Node.Name
class FuncDefScope(NodeScope):
def __init__(self,func_def_node,parent,root):
super(FuncDefScope,self).__init__(func_def_node,parent,root)
def MakeFixName(self,name):
if self.Node.IsMethod:
name = name.replace("self.","",1)
fix_name = name.replace(self.Node.Name,"",1)
if fix_name.startswith("."):
fix_name = fix_name[1:]
return fix_name
def GetMember(self,name):
fix_name = self.MakeFixName(name)
if fix_name == "":
return [self]
return []
def IsMethodScope(self):
return self.Node.IsMethod
def IsClassMethodScope(self):
return self.Node.IsClassMethod
def GetMemberList(self):
return []
def GetArgTip(self):
info = ''
arg_names = []
for child_scope in self.ChildScopes:
if child_scope.Node.Type == config.NODE_ARG_TYPE:
arg_names.append(child_scope.GetArgName())
if len(arg_names) > 0:
info = "("
info += ','.join(arg_names)
info += ")"
return info
class ClassDefScope(NodeScope):
INIT_METHOD_NAME = "__init__"
def __init__(self,class_def_node,parent,root):
super(ClassDefScope,self).__init__(class_def_node,parent,root)
def FindScopeInChildScopes(self,name):
        # first search among the class's own children
found_scopes = Scope.FindScopeInChildScopes(self,name)
        # then search among the children of the class's base classes
if not found_scopes:
for base in self.Node.Bases:
base_scopes = self.Parent.FindNameScopes(base)
if base_scopes:
return self.FindBasescopes(base_scopes,base,name)
return found_scopes
def FindBasescopes(self,base_scopes,base,name):
find_scopes = []
for base_scope in base_scopes:
if base_scope.Node.Type == config.NODE_IMPORT_TYPE:
child_scopes = base_scope.GetMember(base + "."+ name)
find_scopes.extend(child_scopes)
else:
child_scopes = base_scope.FindScopeInChildScopes(name)
find_scopes.extend(child_scopes)
return find_scopes
def UniqueInitMember(self,member_list):
while member_list.count(self.INIT_METHOD_NAME) > 1:
member_list.remove(self.INIT_METHOD_NAME)
def GetMemberList(self):
member_list = NodeScope.GetMemberList(self)
for base in self.Node.Bases:
base_scopes = self.Parent.FindNameScopes(base)
for base_scope in base_scopes:
if base_scope.Node.Type == config.NODE_IMPORT_TYPE:
member_list.extend(base_scope.GetImportMemberList(base))
else:
member_list.extend(base_scope.GetMemberList())
self.UniqueInitMember(member_list)
return member_list
def GetClassMembers(self,sort=True):
return self.Node.GetClassMembers(sort)
def GetClassMemberList(self,sort=True):
member_list = self.GetClassMembers(False)
for base in self.Node.Bases:
base_scope = self.Parent.FindDefinitionScope(base)
if base_scope is not None:
if base_scope.Node.Type == config.NODE_IMPORT_TYPE or\
base_scope.Node.Type == config.NODE_BUILTIN_IMPORT_TYPE:
member_list.extend(base_scope.GetImportMemberList(base))
else:
member_list.extend(base_scope.GetClassMembers(False))
self.UniqueInitMember(member_list)
if sort:
member_list.sort(CmpMember)
return member_list
def GetMember(self,name):
fix_name = self.MakeFixName(name)
if fix_name == "":
return [self]
return self.FindScopeInChildScopes(fix_name)
#class arg tip is the arg tip of class __init__ method
def GetArgTip(self):
for child_scope in self.ChildScopes:
if child_scope.Node.Type == config.NODE_FUNCDEF_TYPE and child_scope.Node.IsConstructor:
return child_scope.GetArgTip()
return ''
class NameScope(NodeScope):
def __init__(self,name_property_node,parent,root):
super(NameScope,self).__init__(name_property_node,parent,root)
def GetMemberList(self):
member_list = []
if self.Node.ValueType == config.ASSIGN_TYPE_OBJECT:
found_scopes = self.FindNameScopes(self.Node.Value)
if found_scopes:
for found_scope in found_scopes:
if found_scope.Node.Type == config.NODE_IMPORT_TYPE:
member_list = found_scope.GetImportMemberList(self.Node.Value)
else:
member_list = found_scope.GetMemberList()
else:
member_list = intellisence.IntellisenceManager().GetTypeObjectMembers(self.Node.ValueType)
return member_list
def GetMember(self,name):
fix_name = self.MakeFixName(name)
if fix_name == "":
return [self]
if not self.Node.Value:
return []
found_scopes = self.FindNameScopes(self.Node.Value)
members = []
if found_scopes:
            # look up the attribute or method inside the class object
for found_scope in found_scopes:
                # must not include itself, otherwise it would recurse infinitely
if found_scope == self:
continue
if found_scope.Node.Type == config.NODE_IMPORT_TYPE:
members.extend(found_scope.GetMember(self.Node.Value + "." + fix_name))
else:
assert(found_scope != self)
members.extend(found_scope.GetMember(fix_name))
return members
def EqualName(self,name):
if self.Hasself():
return (self.NAME_SELF_KEYWARD + "." + self.Node.Name) == name
return self.Node.Name == name
def Hasself(self):
return (self.Parent.IsMethodScope() or type(self.Parent) == ClassDefScope) and self._node.Type == config.NODE_CLASS_PROPERTY
class UnknownScope(NodeScope):
def __init__(self,unknown_type_node,parent,root):
super(UnknownScope,self).__init__(unknown_type_node,parent,root)
class ImportScope(NodeScope):
def __init__(self,import_node,parent,root):
super(ImportScope,self).__init__(import_node,parent,root)
def EqualName(self,name):
if self.Node.AsName is not None:
return self.Node.AsName == name
else:
return NodeScope.EqualName(self,name)
def MakeFixName(self,name):
        # should only replace the first found name
if self.Node.AsName is not None:
fix_name = name.replace(self.Node.AsName,"",1)
else:
fix_name = name.replace(self.Node.Name,"",1)
if fix_name.startswith("."):
fix_name = fix_name[1:]
return fix_name
def GetImportMemberList(self,name):
fix_name = self.MakeFixName(name)
member_list = intellisence.IntellisenceManager().GetModuleMembers(self.Node.Name,fix_name)
return member_list
def GetMember(self,name):
fix_name = self.MakeFixName(name)
if fix_name == "":
return [self]
return intellisence.IntellisenceManager().GetModuleMember(self.Node.Name,fix_name)
def GetDoc(self):
doc = intellisence.IntellisenceManager().GetModuleDoc(self.Node.Name)
return self.MakeBeautyDoc(doc)
def GetImportMemberArgTip(self,name):
fix_name = self.MakeFixName(name)
if fix_name == "":
return ''
return intellisence.IntellisenceManager().GetModuleMemberArgmentTip(self.Node.Name,fix_name)
class FromImportScope(NodeScope):
def __init__(self,from_import_node,parent,root):
super(FromImportScope,self).__init__(from_import_node,parent,root)
def EqualName(self,name):
for child_scope in self.ChildScopes:
if child_scope.EqualName(name):
return True
return False
class MainFunctionScope(NodeScope):
def __init__(self,main_function_node,parent,root):
super(MainFunctionScope,self).__init__(main_function_node,parent,root)
class ReturnScope(NodeScope):
def __init__(self,return_node,parent,root):
super(ReturnScope,self).__init__(return_node,parent,root)
|
PypiClean
|
/numericube-twistranet-2.0.0.zip/numericube-twistranet-2.0.0/twistranet/themes/twistheme/static/js/jquery.dd.js
|
*/
;eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}(';(5($){3 1J="";3 34=5(p,q){3 r=p;3 s=1b;3 q=$.35({1g:3S,2g:7,3a:23,1K:11,1L:3T,3b:\'1Y\',1M:15,3c:\'3U\',2A:\'\',1k:\'\'},q);1b.1U=2h 3d();3 t="";3 u={};u.2B=11;u.2i=15;u.2j=1m;3 v=15;3 w={2C:\'3V\',1N:\'3W\',1O:\'3X\',1P:\'3Y\',1f:\'3Z\',2D:\'41\',2E:\'42\',43:\'44\',2k:\'45\',3e:\'46\'};3 x={1Y:q.3b,2F:\'2F\',2G:\'2G\',2H:\'2H\',1q:\'1q\',1j:.30,2I:\'2I\',2l:\'2l\',2m:\'2m\'};3 y={3f:"2n,2J,2K,1Q,2o,2p,1r,1B,2q,1R,47,1Z,2L",48:"1C,1s,1j,49"};1b.1D=2h 3d();3 z=$(r).12("19");4(3g(z)=="1a"||z.1c<=0){z="4a"+$.1S.3h++;$(r).12("19",z)};3 A=$(r).12("1k");q.1k+=(A==1a)?"":A;3 B=$(r).3i();v=($(r).12("1C")>1||$(r).12("1s")==11)?11:15;4(v){q.2g=$(r).12("1C")};3 C={};3 D=5(a){18 z+w[a]};3 E=5(a){3 b=a;3 c=$(b).12("1k");18 c};3 F=5(a){3 b=$("#"+z+" 2r:8");4(b.1c>1){1t(3 i=0;i<b.1c;i++){4(a==b[i].1h){18 11}}}1d 4(b.1c==1){4(b[0].1h==a){18 11}};18 15};3 G=5(a,b,c,d){3 e="";3 f=(d=="2M")?D("2E"):D("2D");3 g=(d=="2M")?f+"2N"+(b)+"2N"+(c):f+"2N"+(b);3 h="";3 i="";4(q.1M!=15){i=\' \'+q.1M+\' \'+a.3j}1d{h=$(a).12("1V");h=(h.1c==0)?"":\'<3k 3l="\'+h+\'" 3m="3n" /> \'};3 j=$(a).1o();3 k=$(a).4b();3 l=($(a).12("1j")==11)?"1j":"21";C[g]={1E:h+j,22:k,1o:j,1h:a.1h,19:g};3 m=E(a);4(F(a.1h)==11){e+=\'<a 3o="3p:3q(0);" 1p="8 \'+l+i+\'"\'}1d{e+=\'<a 3o="3p:3q(0);" 1p="\'+l+i+\'"\'};4(m!==15&&m!==1a){e+=" 1k=\'"+m+"\'"};e+=\' 19="\'+g+\'">\';e+=h+\'<1u 1p="\'+x.1q+\'">\'+j+\'</1u></a>\';18 e};3 H=5(){3 f=B;4(f.1c==0)18"";3 g="";3 h=D("2D");3 i=D("2E");f.2O(5(c){3 d=f[c];4(d.4c=="4d"){g+="<1v 1p=\'4e\'>";g+="<1u 1k=\'3r-4f:4g;3r-1k:4h; 4i:4j;\'>"+$(d).12("4k")+"</1u>";3 e=$(d).3i();e.2O(5(a){3 b=e[a];g+=G(b,c,a,"2M")});g+="</1v>"}1d{g+=G(d,c,"","")}});18 g};3 I=5(){3 a=D("1N");3 b=D("1f");3 c=q.1k;1W="";1W+=\'<1v 19="\'+b+\'" 1p="\'+x.2H+\'"\';4(!v){1W+=(c!="")?\' 1k="\'+c+\'"\':\'\'}1d{1W+=(c!="")?\' 1k="2s-1w:4l 4m #4n;1x:2t;1y:2P;\'+c+\'"\':\'\'};1W+=\'>\';18 1W};3 J=5(){3 a=D("1O");3 b=D("2k");3 c=D("1P");3 d=D("3e");3 e="";3 f="";4(6.9(z).1F.1c>0){e=$("#"+z+" 2r:8").1o();f=$("#"+z+" 2r:8").12("1V")};f=(f.1c==0||f==1a||q.1K==15||q.1M!=15)?"":\'<3k 3l="\'+f+\'" 3m="3n" /> \';3 g=\'<1v 19="\'+a+\'" 1p="\'+x.2F+\'"\';g+=\'>\';g+=\'<1u 19="\'+b+\'" 1p="\'+x.2G+\'"></1u><1u 1p="\'+x.1q+\'" 19="\'+c+\'">\'+f+\'<1u 1p="\'+x.1q+\'">\'+e+\'</1u></1u></1v>\';18 g};3 K=5(){3 c=D("1f");$("#"+c+" a.21").1I("1Q");$("#"+c+" a.21").1e("1Q",5(a){a.24();N(1b);4(!v){$("#"+c).1I("1B");P(15);3 b=(q.1K==15)?$(1b).1o():$(1b).1E();T(b);s.25()};X()})};3 L=5(){3 d=15;3 e=D("1N");3 f=D("1O");3 g=D("1P");3 h=D("1f");3 i=D("2k");3 j=$("#"+z).2Q();j=j+2;3 k=q.1k;4($("#"+e).1c>0){$("#"+e).2u();d=11};3 l=\'<1v 19="\'+e+\'" 1p="\'+x.1Y+\'"\';l+=(k!="")?\' 1k="\'+k+\'"\':\'\';l+=\'>\';l+=J();l+=I();l+=H();l+="</1v>";l+="</1v>";4(d==11){3 m=D("2C");$("#"+m).2R(l)}1d{$("#"+z).2R(l)};4(v){3 f=D("1O");$("#"+f).2v()};$("#"+e).14("2Q",j+"1T");$("#"+h).14("2Q",(j-2)+"1T");4(B.1c>q.2g){3 n=26($("#"+h+" a:3s").14("28-3t"))+26($("#"+h+" a:3s").14("28-1w"));3 o=((q.3a)*q.2g)-n;$("#"+h).14("1g",o+"1T")}1d 4(v){3 o=$("#"+z).1g();$("#"+h).14("1g",o+"1T")};4(d==15){S();O(z)};4($("#"+z).12("1j")==11){$("#"+e).14("2w",x.1j)};R();$("#"+f).1e("1B",5(a){2S(1)});$("#"+f).1e("1R",5(a){2S(0)});K();$("#"+h+" 
a.1j").14("2w",x.1j);4(v){$("#"+h).1e("1B",5(c){4(!u.2i){u.2i=11;$(6).1e("1Z",5(a){3 b=a.3u;u.2j=b;4(b==39||b==40){a.24();a.2x();U();X()};4(b==37||b==38){a.24();a.2x();V();X()}})}})};$("#"+h).1e("1R",5(a){P(15);$(6).1I("1Z");u.2i=15;u.2j=1m});$("#"+f).1e("1Q",5(b){P(15);4($("#"+h+":3v").1c==1){$("#"+h).1I("1B")}1d{$("#"+h).1e("1B",5(a){P(11)});s.3w()}});$("#"+f).1e("1R",5(a){P(15)});4(q.1K&&q.1M!=15){W()}};3 M=5(a){1t(3 i 2y C){4(C[i].1h==a){18 C[i]}};18-1};3 N=5(a){3 b=D("1f");4($("#"+b+" a.8").1c==1){t=$("#"+b+" a.8").1o()};4(!v){$("#"+b+" a.8").1G("8")};3 c=$("#"+b+" a.8").12("19");4(c!=1a){3 d=(u.1X==1a||u.1X==1m)?C[c].1h:u.1X};4(a&&!v){$(a).1z("8")};4(v){3 e=u.2j;4($("#"+z).12("1s")==11){4(e==17){u.1X=C[$(a).12("19")].1h;$(a).4o("8")}1d 4(e==16){$("#"+b+" a.8").1G("8");$(a).1z("8");3 f=$(a).12("19");3 g=C[f].1h;1t(3 i=2T.4p(d,g);i<=2T.4q(d,g);i++){$("#"+M(i).19).1z("8")}}1d{$("#"+b+" a.8").1G("8");$(a).1z("8");u.1X=C[$(a).12("19")].1h}}1d{$("#"+b+" a.8").1G("8");$(a).1z("8");u.1X=C[$(a).12("19")].1h}}};3 O=5(a){3 b=a;6.9(b).4r=5(e){$("#"+b).1S(q)}};3 P=5(a){u.2B=a};3 Q=5(){18 u.2B};3 R=5(){3 b=D("1N");3 c=y.3f.4s(",");1t(3 d=0;d<c.1c;d++){3 e=c[d];3 f=Y(e);4(f==11){3x(e){1n"2n":$("#"+b).1e("4t",5(a){6.9(z).2n()});1i;1n"1Q":$("#"+b).1e("1Q",5(a){$("#"+z).1H("1Q")});1i;1n"2o":$("#"+b).1e("2o",5(a){$("#"+z).1H("2o")});1i;1n"2p":$("#"+b).1e("2p",5(a){$("#"+z).1H("2p")});1i;1n"1r":$("#"+b).1e("1r",5(a){$("#"+z).1H("1r")});1i;1n"1B":$("#"+b).1e("1B",5(a){$("#"+z).1H("1B")});1i;1n"2q":$("#"+b).1e("2q",5(a){$("#"+z).1H("2q")});1i;1n"1R":$("#"+b).1e("1R",5(a){$("#"+z).1H("1R")});1i}}}};3 S=5(){3 a=D("2C");$("#"+z).2R("<1v 1p=\'"+x.2I+"\' 1k=\'1g:4u;4v:4w;1y:3y;\' 19=\'"+a+"\'></1v>");$("#"+z).4x($("#"+a))};3 T=5(a){3 b=D("1P");$("#"+b).1E(a)};3 U=5(){3 a=D("1P");3 b=D("1f");3 c=$("#"+b+" a.21");1t(3 d=0;d<c.1c;d++){3 e=c[d];3 f=$(e).12("19");4($(e).3z("8")&&d<c.1c-1){$("#"+b+" a.8").1G("8");$(c[d+1]).1z("8");3 g=$("#"+b+" a.8").12("19");4(!v){3 h=(q.1K==15)?C[g].1o:C[g].1E;T(h)};4(26(($("#"+g).1y().1w+$("#"+g).1g()))>=26($("#"+b).1g())){$("#"+b).29(($("#"+b).29())+$("#"+g).1g()+$("#"+g).1g())};1i}}};3 V=5(){3 a=D("1P");3 b=D("1f");3 c=$("#"+b+" a.21");1t(3 d=0;d<c.1c;d++){3 e=c[d];3 f=$(e).12("19");4($(e).3z("8")&&d!=0){$("#"+b+" a.8").1G("8");$(c[d-1]).1z("8");3 g=$("#"+b+" a.8").12("19");4(!v){3 h=(q.1K==15)?C[g].1o:C[g].1E;T(h)};4(26(($("#"+g).1y().1w+$("#"+g).1g()))<=0){$("#"+b).29(($("#"+b).29()-$("#"+b).1g())-$("#"+g).1g())};1i}}};3 W=5(){4(q.1M!=15){3 a=D("1P");3 b=6.9(z).1F[6.9(z).1l].3j;4(b.1c>0){3 c=D("1f");3 d=$("#"+c+" a."+b).12("19");3 e=$("#"+d).14("2a-4y");3 f=$("#"+d).14("2a-1y");3 g=$("#"+d).14("28-3A");4(e!=1a){$("#"+a).2b("."+x.1q).12(\'1k\',"2a:"+e)};4(f!=1a){$("#"+a).2b("."+x.1q).14(\'2a-1y\',f)};4(g!=1a){$("#"+a).2b("."+x.1q).14(\'28-3A\',g)};$("#"+a).2b("."+x.1q).14(\'2a-3B\',\'4z-3B\');$("#"+a).2b("."+x.1q).14(\'28-3t\',\'4A\')}}};3 X=5(){3 a=D("1f");3 b=$("#"+a+" a.8");4(b.1c==1){3 c=$("#"+a+" a.8").1o();3 d=$("#"+a+" a.8").12("19");4(d!=1a){3 e=C[d].22;6.9(z).1l=C[d].1h};4(q.1K&&q.1M!=15)W()}1d 4(b.1c>1){3 f=$("#"+z+" > 2r:8").4B("8");1t(3 i=0;i<b.1c;i++){3 d=$(b[i]).12("19");3 g=C[d].1h;6.9(z).1F[g].8="8"}};3 h=6.9(z).1l;s.1U["1l"]=h};3 Y=5(a){4($("#"+z).12("4C"+a)!=1a){18 11};3 b=$("#"+z).2U("4D");4(b&&b[a]){18 11};18 15};3 Z=5(){3 b=D("1f");4(Y(\'2K\')==11){3 c=C[$("#"+b+" a.8").12("19")].1o;4($.3C(t)!==$.3C(c)&&t!==""){$("#"+z).1H("2K")}};4(Y(\'1r\')==11){$("#"+z).1H("1r")};4(Y(\'2J\')==11){$(6).1e("1r",5(a){$("#"+z).2n();$("#"+z)[0].2J();X();$(6).1I("1r")})}};3 
2S=5(a){3 b=D("2k");4(a==1)$("#"+b).14({3D:\'0 4E%\'});1d $("#"+b).14({3D:\'0 0\'})};3 3E=5(){1t(3 i 2y 6.9(z)){4(3g(6.9(z)[i])!=\'5\'&&6.9(z)[i]!==1a&&6.9(z)[i]!==1m){s.1A(i,6.9(z)[i],11)}}};3 3F=5(a,b){4(M(b)!=-1){6.9(z)[a]=b;3 c=D("1f");$("#"+c+" a.8").1G("8");$("#"+M(b).19).1z("8");3 d=M(6.9(z).1l).1E;T(d)}};3 3G=5(i,a){4(a==\'d\'){1t(3 b 2y C){4(C[b].1h==i){4F C[b];1i}}};3 c=0;1t(3 b 2y C){C[b].1h=c;c++}};3 2V=5(){3 a=D("1f");3 b=D("1N");3 c=$("#"+b).1y();3 d=$("#"+b).1g();3 e=$(3H).1g();3 f=$(3H).29();3 g=$("#"+a).1g();3 h={1L:q.1L,1w:(c.1w+d)+"1T",1x:"2c"};3 i=q.3c;3 j=15;3 k=x.2m;$("#"+a).1G(x.2m);$("#"+a).1G(x.2l);4((e+f)<2T.4G(g+d+c.1w)){3 l=c.1w-g;4((c.1w-g)<0){l=10};h={1L:q.1L,1w:l+"1T",1x:"2c"};i="2W";j=11;k=x.2l};18{2X:j,3I:i,14:h,2s:k}};1b.3w=5(){4((s.2d("1j",11)==11)||(s.2d("1F",11).1c==0))18;3 c=D("1f");4(1J!=""&&c!=1J){$("#"+1J).3J("2Y");$("#"+1J).14({1L:\'0\'})};4($("#"+c).14("1x")=="2c"){t=C[$("#"+c+" a.8").12("19")].1o;$(6).1e("1Z",5(a){3 b=a.3u;4(b==39||b==40){a.24();a.2x();U()};4(b==37||b==38){a.24();a.2x();V()};4(b==27||b==13){s.25();X()};4($("#"+z).12("3K")!=1a){6.9(z).3K()}});$(6).1e("2L",5(a){4($("#"+z).12("3L")!=1a){6.9(z).3L()}});$(6).1e("1r",5(a){4(Q()==15){s.25()}});3 d=2V();$("#"+c).14(d.14);4(d.2X==11){$("#"+c).14({1x:\'2t\'});$("#"+c).1z(d.2s);4(s.1D["2z"]!=1m){2e(s.1D["2z"])(s)}}1d{$("#"+c)[d.3I]("2Y",5(){$("#"+c).1z(d.2s);4(s.1D["2z"]!=1m){2e(s.1D["2z"])(s)}})};4(c!=1J){1J=c}}};1b.25=5(){3 b=D("1f");$(6).1I("1Z");$(6).1I("2L");$(6).1I("1r");3 c=2V();4(c.2X==11){$("#"+b).14("1x","2c")};$("#"+b).3J("2Y",5(a){Z();$("#"+b).14({1L:\'0\'});4(s.1D["3M"]!=1m){2e(s.1D["3M"])(s)}})};1b.1l=5(i){s.1A("1l",i)};1b.1A=5(a,b,c){4(a==1a||b==1a)3N{3O:"1A 4H 4I?"};s.1U[a]=b;4(c!=11){3x(a){1n"1l":3F(a,b);1i;1n"1j":s.1j(b,11);1i;1n"1s":6.9(z)[a]=b;v=($(r).12("1C")>0||$(r).12("1s")==11)?11:15;4(v){3 d=$("#"+z).1g();3 f=D("1f");$("#"+f).14("1g",d+"1T");3 g=D("1O");$("#"+g).2v();3 f=D("1f");$("#"+f).14({1x:\'2t\',1y:\'2P\'});K()};1i;1n"1C":6.9(z)[a]=b;4(b==0){6.9(z).1s=15};v=($(r).12("1C")>0||$(r).12("1s")==11)?11:15;4(b==0){3 g=D("1O");$("#"+g).2W();3 f=D("1f");$("#"+f).14({1x:\'2c\',1y:\'3y\'});3 h="";4(6.9(z).1l>=0){3 i=M(6.9(z).1l);h=i.1E;N($("#"+i.19))};T(h)}1d{3 g=D("1O");$("#"+g).2v();3 f=D("1f");$("#"+f).14({1x:\'2t\',1y:\'2P\'})};1i;4J:4K{6.9(z)[a]=b}4L(e){};1i}}};1b.2d=5(a,b){4(a==1a&&b==1a){18 s.1U};4(a!=1a&&b==1a){18(s.1U[a]!=1a)?s.1U[a]:1m};4(a!=1a&&b!=1a){18 6.9(z)[a]}};1b.3v=5(a){3 b=D("1N");4(a==11){$("#"+b).2W()}1d 4(a==15){$("#"+b).2v()}1d{18 $("#"+b).14("1x")}};1b.4M=5(a,b){3 c=a;3 d=c.1o;3 e=(c.22==1a||c.22==1m)?d:c.22;3 f=(c["1V"]==1a||c["1V"]==1m)?\'\':c["1V"];3 i=(b==1a||b==1m)?6.9(z).1F.1c:b;6.9(z).1F[i]=2h 4N(d,e);4(f!=\'\')6.9(z).1F[i]["1V"]=f;3 g=M(i);4(g!=-1){3 h=G(6.9(z).1F[i],i,"","");$("#"+g.19).1E(h)}1d{3 h=G(6.9(z).1F[i],i,"","");3 j=D("1f");$("#"+j).4O(h);K()}};1b.2u=5(i){6.9(z).2u(i);4((M(i))!=-1){$("#"+M(i).19).2u();3G(i,\'d\')};4(6.9(z).1c==0){T("")}1d{3 a=M(6.9(z).1l).1E;T(a)};s.1A("1l",6.9(z).1l)};1b.1j=5(a,b){6.9(z).1j=a;3 c=D("1N");4(a==11){$("#"+c).14("2w",x.1j);s.25()}1d 4(a==15){$("#"+c).14("2w",1)};4(b!=11){s.1A("1j",a)}};1b.2Z=5(){18(6.9(z).2Z==1a)?1m:6.9(z).2Z};1b.31=5(){4(2f.1c==1){18 6.9(z).31(2f[0])}1d 4(2f.1c==2){18 6.9(z).31(2f[0],2f[1])}1d{3N{3O:"4P 1h 4Q 4R!"}}};1b.3P=5(a){18 6.9(z).3P(a)};1b.1s=5(a){4(a==1a){18 s.2d("1s")}1d{s.1A("1s",a)}};1b.1C=5(a){4(a==1a){18 s.2d("1C")}1d{s.1A("1C",a)}};1b.4S=5(a,b){s.1D[a]=b};1b.4T=5(a){2e(s.1D[a])(s)};3 3Q=5(){s.1A("32",$.1S.32);s.1A("33",$.1S.33)};3 
3R=5(){L();3E();3Q();4(q.2A!=\'\'){2e(q.2A)(s)}};3R()};$.1S={32:2.36,33:"4U 4V",3h:20,4W:5(a,b){18 $(a).1S(b).2U("1Y")}};$.4X.35({1S:5(b){18 1b.2O(5(){3 a=2h 34(1b,b);$(1b).2U(\'1Y\',a)})}})})(4Y);',62,309,'|||var|if|function|document||selected|getElementById||||||||||||||||||||||||||||||||||||||||||||||||||||||true|attr||css|false|||return|id|undefined|this|length|else|bind|postChildID|height|index|break|disabled|style|selectedIndex|null|case|text|class|ddTitleText|mouseup|multiple|for|span|div|top|display|position|addClass|set|mouseover|size|onActions|html|options|removeClass|trigger|unbind|bh|showIcon|zIndex|useSprite|postID|postTitleID|postTitleTextID|click|mouseout|msDropDown|px|ddProp|title|sDiv|oldIndex|dd|keydown||enabled|value||preventDefault|close|parseInt||padding|scrollTop|background|find|none|get|eval|arguments|visibleRows|new|keyboardAction|currentKey|postArrowID|borderTop|noBorderTop|focus|dblclick|mousedown|mousemove|option|border|block|remove|hide|opacity|stopPropagation|in|onOpen|onInit|insideWindow|postElementHolder|postAID|postOPTAID|ddTitle|arrow|ddChild|ddOutOfVision|blur|change|keyup|opt|_|each|relative|width|after|bj|Math|data|bn|show|opp|fast|form||item|version|author|bi|extend|||||rowHeight|mainCSS|animStyle|Object|postInputhidden|actions|typeof|counter|children|className|img|src|align|absmiddle|href|javascript|void|font|first|bottom|keyCode|visible|open|switch|absolute|hasClass|left|repeat|trim|backgroundPosition|bk|bl|bm|window|ani|slideUp|onkeydown|onkeyup|onClose|throw|message|namedItem|bo|bp|120|9999|slideDown|_msddHolder|_msdd|_title|_titletext|_child||_msa|_msopta|postInputID|_msinput|_arrow|_inp|keypress|prop|tabindex|msdrpdd|val|nodeName|OPTGROUP|opta|weight|bold|italic|clear|both|label|1px|solid|c3c3c3|toggleClass|min|max|refresh|split|mouseenter|0px|overflow|hidden|appendTo|image|no|2px|removeAttr|on|events|100|delete|floor|to|what|default|try|catch|add|Option|append|An|is|required|addMyEvent|fireEvent|Marghoob|Suleman|create|fn|jQuery'.split('|'),0,{}))
|
PypiClean
|
/vban_cmd-2.4.9.tar.gz/vban_cmd-2.4.9/vban_cmd/strip.py
|
import time
from abc import abstractmethod
from typing import Union
from .iremote import IRemote
from .kinds import kinds_all
from .meta import channel_bool_prop, channel_label_prop, strip_output_prop
class Strip(IRemote):
"""
Implements the common interface
Defines concrete implementation for strip
"""
@abstractmethod
def __str__(self):
pass
@property
def identifier(self) -> str:
return f"strip[{self.index}]"
@property
def limit(self) -> int:
return
@limit.setter
def limit(self, val: int):
self.setter("limit", val)
@property
def gain(self) -> float:
val = self.getter("gain")
if val is None:
val = self.gainlayer[0].gain
return round(val, 1)
@gain.setter
def gain(self, val: float):
self.setter("gain", val)
def fadeto(self, target: float, time_: int):
self.setter("FadeTo", f"({target}, {time_})")
time.sleep(self._remote.DELAY)
def fadeby(self, change: float, time_: int):
self.setter("FadeBy", f"({change}, {time_})")
time.sleep(self._remote.DELAY)
class PhysicalStrip(Strip):
@classmethod
def make(cls, remote, index):
return type(
f"PhysicalStrip{remote.kind}",
(cls,),
{
"comp": StripComp(remote, index),
"gate": StripGate(remote, index),
"denoiser": StripDenoiser(remote, index),
"eq": StripEQ(remote, index),
},
)
def __str__(self):
return f"{type(self).__name__}{self.index}"
@property
def device(self):
return
@property
def sr(self):
return
class StripComp(IRemote):
@property
def identifier(self) -> str:
return f"strip[{self.index}].comp"
@property
def knob(self) -> float:
return
@knob.setter
def knob(self, val: float):
self.setter("", val)
@property
def gainin(self) -> float:
return
@gainin.setter
def gainin(self, val: float):
self.setter("GainIn", val)
@property
def ratio(self) -> float:
return
@ratio.setter
def ratio(self, val: float):
self.setter("Ratio", val)
@property
def threshold(self) -> float:
return
@threshold.setter
def threshold(self, val: float):
self.setter("Threshold", val)
@property
def attack(self) -> float:
return
@attack.setter
def attack(self, val: float):
self.setter("Attack", val)
@property
def release(self) -> float:
return
@release.setter
def release(self, val: float):
self.setter("Release", val)
@property
def knee(self) -> float:
return
@knee.setter
def knee(self, val: float):
self.setter("Knee", val)
@property
def gainout(self) -> float:
return
@gainout.setter
def gainout(self, val: float):
self.setter("GainOut", val)
@property
def makeup(self) -> bool:
return
@makeup.setter
def makeup(self, val: bool):
self.setter("makeup", 1 if val else 0)
class StripGate(IRemote):
@property
def identifier(self) -> str:
return f"strip[{self.index}].gate"
@property
def knob(self) -> float:
return
@knob.setter
def knob(self, val: float):
self.setter("", val)
@property
def threshold(self) -> float:
return
@threshold.setter
def threshold(self, val: float):
self.setter("Threshold", val)
@property
def damping(self) -> float:
return
@damping.setter
def damping(self, val: float):
self.setter("Damping", val)
@property
def bpsidechain(self) -> int:
return
@bpsidechain.setter
def bpsidechain(self, val: int):
self.setter("BPSidechain", val)
@property
def attack(self) -> float:
return
@attack.setter
def attack(self, val: float):
self.setter("Attack", val)
@property
def hold(self) -> float:
return
@hold.setter
def hold(self, val: float):
self.setter("Hold", val)
@property
def release(self) -> float:
return
@release.setter
def release(self, val: float):
self.setter("Release", val)
class StripDenoiser(IRemote):
@property
def identifier(self) -> str:
return f"strip[{self.index}].denoiser"
@property
def knob(self) -> float:
return
@knob.setter
def knob(self, val: float):
self.setter("", val)
class StripEQ(IRemote):
@property
def identifier(self) -> str:
return f"strip[{self.index}].eq"
@property
def on(self):
return
@on.setter
def on(self, val: bool):
self.setter("on", 1 if val else 0)
@property
def ab(self):
return
@ab.setter
def ab(self, val: bool):
self.setter("ab", 1 if val else 0)
class VirtualStrip(Strip):
def __str__(self):
return f"{type(self).__name__}{self.index}"
mc = channel_bool_prop("mc")
mono = mc
@property
def k(self) -> int:
return
@k.setter
def k(self, val: int):
self.setter("karaoke", val)
def appgain(self, name: str, gain: float):
self.setter("AppGain", f'("{name}", {gain})')
def appmute(self, name: str, mute: bool = None):
self.setter("AppMute", f'("{name}", {1 if mute else 0})')
class StripLevel(IRemote):
def __init__(self, remote, index):
super().__init__(remote, index)
phys_map = tuple((i, i + 2) for i in range(0, remote.kind.phys_in * 2, 2))
virt_map = tuple(
(i, i + 8)
for i in range(
remote.kind.phys_in * 2,
remote.kind.phys_in * 2 + remote.kind.virt_in * 8,
8,
)
)
self.level_map = phys_map + virt_map
self.range = self.level_map[self.index]
def getter(self):
"""Returns a tuple of level values for the channel."""
def fget(i):
return round((((1 << 16) - 1) - i) * -0.01, 1)
if not self._remote.stopped() and self._remote.event.ldirty:
return tuple(
fget(i)
for i in self._remote.cache["strip_level"][
self.range[0] : self.range[-1]
]
)
return tuple(
fget(i)
for i in self._remote._get_levels(self.public_packet)[0][
self.range[0] : self.range[-1]
]
)
@property
def identifier(self) -> str:
return f"strip[{self.index}]"
@property
def prefader(self) -> tuple:
return self.getter()
@property
def postfader(self) -> tuple:
return
@property
def postmute(self) -> tuple:
return
@property
def isdirty(self) -> bool:
"""
Returns dirty status for this specific channel.
Expected to be used in a callback only.
"""
return any(self._remote._strip_comp[self.range[0] : self.range[-1]])
is_updated = isdirty
class GainLayer(IRemote):
def __init__(self, remote, index, i):
super().__init__(remote, index)
self._i = i
@property
def identifier(self) -> str:
return f"strip[{self.index}]"
@property
def gain(self) -> float:
def fget():
val = getattr(self.public_packet, f"stripgainlayer{self._i+1}")[self.index]
if 0 <= val <= 1200:
return val * 0.01
return (((1 << 16) - 1) - val) * -0.01
val = self.getter(f"GainLayer[{self._i}]")
return round(val if val else fget(), 1)
@gain.setter
def gain(self, val: float):
self.setter(f"GainLayer[{self._i}]", val)
def _make_gainlayer_mixin(remote, index):
"""Creates a GainLayer mixin"""
return type(
f"GainlayerMixin",
(),
{
"gainlayer": tuple(
GainLayer(remote, index, i) for i in range(remote.kind.num_bus)
)
},
)
def _make_channelout_mixin(kind):
"""Creates a channel out property mixin"""
return type(
f"ChannelOutMixin{kind}",
(),
{
**{
f"A{i}": strip_output_prop(f"A{i}") for i in range(1, kind.phys_out + 1)
},
**{
f"B{i}": strip_output_prop(f"B{i}") for i in range(1, kind.virt_out + 1)
},
},
)
_make_channelout_mixins = {
kind.name: _make_channelout_mixin(kind) for kind in kinds_all
}
def strip_factory(is_phys_strip, remote, i) -> Union[PhysicalStrip, VirtualStrip]:
"""
Factory method for strips
Mixes in required classes
Returns a physical or virtual strip subclass
"""
STRIP_cls = PhysicalStrip.make(remote, i) if is_phys_strip else VirtualStrip
CHANNELOUTMIXIN_cls = _make_channelout_mixins[remote.kind.name]
GAINLAYERMIXIN_cls = _make_gainlayer_mixin(remote, i)
return type(
f"{STRIP_cls.__name__}{remote.kind}",
(STRIP_cls, CHANNELOUTMIXIN_cls, GAINLAYERMIXIN_cls),
{
"levels": StripLevel(remote, i),
**{param: channel_bool_prop(param) for param in ["mono", "solo", "mute"]},
"label": channel_label_prop(),
},
)(remote, i)
def request_strip_obj(is_phys_strip, remote, i) -> Strip:
"""
Strip entry point. Wraps factory method.
Returns a reference to a strip subclass of a kind
"""
return strip_factory(is_phys_strip, remote, i)
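# A minimal usage sketch (the `remote` object is assumed to be an already
# connected remote instance created elsewhere, outside this module): build the
# first physical strip and set a couple of its parameters.
#
#     strip = request_strip_obj(True, remote, 0)
#     strip.mute = True
#     strip.gain = -6.0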
|
PypiClean
|
/commanderbot-0.18.0.tar.gz/commanderbot-0.18.0/README.md
|
# commanderbot-py
A collection of utilities and extensions for discord.py bots.
[![package-badge]](https://pypi.python.org/pypi/commanderbot/)
[![version-badge]](https://pypi.python.org/pypi/commanderbot/)
## Requirements
- Python 3.10+
- discord.py 2.0+
## Running your bot
You can run your own bot without writing any code.
You will need the following:
1. Your own [Discord Application](https://discordapp.com/developers/applications) with a bot token.
2. A [configuration file](#configuring-your-bot) for the bot.
3. A Python 3.10+ environment.
- It is recommended to use a [virtual environment](https://docs.python.org/3/tutorial/venv.html) for this.
- You can use [pyenv](https://github.com/pyenv/pyenv) to build and run Python 3.10.
4. If you have [poetry](https://python-poetry.org/), you can `poetry install` instead of using `pip`. (Just make sure that dev dependencies are also installed.) Otherwise, you need to install a few packages with `pip`:
- Run `pip install commanderbot` to install the bot core package.
- Run `pip install git+https://github.com/Rapptz/discord.py.git@848d752` to install the latest (and final) version of the discord.py 2.0 beta from GitHub.
- Run `pip install git+https://github.com/vberlier/nbtlib@main` to install the latest version of nbtlib from GitHub.
The first thing you should do is check the CLI help menu:
```bash
python -m commanderbot --help
```
There are three ways to provide your bot token:
1. (Recommended) As the `BOT_TOKEN` environment variable: `BOT_TOKEN=put_your_bot_token_here`
2. As a CLI option: `--token put_your_bot_token_here`
3. Manually, when prompted during start-up
Here's an example that provides the bot token as an argument:
```bash
python -m commanderbot bot.json --token put_your_bot_token_here
```
## Configuring your bot
The current set of configuration options is limited. Following is an example configuration that sets the command prefix and loads the `status` and `faq` extensions.
> Note that with this configuration, the `faq` extension will require read-write access to `faq.json` in the working directory.
```json
{
"command_prefix": ">",
"extensions": [
"commanderbot.ext.status",
{
"name": "commanderbot.ext.faq",
"enabled": true,
"options": {
"database": "faq.json",
"prefix": "?"
}
}
]
}
```
[package-badge]: https://img.shields.io/pypi/v/commanderbot.svg
[version-badge]: https://img.shields.io/pypi/pyversions/commanderbot.svg
|
PypiClean
|
/suds-bis-1.0.0.tar.gz/suds-bis-1.0.0/suds/cache.py
|
import suds
from suds.sax.parser import Parser
from suds.sax.element import Element
from datetime import datetime as dt
from datetime import timedelta
import os
from tempfile import gettempdir as tmp
import pickle
from logging import getLogger
log = getLogger(__name__)
class Cache:
"""
    An object cache.
"""
def get(self, id):
"""
        Get an object from the cache by ID.
@param id: The object ID.
@type id: str
@return: The object, else None
@rtype: any
"""
raise Exception("not-implemented")
def put(self, id, object):
"""
        Put an object into the cache.
@param id: The object ID.
@type id: str
@param object: The object to add.
@type object: any
"""
raise Exception("not-implemented")
def purge(self, id):
"""
        Purge an object from the cache by ID.
@param id: A object ID.
@type id: str
"""
raise Exception("not-implemented")
def clear(self):
"""
Clear all objects from the cache.
"""
raise Exception("not-implemented")
class NoCache(Cache):
"""
    The pass-through (no-op) object cache.
"""
def get(self, id):
return None
def put(self, id, object):
pass
class FileCache(Cache):
"""
A file-based URL cache.
@cvar fnprefix: The file name prefix.
    @type fnprefix: str
@ivar duration: The cached file duration which defines how
long the file will be cached.
@type duration: (unit, value)
@ivar location: The directory for the cached files.
@type location: str
"""
fnprefix = "suds"
units = ("months", "weeks", "days", "hours", "minutes", "seconds")
def __init__(self, location=None, **duration):
"""
@param location: The directory for the cached files.
@type location: str
@param duration: The cached file duration which defines how
long the file will be cached. A duration=0 means forever.
The duration may be: (months|weeks|days|hours|minutes|seconds).
@type duration: {unit:value}
"""
if location is None:
location = os.path.join(tmp(), "suds")
self.location = location
self.duration = (None, 0)
self.setduration(**duration)
self.checkversion()
def fnsuffix(self):
"""
Get the file name suffix
@return: The suffix
@rtype: str
"""
return "gcf"
def setduration(self, **duration):
"""
Set the caching duration which defines how long the
file will be cached.
@param duration: The cached file duration which defines how
long the file will be cached. A duration=0 means forever.
The duration may be: (months|weeks|days|hours|minutes|seconds).
@type duration: {unit:value}
"""
if len(duration) == 1:
arg = list(duration.items())[0]
if not arg[0] in self.units:
raise Exception("must be: %s" % str(self.units))
self.duration = arg
return self
def setlocation(self, location):
"""
Set the location (directory) for the cached files.
@param location: The directory for the cached files.
@type location: str
"""
self.location = location
def mktmp(self):
"""
        Make the I{location} directory if it doesn't already exist.
"""
try:
if not os.path.isdir(self.location):
os.makedirs(self.location)
except Exception:
log.debug(self.location, exc_info=1)
return self
def put(self, id, bfr):
try:
fn = self.__fn(id)
f = self.open(fn, "wb")
try:
f.write(bfr)
finally:
f.close()
return bfr
except Exception:
log.debug(id, exc_info=1)
return bfr
def get(self, id):
try:
f = self.getf(id)
try:
return f.read()
finally:
f.close()
except Exception:
pass
def getf(self, id):
try:
fn = self.__fn(id)
self.validate(fn)
return self.open(fn, "rb")
except Exception:
pass
def validate(self, fn):
"""
Validate that the file has not expired based on the I{duration}.
@param fn: The file name.
@type fn: str
"""
if self.duration[1] < 1:
return
created = dt.fromtimestamp(os.path.getctime(fn))
d = {self.duration[0]: self.duration[1]}
expired = created + timedelta(**d)
if expired < dt.now():
log.debug("%s expired, deleted", fn)
os.remove(fn)
def clear(self):
for fn in os.listdir(self.location):
path = os.path.join(self.location, fn)
if os.path.isdir(path):
continue
if fn.startswith(self.fnprefix):
os.remove(path)
log.debug("deleted: %s", path)
def purge(self, id):
fn = self.__fn(id)
try:
os.remove(fn)
except Exception:
pass
def open(self, fn, *args):
"""
Open the cache file making sure the directory is created.
"""
self.mktmp()
return open(fn, *args)
def checkversion(self):
path = os.path.join(self.location, "version")
try:
f = self.open(path)
version = f.read()
f.close()
if version != suds.__version__:
raise Exception()
except Exception:
self.clear()
f = self.open(path, "w")
f.write(suds.__version__)
f.close()
def __fn(self, id):
name = id
suffix = self.fnsuffix()
fn = "%s-%s.%s" % (self.fnprefix, name, suffix)
return os.path.join(self.location, fn)
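# A minimal usage sketch (the id and content are illustrative): cache a byte
# string for one day under the default location, then read it back.
#
#     cache = FileCache(days=1)
#     cache.put("example-id", suds.byte_str("<definitions/>"))
#     data = cache.get("example-id")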
class DocumentCache(FileCache):
"""
Provides xml document caching.
"""
def fnsuffix(self):
return "xml"
def get(self, id):
try:
fp = self.getf(id)
if fp is None:
return None
p = Parser()
return p.parse(fp)
except Exception:
self.purge(id)
def put(self, id, object):
if isinstance(object, Element):
FileCache.put(self, id, suds.byte_str(str(object)))
return object
class ObjectCache(FileCache):
"""
Provides pickled object caching.
@cvar protocol: The pickling protocol.
@type protocol: int
"""
protocol = 2
def fnsuffix(self):
return "px"
def get(self, id):
try:
fp = self.getf(id)
if fp is None:
return None
return pickle.load(fp)
except Exception:
self.purge(id)
def put(self, id, object):
bfr = pickle.dumps(object, self.protocol)
FileCache.put(self, id, bfr)
return object
|
PypiClean
|
/django_handyhelpers-0.3.9-py3-none-any.whl/handyhelpers/views/export.py
|
import datetime
import csv
import xlwt
from django.http import HttpResponse
from django.views.generic import View
from handyhelpers.mixins.view_mixins import FilterByQueryParamsMixin
class CsvExportView(FilterByQueryParamsMixin, View):
"""
View to dump a queryset to a csv file
class parameters:
        queryset - queryset to be exported to the csv file
filename - filename for the output file created; model name used if not provided
"""
queryset = None
filename = None
def get(self, request):
try:
model = self.queryset.model
if not self.filename:
self.filename = "{}.csv".format(model._meta.model_name)
response = HttpResponse(content_type='text/csv')
cd = 'attachment; filename="{0}"'.format(self.filename)
response['Content-Disposition'] = cd
headers = [field.name for field in model._meta.fields]
writer = csv.DictWriter(response, fieldnames=headers)
writer.writeheader()
queryset = self.filter_by_query_params()
for row in queryset:
writer.writerow({column: str(getattr(row, column)) for column in headers})
return response
except AttributeError:
return HttpResponse(content_type='text/csv')
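# A minimal usage sketch (the Event model and URL are illustrative, not part of
# this package): subclass the view with a queryset and wire it into urls.py.
#
#     class EventCsvExportView(CsvExportView):
#         queryset = Event.objects.all()
#         filename = "events.csv"
#
#     urlpatterns = [path("export/events/csv/", EventCsvExportView.as_view())]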
class ExcelExportView(FilterByQueryParamsMixin, View):
"""
View to dump a queryset to a xls file
class parameters:
        queryset - queryset to be exported to the xls file
filename - filename for the output file created; model name used if not provided
"""
queryset = None
filename = None
def get(self, request):
try:
model = self.queryset.model
if not self.filename:
self.filename = "{}.xls".format(model._meta.model_name)
response = HttpResponse(content_type='application/ms-excel')
response['Content-Disposition'] = 'attachment; filename="{}"'.format(self.filename)
wb = xlwt.Workbook(encoding='utf-8')
ws = wb.add_sheet(model._meta.model_name)
# Sheet header, first row
row_num = 0
font_style = xlwt.XFStyle()
font_style.font.bold = True
columns = [field.name for field in model._meta.fields]
for col_num in range(len(columns)):
ws.write(row_num, col_num, columns[col_num], font_style)
# Sheet body, remaining rows
font_style = xlwt.XFStyle()
queryset = self.filter_by_query_params()
for row in queryset.values_list():
row_num += 1
for col_num in range(len(row)):
if type(row[col_num]) == datetime.datetime:
cell_data = str(row[col_num])
else:
cell_data = row[col_num]
ws.write(row_num, col_num, cell_data, font_style)
wb.save(response)
return response
except AttributeError:
return HttpResponse(content_type='application/ms-excel')
|
PypiClean
|
/unicms-0.30.2-py3-none-any.whl/cms/contexts/hooks.py
|
from django.contrib.auth import get_user_model
from django.contrib.contenttypes.models import ContentType
from django.db.models.fields.related import (ForeignKey,
OneToOneField)
from cms.contexts.models import EntryUsedBy
from taggit.managers import TaggableManager
def used_by(obj):
# TODO - they should be configurable in global settings file
user_model = get_user_model()
excluded_types = (user_model, TaggableManager,)
parents = []
for field in obj._meta.fields:
if type(field) in (ForeignKey, OneToOneField):
parent = getattr(obj, field.name, None)
if parent and parent.__class__ not in excluded_types:
parents.append(parent)
for m2m in obj._meta.many_to_many:
if m2m and m2m.__class__ not in excluded_types:
entries = getattr(obj, m2m.name).all()
for entry in entries:
parents.append(entry)
used_by_content_type = ContentType.objects.get_for_model(obj)
already_used = EntryUsedBy.objects.filter(object_id=obj.pk,
content_type=used_by_content_type)
already_used.delete()
for parent in parents:
content_type = ContentType.objects.get_for_model(parent)
entry_dict = dict(object_id=parent.pk,
content_type=content_type,
used_by_content_type=used_by_content_type,
used_by_object_id=obj.pk)
already_used = EntryUsedBy.objects.filter(**entry_dict)
if already_used.exists():
continue
EntryUsedBy.objects.create(**entry_dict)
    # inline foreign keys (reverse related objects)
childs = []
for child in obj._meta.related_objects:
if child.related_model not in excluded_types:
q = {child.field.name: obj}
for entry in child.related_model.objects.filter(**q):
childs.append(entry)
for child in childs:
content_type = ContentType.objects.get_for_model(child)
entry_dict = dict(object_id=child.pk,
content_type=content_type,
used_by_content_type=used_by_content_type,
used_by_object_id=obj.pk)
already_used = EntryUsedBy.objects.filter(**entry_dict)
if already_used.exists():
continue
EntryUsedBy.objects.create(**entry_dict)
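# A minimal usage sketch (the model and primary key are illustrative): rebuild
# the EntryUsedBy records for a single object, e.g. from a post_save handler.
#
#     obj = Publication.objects.get(pk=1)
#     used_by(obj)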
|
PypiClean
|
/solve360-0.9.2.tar.gz/solve360-0.9.2/README.rst
|
Solve360 API Python wrapper
===========================
Python wrapper for `Norada CRM Solve360 <http://norada.com/>`__ API.
Solve360 API Documentation
--------------------------
http://norada.com/answers/api/external\_api\_introduction
Installation
------------
::
$ pip install solve360
Usage
-----
The API methods and parameters are the same for all record types, i.e.
Contacts, Companies and Project Blogs. Simply use the appropriate
segment name for the record type. For example, if creating a:
- Contact - Use crm.create\_contact()
- Company - Use crm.create\_company()
- Projectblog - Use crm.create\_projectblog()
Initiate solve360 object
~~~~~~~~~~~~~~~~~~~~~~~~
::
>>> from solve360 import Solve360
>>> crm = Solve360(your_email, your_token)
List contacts
~~~~~~~~~~~~~
::
>>> crm.list_contacts()
{u'status': 'success',
u'count': 2,
u'12345': {...},
u'12346': {...}}
`Reference <https://solve360.com/api/contacts/#list>`__
Get contacts - paginated
~~~~~~~~~~~~~~~~~~~~~~~~
The solve360 API has a fixed upper limit on the number of objects each
list request will return, currently set to 5000. To fetch more objects
in a single request this wrapper offers a ``pages`` parameter. The
request will continue to fetch objects until either all objects have
been returned or the given number of pages has been reached.
::
>>> contacts = crm.list_contacts(limit=solve360.LIST_MAX_LIMIT, pages=2)
>>> contacts
{u'status': 'success',
u'count': 12000,
u'12345': {...},
u'12346': {...},
...}
>>> len(contacts)
10002 # Keys 'status' and 'count' plus 10000 contacts
The ``pages`` parameter must be a positive number. There is currently no
parameter that fetches all available objects regardless of how many
there are in total. Just set ``pages`` to a number high enough to cover
the number of objects required.
Show contact
~~~~~~~~~~~~
::
>>> crm.show_contact(12345)
{u'status': 'success',
u'id': 12345,
u'fields': {...},
...}
`Reference <https://solve360.com/api/contacts/#show>`__
Create contact
~~~~~~~~~~~~~~
::
>>> crm.create_contact({'firstname': 'test', 'lastname': 'creation'})
{'status': 'success',
'item': {'id': 12347, ...},
...}
`Reference <https://solve360.com/api/contacts/#create>`__
Update contact
~~~~~~~~~~~~~~
::
>>> crm.update_contact(12345, {'firstname': 'updated', 'lastname': 'name'})
{'status': 'success',
'item': {'id': 12345, ...},
...}
`Reference <https://solve360.com/api/contacts/#update>`__
Destroy contact
~~~~~~~~~~~~~~~
::
>>> crm.destroy_contact(12345)
{'status': 'success'}
`Reference <https://solve360.com/api/contacts/#destroy>`__
Show report activities
~~~~~~~~~~~~~~~~~~~~~~
::
>>> crm.show_report_activities('2014-03-05', '2014-03-11')
{u'status': 'success',
u'66326826': {u'comments': [],
u'created': u'2014-03-05T08:48:07+00:00',
u'fields': {u'assignedto': u'88842777',
u'assignedto_cn': u'John Doe',
u'completed': u'0',
u'duedate': u'2014-03-07T00:00:00+00:00',
u'priority': u'0',
u'remindtime': u'0',
...},
...
}
`Reference <https://solve360.com/api/activity-reports/#show>`__
Error handling
--------------
Successful requests (``response.status_code == 2XX``) parse the JSON
response body and return only the response data as native Python data
structures.
Invalid requests (``response.status_code == 4XX or 5XX``) raise a
``requests.HTTPError`` via requests' ``raise_for_status()``, with the
complete stack trace including the server error message if available.
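A minimal sketch of handling such an error (the contact id is illustrative):
::
    >>> import requests
    >>> try:
    ...     crm.show_contact(99999999)
    ... except requests.HTTPError as error:
    ...     print(error.response.status_code)
    404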
Test
----
::
$ pip install pytest httpretty
$ py.test solve360/tests.py
Dependencies
------------
- `requests <https://pypi.python.org/pypi/requests>`__
- `iso8601 <https://pypi.python.org/pypi/iso8601>`__
Testing
~~~~~~~
- `pytest <https://pypi.python.org/pypi/pytest>`__
- `httpretty <https://pypi.python.org/pypi/httpretty>`__
|
PypiClean
|
/idem-azure-2.2.0.tar.gz/idem-azure-2.2.0/idem_azure/states/azure/authorization/role_assignments.py
|
import copy
import uuid
from typing import Any
from typing import Dict
__contracts__ = ["resource"]
async def present(
hub,
ctx,
name: str,
scope: str,
role_definition_id: str,
principal_id: str,
resource_id: str = None,
role_assignment_name: str = None,
) -> Dict[str, Any]:
r"""Create or update Role Assignments.
Args:
name(str): The identifier for this state.
scope(str): The scope of the role assignment to create. The scope can be any REST resource instance.
For example, use '/subscriptions/{subscription-id}/' for a subscription,
'/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}' for a resource group,
and '/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{resource-provider}/{resource-type}/{resource-name}' for a resource.
role_definition_id(str): The role definition ID used in the role assignment.
principal_id(str): The principal ID assigned to the role. This maps to the ID inside the Active Directory. It can point to a user, service principal, or security group.
resource_id(str, Optional): Role Assignment resource id on Azure.
role_assignment_name(str, Optional): A GUID for the role assignment to create. The name must be unique and different for each role assignment. This will be automatically generated if not specified.
Returns:
Dict
Examples:
.. code-block:: sls
resource_is_present:
azure.authorization.role_assignments.present:
- name: value
- scope: value
- role_assignment_name: value
"""
result = {
"name": name,
"result": True,
"old_state": None,
"new_state": None,
"comment": [],
}
response_get = None
if role_assignment_name:
if resource_id is None:
resource_id = f"{scope}/providers/Microsoft.Authorization/roleAssignments/{role_assignment_name}"
response_get = await hub.exec.azure.authorization.role_assignments.get(
ctx, resource_id=resource_id, raw=True
)
if response_get["result"]:
if not response_get["ret"]:
if role_assignment_name is None:
role_assignment_name = uuid.uuid4()
if ctx.get("test", False):
# Return a proposed state by Idem state --test
result[
"new_state"
] = hub.tool.azure.test_state_utils.generate_test_state(
enforced_state={},
desired_state={
"name": name,
"scope": scope,
"role_assignment_name": role_assignment_name,
"resource_id": resource_id,
"role_definition_id": role_definition_id,
"principal_id": principal_id,
},
)
result["comment"].append(
f"Would create azure.authorization.role_assignments '{name}'"
)
return result
else:
# PUT operation to create a resource
payload = hub.tool.azure.authorization.role_assignments.convert_present_to_raw_role_assignments(
role_definition_id=role_definition_id,
principal_id=principal_id,
)
response_put = await hub.exec.request.json.put(
ctx,
url=f"{ctx.acct.endpoint_url}{resource_id}?api-version=2015-07-01",
success_codes=[201],
json=payload,
)
if not response_put["result"]:
hub.log.debug(
f"Could not create azure.authorization.role_assignments {response_put['comment']} {response_put['ret']}"
)
result["comment"].extend(
hub.tool.azure.result_utils.extract_error_comments(response_put)
)
result["result"] = False
return result
result[
"new_state"
] = hub.tool.azure.authorization.role_assignments.convert_raw_role_assignments_to_present(
resource=response_put["ret"],
idem_resource_name=name,
role_assignment_name=role_assignment_name,
resource_id=resource_id,
)
result["comment"].append(
f"Created azure.authorization.role_assignments '{name}'"
)
return result
else:
existing_resource = response_get["ret"]
result[
"old_state"
] = hub.tool.azure.authorization.role_assignments.convert_raw_role_assignments_to_present(
resource=existing_resource,
idem_resource_name=name,
role_assignment_name=role_assignment_name,
resource_id=resource_id,
)
# No role assignment property can be updated without resource re-creation.
result["comment"].append(
f"azure.authorization.role_assignments '{name}' has no property to be updated."
)
result["new_state"] = copy.deepcopy(result["old_state"])
return result
else:
hub.log.debug(
f"Could not get azure.authorization.role_assignments {response_get['comment']} {response_get['ret']}"
)
result["result"] = False
result["comment"].extend(
hub.tool.azure.result_utils.extract_error_comments(response_get)
)
return result
async def absent(
    hub, ctx, name: str, scope: str = None, role_assignment_name: str = None, resource_id: str = None
) -> Dict[str, Any]:
r"""Delete Role Assignments.
Args:
name(str): The identifier for this state.
scope(str, Optional): The scope of the role assignment to delete.
role_assignment_name(str, Optional): The name of the role assignment to delete.
resource_id(str, Optional): Role assignment resource id on Azure. Either resource_id or a combination of scope
and role_assignment_name need to be specified. Idem will automatically consider a resource as absent if both
options are not specified.
Returns:
Dict
Examples:
.. code-block:: sls
resource_is_absent:
azure.authorization.role_assignments.absent:
- name: value
- scope: value
- role_assignment_name: value
"""
result = dict(name=name, result=True, comment=[], old_state=None, new_state=None)
if scope is not None and role_assignment_name is not None:
constructed_resource_id = f"{scope}/providers/Microsoft.Authorization/roleAssignments/{role_assignment_name}"
if resource_id is not None and resource_id != constructed_resource_id:
result["result"] = False
result["comment"].append(
f"azure.authorization.role_assignments '{name}' resource_id {resource_id} does not match the constructed resource id"
)
return result
resource_id = constructed_resource_id
response_get = await hub.exec.azure.authorization.role_assignments.get(
ctx,
resource_id=resource_id,
)
if response_get["result"]:
if response_get["ret"]:
result["old_state"] = response_get["ret"]
result["old_state"]["name"] = name
if ctx.get("test", False):
result["comment"].append(
f"Would delete azure.authorization.role_assignments '{name}'"
)
return result
response_delete = await hub.exec.request.raw.delete(
ctx,
url=f"{ctx.acct.endpoint_url}/{resource_id}?api-version=2015-07-01",
success_codes=[200, 204],
)
if not response_delete["result"]:
hub.log.debug(
f"Could not delete azure.authorization.role_assignments '{name}' {response_delete['comment']} {response_delete['ret']}"
)
result["result"] = False
result["comment"].extend(
hub.tool.azure.result_utils.extract_error_comments(response_delete)
)
return result
result["comment"].append(
f"Deleted azure.authorization.role_assignments '{name}'"
)
return result
else:
# If Azure returns 'Not Found' error, it means the resource has been absent.
result["comment"].append(
f"azure.authorization.role_assignments '{name}' already absent"
)
else:
hub.log.debug(
f"Could not get azure.authorization.role_assignments '{name}' {response_get['comment']} {response_get['ret']}"
)
result["result"] = False
result["comment"].extend(
hub.tool.azure.result_utils.extract_error_comments(response_get)
)
return result
async def describe(hub, ctx) -> Dict[str, Dict[str, Any]]:
r"""Describe the resource in a way that can be recreated/managed with the corresponding "present" function.
Lists all Role Assignments under the same subscription.
Returns:
Dict[str, Any]
Examples:
.. code-block:: bash
$ idem describe azure.authorization.role_assignments
"""
result = {}
ret_list = await hub.exec.azure.authorization.role_assignments.list(ctx)
if not ret_list["ret"]:
hub.log.debug(f"Could not describe role assignment {ret_list['comment']}")
return result
for resource in ret_list["ret"]:
resource_id = resource["resource_id"]
result[resource_id] = {
"azure.authorization.role_assignments.present": [
{parameter_key: parameter_value}
for parameter_key, parameter_value in resource.items()
]
}
return result
|
PypiClean
|
/python_grains-0.10.1.tar.gz/python_grains-0.10.1/python_grains/dynamic_settings/dynamic_settings.py
|
from python_grains.dynamic_settings.lua import LuaScripts
import requests
from requests.sessions import Session
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
import json
import datetime
import pytz
import redis
import os
DEFAULT_DEBUG_VERSION = 'QUICK_PREVIEW'
DEFAULT_MAX_AGE = datetime.timedelta(days=21)
class DynamicSetting(object):
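    """A remotely sourced setting with layered caching.

    The value is downloaded from ``url``, cached in Redis and mirrored to JSON
    files under ``data_dir``. ``get()`` serves the in-memory copy while it is
    fresh; otherwise it reloads from the Redis cache (or the on-disk mirror when
    Redis is unreachable), re-downloads stale data, and finally falls back to
    ``fallback_data`` if nothing else is available.
    """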
fallback_version = 'fallback'
download_in_progress_ttl = 5
data_margin_seconds = 24 * 60 * 60
max_file_count = 3
def __init__(self,
slug,
url,
redis_client,
domain,
logger,
data_dir,
fallback_data=None,
instance_id=None,
max_age=DEFAULT_MAX_AGE,
download_subkey=None,
version_subkey=None,
download_options=None,
debug_version=DEFAULT_DEBUG_VERSION,
fallback_allowed=False,
health_relevant=True,
download_timeout=1.5,
verbose=False):
self.slug = slug
self.content = None
self.version = None
self.download_time = None
self.load_time = None
self.source = None
self.fallback_allowed = fallback_allowed
self.health_relevant = health_relevant
self.verbose = verbose
self.domain = domain
self.url = url
self.download_subkey = download_subkey
self.version_subkey = version_subkey
self.download_options = download_options or {}
self.max_age = max_age
self.debug_version = debug_version
self.redis_client = redis_client
self.logger = logger
self.instance_id = instance_id
self.data_dir = data_dir
self.download_try_time = None
self.fallback_data = fallback_data or {}
self.download_timeout = download_timeout
self.data_ttl = self.max_age.total_seconds() + self.data_margin_seconds
self.attach_scripts_to_redis_client()
@classmethod
def from_setting_object(cls,
obj,
redis_client,
domain,
data_dir,
logger=None,
instance_id=None,
verbose=False):
return cls(slug=obj['slug'],
url=obj['url'],
redis_client=redis_client,
domain=domain,
logger=logger,
data_dir=data_dir,
fallback_data=obj['fallback_data'],
instance_id=instance_id,
max_age=obj['max_age'],
download_options=obj.get('download_options'),
download_subkey=obj.get('subkey'),
version_subkey=obj.get('version'),
fallback_allowed=obj.get('fallback_allowed', False),
health_relevant=obj.get('health_relevant', True),
download_timeout=obj.get('download_timeout', 1.5),
verbose=verbose)
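    # Usage sketch (hypothetical slug, url, domain and paths; assumes a reachable
    # Redis server and a structured logger exposing .info/.warning/.error(msg, data=...)):
    #
    #   import redis
    #   client = redis.Redis(host='localhost', port=6379)
    #   flags = DynamicSetting(
    #       slug='feature_flags',
    #       url='https://example.com/flags.json',
    #       redis_client=client,
    #       domain='example.com',
    #       logger=my_logger,
    #       data_dir='/var/lib/settings',
    #       fallback_data={},
    #   )
    #   current = flags.get()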
def get(self, version=None):
if self.content is None or self.is_fallback:
if self.verbose:
print(f'{self.slug} - Loading from storage, since content is None or this is the fallback')
self.load_from_storage(version)
elif self.loaded_version_is_ok(version) and not self.too_old:
if self.verbose:
print(f'{self.slug} - Loaded content is ok')
return self.content
else:
self.load_from_storage(version)
if self.content is None:
if self.verbose:
print(f'{self.slug} - Loading fallback')
self.load_fallback()
return self.content
def load_content(self, data, version, download_time, source):
if hasattr(self, 'process_' + self.slug) and callable(getattr(self, 'process_' + self.slug)):
data = getattr(self, 'process_' + self.slug)(data)
self.content = data
self.version = version
self.download_time = download_time
self.load_time = self.current_time
self.source = source
def load_from_storage(self, version):
self.load_from_cache(version)
if self.content is None or self.too_old:
self.download()
def load_from_cache(self, version):
if self.verbose:
print(f'{self.slug} - Loading from cache')
try:
data = self.redis_client.get_latest_data(
keys=[
self.current_version_key
],
args=[
self.data_key_prefix,
self.data_ttl
]
)
cache_version, download_time, content = self.parse_cache_data(data=data)
source = 'cache'
if self.verbose:
print(f'{self.slug} - Loaded from cache and content is {"" if content is None else "not "}None')
except (redis.connection.ConnectionError, redis.connection.TimeoutError):
if self.verbose:
print(f'{self.slug} - Loading from cache failed')
cache_version, download_time, content = self.load_from_file()
source = 'disk'
if not content is None:
if self.is_larger_than_loaded_version(cache_version):
if self.verbose:
print(f'{self.slug} - Cache version larger than loaded version')
self.load_content(data=content,
version=cache_version,
download_time=download_time,
source=source)
if self.loaded_version_is_ok(version):
if self.verbose:
print(f'{self.slug} - Loaded version is ok')
return
self.download()
def load_from_file(self):
self.logger.warning('Loading settings from file', data=self.log_data)
if self.verbose:
print(f'{self.slug} - Loading from disk')
latest_files = [fn for fn in [self.parse_filename(filename) for filename in os.listdir(self.data_dir)
if filename.endswith('.json') and filename.startswith(self.slug)
and not filename.endswith('fallback.json')] if not fn is None]
if not latest_files:
if self.verbose:
print(f'{self.slug} - No file to load from')
return None, None, None
latest_file = sorted(latest_files, key=lambda x: int(x['version']))[-1]
try:
with open(latest_file['file_path'], 'r') as f:
data = json.load(f)
if self.verbose:
print(f'{self.slug} - Loaded from disk')
return self.parse_data(data)
except json.JSONDecodeError:
if self.verbose:
print(f'{self.slug} - Invalid file on disk')
return None, None, None
def parse_data(self, data):
version = data['version']
download_time = pytz.utc.localize(datetime.datetime.utcfromtimestamp(float(data['timestamp'])))
content = data['content']
return version, download_time, content
def parse_cache_data(self, data):
if data is None:
return None, None, None
data = data.decode('utf-8') if isinstance(data, bytes) else data
data = json.loads(data)
return self.parse_data(data=data)
def load_fallback(self):
self.load_content(data=self.fallback_data,
version=self.fallback_version,
download_time=None,
source='fallback')
def is_debug_version(self, version):
return str(version).upper() == self.debug_version
def loaded_version_is_ok(self, version):
if self.is_debug_version(version) or version is None:
return True
try:
return int(self.version) >= int(version)
        except (TypeError, ValueError):
return True
def is_larger_than_loaded_version(self, version):
if self.version is None or self.version == self.fallback_version:
return True
try:
return int(version) > int(self.version)
except ValueError:
return False
def attach_scripts_to_redis_client(self):
self.redis_client.set_data_with_version = self.redis_client.register_script(
LuaScripts.set_data_with_version())
self.redis_client.get_latest_data = self.redis_client.register_script(
LuaScripts.get_latest_data())
self.redis_client.set_download_in_progress = self.redis_client.register_script(
LuaScripts.set_download_in_progress())
def validate_download_data(self, data):
if hasattr(self, 'validate_' + self.slug) and callable(getattr(self, 'validate_' + self.slug)):
valid, error_message = getattr(self, 'validate_' + self.slug)(data)
if not valid:
self.logger.error(f'Response from url did not validate', data={'error': error_message, **self.log_data})
return
return data
def parse_download_data(self, data):
if not self.version_subkey is None:
if not isinstance(data, dict) or not self.version_subkey in data:
self.logger.error(f'Response from url should be a JSON object containing the key {self.version_subkey}',
data=self.log_data)
return None, None
version = str(data[self.version_subkey])
else:
            # Round down to a multiple of 1814400 s (21 days) so every instance derives
            # the same synthetic version; otherwise concurrent instances would cascade downloads.
version = str(self.current_timestamp // 1814400 * 1814400)
if not self.download_subkey is None:
if not isinstance(data, dict) or not self.download_subkey in data:
if self.fallback_allowed or not self.health_relevant:
self.logger.warning(
f'Response from url should be a JSON object containing the key {self.download_subkey}',
data=self.log_data)
else:
self.logger.error(f'Response from url should be a JSON object containing the key {self.download_subkey}',
data=self.log_data)
return None, None
data = data[self.download_subkey]
return version, data
def download_in_progress(self):
if not self.download_try_time is None and \
self.download_try_time > self.current_time - datetime.timedelta(seconds=self.download_in_progress_ttl):
return True
self.download_try_time = self.current_time
try:
t = self.redis_client.set_download_in_progress(
keys=[self.download_in_progress_key],
args=[self.download_in_progress_ttl]
)
if t == b'0':
return True
return False
except (redis.exceptions.TimeoutError, redis.exceptions.ConnectionError):
return False
def download(self):
if self.download_in_progress():
if self.verbose:
print(f'{self.slug} - Download already in progress')
return
try:
if self.verbose:
print(f'{self.slug} - Downloading...')
r = self.session.get(url=self.url, **self.download_options, timeout=self.download_timeout)
r.raise_for_status()
self.logger.info('Downloaded data from url',
data=self.log_data)
except requests.exceptions.RequestException as e:
self.logger.error('Failed to get data from url', data={'error': str(e), **self.log_data})
return
try:
data = r.json()
except json.JSONDecodeError:
self.logger.error('Response from url was no valid json', data=self.log_data)
return
version, data = self.parse_download_data(data=data)
if data is None:
if self.verbose:
print(f'{self.slug} - Downloaded data invalid')
return
data = self.validate_download_data(data=data)
if data is None:
if self.verbose:
print(f'{self.slug} - Downloaded data did not validate')
return
download_time = self.current_time
try:
self.write_data_to_redis(data=data, version=version, download_time=download_time)
except (redis.exceptions.TimeoutError, redis.exceptions.ConnectionError):
pass
self.write_data_to_file(data=data, version=version, download_time=download_time)
self.load_content(data=data,
version=version,
download_time=download_time,
source='download')
def write_data_to_file(self, data, version, download_time):
data = self.build_data_object(content=data, version=version, download_time=download_time)
file_path = self.build_file_path(version=version)
with open(file_path, 'w') as f:
json.dump(data, f)
self.constrain_file_count()
def constrain_file_count(self):
latest_files = [fn for fn in [self.parse_filename(filename) for filename in os.listdir(self.data_dir)
if filename.endswith('.json') and filename.startswith(self.slug)
and not filename.endswith('fallback.json')] if not fn is None]
if len(latest_files) > self.max_file_count:
            for file in sorted(latest_files, key=lambda x: int(x['version']))[:-self.max_file_count]:
os.remove(file['file_path'])
def parse_filename(self, filename):
parts = filename.split('.')
if len(parts) < 3:
return None
return {
'slug': '.'.join(parts[:-2]),
'version': parts[-2],
'file_path': os.path.join(self.data_dir, filename)
}
def build_file_path(self, version):
file_name = f'{self.slug}.{version}.json'
return os.path.join(self.data_dir, file_name)
def build_data_object(self, content, version, download_time):
return {
'content': content,
'version': version,
'timestamp': download_time.timestamp()
}
def write_data_to_redis(self, data, version, download_time):
if self.verbose:
print(f'{self.slug} - Writing data to redis')
data = self.build_data_object(content=data, version=version, download_time=download_time)
result = self.redis_client.set_data_with_version(
keys=[
self.current_version_key,
self.data_key(version=version)
],
args=[
version,
json.dumps(data),
self.data_key_prefix,
self.data_ttl
]
)
if int(result[0]) == 1:
if self.verbose:
                print(f'{self.slug} - Wrote data to redis successfully')
return
data = result[1]
if self.verbose:
            print(f'{self.slug} - Version in cache is newer')
if not data is None:
cache_version, download_time, content = self.parse_cache_data(data=data)
if self.is_larger_than_loaded_version(cache_version):
self.load_content(data=content,
version=cache_version,
download_time=download_time,
source='cache')
@property
def download_in_progress_key(self):
return f'settings:{self.domain}:{self.slug}:download_in_progress'
@property
def current_version(self):
v = self.redis_client.get(self.current_version_key)
if v is None:
return None
return v.decode()
@property
def current_version_key(self):
return f'settings:{self.domain}:{self.slug}:version'
def data_key(self, version):
return f'{self.data_key_prefix}{version}'
@property
def data_key_prefix(self):
return f'settings:{self.domain}:{self.slug}:data:'
@property
def session(self):
if not hasattr(self, '_session'):
self._session = Session()
retry_strategy = Retry(
total=3,
backoff_factor=1,
status_forcelist=[429, 500, 502, 503, 504],
allowed_methods=['GET', 'POST'],
respect_retry_after_header=True
)
self._session.mount('https://', HTTPAdapter(max_retries=retry_strategy))
return self._session
@property
def current_time(self):
return pytz.utc.localize(datetime.datetime.utcnow())
@property
def current_timestamp(self):
return int(self.current_time.timestamp())
@property
def age(self):
if self.download_time is None:
return None
return self.current_time - self.download_time
@property
def too_old(self):
if self.age is None:
return False
return self.age > self.max_age - datetime.timedelta(seconds=self.data_margin_seconds)
@property
def is_fallback(self):
return self.version == self.fallback_version
def _validate_string_only_array(self, raw):
if not isinstance(raw, list):
return False, 'Should be a list'
if not all(isinstance(r, str) for r in raw):
return False, 'All entries should be strings'
if not all(not r.strip() == '' for r in raw):
return False, 'Empty string not allowed as entry'
return True, None
def _validate_dict(self, raw, type=str):
if not isinstance(raw, dict):
return False, 'Should be a dictionary'
if not all(isinstance(k, str) for k in raw):
return False, 'All keys should be strings'
if not all(isinstance(v, type) for v in raw.values()):
return False, f'All values should be {type.__name__}'
return True, None
def health(self):
if self.content is None:
self.get(version=self.debug_version)
healthy = False
if self.is_fallback and not self.fallback_allowed:
reason = 'Fallback in use'
elif not self.download_time is None and self.current_time > self.download_time + self.max_age:
reason = 'Settings too old'
else:
healthy = True
reason = None
download_time_iso = self.download_time.isoformat() if not self.download_time is None else None
d = {
'healthy': healthy,
'slug': self.slug,
'load_time': self.load_time.isoformat(),
'download_time': download_time_iso,
'age': str(self.age) if not self.age is None else None,
'max_age': str(self.max_age),
'allow_fallback': self.fallback_allowed,
'fallback': self.is_fallback,
'source': self.source,
'version': self.version,
'health_relevant': self.health_relevant
}
if not healthy:
d.update({'reason': reason})
return d
@property
def log_data(self):
return {'slug': self.slug,
'instance_id': self.instance_id,
'url': self.url}
|
PypiClean
|
/lbrlabs_scaleway-0.4.1a1662912002.tar.gz/lbrlabs_scaleway-0.4.1a1662912002/pulumiverse_scaleway/get_instance_servers.py
|
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
__all__ = [
'GetInstanceServersResult',
'AwaitableGetInstanceServersResult',
'get_instance_servers',
'get_instance_servers_output',
]
@pulumi.output_type
class GetInstanceServersResult:
"""
A collection of values returned by getInstanceServers.
"""
def __init__(__self__, id=None, name=None, organization_id=None, project_id=None, servers=None, tags=None, zone=None):
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if organization_id and not isinstance(organization_id, str):
raise TypeError("Expected argument 'organization_id' to be a str")
pulumi.set(__self__, "organization_id", organization_id)
if project_id and not isinstance(project_id, str):
raise TypeError("Expected argument 'project_id' to be a str")
pulumi.set(__self__, "project_id", project_id)
if servers and not isinstance(servers, list):
raise TypeError("Expected argument 'servers' to be a list")
pulumi.set(__self__, "servers", servers)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
if zone and not isinstance(zone, str):
raise TypeError("Expected argument 'zone' to be a str")
pulumi.set(__self__, "zone", zone)
@property
@pulumi.getter
def id(self) -> str:
"""
The provider-assigned unique ID for this managed resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
The name of the server.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="organizationId")
def organization_id(self) -> str:
"""
The organization ID the server is associated with.
"""
return pulumi.get(self, "organization_id")
@property
@pulumi.getter(name="projectId")
def project_id(self) -> str:
"""
The ID of the project the server is associated with.
"""
return pulumi.get(self, "project_id")
@property
@pulumi.getter
def servers(self) -> Sequence['outputs.GetInstanceServersServerResult']:
"""
List of found servers
"""
return pulumi.get(self, "servers")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The tags associated with the server.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def zone(self) -> str:
"""
The zone in which the server is.
"""
return pulumi.get(self, "zone")
class AwaitableGetInstanceServersResult(GetInstanceServersResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetInstanceServersResult(
id=self.id,
name=self.name,
organization_id=self.organization_id,
project_id=self.project_id,
servers=self.servers,
tags=self.tags,
zone=self.zone)
def get_instance_servers(name: Optional[str] = None,
project_id: Optional[str] = None,
tags: Optional[Sequence[str]] = None,
zone: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetInstanceServersResult:
"""
Gets information about multiple instance servers.
## Examples
### Basic
```python
import pulumi
import pulumi_scaleway as scaleway
my_key = scaleway.get_instance_servers(name="myserver",
zone="fr-par-2")
```
    :param str name: The server name used as a filter. Servers whose name matches it are listed.
    :param str project_id: The ID of the project the server is associated with.
    :param Sequence[str] tags: List of tags used as a filter. Servers with these exact tags are listed.
    :param str zone: The zone in which servers exist.
"""
__args__ = dict()
__args__['name'] = name
__args__['projectId'] = project_id
__args__['tags'] = tags
__args__['zone'] = zone
opts = pulumi.InvokeOptions.merge(_utilities.get_invoke_opts_defaults(), opts)
__ret__ = pulumi.runtime.invoke('scaleway:index/getInstanceServers:getInstanceServers', __args__, opts=opts, typ=GetInstanceServersResult).value
return AwaitableGetInstanceServersResult(
id=__ret__.id,
name=__ret__.name,
organization_id=__ret__.organization_id,
project_id=__ret__.project_id,
servers=__ret__.servers,
tags=__ret__.tags,
zone=__ret__.zone)
@_utilities.lift_output_func(get_instance_servers)
def get_instance_servers_output(name: Optional[pulumi.Input[Optional[str]]] = None,
project_id: Optional[pulumi.Input[Optional[str]]] = None,
tags: Optional[pulumi.Input[Optional[Sequence[str]]]] = None,
zone: Optional[pulumi.Input[Optional[str]]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetInstanceServersResult]:
"""
Gets information about multiple instance servers.
## Examples
### Basic
```python
import pulumi
import pulumi_scaleway as scaleway
my_key = scaleway.get_instance_servers(name="myserver",
zone="fr-par-2")
```
    :param str name: The server name used as a filter. Servers whose name matches it are listed.
    :param str project_id: The ID of the project the server is associated with.
    :param Sequence[str] tags: List of tags used as a filter. Servers with these exact tags are listed.
    :param str zone: The zone in which servers exist.
"""
...
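# Usage sketch for the output form (hypothetical zone and tag values): any argument
# may be a pulumi.Output, and fields of the result are read via .apply().
#
#   servers = get_instance_servers_output(zone="fr-par-2", tags=["web"])
#   pulumi.export("server_count", servers.apply(lambda r: len(r.servers)))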
|
PypiClean
|
/photostash-client-0.0.3.tar.gz/photostash-client-0.0.3/photostash/runner.py
|
import base64
import sys
import subprocess
from clint.textui import puts, columns
from queryset_client import Client
from queryset_client.client import ObjectDoesNotExist
from slumber.exceptions import HttpServerError
from photostash.exceptions import CommandError
class Runner(object):
    DEFAULT_BASE_URL = 'http://photostash.herokuapp.com/api/v1/'
def __init__(self, base_url=None, client=Client, stream=sys.stdout.write):
if base_url is None:
            base_url = self.DEFAULT_BASE_URL
self.base_url = base_url
self.client = client(self.base_url)
self.stream = stream
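    # Usage sketch (hypothetical album name and photo path; assumes the default
    # photostash API endpoint is reachable):
    #
    #   runner = Runner()
    #   runner.create('holiday')
    #   runner.add('holiday', '/tmp/beach.jpg')
    #   runner.list('holiday')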
def create(self, album_name):
try:
album = self.client.albums.objects.create(name=album_name)
puts('%s has been created.' % album.name, stream=self.stream)
except HttpServerError:
raise CommandError('%s already exists.' % album_name)
def list(self, album_name):
album = self._get_album(album_name)
photos = self._get_photos(album)
ids = ', '.join([str(photo.id) for photo in photos])
if ids:
puts('Photos: %s' % ids, stream=self.stream)
else:
puts('%s has no photos.' % album.name, stream=self.stream)
            puts('type "stash add %s <path>"' % album.name, stream=self.stream)
def delete(self, album_name, photo_id):
album = self._get_album(album_name)
photo = self._get_photo(photo_id)
try:
rel = self.client.albumphotos.objects.filter(photo=photo.id, album=album.id)[0]
rel.delete()
puts('%s has been removed from %s.' % (photo.id, album.name), stream=self.stream)
except IndexError:
            raise CommandError('%s does not belong to %s.' % (photo.id, album.name))
def add(self, album_name, photo_path):
album = self._get_album(album_name)
try:
with open(photo_path, 'rb') as fp:
                image = '%s:%s' % (photo_path, base64.b64encode(fp.read()).decode('ascii'))
photo = self.client.photos.objects.create(image=image)
except IOError as e:
try:
photo = self._get_photo(photo_path)
except CommandError:
raise CommandError(e)
self.client.albumphotos.objects.create(album=album, photo=photo)
puts('%s has been added to %s.' % (photo.id, album.name), stream=self.stream)
def open(self, photo_id):
photo = self._get_photo(photo_id)
puts('Opening %s...' % photo.id, stream=self.stream)
subprocess.call(['open', photo.image])
def stats(self):
albums = self.client.albums.objects.all()
stats = sorted((album.name, self._get_photos(album)) for album in albums)
names = ['| Album'] + ['| {0}'.format(album) for album, photos in stats]
col1 = ['\n'.join(names), max([len(name) for name in names]) + 1]
        photos = ['| Photos'] + ['| {0}'.format(', '.join([str(photo.id) for photo in photos]))
for album, photos in stats]
col2 = ['\n'.join(photos), max([len(photo) for photo in photos]) + 1]
col3 = ['\n'.join(['|' for i in range(len(stats) + 1)]), None]
table = columns(col1, col2, col3)
rows = table.splitlines()
header = rows.pop(0)
divider = '-' * len(header.strip())
puts(divider, stream=self.stream)
puts(header, stream=self.stream)
puts(divider, stream=self.stream)
puts('\n'.join(rows), stream=self.stream)
puts(divider, stream=self.stream)
def _get_album(self, album_name):
try:
return self.client.albums.objects.get(name=album_name)
except ObjectDoesNotExist:
raise CommandError('Album {0} does not exist. type "stash create '
'{0}" to add this album.'.format(album_name))
def _get_photo(self, photo_id):
try:
return self.client.photos.objects.get(id=photo_id)
except ObjectDoesNotExist:
raise CommandError('Photo #{0} does not exist.'.format(photo_id))
def _get_photos(self, album):
return self.client.photos.objects.filter(albumphotos__album=album.id)
|
PypiClean
|