Columns: id (string, lengths 1 to 8), text (string, lengths 6 to 1.05M), dataset_id (string, 1 distinct value)
/dj-ango-0.2.0.tar.gz/dj-ango-0.2.0/README.rst
=============================
dj-ango
=============================

.. image:: https://badge.fury.io/py/dj-ango.png
    :target: https://badge.fury.io/py/dj-ango

.. image:: https://travis-ci.org/pydanny/dj-ango.png?branch=master
    :target: https://travis-ci.org/pydanny/dj-ango

Simplifying the import structure of Django.

Documentation
-------------

The full documentation is at https://dj-ango.readthedocs.org.

Quickstart
----------

Install dj-ango::

    pip install dj-ango

Then use it in a project:

.. code-block:: python

    from ango import settings, TemplateView, url

    class AboutView(TemplateView):
        template_name = "about.html"

        def get_context_data(self, **kwargs):
            context = super(AboutView, self).get_context_data(**kwargs)
            context['is_debug_mode'] = settings.DEBUG
            return context

    urlpatterns = [
        url(
            regex=r'^about/$',
            view=AboutView.as_view(),
            name='about'
        )
    ]

Running Tests
--------------

Does the code actually work?

::

    source <YOURVIRTUALENV>/bin/activate
    (myenv) $ pip install -r requirements-test.txt
    (myenv) $ python runtests.py

Credits
---------

Tools used in rendering this package:

* Cookiecutter_
* `cookiecutter-pypackage`_

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`cookiecutter-pypackage`: https://github.com/pydanny/cookiecutter-djangopackage
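For comparison, a minimal sketch of the standard Django imports that the single ``ango`` import in the quickstart above stands in for (assuming Django 1.x-era module paths; the objects dj-ango actually re-exports are inferred from the example only):

.. code-block:: python

    # Conventional equivalents of: from ango import settings, TemplateView, url
    from django.conf import settings
    from django.conf.urls import url
    from django.views.generic import TemplateView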
PypiClean
/pymso5000-tspspi-0.0.6.tar.gz/pymso5000-tspspi-0.0.6/README.md
# Rigol MSO5xxx oscilloscope Python library (unofficial)

A simple Python library and utility to control and query data from Rigol MSO5xxx oscilloscopes (not all features of the oscilloscope are supported; work in progress). This library implements the [Oscilloscope](https://github.com/tspspi/pylabdevs/blob/master/src/labdevices/oscilloscope.py) class from the [pylabdevs](https://github.com/tspspi/pylabdevs) package, which exposes the public interface.

## Installing

There is a PyPi package that can be installed using

```
pip install pymso5000-tspspi
```

## Simple example to fetch waveforms

```
from pymso5000.mso5000 import MSO5000

with MSO5000(address = "10.0.0.123") as mso:
    print(f"Identify: {mso.identify()}")

    mso.set_channel_enable(1, True)
    mso.set_channel_enable(2, True)

    data = mso.query_waveform((1, 2))
    print(data)

    import matplotlib.pyplot as plt
    plt.plot(data['x'], data['y0'], label = "Ch1")
    plt.plot(data['x'], data['y1'], label = "Ch2")
    plt.show()
```

Note that ```numpy``` usage is optional for this implementation. One can enable numpy support using ```useNumpy = True``` in the constructor.

## Querying additional statistics

This module allows - via the ```pylabdevs``` base class - querying additional statistics:

* ```mean``` calculates the mean values and standard deviations
   * A single value for each channel's mean is stored at ```["means"]["yN_avg"]``` and a single value for each standard deviation at ```["means"]["yN_std"]```, where ```N``` is the channel number
* ```fft``` runs a Fourier transform on all queried traces
   * The result is stored in ```["fft"]["yN"]``` (complex values) and in ```["fft"]["yN_real"]``` for the real-valued Fourier transform. Again ```N``` is the channel number
* ```ifft``` runs an inverse Fourier transform on all queried traces
   * Works like ```fft``` but runs the inverse Fourier transform and stores its result in ```ifft``` instead of ```fft```
* ```correlate``` calculates the correlation between all queried waveform pairs
   * The results of the correlations are stored in ```["correlation"]["yNyM"]``` for the correlation between channels ```M``` and ```N```
* ```autocorrelate``` calculates the autocorrelation of each queried channel
   * The result of the autocorrelation is stored in ```["autocorrelation"]["yN"]``` for channel ```N```

To request calculation of statistics, pass the string for the desired statistic or a list of statistics to the ```stats``` parameter of ```query_waveform```:

```
with MSO5000(address = "10.0.0.123") as mso:
    data = mso.query_waveform((1,2), stats = [ 'mean', 'fft' ])
```

## Supported methods

More documentation in progress ... (a short usage sketch combining several of these methods follows the list below)

* ```identify()```
* Connection management (when not using ```with``` context management):
   * ```connect()```
   * ```disconnect()```
* ```set_channel_enable(channel, enabled)```
* ```is_channel_enabled(channel)```
* ```set_sweep_mode(mode)```
* ```get_sweep_mode()```
* ```set_trigger_mode(mode)```
* ```get_trigger_mode()```
* ```force_trigger()```
* ```set_timebase_mode(mode)```
* ```get_timebase_mode()```
* ```set_run_mode(mode)```
* ```get_run_mode()```
* ```set_timebase_scale(secondsPerDivision)```
* ```get_timebase_scale()```
* ```set_channel_coupling(channel, couplingMode)```
* ```get_channel_coupling(channel)```
* ```set_channel_probe_ratio(channel, ratio)```
* ```get_channel_probe_ratio(channel)```
* ```set_channel_scale(channel, scale)```
* ```get_channel_scale(channel)```
* ```query_waveform(channel, stats = None)```
* ```off()```
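A short usage sketch combining several of the methods listed above; the address and the timebase, scale and probe values are illustrative assumptions, not values taken from the package documentation:

```
from pymso5000.mso5000 import MSO5000

with MSO5000(address = "10.0.0.123") as mso:
    # Horizontal and vertical setup (example values only)
    mso.set_timebase_scale(1e-3)        # 1 ms per division
    mso.set_channel_scale(1, 0.5)       # 0.5 V per division on channel 1
    mso.set_channel_probe_ratio(1, 10)  # 10x probe on channel 1

    mso.set_channel_enable(1, True)
    mso.set_channel_enable(2, True)

    data = mso.query_waveform((1, 2), stats = [ 'mean' ])
    print(data['means'])
```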
PypiClean
/nnisgf-0.4-py3-none-manylinux1_x86_64.whl/nnisgf-0.4.data/data/nni/node_modules/iconv-lite/Changelog.md
# 0.4.23 / 2018-05-07

 * Fix deprecation warning in Node v10 due to the last usage of `new Buffer` (#185, by @felixbuenemann)
 * Switched from NodeBuffer to Buffer in typings (#155 by @felixfbecker, #186 by @larssn)

# 0.4.22 / 2018-05-05

 * Use older semver style for dependencies to be compatible with Node version 0.10 (#182, by @dougwilson)
 * Fix tests to accommodate fixes in Node v10 (#182, by @dougwilson)

# 0.4.21 / 2018-04-06

 * Fix encoding canonicalization (#156)
 * Fix the paths in the "browser" field in package.json (#174 by @LMLB)
 * Removed "contributors" section in package.json - see Git history instead.

# 0.4.20 / 2018-04-06

 * Updated `new Buffer()` usages with recommended replacements as it's being deprecated in Node v10 (#176, #178 by @ChALkeR)

# 0.4.19 / 2017-09-09

 * Fixed iso8859-1 codec regression in handling untranslatable characters (#162, caused by #147)
 * Re-generated windows1255 codec, because it was updated in iconv project
 * Fixed grammar in error message when iconv-lite is loaded with encoding other than utf8

# 0.4.18 / 2017-06-13

 * Fixed CESU-8 regression in Node v8.

# 0.4.17 / 2017-04-22

 * Updated typescript definition file to support Angular 2 AoT mode (#153 by @larssn)

# 0.4.16 / 2017-04-22

 * Added support for React Native (#150)
 * Changed iso8859-1 encoding to use the internal 'binary' encoding, as it's the same thing (#147 by @mscdex)
 * Fixed typo in Readme (#138 by @jiangzhuo)
 * Fixed build for Node v6.10+ by making correct version comparison
 * Added a warning if iconv-lite is loaded not as utf-8 (see #142)

# 0.4.15 / 2016-11-21

 * Fixed typescript type definition (#137)

# 0.4.14 / 2016-11-20

 * Preparation for v1.0
 * Added Node v6 and latest Node versions to Travis CI test rig
 * Deprecated Node v0.8 support
 * Typescript typings (@larssn)
 * Fix encoding of Euro character in GB 18030 (inspired by @lygstate)
 * Add ms prefix to dbcs windows encodings (@rokoroku)

# 0.4.13 / 2015-10-01

 * Fix silly mistake in deprecation notice.

# 0.4.12 / 2015-09-26

 * Node v4 support:
   * Added CESU-8 decoding (#106)
   * Added deprecation notice for `extendNodeEncodings`
   * Added Travis tests for Node v4 and io.js latest (#105 by @Mithgol)

# 0.4.11 / 2015-07-03

 * Added CESU-8 encoding.

# 0.4.10 / 2015-05-26

 * Changed UTF-16 endianness heuristic to take into account any ASCII chars, not just spaces. This should minimize the importance of "default" endianness.

# 0.4.9 / 2015-05-24

 * Streamlined BOM handling: strip BOM by default, add BOM when encoding if addBOM: true. Added docs to Readme.
 * UTF16 now uses UTF16-LE by default.
 * Fixed minor issue with big5 encoding.
 * Added io.js testing on Travis; updated node-iconv version to test against. Now we just skip testing SBCS encodings that node-iconv doesn't support.
 * (internal refactoring) Updated codec interface to use classes.
 * Use strict mode in all files.

# 0.4.8 / 2015-04-14

 * added alias UNICODE-1-1-UTF-7 for UTF-7 encoding (#94)

# 0.4.7 / 2015-02-05

 * stop official support of Node.js v0.8. Should still work, but no guarantees. reason: Packages needed for testing are hard to get on Travis CI.
 * work in environment where Object.prototype is monkey patched with enumerable props (#89).

# 0.4.6 / 2015-01-12

 * fix rare aliases of single-byte encodings (thanks @mscdex)
 * double the timeout for dbcs tests to make them less flaky on travis

# 0.4.5 / 2014-11-20

 * fix windows-31j and x-sjis encoding support (@nleush)
 * minor fix: undefined variable reference when internal error happens

# 0.4.4 / 2014-07-16

 * added encodings UTF-7 (RFC2152) and UTF-7-IMAP (RFC3501 Section 5.1.3)
 * fixed streaming base64 encoding

# 0.4.3 / 2014-06-14

 * added encodings UTF-16BE and UTF-16 with BOM

# 0.4.2 / 2014-06-12

 * don't throw exception if `extendNodeEncodings()` is called more than once

# 0.4.1 / 2014-06-11

 * codepage 808 added

# 0.4.0 / 2014-06-10

 * code is rewritten from scratch
 * all widespread encodings are supported
 * streaming interface added
 * browserify compatibility added
 * (optional) extend core primitive encodings to make usage even simpler
 * moved from vows to mocha as the testing framework
PypiClean
/returnn-1.20230902.233313.tar.gz/returnn-1.20230902.233313/tools/extract_state_tying_from_dataset.py
from __future__ import annotations import os import gzip from argparse import ArgumentParser from pprint import pprint from xml.etree import ElementTree import collections from collections import defaultdict import typing import _setup_returnn_env # noqa from returnn.datasets import init_dataset from returnn.datasets.lm import Lexicon, AllophoneState from returnn.log import log from returnn.util.basic import uniq def get_segment_name(tree): """ :param tree: :return: """ def _m(x): if "name" in x.attrib: return x.attrib["name"] if x.tag == "segment": return "1" assert False, "unknown name: %r, %r" % (x, vars(x)) return "/".join(map(_m, tree)) def iter_bliss_orth(filename): """ :param str filename: :return: """ corpus_file = open(filename, "rb") if filename.endswith(".gz"): corpus_file = gzip.GzipFile(fileobj=corpus_file) # noinspection PyShadowingNames def getelements(tag): """Yield *tag* elements from *filename_or_file* xml incrementally.""" context = iter(ElementTree.iterparse(corpus_file, events=("start", "end"))) _, root = next(context) # get root element tree = [root] for event, elem in context: if event == "start": tree += [elem] elif event == "end": assert tree[-1] is elem tree = tree[:-1] if event == "end" and elem.tag == tag: yield tree, elem for tree, elem in getelements("segment"): elem_orth = elem.find("orth") orth_raw = elem_orth.text or "" # should be unicode orth_split = orth_raw.split() orth = " ".join(orth_split) yield get_segment_name(tree + [elem]), orth def iter_dataset_targets(dataset): """ :type dataset: Dataset.Dataset """ dataset.init_seq_order(epoch=1) seq_idx = 0 while dataset.is_less_than_num_seqs(seq_idx): dataset.load_seqs(seq_idx, seq_idx + 1) segment_name = dataset.get_tag(seq_idx) targets = dataset.get_targets("classes", seq_idx) assert targets.ndim == 1 # sparse targets = targets.astype("int32") yield segment_name, targets seq_idx += 1 class OrthHandler: """ Orthography handler. 
""" allo_add_all = False # only via lexicon def __init__(self, lexicon, si_label=None, allo_num_states=3, allo_context_len=1, allow_ci_in_words=True): """ :param Lexicon lexicon: :param int si_label: :param int allo_num_states: :param int allo_context_len: :param bool allow_ci_in_words: """ self.lexicon = lexicon self.phonemes = sorted(self.lexicon.phonemes.keys(), key=lambda s: self.lexicon.phonemes[s]["index"]) self.word_boundary_phones = {-1: set(), 1: set()} self.phon_to_possible_ctx_via_lex = {-1: {}, 1: {}} for lemma in self.lexicon.lemmas.values(): for pron in lemma["phons"]: phons = pron["phon"].split() assert phons self.word_boundary_phones[-1].add(phons[0]) self.word_boundary_phones[1].add(phons[-1]) for i in range(len(phons)): ps = [phons[i + j] if (0 <= (i + j) < len(phons)) else "" for j in [-1, 0, 1]] self.phon_to_possible_ctx_via_lex[1].setdefault(ps[1], set()).add(ps[2]) self.phon_to_possible_ctx_via_lex[-1].setdefault(ps[1], set()).add(ps[0]) for phone in self.lexicon.phoneme_list: if "" in self.phon_to_possible_ctx_via_lex[-1][phone]: self.phon_to_possible_ctx_via_lex[-1][phone].update(self.word_boundary_phones[1]) if "" in self.phon_to_possible_ctx_via_lex[1][phone]: self.phon_to_possible_ctx_via_lex[1][phone].update(self.word_boundary_phones[-1]) if allow_ci_in_words: for phone in self.lexicon.phoneme_list: self.phon_to_possible_ctx_via_lex[-1][phone].add("") self.phon_to_possible_ctx_via_lex[1][phone].add("") self.si_lemma = self.lexicon.lemmas["[SILENCE]"] self.si_phone = self.si_lemma["phons"][0]["phon"] # type: str self.si_label = si_label self.allo_num_states = allo_num_states # e.g. 3 -> 3-state HMM self.allo_context_len = allo_context_len # e.g. 1 -> one left&right, i.e. triphone def expected_num_labels_for_monophone_state_tying(self): """ Silence has 1 state, all others have allo_num_states. :rtype: int """ num_phones = len(self.lexicon.phonemes) return (num_phones - 1) * self.allo_num_states + 1 def iter_orth(self, orth): """ :param str orth: :return: yields lemmas """ symbols = list(orth.split()) i = 0 while i < len(symbols): symbol = symbols[i] try: lemma = self.lexicon.lemmas[symbol] except KeyError: if "/" in symbol: symbols[i : i + 1] = symbol.split("/") continue if "-" in symbol: symbols[i : i + 1] = symbol.split("-") continue raise i += 1 yield lemma def _iter_possible_ctx(self, phon_id, direction): """ :param str phon_id: e.g. "aa", "aw", "uh", "z", etc. 
:param int direction: 1 or -1 :rtype: list[tuple[str]] """ if self.lexicon.phonemes[phon_id]["variation"] == "none": return [()] if self.allo_add_all: res = [()] # type: typing.List[typing.Tuple[str, ...]] res += [ (p,) for p in sorted(self.lexicon.phonemes.keys()) if self.lexicon.phonemes[p]["variation"] == "context" ] return res return [((p,) if p else ()) for p in sorted(self.phon_to_possible_ctx_via_lex[direction][phon_id])] def num_states_for_phone(self, phon_id): """ :param str phon_id: :return: number of allophone states for this phone :rtype: int """ if phon_id == self.si_phone: return 1 return self.allo_num_states def all_allophone_variations(self, phon, states=None, all_boundary_variations=False): """ :param str phon: :param None|list[int] states: which states to yield for this phone :param bool all_boundary_variations: :return: yields AllophoneState's :rtype: list[AllophoneState] """ if states is None: states = range(self.num_states_for_phone(phon)) if all_boundary_variations: boundary_variations = [0, 1, 2, 3] else: boundary_variations = [0] for left_ctx in self._iter_possible_ctx(phon, -1): for right_ctx in self._iter_possible_ctx(phon, 1): for state in states: for boundary in boundary_variations: a = AllophoneState() a.id = phon a.context_history = left_ctx a.context_future = right_ctx a.state = state a.boundary = boundary if not all_boundary_variations: if not left_ctx: a.mark_initial() if not right_ctx: a.mark_final() yield a # noinspection PyMethodMayBeStatic def _phones_to_allos(self, phones): for p in phones: a = AllophoneState() a.id = p yield a def _allos_set_context(self, allos): if self.allo_context_len == 0: return ctx = [] for a in allos: if self.lexicon.phonemes[a.id]["variation"] == "context": a.context_history = tuple(ctx) ctx += [a.id] ctx = ctx[-self.allo_context_len :] else: ctx = [] ctx = [] for a in reversed(allos): if self.lexicon.phonemes[a.id]["variation"] == "context": a.context_future = tuple(reversed(ctx)) ctx += [a.id] ctx = ctx[-self.allo_context_len :] else: ctx = [] def _allos_add_states(self, allos): for _a in allos: if _a.id == self.si_phone: yield _a else: # non-silence for state in range(self.allo_num_states): a = AllophoneState() a.id = _a.id a.context_history = _a.context_history a.context_future = _a.context_future a.boundary = _a.boundary a.state = state yield a def orth_to_allophone_states(self, orth): """ :param str orth: orthography as a str. orth.split() should give words in the lexicon :rtype: list[AllophoneState] :returns allophone state list. those will have repetitions etc """ allos = [] for lemma in self.iter_orth(orth): assert len(lemma["phons"]) == 1, "TODO..." phon = lemma["phons"][0] l_allos = list(self._phones_to_allos(phon["phon"].split())) l_allos[0].mark_initial() l_allos[-1].mark_final() allos += l_allos self._allos_set_context(allos) allos = list(self._allos_add_states(allos)) return allos def main(): """ Main entry. 
""" arg_parser = ArgumentParser() arg_parser.add_argument("--action") arg_parser.add_argument("--print_seq", action="store_true") arg_parser.add_argument("--print_allos", action="store_true") arg_parser.add_argument("--print_targets", action="store_true") arg_parser.add_argument("--dataset") arg_parser.add_argument("--corpus") arg_parser.add_argument("--lexicon", help="filename") arg_parser.add_argument("--silence", type=int, help="index") arg_parser.add_argument("--context", default=1, type=int) arg_parser.add_argument("--hmm_states", default=3, type=int) arg_parser.add_argument("--state_tying_type", help="'monophone' or 'full'") arg_parser.add_argument("--state_tying_output", help="filename") arg_parser.add_argument("--allo_add_all", action="store_true") args = arg_parser.parse_args() dataset = init_dataset(args.dataset) if args.dataset else None corpus = dict(iter_bliss_orth(filename=args.corpus)) if args.corpus else None lexicon = Lexicon(filename=args.lexicon) if args.lexicon else None silence_label = args.silence if args.action == "show_corpus": pprint(corpus) return print("Num phones: %i" % len(lexicon.phonemes), file=log.v1) print("Phones: %r" % sorted(lexicon.phonemes.keys()), file=log.v1) orth_handler = OrthHandler(lexicon=lexicon, allo_context_len=args.context, allo_num_states=args.hmm_states) map_idx_to_allo = defaultdict(set) # type: typing.Dict[int, typing.Set[AllophoneState]] map_allo_to_idx = {} # type: typing.Dict[AllophoneState, int] if args.allo_add_all: orth_handler.allo_add_all = True print("Num HMM states: %i" % orth_handler.allo_num_states, file=log.v1) if args.state_tying_type == "monophone": print("Monophone state tying.", file=log.v1) num_labels = orth_handler.expected_num_labels_for_monophone_state_tying() all_label_idx_are_used = True elif args.state_tying_type == "full": print("Full state tying.", file=log.v1) phone_idxs = { k: i + 1 for (i, k) in enumerate(lexicon.phoneme_list) } # +1 to keep 0 reserved as the term-symbol for phon in lexicon.phoneme_list: for allo in orth_handler.all_allophone_variations(phon, all_boundary_variations=True): allo_idx = allo.index( phone_idxs=phone_idxs, num_states=orth_handler.allo_num_states, context_length=orth_handler.allo_context_len, ) map_idx_to_allo[allo_idx].add(allo) num_labels = max(map_idx_to_allo.keys()) + 1 all_label_idx_are_used = False else: raise Exception("invalid state tying type %r" % args.state_tying_type) print("Num labels: %i" % num_labels, file=log.v1) if dataset: count = 0 for segment_name, targets in iter_dataset_targets(dataset): count += 1 if silence_label is None or count == 1: likely_silence_label = collections.Counter(targets).most_common(1)[0][0] if silence_label is None: silence_label = likely_silence_label if silence_label != likely_silence_label: print("warning: silence %i but likely %i" % (silence_label, likely_silence_label), file=log.v2) print("Silence label: %i" % silence_label, file=log.v1) orth_handler.si_label = silence_label # Monophone state tying: for allo in orth_handler.all_allophone_variations(orth_handler.si_phone): map_idx_to_allo[silence_label].add(allo) map_allo_to_idx[allo] = silence_label assert segment_name in corpus orth = corpus[segment_name] allo_states = orth_handler.orth_to_allophone_states(orth=orth) if args.print_seq: print("%r %r" % (segment_name, orth)) if args.print_allos: print(" allophone state seq: %r" % allo_states) tgt_seq = [t for t in uniq(targets) if t != silence_label] if args.print_targets: print(" target seq: %r" % (tgt_seq,)) assert len(allo_states) == 
len(tgt_seq), "check --hmm_states or so" for allo, t in zip(allo_states, tgt_seq): allo.boundary = 0 # do not differ between boundaries allos = map_idx_to_allo[t] if allo in map_allo_to_idx: assert allo in allos, "bad mapping" else: assert allo not in allos allos.add(allo) map_allo_to_idx[allo] = t if len(map_idx_to_allo) >= num_labels: assert len(map_idx_to_allo) == num_labels assert 0 in map_idx_to_allo assert num_labels - 1 in map_idx_to_allo print("Finished with uniq mapping after %i sequences." % count, file=log.v1) break if count % 100 == 0: print("Have indices: %i (num labels: %i)" % (len(map_idx_to_allo), num_labels), file=log.v1) print("Finished. Have indices: %i (num labels: %i)" % (len(map_idx_to_allo), num_labels), file=log.v1) if len(map_idx_to_allo) < num_labels: found = [] not_found = [] for p in sorted(lexicon.phonemes.keys()): allo = AllophoneState(p, state=0) if allo in map_allo_to_idx: found.append(p) else: not_found.append(p) print("Phonemes found: %r" % found) print("Phonemes not found: %r" % not_found) if args.state_tying_output: assert not os.path.exists(args.state_tying_output) if all_label_idx_are_used: assert len(map_idx_to_allo) == num_labels assert 0 in map_idx_to_allo assert num_labels - 1 in map_idx_to_allo f = open(args.state_tying_output, "w") for i, allos in sorted(map_idx_to_allo.items()): for allo in allos: f.write("%s %i\n" % (allo.format(), i)) f.close() print("Wrote state tying to %r." % args.state_tying_output, file=log.v1) print("The end.") if __name__ == "__main__": from returnn.util import better_exchook better_exchook.install() log.initialize(verbosity=[2]) main()
PypiClean
/custom-awscli-1.27.51.tar.gz/custom-awscli-1.27.51/awscli/examples/greengrassv2/get-deployment.rst
**To get a deployment**

The following ``get-deployment`` example gets information about the deployment of the AWS IoT Greengrass nucleus component to a group of core devices. ::

    aws greengrassv2 get-deployment \
        --deployment-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111

Output::

    {
        "targetArn": "arn:aws:iot:us-west-2:123456789012:thinggroup/MyGreengrassCoreGroup",
        "revisionId": "14",
        "deploymentId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
        "deploymentName": "Deployment for MyGreengrassCoreGroup",
        "deploymentStatus": "ACTIVE",
        "iotJobId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
        "iotJobArn": "arn:aws:iot:us-west-2:123456789012:job/a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
        "components": {
            "aws.greengrass.Nucleus": {
                "componentVersion": "2.0.3",
                "configurationUpdate": {
                    "merge": "{\"jvmOptions\":\"-Xmx64m\",\"logging\":{\"level\":\"WARN\"}}",
                    "reset": [
                        "/networkProxy",
                        "/mqtt"
                    ]
                }
            }
        },
        "deploymentPolicies": {
            "failureHandlingPolicy": "ROLLBACK",
            "componentUpdatePolicy": {
                "timeoutInSeconds": 60,
                "action": "NOTIFY_COMPONENTS"
            },
            "configurationValidationPolicy": {
                "timeoutInSeconds": 60
            }
        },
        "iotJobConfiguration": {},
        "creationTimestamp": "2021-01-07T17:21:20.691000-08:00",
        "isLatestForTarget": false,
        "tags": {}
    }

For more information, see `Deploy components to devices <https://docs.aws.amazon.com/greengrass/v2/developerguide/manage-deployments.html>`__ in the *AWS IoT Greengrass V2 Developer Guide*.
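For a programmatic equivalent, a minimal ``boto3`` sketch (illustrative only; it assumes configured AWS credentials and that your ``boto3`` release includes the GreengrassV2 ``get_deployment`` operation) could look as follows. ::

    import boto3

    # Same GetDeployment call that backs the CLI example above
    client = boto3.client('greengrassv2', region_name='us-west-2')
    response = client.get_deployment(
        deploymentId='a1b2c3d4-5678-90ab-cdef-EXAMPLE11111'
    )
    print(response['deploymentName'], response['deploymentStatus'])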
PypiClean
/ansible-8.3.0-py3-none-any.whl/ansible_collections/hetzner/hcloud/plugins/module_utils/hcloud.py
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)

from ansible.module_utils.ansible_release import __version__
from ansible.module_utils.basic import env_fallback, missing_required_lib

from ansible_collections.hetzner.hcloud.plugins.module_utils.vendor import hcloud

HAS_REQUESTS = True
HAS_DATEUTIL = True

try:
    import requests  # pylint: disable=unused-import
except ImportError:
    HAS_REQUESTS = False

try:
    import dateutil  # pylint: disable=unused-import
except ImportError:
    HAS_DATEUTIL = False


class Hcloud:
    def __init__(self, module, represent):
        self.module = module
        self.represent = represent
        self.result = {"changed": False, self.represent: None}
        if not HAS_REQUESTS:
            module.fail_json(msg=missing_required_lib("requests"))
        if not HAS_DATEUTIL:
            module.fail_json(msg=missing_required_lib("python-dateutil"))
        self._build_client()

    def _build_client(self):
        self.client = hcloud.Client(
            token=self.module.params["api_token"],
            api_endpoint=self.module.params["endpoint"],
            application_name="ansible-module",
            application_version=__version__,
        )

    def _mark_as_changed(self):
        self.result["changed"] = True

    @staticmethod
    def base_module_arguments():
        return {
            "api_token": {
                "type": "str",
                "required": True,
                "fallback": (env_fallback, ["HCLOUD_TOKEN"]),
                "no_log": True,
            },
            "endpoint": {"type": "str", "default": "https://api.hetzner.cloud/v1"},
        }

    def _prepare_result(self):
        """Prepare the result for every module

        :return: dict
        """
        return {}

    def get_result(self):
        if getattr(self, self.represent) is not None:
            self.result[self.represent] = self._prepare_result()
        return self.result
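A minimal sketch of how a module could build on this helper (illustrative only: the class name ``AnsibleHcloudExample``, the ``fetch()`` method, the ``name`` option and the ``"example"`` represent key are made up for this sketch; the ``servers.get_by_name`` lookup follows the hcloud client API):

```python
from ansible.module_utils.basic import AnsibleModule


class AnsibleHcloudExample(Hcloud):
    def _prepare_result(self):
        # Shape the API object into the dict returned under result["example"].
        return {"id": str(self.example.id), "name": self.example.name}

    def fetch(self):
        # Look the resource up via the vendored hcloud client built in _build_client().
        self.example = self.client.servers.get_by_name(self.module.params["name"])


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type="str", required=True),
            **Hcloud.base_module_arguments(),  # adds api_token and endpoint
        ),
    )
    helper = AnsibleHcloudExample(module, "example")
    helper.fetch()
    module.exit_json(**helper.get_result())
```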
PypiClean
/hwi-2.3.1.tar.gz/hwi-2.3.1/hwilib/devices/jadepy/jade.py
import cbor import hashlib import json import time import logging import collections import collections.abc import traceback import random import sys # JadeError from .jade_error import JadeError # Low-level comms backends from .jade_serial import JadeSerialImpl from .jade_tcp import JadeTCPImpl # 'jade' logger logger = logging.getLogger(__name__) device_logger = logging.getLogger(f'{__name__}-device') # Default serial connection DEFAULT_SERIAL_DEVICE = '/dev/ttyUSB0' DEFAULT_BAUD_RATE = 115200 DEFAULT_SERIAL_TIMEOUT = 120 # Default BLE connection DEFAULT_BLE_DEVICE_NAME = 'Jade' DEFAULT_BLE_SERIAL_NUMBER = None DEFAULT_BLE_SCAN_TIMEOUT = 60 def _hexlify(data): """ Helper to map bytes-like types into hex-strings to make for prettier message-logging. Parameters ---------- data : any The object to hexlify. - bytes or bytearrays have 'hex()' method invoked - list and dicts (values) have this function mapped over them - Otherwise the input is returned unchanged """ if data is None: return None elif isinstance(data, bytes) or isinstance(data, bytearray): return data.hex() elif isinstance(data, list): return [_hexlify(item) for item in data] elif isinstance(data, dict): return {k: _hexlify(v) for k, v in data.items()} else: return data try: import requests def _http_request(params): """ Simple http request function which can be used when a Jade response requires an external http call. The default implementation used in JadeAPI._jadeRpc() below. NOTE: Only available if the 'requests' dependency is available. Callers can supply their own implmentation of this call where it is required. Parameters ---------- data : dict A dictionary structure describing the http call to make Returns ------- dict with single key 'body', whose value is the json returned from the call """ logger.debug('_http_request: {}'.format(params)) # Use the first non-onion url url = [url for url in params['urls'] if not url.endswith('.onion')][0] if params['method'] == 'GET': assert 'data' not in params, 'Cannot pass body to requests.get' f = requests.get(url) elif params['method'] == 'POST': data = json.dumps(params['data']) f = requests.post(url, data) logger.debug("http_request received reply: {}".format(f.text)) if f.status_code != 200: logger.error("http error {} : {}".format(f.status_code, f.text)) raise ValueError(f.status_code) assert params['accept'] == 'json' f = f.json() return {'body': f} except ImportError as e: logger.info(e) logger.info('Default _http_requests() function will not be available') class JadeAPI: """ High-Level Jade Client API Builds on a JadeInterface to provide a meaningful API Either: a) use with JadeAPI.create_[serial|ble]() as jade: (recommended) or: b) use JadeAPI.create_[serial|ble], then call connect() before using, and disconnect() when finished (caveat cranium) or: c) use ctor to wrap existing JadeInterface instance (caveat cranium) """ def __init__(self, jade): assert jade is not None self.jade = jade def __enter__(self): self.connect() return self def __exit__(self, exc_type, exc, tb): if (exc_type): logger.info("Exception causing JadeAPI context exit.") logger.info(exc_type) logger.info(exc) traceback.print_tb(tb) self.disconnect(exc_type is not None) @staticmethod def create_serial(device=None, baud=None, timeout=None): """ Create a JadeAPI object using the serial interface described. Parameters ---------- device : str, optional The device identifier for the serial device. Underlying implementation will default (to /dev/ttyUSB0) baud : int, optional The communication baud rate. 
Underlying implementation will default (to 115200) timeout : int, optional The serial read timeout when awaiting messages. Underlying implementation will default (to 120s) Returns ------- JadeAPI API object configured to use given serial parameters. NOTE: the api instance has not yet tried to contact the hw - caller must call 'connect()' before trying to use the Jade. """ impl = JadeInterface.create_serial(device, baud, timeout) return JadeAPI(impl) @staticmethod def create_ble(device_name=None, serial_number=None, scan_timeout=None, loop=None): """ Create a JadeAPI object using the BLE interface described. NOTE: raises JadeError if BLE dependencies not installed. Parameters ---------- device_name : str, optional The device name of the desired BLE device. Underlying implementation will default (to 'Jade') serial_number : int, optional The serial number of the desired BLE device - used to disambiguate multiple beacons with the same 'device name' Underlying implementation will connect to the first beacon it scans with the matching 'device name'. scan_timeout : int, optional The timeout when scanning for devices which match the device name/serial number. Underlying implementation will default (to 60s) loop : optional The asynchio event loop to use, if required. Underlying implementation will default (to asyncio.get_event_loop()) Returns ------- JadeAPI API object configured to use given BLE parameters. NOTE: the api instance has not yet tried to contact the hw - caller must call 'connect()' before trying to use the Jade. Raises ------ JadeError if BLE backend not available (ie. BLE dependencies not installed) """ impl = JadeInterface.create_ble(device_name, serial_number, scan_timeout, loop) return JadeAPI(impl) def connect(self): """ Try to connect the underlying transport interface (eg. serial, ble, etc.) Raises an exception on failure. """ self.jade.connect() def disconnect(self, drain=False): """ Disconnect the underlying transport (eg. serial, ble, etc.) Parameters ---------- drain : bool, optional When true log any/all remaining messages/data, otherwise silently discard. NOTE: can prevent disconnection if data is arriving constantly. Defaults to False. """ self.jade.disconnect(drain) def drain(self): """ Log any/all outstanding messages/data. NOTE: can run indefinitely if data is arriving constantly. """ self.jade.drain() @staticmethod def _get_result_or_raise_error(reply): """ Raise any error message returned from a Jade rpc call as an exception. Parameters ---------- reply : dict Dictionary representing a reply from a Jade rpc call. Returns ------- dict Any nested 'result' structure, if the reply is not an error. Raises ------ JadeError If the reply represented an error, including all details received. """ if 'error' in reply: e = reply['error'] raise JadeError(e.get('code'), e.get('message'), e.get('data')) return reply['result'] def _jadeRpc(self, method, params=None, inputid=None, http_request_fn=None, long_timeout=False): """ Helper to make a request/reply rpc call over the underlying transport interface. NOTE: interface must be 'connected'. If the call returns an 'http_request' structure, this is handled here and the http call is made, and the result is passed into the rpc method given in 'on reply', by calling this function recursively. Parameters ---------- method : str rpc method to invoke params : dict, optional any parameters to pass to the rpc method Defaults to None. inputid : str, optional Any specific 'id' to use in the rpc message. 
Defaults to a using a pseudo-random id generated in-situ. http_request_fn : function, optional A function which accepts a dict (containing a description of the http request), makes the described http call, and returns the body data in an element called 'body'. Defaults to _http_request() above. long_timeout : bool, optional Whether the rpc call should use an indefinitely long timeout, rather than that set on construction. (Useful if the call involves a non-trivial user interaction with the device.) Defaults to False. Returns ------- dict The reply from the rpc call. NOTE: will return the last/final reply after a sequence of calls, where 'http_request' was returned and remote data was fetched and passed into s subsequent call. """ newid = inputid if inputid else str(random.randint(100000, 999999)) request = self.jade.build_request(newid, method, params) reply = self.jade.make_rpc_call(request, long_timeout) result = self._get_result_or_raise_error(reply) # The Jade can respond with a request for interaction with a remote # http server. This is used for interaction with the pinserver but the # code below acts as a dumb proxy and simply makes the http request and # forwards the response back to the Jade. # Note: the function called to make the http-request can be passed in, # or it can default to the simple _http_request() function above, if available. if isinstance(result, collections.abc.Mapping) and 'http_request' in result: this_module = sys.modules[__name__] make_http_request = http_request_fn or getattr(this_module, '_http_request', None) assert make_http_request, 'Default _http_request() function not available' http_request = result['http_request'] http_response = make_http_request(http_request['params']) return self._jadeRpc( http_request['on-reply'], http_response['body'], http_request_fn=make_http_request, long_timeout=long_timeout) return result def get_version_info(self): """ RPC call to fetch summary details pertaining to the hardware unit and running firmware. Returns ------- dict Contains keys for various info describing the hw and running fw """ return self._jadeRpc('get_version_info') def add_entropy(self, entropy): """ RPC call to add client entropy into the unit RNG entropy pool. Parameters ---------- entropy : bytes Bytes to fold into the hw entropy pool. Returns ------- bool True on success """ params = {'entropy': entropy} return self._jadeRpc('add_entropy', params) def set_epoch(self, epoch=None): """ RPC call to set the current time epoch value, required for TOTP use. NOTE: The time is lost on each power-down and must be reset on restart/reconnect before TOTP can be used. Parameters ---------- epoch : int, optional Current epoch value, in seconds. Defaults to int(time.time()) value. Returns ------- bool True on success """ params = {'epoch': epoch if epoch is not None else int(time.time())} return self._jadeRpc('set_epoch', params) def ota_update(self, fwcmp, fwlen, chunksize, patchlen=None, cb=None): """ RPC call to attempt to update the unit's firmware. Parameters ---------- fwcmp : bytes The compressed firmware image to upload to the Jade unit. Can be a full firmware or and incremental diff to be applied to the currently running firmware image. fwlen : int The size of the new complete (uncompressed) firmware image (after any delta is applied). chunksize : int The size of the chunks used to upload the compressed firmware. Each chunk is uploaded and ack'd by the hw unit. 
The maximum supported chunk size is given in the version info data, under the key 'JADE_OTA_MAX_CHUNK'. patchlen: int, optional If the compressed firmware bytes are an incremental diff to be applied to the running firmware image, this is the size of that patch when uncompressed. Defaults to None, implying the compressed data is a full firmware image upload. (Compare with fwlen - the size of the final fw image.) cb : function, optional Callback function accepting two integers - the amount of compressed firmware sent thus far, and the total length of the compressed firmware to send. If passed, this function is invoked each time a fw chunk is successfully uploaded and ack'd by the hw, to notify of upload progress. Defaults to None, and nothing is called to report upload progress. Returns ------- bool True if no errors were reported - on next restart the hw unit will attempt to boot the new firmware. """ # Compute the sha256 hash of the compressed file being uploaded cmphasher = hashlib.sha256() cmphasher.update(fwcmp) cmphash = cmphasher.digest() cmplen = len(fwcmp) # Initiate OTA ota_method = 'ota' params = {'fwsize': fwlen, 'cmpsize': cmplen, 'cmphash': cmphash} if patchlen is not None: ota_method = 'ota_delta' params['patchsize'] = patchlen result = self._jadeRpc(ota_method, params) assert result is True # Write binary chunks written = 0 while written < cmplen: remaining = cmplen - written length = min(remaining, chunksize) chunk = bytes(fwcmp[written:written + length]) result = self._jadeRpc('ota_data', chunk) assert result is True written += length if (cb): cb(written, cmplen) # All binary data uploaded return self._jadeRpc('ota_complete') def run_remote_selfcheck(self): """ RPC call to run in-built tests. NOTE: Only available in a DEBUG build of the firmware. Returns ------- bool True on success. """ return self._jadeRpc('debug_selfcheck', long_timeout=True) def capture_image_data(self, check_qr=False): """ RPC call to capture raw image data from the camera. See also scan_qr() below. NOTE: Only available in a DEBUG build of the firmware. Parameters ---------- check_qr : bool, optional If True only images which contain a valid qr code are captured and returned. If False, any image is considered valid and is returned. Defaults to False Returns ------- bytes Raw image data from the camera framebuffer """ params = {'check_qr': check_qr} return self._jadeRpc('debug_capture_image_data', params) def scan_qr(self, image): """ RPC call to scan a passed image and return any data extracted from any qr image. Exercises the camera image capture, but ignores result and uses passed image instead. See also capture_image_data() above. NOTE: Only available in a DEBUG build of the firmware. Parameters ---------- image : bytes The image data (as obtained from capture_image_data() above). Returns ------- bytes String or byte data obtained from the image (via qr code) """ params = {'image': image} return self._jadeRpc('debug_scan_qr', params) def clean_reset(self): """ RPC call to clean/reset memory and storage, as much as is practical. NOTE: Only available in a DEBUG build of the firmware. Returns ------- bool True on success. """ return self._jadeRpc('debug_clean_reset') def set_mnemonic(self, mnemonic, passphrase=None, temporary_wallet=False): """ RPC call to set the wallet mnemonic (in RAM only - flash storage is untouched). NOTE: Only available in a DEBUG build of the firmware. Parameters ---------- mnemonic : str The wallet mnemonic to set. passphrase : str, optional Any bip39 passphrase to apply. 
Defaults to None. temporary_wallet : bool, optional Whether to treat this wallet/mnemonic as an 'Emergency Restore' temporary wallet, as opposed to one successfully loaded from the flash storage. NOTE: in either case the wallet is only set in RAM, and flash storage is not affected. Defaults to False. Returns ------- bool True on success. """ params = {'mnemonic': mnemonic, 'passphrase': passphrase, 'temporary_wallet': temporary_wallet} return self._jadeRpc('debug_set_mnemonic', params) def set_seed(self, seed): """ RPC call to set the wallet seed. NOTE: Only available in a DEBUG build of the firmware. NOTE: Setting a seed always sets a 'temporary' wallet. Parameters ---------- seed : bytes The wallet seed to set as a temporary wallet (cannot be persisted in flash). Returns ------- bool True on success. """ params = {'seed': seed} return self._jadeRpc('debug_set_mnemonic', params) def set_pinserver(self, urlA=None, urlB=None, pubkey=None, cert=None): """ RPC call to explicitly set (override) the details of the blind pinserver used to authenticate the PIN entered on the Jade unit. This data is recorded in the hw flash, and returned to the caller when authenticating (in auth_user(), below). Parameters ---------- urlA : str, optional The primary url of the pinserver to use. urlB : str, optional Any secondary url of the pinserver to use. pubkey : bytes, optional The public key used to verify pinserver signed payloads. cert : bytes, optional Any additional certificate required to verify the pinserver identity. Returns ------- bool True on success. """ params = {} if urlA is not None or urlB is not None: params['urlA'] = urlA params['urlB'] = urlB if pubkey is not None: params['pubkey'] = pubkey if cert is not None: params['certificate'] = cert return self._jadeRpc('update_pinserver', params) def reset_pinserver(self, reset_details, reset_certificate): """ RPC call to reset any formerly overidden pinserver details to their defauts. Parameters ---------- reset_details : bool, optional If set, any overidden urls and pubkey are reset to their defaults. reset_certificate : bool, optional If set, any additional certificate is reset (to None). Returns ------- bool True on success. """ params = {'reset_details': reset_details, 'reset_certificate': reset_certificate} return self._jadeRpc('update_pinserver', params) def auth_user(self, network, http_request_fn=None, epoch=None): """ RPC call to authenticate the user on the hw device, for using with the network provided. Parameters ---------- network : str The name of the network intended for use - eg. 'mainnet', 'liquid', 'testnet' etc. This is verified against the networks allowed on the hardware. http_request_fn : function, optional Optional http-request function to pass http requests to the Jade pinserver. Default behaviour is to use the '_http_request()' function which defers to the 'requests' module. If the 'reqests' module is not available, no default http-request function is created, and one must be supplied here. epoch : int, optional Current epoch value, in seconds. Defaults to int(time.time()) value. Returns ------- bool True is returned immediately if the hw is already unlocked for use on the given network. True if the PIN is entered and verified with the remote blind pinserver. False if the PIN entered was incorrect. 
""" params = {'network': network, 'epoch': epoch if epoch is not None else int(time.time())} return self._jadeRpc('auth_user', params, http_request_fn=http_request_fn, long_timeout=True) def register_otp(self, otp_name, otp_uri): """ RPC call to register a new OTP record on the hw device. Parameters ---------- otp_name : str An identifying name for this OTP record otp_uri : str The uri of this OTP record - must begin 'otpauth://' Returns ------- bool True if the OTP uri was validated and persisted on the hw """ params = {'name': otp_name, 'uri': otp_uri} return self._jadeRpc('register_otp', params) def get_otp_code(self, otp_name, value_override=None): """ RPC call to fetch a new OTP code from the hw device. Parameters ---------- otp_name : str An identifying name for the OTP record to use value_override : int An overriding HOTP counter or TOTP timestamp to use. NOTE: Only available in a DEBUG build of the firmware. Returns ------- bool True if the OTP uri was validated and persisted on the hw """ params = {'name': otp_name} if value_override is not None: params['override'] = value_override return self._jadeRpc('get_otp_code', params) def get_xpub(self, network, path): """ RPC call to fetch an xpub for the given bip32 path for the given network. Parameters ---------- network : str Network to which the xpub applies - eg. 'mainnet', 'liquid', 'testnet', etc. path : [int] bip32 path for which the xpub should be generated. Returns ------- str base58 encoded xpub """ params = {'network': network, 'path': path} return self._jadeRpc('get_xpub', params) def get_registered_multisigs(self): """ RPC call to fetch brief summaries of any multisig wallets registered to this signer. Returns ------- dict Brief description of registered multisigs, keyed by registration name. Each entry contains keys: variant - str, script type, eg. 'sh(wsh(multi(k)))' sorted - boolean, whether bip67 key sorting is applied threshold - int, number of signers required,N num_signers - total number of signatories, M master_blinding_key - 32-bytes, any liquid master blinding key for this wallet """ return self._jadeRpc('get_registered_multisigs') def register_multisig(self, network, multisig_name, variant, sorted_keys, threshold, signers, master_blinding_key=None): """ RPC call to register a new multisig wallet, which must contain the hw signer. A registration name is provided - if it already exists that record is overwritten. Parameters ---------- network : string Network to which the multisig should apply - eg. 'mainnet', 'liquid', 'testnet', etc. multisig_name : string Name to use to identify this multisig wallet registration record. If a registration record exists with the name given, that record is overwritten. variant : str The script type - one of 'sh(multi(k))', 'wsh(multi(k))', 'sh(wsh(multi(k)))' sorted_keys : bool Whether this is a 'sortedmulti()' wallet - ie. whether to apply bip67 sorting to the pubkeys when generating redeem scripts. threshold : int Number of signers required. signers : [dict] Description of signers - should include keys: - 'fingerprint' - 4 bytes, origin fingerprint - 'derivation' - [int], bip32 path from origin to signer xpub provided - 'xpub' - str, base58 xpub of signer - will be verified for hw unit signer - 'path' - [int], any fixed path to always apply after the xpub - usually empty. master_blinding_key : 32-bytes, optional The master blinding key to use for this multisig wallet on liquid. Optional, defaults to None. 
Logically mandatory when 'network' indicates a liquid network and the Jade is to be used to generate confidential addresses, blinding keys, blinding nonces, asset blinding factors or output commitments. Returns ------- bool True on success, implying the mutisig wallet can now be used. """ params = {'network': network, 'multisig_name': multisig_name, 'descriptor': {'variant': variant, 'sorted': sorted_keys, 'threshold': threshold, 'signers': signers, 'master_blinding_key': master_blinding_key}} return self._jadeRpc('register_multisig', params) def get_receive_address(self, *args, recovery_xpub=None, csv_blocks=0, variant=None, multisig_name=None, confidential=None): """ RPC call to generate, show, and return an address for the given path. The call has three forms. Parameters ---------- network: str Network to which the address should apply - eg. 'mainnet', 'liquid', 'testnet', etc. Then either: 1. Blockstream Green (multisig shield) addresses subaccount : int Blockstream Green subaccount branch : int Blockstream Green derivation branch pointer : int Blockstream Green address pointer recovery_xpub : str, optional xpub of recovery key for 2of3 subaccounts. Otherwise should be omitted. Defaults to None (ie. not a 2of3 subaccount). csv_blocks : int, optional Number of blocks to include in csv redeem script, if this is a csv-enabled account. Otherwise should be omitted. Defaults to 0 (ie. does not apply/not a csv-enabled account.) 2. Generic single-sig addresses path: [int] bip32 path for which the xpub should be generated. variant: str The script type - one of 'pkh(k)', 'wpkh(k)', 'sh(wpkh(k))' 3. Generic multisig addresses paths: [[int]] bip32 path suffixes, one for each signer, applied as a suffix to the registered signer path. Usually these path suffixes will all be identical. multisig_name : str The name of the registered multisig wallet record used to generate the address. Returns ------- str The address generated for the given parameters. """ if multisig_name is not None: assert len(args) == 2 keys = ['network', 'paths', 'multisig_name'] args += (multisig_name,) elif variant is not None: assert len(args) == 2 keys = ['network', 'path', 'variant'] args += (variant,) else: assert len(args) == 4 keys = ['network', 'subaccount', 'branch', 'pointer', 'recovery_xpub', 'csv_blocks'] args += (recovery_xpub, csv_blocks) params = dict(zip(keys, args)) if confidential is not None: params['confidential'] = confidential return self._jadeRpc('get_receive_address', params) def sign_message(self, path, message, use_ae_signatures=False, ae_host_commitment=None, ae_host_entropy=None): """ RPC call to format and sign the given message, using the given bip32 path. Supports RFC6979 and anti-exfil signatures. Parameters ---------- path : [int] bip32 path for which the signature should be generated. message : str Message string to format and sign. ae_host_commitment : 32-bytes, optional The host-commitment to use for Antil-Exfil signatures ae_host_entropy : 32-bytes, optional The host-entropy to use for Antil-Exfil signatures Returns ------- 1. Legacy/RFC6979 signatures str base64-encoded signature 2. Anti-exfil signatures (bytes, str) signer-commitment, base64-encoded signature """ if use_ae_signatures: # Anti-exfil protocol: # We send the signing request and receive the signer-commitment in # reply once the user confirms. # We can then request the actual signature passing the ae-entropy. 
params = {'path': path, 'message': message, 'ae_host_commitment': ae_host_commitment} signer_commitment = self._jadeRpc('sign_message', params) params = {'ae_host_entropy': ae_host_entropy} signature = self._jadeRpc('get_signature', params) return signer_commitment, signature else: # Standard EC signature, simple case params = {'path': path, 'message': message} return self._jadeRpc('sign_message', params) def get_identity_pubkey(self, identity, curve, key_type, index=0): """ RPC call to fetch a pubkey for the given identity (slip13/slip17). NOTE: this api returns an uncompressed public key Parameters ---------- identity : str Identity string to format and sign. For example ssh://[email protected] curve : str Name of curve to use - currently only 'nist256p1' is supported key_type : str Key derivation type - must be either 'slip-0013' for an identity pubkey, or 'slip-0017' for an ecdh pubkey. index : int, optional Index number (if require multiple keys/sigs per identity) Defaults to 0 Returns ------- 65-bytes Uncompressed public key for the given identity and index. Consistent with 'sign_identity' or 'get_identity_shared_key', depending on the 'key_type'. """ params = {'identity': identity, 'curve': curve, 'type': key_type, 'index': index} return self._jadeRpc('get_identity_pubkey', params) def get_identity_shared_key(self, identity, curve, their_pubkey, index=0): """ RPC call to fetch a SLIP-0017 shared ecdh key for the identity and counterparty public key. NOTE: this api takes an uncompressed public key Parameters ---------- identity : str Identity string to format and sign. For example ssh://[email protected] curve : str Name of curve to use - currently only 'nist256p1' is supported their_pubkey : 65-bytes The counterparty's uncompressed public key index : int, optional Index number (if require multiple keys/sigs per identity) Defaults to 0 Returns ------- 32-bytes The shared ecdh key for the given identity and cpty public key Consistent with 'get_identity_pubkey' with 'key_type=slip-0017' """ params = {'identity': identity, 'curve': curve, 'index': index, 'their_pubkey': their_pubkey} return self._jadeRpc('get_identity_shared_key', params) def sign_identity(self, identity, curve, challenge, index=0): """ RPC call to authenticate the given identity through a challenge. Supports RFC6979. Returns the signature and the associated SLIP-0013 pubkey NOTE: this api returns an uncompressed public key Parameters ---------- identity : str Identity string to format and sign. For example ssh://[email protected] curve : str Name of curve to use - currently only 'nist256p1' is supported challenge : bytes Challenge bytes to sign index : int, optional Index number (if require multiple keys/sigs per identity) Defaults to 0 Returns ------- dict Contains keys: pubkey - 65-bytes, the uncompressed SLIP-0013 public key, consistent with 'get_identity_pubkey' with 'key_type=slip-0013' signature - 65-bytes, RFC6979 deterministic signature, prefixed with 0x00 """ params = {'identity': identity, 'curve': curve, 'index': index, 'challenge': challenge} return self._jadeRpc('sign_identity', params) def get_master_blinding_key(self): """ RPC call to fetch the master (SLIP-077) blinding key for the hw signer. NOTE: the master blinding key of any registered multisig wallets can be obtained from the result of `get_registered_multisigs()`. 
Returns ------- 32-bytes SLIP-077 master blinding key """ return self._jadeRpc('get_master_blinding_key') def get_blinding_key(self, script, multisig_name=None): """ RPC call to fetch the public blinding key for the hw signer. Parameters ---------- script : bytes The script for which the public blinding key is required. multisig_name : str, optional The name of any registered multisig wallet for which to fetch the blinding key. Defaults to None Returns ------- 33-bytes Public blinding key for the passed script. """ params = {'script': script, 'multisig_name': multisig_name} return self._jadeRpc('get_blinding_key', params) def get_shared_nonce(self, script, their_pubkey, include_pubkey=False, multisig_name=None): """ RPC call to get the shared secret to unblind a tx, given the receiving script and the pubkey of the sender (sometimes called "blinding nonce" in Liquid). Optionally fetch the hw signer's public blinding key also. Parameters ---------- script : bytes The script for which the blinding nonce is required. their_pubkey : 33-bytes The counterparty public key. include_pubkey : bool, optional Whether to also return the wallet's public blinding key. Defaults to False. multisig_name : str, optional The name of any registered multisig wallet for which to fetch the blinding nonce. Defaults to None Returns ------- 1. include_pubkey is False 33-bytes Public blinding nonce for the passed script and counterparty public key. 2. include_pubkey is True dict Contains keys: shared_nonce - 32-bytes, public blinding nonce for the passed script as above. blinding_key - 33-bytes, public blinding key for the passed script. """ params = {'script': script, 'their_pubkey': their_pubkey, 'include_pubkey': include_pubkey, 'multisig_name': multisig_name} return self._jadeRpc('get_shared_nonce', params) def get_blinding_factor(self, hash_prevouts, output_index, bftype, multisig_name=None): """ RPC call to get a deterministic "trusted" blinding factor to blind an output. Normally the blinding factors are generated and returned in the `get_commitments` call, but for the last output the vbf must be generated on the host, so this call allows the host to get a valid abf to compute the generator and then the "final" vbf. Nonetheless, this call is kept generic, and can also generate vbfs, hence the "bftype" parameter. Parameters ---------- hash_prevouts : 32-bytes This value is computed as specified in bip143. It is verified immediately since at this point Jade doesn't have the tx in question. It will be checked later during `sign_liquid_tx()`. output_index : int The index of the output we are trying to blind bftype : str Can be eitehr "ASSET" or "VALUE", to generate abfs or vbfs. multisig_name : str, optional The name of any registered multisig wallet for which to fetch the blinding factor. Defaults to None Returns ------- 32-bytes The requested blinding factor """ params = {'hash_prevouts': hash_prevouts, 'output_index': output_index, 'type': bftype, 'multisig_name': multisig_name} return self._jadeRpc('get_blinding_factor', params) def get_commitments(self, asset_id, value, hash_prevouts, output_index, vbf=None, multisig_name=None): """ RPC call to generate deterministic blinding factors and commitments for a given output. Can optionally get a "custom" VBF, normally used for the last input where the vbf is not computed here, but generated on the host according to all the other values. The commitments generated here should be passed back into `sign_liquid_tx()`. 
Parameters ---------- asset_id : 32-bytes asset_id as usually displayed - ie. reversed compared to network/consensus order value : int value in 'satoshi' or equivalent atomic integral unit hash_prevouts : 32-bytes This value is computed as specified in bip143. It is verified immediately since at this point Jade doesn't have the tx in question. It will be checked later during `sign_liquid_tx()`. output_index : int The index of the output we are trying to blind vbf : 32-bytes, optional The vbf to use, in preference to deterministically generating one in this call. multisig_name : str, optional The name of any registered multisig wallet for which to fetch the blinding factor. Defaults to None Returns ------- dict Containing the following the blinding factors and output commitments. """ params = {'asset_id': asset_id, 'value': value, 'hash_prevouts': hash_prevouts, 'output_index': output_index, 'vbf': vbf, 'multisig_name': multisig_name} return self._jadeRpc('get_commitments', params) def _send_tx_inputs(self, base_id, inputs, use_ae_signatures): """ Helper call to send the tx inputs to Jade for signing. Handles legacy RFC6979 signatures, as well as the Anti-Exfil protocol. Parameters ---------- base_id : int The ids of the messages sent will be increments from this base id. inputs : [dict] The tx inputs - see `sign_tx()` / `sign_liquid_tx()` for details. use_ae_signatures : bool Whether to use the anti-exfil protocol to generate the signatures Returns ------- 1. if use_ae_signatures is False [bytes] An array of signatures corresponding to the array of inputs passed. The signatures are in DER format with the sighash appended. 'None' placeholder elements are used for inputs not requiring a signature. 2. if use_ae_signatures is True [(32-bytes, bytes)] An array of pairs of signer-commitments and signatures corresponding to the inputs. The signatures are in DER format with the sighash appended. (None, None) placeholder elements are used for inputs not requiring a signature. """ if use_ae_signatures: # Anti-exfil protocol: # We send one message per input (which includes host-commitment *but # not* the host entropy) and receive the signer-commitment in reply. # Once all n input messages are sent, we can request the actual signatures # (as the user has a chance to confirm/cancel at this point). # We request the signatures passing the ae-entropy for each one. # Send inputs one at a time, receiving 'signer-commitment' in reply signer_commitments = [] host_ae_entropy_values = [] for txinput in inputs: # ae-protocol - do not send the host entropy immediately txinput = txinput.copy() # shallow copy host_ae_entropy_values.append(txinput.pop('ae_host_entropy', None)) base_id += 1 input_id = str(base_id) reply = self._jadeRpc('tx_input', txinput, input_id) signer_commitments.append(reply) # Request the signatures one at a time, sending the entropy signatures = [] for (i, host_ae_entropy) in enumerate(host_ae_entropy_values, 1): base_id += 1 sig_id = str(base_id) params = {'ae_host_entropy': host_ae_entropy} reply = self._jadeRpc('get_signature', params, sig_id) signatures.append(reply) assert len(signatures) == len(inputs) return list(zip(signer_commitments, signatures)) else: # Legacy protocol: # We send one message per input - without expecting replies. # Once all n input messages are sent, the hw then sends all n replies # (as the user has a chance to confirm/cancel at this point). # Then receive all n replies for the n signatures. # NOTE: *NOT* a sequence of n blocking rpc calls. 
# NOTE: at some point this flow should be removed in favour of the one # above, albeit without passing anti-exfil entropy or commitment data. # Send all n inputs requests = [] for txinput in inputs: base_id += 1 msg_id = str(base_id) request = self.jade.build_request(msg_id, 'tx_input', txinput) self.jade.write_request(request) requests.append(request) time.sleep(0.1) # Receive all n signatures signatures = [] for request in requests: reply = self.jade.read_response() self.jade.validate_reply(request, reply) signature = self._get_result_or_raise_error(reply) signatures.append(signature) assert len(signatures) == len(inputs) return signatures def sign_liquid_tx(self, network, txn, inputs, commitments, change, use_ae_signatures=False, asset_info=None): """ RPC call to sign a liquid transaction. Parameters ---------- network : str Network to which the address should apply - eg. 'liquid', 'liquid-testnet', etc. txn : bytes The transaction to sign inputs : [dict] The tx inputs. Should contain keys: is_witness, bool - whether this is a segwit input value_commitment, 33-bytes - The value commitment of ths input These are only required if signing this input: script, bytes- the redeem script path, [int] - the bip32 path to sign with These are only required for Anti-Exfil signatures: ae_host_commitment, 32-bytes - The host-commitment for Anti-Exfil signatures ae_host_entropy, 32-bytes - The host-entropy for Anti-Exfil signatures commitments : [dict] An array sized for the number of outputs. Unblinded outputs should have a 'null' placeholder element. The commitments as retrieved from `get_commitments()`, with the addition of: 'blinding_key', <bytes> - the output's public blinding key (as retrieved from `get_blinding_key()`) change : [dict] An array sized for the number of outputs. Outputs which are not change should have a 'null' placeholder element. Change elements with data will be automatically verified by Jade, and not by the user. Populated elements should contain sufficient data to generate the change address. See `get_receive_address()` use_ae_signatures : bool, optional Whether to use the anti-exfil protocol to generate the signatures. Defaults to False. asset_info : [dict] Any asset-registry data relevant to the assets being transacted, such that Jade can display a meaningful name, issuer, ticker etc. rather than just asset-id. At the very least must contain 'asset_id', 'contract' and 'issuance_prevout' items, exactly as in the registry data. NOTE: asset_info for the network policy-asset is not required. Defaults to None. Returns ------- 1. if use_ae_signatures is False [bytes] An array of signatures corresponding to the array of inputs passed. The signatures are in DER format with the sighash appended. 'None' placeholder elements are used for inputs not requiring a signature. 2. if use_ae_signatures is True [(32-bytes, bytes)] An array of pairs of signer-commitments and signatures corresponding to the inputs. The signatures are in DER format with the sighash appended. (None, None) placeholder elements are used for inputs not requiring a signature. """ # 1st message contains txn and number of inputs we are going to send. # Reply ok if that corresponds to the expected number of inputs (n). 
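# Note (descriptive comment): the ids used below are only for matching replies to requests -
# the initial 'sign_liquid_tx' message takes the random base_id, and _send_tx_inputs()
# bumps it by one for every subsequent 'tx_input' / 'get_signature' message.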
base_id = 100 * random.randint(1000, 9999) params = {'network': network, 'txn': txn, 'num_inputs': len(inputs), 'trusted_commitments': commitments, 'use_ae_signatures': use_ae_signatures, 'change': change, 'asset_info': asset_info} reply = self._jadeRpc('sign_liquid_tx', params, str(base_id)) assert reply # Send inputs and receive signatures return self._send_tx_inputs(base_id, inputs, use_ae_signatures) def sign_tx(self, network, txn, inputs, change, use_ae_signatures=False): """ RPC call to sign a btc transaction. Parameters ---------- network : str Network to which the address should apply - eg. 'mainnet', 'testnet', etc. txn : bytes The transaction to sign inputs : [dict] The tx inputs. Should contain keys: is_witness, bool - whether this is a segwit input These are only required if signing this input: script, bytes- the redeem script path, [int] - the bip32 path to sign with One of these is required: input_tx, bytes - The prior transaction which created the utxo of this input satoshi, int - The satoshi amount of this input - can be used in place of 'input_tx' for a tx with a single segwit input These are only required for Anti-Exfil signatures: ae_host_commitment, 32-bytes - The host-commitment for Anti-Exfil signatures ae_host_entropy, 32-bytes - The host-entropy for Anti-Exfil signatures change : [dict] An array sized for the number of outputs. Outputs which are not change should have a 'null' placeholder element. Change elements with data will be automatically verified by Jade, and not by the user. Populated elements should contain sufficient data to generate the change address. See `get_receive_address()` use_ae_signatures : bool Whether to use the anti-exfil protocol to generate the signatures Returns ------- 1. if use_ae_signatures is False [bytes] An array of signatures corresponding to the array of inputs passed. The signatures are in DER format with the sighash appended. 'None' placeholder elements are used for inputs not requiring a signature. 2. if use_ae_signatures is True [(32-bytes, bytes)] An array of pairs of signer-commitments and signatures corresponding to the inputs. The signatures are in DER format with the sighash appended. (None, None) placeholder elements are used for inputs not requiring a signature. """ # 1st message contains txn and number of inputs we are going to send. # Reply ok if that corresponds to the expected number of inputs (n). base_id = 100 * random.randint(1000, 9999) params = {'network': network, 'txn': txn, 'num_inputs': len(inputs), 'use_ae_signatures': use_ae_signatures, 'change': change} reply = self._jadeRpc('sign_tx', params, str(base_id)) assert reply # Send inputs and receive signatures return self._send_tx_inputs(base_id, inputs, use_ae_signatures) class JadeInterface: """ Mid-level interface to Jade Wraps either a serial or a ble connection Calls to send and receive bytes and cbor messages over the interface. Either: a) use wrapped with JadeAPI (recommended) or: b) use with JadeInterface.create_[serial|ble]() as jade: ... 
or: c) use JadeInterface.create_[serial|ble], then call connect() before using, and disconnect() when finished (caveat cranium) or: d) use ctor to wrap existing low-level implementation instance (caveat cranium) """ def __init__(self, impl): assert impl is not None self.impl = impl def __enter__(self): self.connect() return self def __exit__(self, exc_type, exc, tb): if (exc_type): logger.info("Exception causing JadeInterface context exit.") logger.info(exc_type) logger.info(exc) traceback.print_tb(tb) self.disconnect(exc_type is not None) @staticmethod def create_serial(device=None, baud=None, timeout=None): """ Create a JadeInterface object using the serial interface described. Parameters ---------- device : str, optional The device identifier for the serial device. Underlying implementation will default (to /dev/ttyUSB0) baud : int, optional The communication baud rate. Underlying implementation will default (to 115200) timeout : int, optional The serial read timeout when awaiting messages. Underlying implementation will default (to 120s) Returns ------- JadeInterface Inerface object configured to use given serial parameters. NOTE: the instance has not yet tried to contact the hw - caller must call 'connect()' before trying to use the Jade. """ if device and JadeTCPImpl.isSupportedDevice(device): impl = JadeTCPImpl(device, timeout or DEFAULT_SERIAL_TIMEOUT) else: impl = JadeSerialImpl(device or DEFAULT_SERIAL_DEVICE, baud or DEFAULT_BAUD_RATE, timeout or DEFAULT_SERIAL_TIMEOUT) return JadeInterface(impl) @staticmethod def create_ble(device_name=None, serial_number=None, scan_timeout=None, loop=None): """ Create a JadeInterface object using the BLE interface described. NOTE: raises JadeError if BLE dependencies not installed. Parameters ---------- device_name : str, optional The device name of the desired BLE device. Underlying implementation will default (to 'Jade') serial_number : int, optional The serial number of the desired BLE device - used to disambiguate multiple beacons with the same 'device name' Underlying implementation will connect to the first beacon it scans with the matching 'device name'. scan_timeout : int, optional The timeout when scanning for devices which match the device name/serial number. Underlying implementation will default (to 60s) loop : optional The asynchio event loop to use, if required. Underlying implementation will default (to asyncio.get_event_loop()) Returns ------- JadeInterface Inerface object configured to use given BLE parameters. NOTE: the instance has not yet tried to contact the hw - caller must call 'connect()' before trying to use the Jade. Raises ------ JadeError if BLE backend not available (ie. BLE dependencies not installed) """ this_module = sys.modules[__name__] if not hasattr(this_module, "JadeBleImpl"): raise JadeError(1, "BLE support not installed", None) impl = JadeBleImpl(device_name or DEFAULT_BLE_DEVICE_NAME, serial_number or DEFAULT_BLE_SERIAL_NUMBER, scan_timeout or DEFAULT_BLE_SCAN_TIMEOUT, loop=loop) return JadeInterface(impl) def connect(self): """ Try to connect the underlying transport interface (eg. serial, ble, etc.) Raises an exception on failure. """ self.impl.connect() def disconnect(self, drain=False): """ Disconnect the underlying transport (eg. serial, ble, etc.) Parameters ---------- drain : bool, optional When true log any/all remaining messages/data, otherwise silently discard. NOTE: can prevent disconnection if data is arriving constantly. Defaults to False. 
""" if drain: self.drain() self.impl.disconnect() def drain(self): """ Log any/all outstanding messages/data. NOTE: can run indefinitely if data is arriving constantly. """ logger.warn("Draining interface...") drained = bytearray() finished = False while not finished: byte_ = self.impl.read(1) drained.extend(byte_) finished = byte_ == b'' if finished or byte_ == b'\n' or len(drained) > 256: try: device_logger.warn(drained.decode('utf-8')) except Exception as e: # Dump the bytes raw and as hex if decoding as utf-8 failed device_logger.warn("Raw:") device_logger.warn(drained) device_logger.warn("----") device_logger.warn("Hex dump:") device_logger.warn(drained.hex()) # Clear and loop to continue collecting drained.clear() @staticmethod def build_request(input_id, method, params=None): """ Build a request dict from passed parameters Parameters ---------- input_id : str The id of the request message to construct method : str rpc method to invoke params : dict, optional any parameters to pass to the rpc method Defaults to None. Returns ------- dict The request object as a dict """ request = {"method": method, "id": input_id} if params is not None: request["params"] = params return request @staticmethod def serialise_cbor_request(request): """ Method to format a request dict as a cbor message Parameters ---------- request : dict The request dict Returns ------- bytes The request formatted as cbor message bytes """ dump = cbor.dumps(request) len_dump = len(dump) if 'method' in request and 'ota_data' in request['method']: msg = 'Sending ota_data message {} as cbor of size {}'.format(request['id'], len_dump) logger.info(msg) else: logger.info('Sending: {} as cbor of size {}'.format(_hexlify(request), len_dump)) return dump def write(self, bytes_): """ Write bytes over the underlying interface Parameters ---------- bytes_ : bytes The bytes to write Returns ------- int The number of bytes written """ logger.debug("Sending: {} bytes".format(len(bytes_))) wrote = self.impl.write(bytes_) logger.debug("Sent: {} bytes".format(len(bytes_))) return wrote def write_request(self, request): """ Write a request dict over the underlying interface, formatted as cbor. Parameters ---------- request : dict The request dict to write """ msg = self.serialise_cbor_request(request) written = 0 while written < len(msg): written += self.write(msg[written:]) def read(self, n): """ Try to read bytes from the underlying interface. Returns ------- bytes The bytes received """ logger.debug("Reading {} bytes...".format(n)) bytes_ = self.impl.read(n) logger.debug("Received: {} bytes".format(len(bytes_))) return bytes_ def read_cbor_message(self): """ Try to read a single cbor (response) message from the underlying interface. Respects the any read timeout. If any 'log' messages are received, logs them locally at the nearest corresponding level and awaits the next message. Returns when it receives what appears to be a reply message. Returns ------- dict The message received, as a dict """ while True: # 'self' is sufficiently 'file-like' to act as a load source. # Throws EOFError on end of stream/timeout/lost-connection etc. 
message = cbor.load(self) if isinstance(message, collections.abc.Mapping): # A message response (to a prior request) if 'id' in message: logger.info("Received msg: {}".format(_hexlify(message))) return message # A log message - handle as normal if 'log' in message: response = message['log'] log_method = device_logger.error try: response = message['log'].decode("utf-8") log_methods = { 'E': device_logger.error, 'W': device_logger.warn, 'I': device_logger.info, 'D': device_logger.debug, 'V': device_logger.debug, } if len(response) > 1 and response[1] == ' ': lvl = response[0] log_method = log_methods.get(lvl, device_logger.error) except Exception as e: logger.error('Error processing log message: {}'.format(e)) log_method('>> {}'.format(response)) continue # Unknown/unhandled/unexpected message logger.error("Unhandled message received") device_logger.error(message) def read_response(self, long_timeout=False): """ Try to read a single cbor (response) message from the underlying interface. If any 'log' messages are received, logs them locally at the nearest corresponding level and awaits the next message. Returns when it receives what appears to be a reply message. If `long_timeout` is false, any read-timeout is respected. If True, the call will block indefinitely awaiting a response message. Parameters ---------- long_timeout : bool Whether to wait indefinitely for the next (response) message. Returns ------- dict The message received, as a dict """ while True: try: return self.read_cbor_message() except EOFError as e: if not long_timeout: raise @staticmethod def validate_reply(request, reply): """ Helper to minimally validate a reply message, in the context of a request. Asserts if the reply does not contain the expected minimal fields. """ assert isinstance(reply, dict) and 'id' in reply assert ('result' in reply) != ('error' in reply) assert reply['id'] == request['id'] or \ reply['id'] == '00' and 'error' in reply def make_rpc_call(self, request, long_timeout=False): """ Method to send a request over the underlying interface, and await a response. The request is minimally validated before it is sent, and the response is similarly validated before being returned. Any read-timeout is respected unless 'long_timeout' is passed, in which case the call blocks indefinitely awaiting a response. Parameters ---------- long_timeout : bool Whether to wait indefinitely for the response. Returns ------- dict The (minimally validated) response message received, as a dict """ # Write outgoing request message assert isinstance(request, dict) assert 'id' in request and len(request['id']) > 0 assert 'method' in request and len(request['method']) > 0 assert len(request['id']) < 16 and len(request['method']) < 32 self.write_request(request) # Read and validate incoming message reply = self.read_response(long_timeout) self.validate_reply(request, reply) return reply
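A minimal usage sketch (not part of the module above) showing how the JadeInterface request/response plumbing fits together. It only uses methods defined above (create_serial, build_request, make_rpc_call); the import path, device and the 'get_version_info' rpc method name are assumptions for illustration.

# Hedged sketch - import path, device and rpc method name are assumptions.
from jadepy.jade import JadeInterface  # assumed import path

def fetch_version_info(device=None):
    # create_serial() only builds the wrapper; __enter__ calls connect().
    with JadeInterface.create_serial(device=device) as jade:
        # Requests are plain dicts: {'id': ..., 'method': ..., 'params': ...}
        request = jade.build_request('1', 'get_version_info')
        # make_rpc_call() cbor-encodes and writes the request, then blocks in
        # read_response() and runs validate_reply() before returning the reply.
        reply = jade.make_rpc_call(request)
        return reply.get('result')

if __name__ == '__main__':
    print(fetch_version_info())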
PypiClean
/LabtoolSuite-0.1.3.tar.gz/LabtoolSuite-0.1.3/Labtools/custom_widgets.py
import sip,os os.environ['QT_API'] = 'pyqt' sip.setapi("QString", 2) sip.setapi("QVariant", 2) # Import the core and GUI elements of Qt from PyQt4.QtGui import * from PyQt4.QtCore import * import interface from widgets.sliding import Ui_Form as Ui_Sliding from widgets.clicking import Ui_Form as Ui_Clicking from widgets.clickingOptions import Ui_Form as Ui_ClickingOptions class CustomWidgets: parent=None def __init__(self): print "widgets imported" self.I=interface.Interface() def newWidget(self,widget_type,**args): b=widget_type(**args) if(args.has_key('object_name')): b.setObjectName(args.get('object_name')) if(args.has_key('text')): b.setText(args.get('text')) if(args.has_key('items')): for a in args.get('items'): b.addItem(a) self.updateWidgetBay(b) return b def assignCommand(self,widget,signal,slot,*args): buttonCallback = functools.partial(slot,*args) QObject.connect(widget, SIGNAL(signal), buttonCallback) class sineHandler(QFrame,Ui_Sliding): def __init__(self,chan): super(CustomWidgets.sineHandler, self).__init__() #QFrame.__init__(self) #Ui_Sliding.__init__(self) self.I=interface.Interface() self.setupUi(self) self.name=['SINE1','SINE2'][chan-1] self.label.setText(self.name) self.chan=chan self.slider.setMinimum(0) self.slider.setMaximum(500000) def setValue(self,val): self.label.setText(self.name+':'+str(val)+' Hz') if self.chan==1:self.I.set_sine1(val) elif self.chan==2:self.I.set_sine2(val) def widget_sine1(self): self.updateWidgetBay(self.sineHandler(1)) def widget_sine2(self): self.updateWidgetBay(self.sineHandler(2)) class gainHandler(QFrame,Ui_Sliding): def __init__(self,chan): super(CustomWidgets.gainHandler, self).__init__() self.I=interface.Interface() self.setupUi(self) self.slider.setMinimum(0) self.slider.setMaximum(7) self.gaintxt=['1x','2x','4x','5x','8x','10x','16x','32x'] self.name=chan self.label.setText(self.name) def setValue(self,val): self.label.setText(self.name+':'+self.gaintxt[val]) self.I.set_gain(self.name,val) def widget_ch1(self): self.updateWidgetBay(self.gainHandler('CH1')) def widget_ch2(self): self.updateWidgetBay(self.gainHandler('CH2')) def widget_ch3(self): self.updateWidgetBay(self.gainHandler('CH3')) def widget_ch4(self): self.updateWidgetBay(self.gainHandler('CH4')) def widget_ch5(self): self.updateWidgetBay(self.gainHandler('CH5')) class voltHandler(QFrame,Ui_Clicking): def __init__(self,chan): super(CustomWidgets.voltHandler, self).__init__() #QFrame.__init__(self) #Ui_Sliding.__init__(self) self.I=interface.Interface() self.setupUi(self) self.name='READ '+chan self.button.setText(self.name) self.chan=chan def clicked(self): val = self.I.get_average_voltage(self.chan) self.label.setText('%.3f V'%(val)) def widget_volt1(self): self.updateWidgetBay(self.voltHandler('CH1')) def widget_volt2(self): self.updateWidgetBay(self.voltHandler('CH2')) def widget_volt3(self): self.updateWidgetBay(self.voltHandler('CH3')) def widget_volt4(self): self.updateWidgetBay(self.voltHandler('CH4')) def widget_volt5(self): self.updateWidgetBay(self.voltHandler('CH5')) class voltAllHandler(QFrame,Ui_ClickingOptions): def __init__(self): super(CustomWidgets.voltAllHandler, self).__init__() #QFrame.__init__(self) #Ui_Sliding.__init__(self) self.I=interface.Interface() self.setupUi(self) self.names=['CH1','CH2','CH3','CH4','CH5','CH6','CH7','CH8','CH9','5V','9V','IN1','SEN'] self.button.setText('Read') self.items.addItems(self.names) def clicked(self): val = self.I.get_average_voltage(self.items.currentText()) self.label.setText('%.3f V'%(val)) def 
widget_voltAll(self): self.updateWidgetBay(self.voltAllHandler()) def widget_inductance(self): class Handler(QFrame,Ui_Clicking): def __init__(self): super(Handler, self).__init__() self.I=interface.Interface() self.setupUi(self) self.button.setText('INDUCTANCE') def clicked(self): val = self.I.get_inductance() self.label.setText('%.3f'%(val)) self.updateWidgetBay(Handler()) class timingHandler(QFrame,Ui_ClickingOptions): def __init__(self,cmd): super(CustomWidgets.timingHandler, self).__init__() #QFrame.__init__(self) #Ui_Sliding.__init__(self) self.I=interface.Interface() self.setupUi(self) self.cmd = getattr(self.I,cmd) self.cmdname=cmd self.button.setText(cmd) self.items.addItems(['ID1','ID2','ID3','ID4','LMETER']) def clicked(self): val = self.cmd(self.items.currentIndex()) if self.cmdname=='duty_cycle': if(val[0]!=-1):p=100*val[1]/val[0] else: p=0 self.label.setText(' %.2f %%'%(p)) elif 'time' in self.cmdname:self.label.setText('%.2e S'%(val)) else:self.label.setText('%.1f Hz'%(val)) def widget_freq(self): self.updateWidgetBay(self.timingHandler('get_freq')) def widget_high_freq(self): self.updateWidgetBay(self.timingHandler('get_high_freq')) def widget_f2ftime(self): self.updateWidgetBay(self.timingHandler('f2f_time')) def widget_r2rtime(self): self.updateWidgetBay(self.timingHandler('r2r_time')) def widget_dutycycle(self): self.updateWidgetBay(self.timingHandler('duty_cycle')) def widget_pulse(self): self.updateWidgetBay(self.timingHandler('pulse_time')) class sourceHandler(QFrame,Ui_Sliding): def __init__(self,name): super(CustomWidgets.sourceHandler, self).__init__() self.I=interface.Interface() self.setupUi(self) self.name=name if name=='pvs1': self.slider.setRange(0,4095) if name=='pvs2': self.slider.setRange(0,4095) elif name=='pvs3': self.slider.setRange(0,31) elif name=='pcs': self.slider.setRange(0,31) def setValue(self,val): if self.name=='pvs1': retval=self.I.set_pvs1(val*10./4095 - 5) elif self.name=='pvs2': retval=self.I.set_pvs2(val*3.3/4095) elif self.name=='pvs3': retval=self.I.set_pvs3(val*6.6/31 - 3.3) elif self.name=='pcs': retval=self.I.set_pcs(val*3.3/31) self.label.setText(self.name+': %.3f'%(retval)) def widget_pvs1(self): self.updateWidgetBay(self.sourceHandler('pvs1')) def widget_pvs2(self): self.updateWidgetBay(self.sourceHandler('pvs2')) def widget_pvs3(self): self.updateWidgetBay(self.sourceHandler('pvs3')) def widget_pcs(self): self.updateWidgetBay(self.sourceHandler('pcs'))
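A small, hedged sketch of the signal-wiring pattern that CustomWidgets.assignCommand relies on. Note that the module above calls functools.partial without importing functools, so any real use needs the extra import shown here; the button label and channel name are illustrative assumptions.

import functools  # required by assignCommand() above but not imported there
from PyQt4.QtCore import QObject, SIGNAL
from PyQt4.QtGui import QApplication, QPushButton

def read_channel(chan):
    print "reading", chan  # Python 2 print, matching the module above

app = QApplication([])
button = QPushButton("READ CH1")
# Same pattern as assignCommand(): pre-bind the extra arguments with
# functools.partial, then connect the old-style SIGNAL to the bound callable.
callback = functools.partial(read_channel, 'CH1')
QObject.connect(button, SIGNAL('clicked()'), callback)
button.show()
app.exec_()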
PypiClean
/pulumi_azure_native-2.5.1a1693590910.tar.gz/pulumi_azure_native-2.5.1a1693590910/pulumi_azure_native/network/v20230201/public_ip_prefix.py
import copy import warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from ... import _utilities from . import outputs from ._enums import * from ._inputs import * __all__ = ['PublicIPPrefixArgs', 'PublicIPPrefix'] @pulumi.input_type class PublicIPPrefixArgs: def __init__(__self__, *, resource_group_name: pulumi.Input[str], custom_ip_prefix: Optional[pulumi.Input['SubResourceArgs']] = None, extended_location: Optional[pulumi.Input['ExtendedLocationArgs']] = None, id: Optional[pulumi.Input[str]] = None, ip_tags: Optional[pulumi.Input[Sequence[pulumi.Input['IpTagArgs']]]] = None, location: Optional[pulumi.Input[str]] = None, nat_gateway: Optional[pulumi.Input['NatGatewayArgs']] = None, prefix_length: Optional[pulumi.Input[int]] = None, public_ip_address_version: Optional[pulumi.Input[Union[str, 'IPVersion']]] = None, public_ip_prefix_name: Optional[pulumi.Input[str]] = None, sku: Optional[pulumi.Input['PublicIPPrefixSkuArgs']] = None, tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None, zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None): """ The set of arguments for constructing a PublicIPPrefix resource. :param pulumi.Input[str] resource_group_name: The name of the resource group. :param pulumi.Input['SubResourceArgs'] custom_ip_prefix: The customIpPrefix that this prefix is associated with. :param pulumi.Input['ExtendedLocationArgs'] extended_location: The extended location of the public ip address. :param pulumi.Input[str] id: Resource ID. :param pulumi.Input[Sequence[pulumi.Input['IpTagArgs']]] ip_tags: The list of tags associated with the public IP prefix. :param pulumi.Input[str] location: Resource location. :param pulumi.Input['NatGatewayArgs'] nat_gateway: NatGateway of Public IP Prefix. :param pulumi.Input[int] prefix_length: The Length of the Public IP Prefix. :param pulumi.Input[Union[str, 'IPVersion']] public_ip_address_version: The public IP address version. :param pulumi.Input[str] public_ip_prefix_name: The name of the public IP prefix. :param pulumi.Input['PublicIPPrefixSkuArgs'] sku: The public IP prefix SKU. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Resource tags. :param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of availability zones denoting the IP allocated for the resource needs to come from. """ pulumi.set(__self__, "resource_group_name", resource_group_name) if custom_ip_prefix is not None: pulumi.set(__self__, "custom_ip_prefix", custom_ip_prefix) if extended_location is not None: pulumi.set(__self__, "extended_location", extended_location) if id is not None: pulumi.set(__self__, "id", id) if ip_tags is not None: pulumi.set(__self__, "ip_tags", ip_tags) if location is not None: pulumi.set(__self__, "location", location) if nat_gateway is not None: pulumi.set(__self__, "nat_gateway", nat_gateway) if prefix_length is not None: pulumi.set(__self__, "prefix_length", prefix_length) if public_ip_address_version is not None: pulumi.set(__self__, "public_ip_address_version", public_ip_address_version) if public_ip_prefix_name is not None: pulumi.set(__self__, "public_ip_prefix_name", public_ip_prefix_name) if sku is not None: pulumi.set(__self__, "sku", sku) if tags is not None: pulumi.set(__self__, "tags", tags) if zones is not None: pulumi.set(__self__, "zones", zones) @property @pulumi.getter(name="resourceGroupName") def resource_group_name(self) -> pulumi.Input[str]: """ The name of the resource group. 
""" return pulumi.get(self, "resource_group_name") @resource_group_name.setter def resource_group_name(self, value: pulumi.Input[str]): pulumi.set(self, "resource_group_name", value) @property @pulumi.getter(name="customIPPrefix") def custom_ip_prefix(self) -> Optional[pulumi.Input['SubResourceArgs']]: """ The customIpPrefix that this prefix is associated with. """ return pulumi.get(self, "custom_ip_prefix") @custom_ip_prefix.setter def custom_ip_prefix(self, value: Optional[pulumi.Input['SubResourceArgs']]): pulumi.set(self, "custom_ip_prefix", value) @property @pulumi.getter(name="extendedLocation") def extended_location(self) -> Optional[pulumi.Input['ExtendedLocationArgs']]: """ The extended location of the public ip address. """ return pulumi.get(self, "extended_location") @extended_location.setter def extended_location(self, value: Optional[pulumi.Input['ExtendedLocationArgs']]): pulumi.set(self, "extended_location", value) @property @pulumi.getter def id(self) -> Optional[pulumi.Input[str]]: """ Resource ID. """ return pulumi.get(self, "id") @id.setter def id(self, value: Optional[pulumi.Input[str]]): pulumi.set(self, "id", value) @property @pulumi.getter(name="ipTags") def ip_tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IpTagArgs']]]]: """ The list of tags associated with the public IP prefix. """ return pulumi.get(self, "ip_tags") @ip_tags.setter def ip_tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['IpTagArgs']]]]): pulumi.set(self, "ip_tags", value) @property @pulumi.getter def location(self) -> Optional[pulumi.Input[str]]: """ Resource location. """ return pulumi.get(self, "location") @location.setter def location(self, value: Optional[pulumi.Input[str]]): pulumi.set(self, "location", value) @property @pulumi.getter(name="natGateway") def nat_gateway(self) -> Optional[pulumi.Input['NatGatewayArgs']]: """ NatGateway of Public IP Prefix. """ return pulumi.get(self, "nat_gateway") @nat_gateway.setter def nat_gateway(self, value: Optional[pulumi.Input['NatGatewayArgs']]): pulumi.set(self, "nat_gateway", value) @property @pulumi.getter(name="prefixLength") def prefix_length(self) -> Optional[pulumi.Input[int]]: """ The Length of the Public IP Prefix. """ return pulumi.get(self, "prefix_length") @prefix_length.setter def prefix_length(self, value: Optional[pulumi.Input[int]]): pulumi.set(self, "prefix_length", value) @property @pulumi.getter(name="publicIPAddressVersion") def public_ip_address_version(self) -> Optional[pulumi.Input[Union[str, 'IPVersion']]]: """ The public IP address version. """ return pulumi.get(self, "public_ip_address_version") @public_ip_address_version.setter def public_ip_address_version(self, value: Optional[pulumi.Input[Union[str, 'IPVersion']]]): pulumi.set(self, "public_ip_address_version", value) @property @pulumi.getter(name="publicIpPrefixName") def public_ip_prefix_name(self) -> Optional[pulumi.Input[str]]: """ The name of the public IP prefix. """ return pulumi.get(self, "public_ip_prefix_name") @public_ip_prefix_name.setter def public_ip_prefix_name(self, value: Optional[pulumi.Input[str]]): pulumi.set(self, "public_ip_prefix_name", value) @property @pulumi.getter def sku(self) -> Optional[pulumi.Input['PublicIPPrefixSkuArgs']]: """ The public IP prefix SKU. """ return pulumi.get(self, "sku") @sku.setter def sku(self, value: Optional[pulumi.Input['PublicIPPrefixSkuArgs']]): pulumi.set(self, "sku", value) @property @pulumi.getter def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]: """ Resource tags. 
""" return pulumi.get(self, "tags") @tags.setter def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]): pulumi.set(self, "tags", value) @property @pulumi.getter def zones(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: """ A list of availability zones denoting the IP allocated for the resource needs to come from. """ return pulumi.get(self, "zones") @zones.setter def zones(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]): pulumi.set(self, "zones", value) class PublicIPPrefix(pulumi.CustomResource): @overload def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, custom_ip_prefix: Optional[pulumi.Input[pulumi.InputType['SubResourceArgs']]] = None, extended_location: Optional[pulumi.Input[pulumi.InputType['ExtendedLocationArgs']]] = None, id: Optional[pulumi.Input[str]] = None, ip_tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['IpTagArgs']]]]] = None, location: Optional[pulumi.Input[str]] = None, nat_gateway: Optional[pulumi.Input[pulumi.InputType['NatGatewayArgs']]] = None, prefix_length: Optional[pulumi.Input[int]] = None, public_ip_address_version: Optional[pulumi.Input[Union[str, 'IPVersion']]] = None, public_ip_prefix_name: Optional[pulumi.Input[str]] = None, resource_group_name: Optional[pulumi.Input[str]] = None, sku: Optional[pulumi.Input[pulumi.InputType['PublicIPPrefixSkuArgs']]] = None, tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None, zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None, __props__=None): """ Public IP prefix resource. :param str resource_name: The name of the resource. :param pulumi.ResourceOptions opts: Options for the resource. :param pulumi.Input[pulumi.InputType['SubResourceArgs']] custom_ip_prefix: The customIpPrefix that this prefix is associated with. :param pulumi.Input[pulumi.InputType['ExtendedLocationArgs']] extended_location: The extended location of the public ip address. :param pulumi.Input[str] id: Resource ID. :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['IpTagArgs']]]] ip_tags: The list of tags associated with the public IP prefix. :param pulumi.Input[str] location: Resource location. :param pulumi.Input[pulumi.InputType['NatGatewayArgs']] nat_gateway: NatGateway of Public IP Prefix. :param pulumi.Input[int] prefix_length: The Length of the Public IP Prefix. :param pulumi.Input[Union[str, 'IPVersion']] public_ip_address_version: The public IP address version. :param pulumi.Input[str] public_ip_prefix_name: The name of the public IP prefix. :param pulumi.Input[str] resource_group_name: The name of the resource group. :param pulumi.Input[pulumi.InputType['PublicIPPrefixSkuArgs']] sku: The public IP prefix SKU. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Resource tags. :param pulumi.Input[Sequence[pulumi.Input[str]]] zones: A list of availability zones denoting the IP allocated for the resource needs to come from. """ ... @overload def __init__(__self__, resource_name: str, args: PublicIPPrefixArgs, opts: Optional[pulumi.ResourceOptions] = None): """ Public IP prefix resource. :param str resource_name: The name of the resource. :param PublicIPPrefixArgs args: The arguments to use to populate this resource's properties. :param pulumi.ResourceOptions opts: Options for the resource. """ ... 
def __init__(__self__, resource_name: str, *args, **kwargs): resource_args, opts = _utilities.get_resource_args_opts(PublicIPPrefixArgs, pulumi.ResourceOptions, *args, **kwargs) if resource_args is not None: __self__._internal_init(resource_name, opts, **resource_args.__dict__) else: __self__._internal_init(resource_name, *args, **kwargs) def _internal_init(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, custom_ip_prefix: Optional[pulumi.Input[pulumi.InputType['SubResourceArgs']]] = None, extended_location: Optional[pulumi.Input[pulumi.InputType['ExtendedLocationArgs']]] = None, id: Optional[pulumi.Input[str]] = None, ip_tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['IpTagArgs']]]]] = None, location: Optional[pulumi.Input[str]] = None, nat_gateway: Optional[pulumi.Input[pulumi.InputType['NatGatewayArgs']]] = None, prefix_length: Optional[pulumi.Input[int]] = None, public_ip_address_version: Optional[pulumi.Input[Union[str, 'IPVersion']]] = None, public_ip_prefix_name: Optional[pulumi.Input[str]] = None, resource_group_name: Optional[pulumi.Input[str]] = None, sku: Optional[pulumi.Input[pulumi.InputType['PublicIPPrefixSkuArgs']]] = None, tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None, zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None, __props__=None): opts = pulumi.ResourceOptions.merge(_utilities.get_resource_opts_defaults(), opts) if not isinstance(opts, pulumi.ResourceOptions): raise TypeError('Expected resource options to be a ResourceOptions instance') if opts.id is None: if __props__ is not None: raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource') __props__ = PublicIPPrefixArgs.__new__(PublicIPPrefixArgs) __props__.__dict__["custom_ip_prefix"] = custom_ip_prefix __props__.__dict__["extended_location"] = extended_location __props__.__dict__["id"] = id __props__.__dict__["ip_tags"] = ip_tags __props__.__dict__["location"] = location __props__.__dict__["nat_gateway"] = nat_gateway __props__.__dict__["prefix_length"] = prefix_length __props__.__dict__["public_ip_address_version"] = public_ip_address_version __props__.__dict__["public_ip_prefix_name"] = public_ip_prefix_name if resource_group_name is None and not opts.urn: raise TypeError("Missing required property 'resource_group_name'") __props__.__dict__["resource_group_name"] = resource_group_name __props__.__dict__["sku"] = sku __props__.__dict__["tags"] = tags __props__.__dict__["zones"] = zones __props__.__dict__["etag"] = None __props__.__dict__["ip_prefix"] = None __props__.__dict__["load_balancer_frontend_ip_configuration"] = None __props__.__dict__["name"] = None __props__.__dict__["provisioning_state"] = None __props__.__dict__["public_ip_addresses"] = None __props__.__dict__["resource_guid"] = None __props__.__dict__["type"] = None alias_opts = pulumi.ResourceOptions(aliases=[pulumi.Alias(type_="azure-native:network:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20180701:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20180801:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20181001:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20181101:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20181201:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20190201:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20190401:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20190601:PublicIPPrefix"), 
pulumi.Alias(type_="azure-native:network/v20190701:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20190801:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20190901:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20191101:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20191201:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200301:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200401:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200501:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200601:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200701:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20200801:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20201101:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20210201:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20210301:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20210501:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20210801:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20220101:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20220501:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20220701:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20220901:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20221101:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20230401:PublicIPPrefix"), pulumi.Alias(type_="azure-native:network/v20230501:PublicIPPrefix")]) opts = pulumi.ResourceOptions.merge(opts, alias_opts) super(PublicIPPrefix, __self__).__init__( 'azure-native:network/v20230201:PublicIPPrefix', resource_name, __props__, opts) @staticmethod def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions] = None) -> 'PublicIPPrefix': """ Get an existing PublicIPPrefix resource's state with the given name, id, and optional extra properties used to qualify the lookup. :param str resource_name: The unique name of the resulting resource. :param pulumi.Input[str] id: The unique provider ID of the resource to lookup. :param pulumi.ResourceOptions opts: Options for the resource. """ opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id)) __props__ = PublicIPPrefixArgs.__new__(PublicIPPrefixArgs) __props__.__dict__["custom_ip_prefix"] = None __props__.__dict__["etag"] = None __props__.__dict__["extended_location"] = None __props__.__dict__["ip_prefix"] = None __props__.__dict__["ip_tags"] = None __props__.__dict__["load_balancer_frontend_ip_configuration"] = None __props__.__dict__["location"] = None __props__.__dict__["name"] = None __props__.__dict__["nat_gateway"] = None __props__.__dict__["prefix_length"] = None __props__.__dict__["provisioning_state"] = None __props__.__dict__["public_ip_address_version"] = None __props__.__dict__["public_ip_addresses"] = None __props__.__dict__["resource_guid"] = None __props__.__dict__["sku"] = None __props__.__dict__["tags"] = None __props__.__dict__["type"] = None __props__.__dict__["zones"] = None return PublicIPPrefix(resource_name, opts=opts, __props__=__props__) @property @pulumi.getter(name="customIPPrefix") def custom_ip_prefix(self) -> pulumi.Output[Optional['outputs.SubResourceResponse']]: """ The customIpPrefix that this prefix is associated with. 
""" return pulumi.get(self, "custom_ip_prefix") @property @pulumi.getter def etag(self) -> pulumi.Output[str]: """ A unique read-only string that changes whenever the resource is updated. """ return pulumi.get(self, "etag") @property @pulumi.getter(name="extendedLocation") def extended_location(self) -> pulumi.Output[Optional['outputs.ExtendedLocationResponse']]: """ The extended location of the public ip address. """ return pulumi.get(self, "extended_location") @property @pulumi.getter(name="ipPrefix") def ip_prefix(self) -> pulumi.Output[str]: """ The allocated Prefix. """ return pulumi.get(self, "ip_prefix") @property @pulumi.getter(name="ipTags") def ip_tags(self) -> pulumi.Output[Optional[Sequence['outputs.IpTagResponse']]]: """ The list of tags associated with the public IP prefix. """ return pulumi.get(self, "ip_tags") @property @pulumi.getter(name="loadBalancerFrontendIpConfiguration") def load_balancer_frontend_ip_configuration(self) -> pulumi.Output['outputs.SubResourceResponse']: """ The reference to load balancer frontend IP configuration associated with the public IP prefix. """ return pulumi.get(self, "load_balancer_frontend_ip_configuration") @property @pulumi.getter def location(self) -> pulumi.Output[Optional[str]]: """ Resource location. """ return pulumi.get(self, "location") @property @pulumi.getter def name(self) -> pulumi.Output[str]: """ Resource name. """ return pulumi.get(self, "name") @property @pulumi.getter(name="natGateway") def nat_gateway(self) -> pulumi.Output[Optional['outputs.NatGatewayResponse']]: """ NatGateway of Public IP Prefix. """ return pulumi.get(self, "nat_gateway") @property @pulumi.getter(name="prefixLength") def prefix_length(self) -> pulumi.Output[Optional[int]]: """ The Length of the Public IP Prefix. """ return pulumi.get(self, "prefix_length") @property @pulumi.getter(name="provisioningState") def provisioning_state(self) -> pulumi.Output[str]: """ The provisioning state of the public IP prefix resource. """ return pulumi.get(self, "provisioning_state") @property @pulumi.getter(name="publicIPAddressVersion") def public_ip_address_version(self) -> pulumi.Output[Optional[str]]: """ The public IP address version. """ return pulumi.get(self, "public_ip_address_version") @property @pulumi.getter(name="publicIPAddresses") def public_ip_addresses(self) -> pulumi.Output[Sequence['outputs.ReferencedPublicIpAddressResponse']]: """ The list of all referenced PublicIPAddresses. """ return pulumi.get(self, "public_ip_addresses") @property @pulumi.getter(name="resourceGuid") def resource_guid(self) -> pulumi.Output[str]: """ The resource GUID property of the public IP prefix resource. """ return pulumi.get(self, "resource_guid") @property @pulumi.getter def sku(self) -> pulumi.Output[Optional['outputs.PublicIPPrefixSkuResponse']]: """ The public IP prefix SKU. """ return pulumi.get(self, "sku") @property @pulumi.getter def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]: """ Resource tags. """ return pulumi.get(self, "tags") @property @pulumi.getter def type(self) -> pulumi.Output[str]: """ Resource type. """ return pulumi.get(self, "type") @property @pulumi.getter def zones(self) -> pulumi.Output[Optional[Sequence[str]]]: """ A list of availability zones denoting the IP allocated for the resource needs to come from. """ return pulumi.get(self, "zones")
PypiClean
/python-reapy-0.10.0.tar.gz/python-reapy-0.10.0/reapy/tools/network/server.py
import reapy from reapy.tools import json from .socket import Socket import socket import traceback class Server(Socket): """ Server part of the ``reapy`` dist API. It is instantiated inside REAPER. It receives and processes API call requests coming from the outside. """ def __init__(self, port): super().__init__() self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.bind(("0.0.0.0", port)) self.listen() self.connections = {} self.settimeout(.0001) @Socket._non_blocking def _get_request(self, connection, address): try: request = connection.recv() request = json.loads(request.decode()) except (ConnectionAbortedError, ConnectionResetError): # Client has disconnected # Pretend client has nicely requested to disconnect input = {"args": (address, ), "kwargs": {}} request = { "function": self.disconnect, "input": input } return request def _hold_connection(self, address): connection = self.connections[address] result = {"type": "result", "value": None} self._send_result(connection, result) request = self._get_request(connection, address) while request is None or request["function"] != "RELEASE": if request is None: request = self._get_request(connection, address) continue result = self._process_request(request, address) try: self._send_result(connection, result) request = self._get_request(connection, address) except (ConnectionAbortedError, ConnectionResetError): # request was to disconnect request = {"function": "RELEASE"} result = {"type": "result", "value": None} return result def _process_request(self, request, address): if request["function"] == "HOLD": return self._hold_connection(address) args, kwargs = request["input"]["args"], request["input"]["kwargs"] result = {} try: result["value"] = request["function"](*args, **kwargs) result["type"] = "result" except Exception: # Errors are sent back to the client instead of raised in REAPER # (which would cause the server to crash). result["traceback"] = traceback.format_exc() result["type"] = "error" return result def _send_result(self, connection, result): result = json.dumps(result).encode() connection.send(result) @Socket._non_blocking def accept(self, *args, **kwargs): connection, address = super().accept() self.connections[address] = connection connection.send("{}".format(address).encode("ascii")) return connection, address def disconnect(self, address): connection = self.connections[address] connection.shutdown(socket.SHUT_RDWR) connection.close() del self.connections[address] def get_requests(self): requests = {} for address, connection in self.connections.items(): request = self._get_request(connection, address) if request is not None: requests[address] = request return requests def process_requests(self, requests): results = {} for address, request in requests.items(): result = self._process_request(request, address) results[address] = result return results def send_results(self, results): for address, result in results.items(): try: connection = self.connections[address] self._send_result(connection, result) except ( KeyError, BrokenPipeError, ConnectionAbortedError, ConnectionResetError ): # Happens when the client requested to disconnect. # Nothing must be returned in that case. pass
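A hedged sketch (not part of the module) of the polling loop a host script running inside REAPER could use to drive the Server above. The port number and the use of reapy.defer for rescheduling are illustrative assumptions; only Server methods defined above are called.

import reapy
from reapy.tools.network.server import Server

SERVER = Server(2306)  # assumed port

def main_loop():
    # Accept any pending client connection (non-blocking via the socket timeout).
    SERVER.accept()
    # Pull at most one request per connected client...
    requests = SERVER.get_requests()
    # ...run the requested API functions inside REAPER...
    results = SERVER.process_requests(requests)
    # ...and ship results (or formatted tracebacks) back to the clients.
    SERVER.send_results(results)
    # Re-schedule ourselves so REAPER's UI stays responsive.
    reapy.defer(main_loop)

main_loop()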
PypiClean
/love_course_2016_2019-2023.3.1.0-py3-none-any.whl/LoveCourse20162019/docs/ai-shang-qing-gan/爱上情感《魅力男神全套》:07读懂女人心(一开口,就让女人知道你懂她):阻碍你吸引女人的错误思维!教你清除头脑中的垃圾价值观!.md
# 爱上情感《魅力男神全套》:07 读懂女人心(一开口,就让女人知道你懂她):阻碍你吸引女人的错误思维!教你清除头脑中的垃圾价值观! 【鋼琴】,大家帶來了一集我們這個,內涯精品靠,我們一期要看今天要講的主題,其實今天這個主題,剛我真的非常多,你要講的是,大部分男生,對於女生的錯誤的思維導致什麼呢,導致沒有女孩喜歡你,幫主在下去。 清楚你練習中的那些垃圾下之關,很多兄弟們,你自己錯誤的思維導致沒有女孩喜歡你,但是你自己可能還不知道,但是這些思維正在,正在幹嘛的約束著你,正在限制著你,沒有女孩喜歡,那麼如果說你能夠在你的腦海當中。 把這些垃圾的錯誤的思維,錯誤的練習下之關,去爭調,去爭調之後,那麼就會有很多女孩喜歡你了,那麼這款那種希望大家認認的去聽,我們今天講到這些錯誤的思維,以及拉著的價值觀,可能90%的男生身上都會有。 特別是有一些新手,有一些職男,有一些剛剛接觸我們的一些新的粉絲,你們身上可能都會有,我把今天講一下你們就會明白,那麼念愛中的拉著的價值觀,有哪些呢,一共分為四大塊,一共分為四大塊,那麼哪四大塊呢。 第一塊就都平派女孩,那麼之前我們在講課的那種,也講過很多,我們總是告訴大家,不要去平派女孩,不要去平派女孩,但是呢,還有大量的男生喜歡去平派女孩,比如女孩晚睡了,或者說睡得很晚,就會平派女孩。 這個女孩不是好女人,所以說呢,不要去平派女孩,我們後面也會相信對去講,那麼第二點是什麼,是處女情節,很多人現在會有處女情節,對不對,那麼我後面也會相信,把這個案例來講,這個情況來講,對吧。 那麼之前那是你練愛當中的一些錯誤的價值觀,如果說你有就代表什麼,代表你是個路段,代表你不是一個優秀的男人,那麼就一個成功的男人,一個有妹的男人,他是不會去平派女孩的,而且他是絕對不會有處女情節的。 那麼第三個什麼,叫做好女孩情節,好女孩情節是什麼呢,我們會不會具體的來講,那麼第四塊是共良者行為,那麼今天那口誠主要分為這四大塊,我會一塊一塊的來給大家去講,你錯誤的行為導致,你沒有女孩喜歡你。 你錯誤的練愛價值觀,導致沒有女孩喜歡你,那麼首先我們來講第一點,叫做平派女人,那麼,我現在問大家一個問題,我現在問大家一個問題,你現在和一個女孩正在約會,你們正在聊天,然後你們彼此聊到了感情經歷。 然後如果說你為女孩說,你談過幾個男朋友,你為女孩說你談過幾個男朋友,假如說這個女孩,他談過十個男朋友,你覺得這個女孩會不會真實的去告訴你,他談過十個,你們思考這個問題,把你們的絕對這個答案打出來。 那麼不再比如說,你們在聊天在這個約會,聊天的過程當中,這個女孩問你,問你說你談過幾個女朋友,假如說你談過十個,你會不會又如實的去告訴你女孩呢,你敢不敢去告訴女孩呢,我們現在講第一個問題,問題一。 在約會的時候女生,問你談過幾個女孩,談過幾個男朋友,但是女孩假如說他談過十個,假如他談過十個男朋友,你覺得女孩會不會如實地告訴你,那麼就像下面這個答案一樣,很多兄弟們打出來的這個答案是什麼意思。 是什麼呢,是兩個字,不會,那我會發現什麼,很多兄弟們還是比較聰明了,還是比較明白社會上一些問題,那麼你們要思考一個問題,為什麼女孩不敢真實地,把他談過十個男朋友的行為去告訴你,你們來思考這個問題。 你們來思考一個這麼一個問題,為什麼呢,其實說白的是什麼呢,是因為女人害怕你評價他,所以女孩談過十個男朋友之後,他不敢去如實地跟你講,所有的女孩,都會跟你說我談過三個,三是一個很標準的詞語。 所有的女孩都會這樣去講,那麼女生是絕對不敢如實地跟你去講,為什麼女孩不敢如實地,就跟你說他談過十個男朋友,或者說他談過五個六個,他不敢如實地去跟你講,是因為女人害怕你評價他,你想一下。 一個女孩如果談過十個男朋友,然後讓你知道了,讓你知道之後呢,你會怎麼去想這個女孩,你會幹嘛,你會覺得這個女人很鬧,你會覺得這個女人很水性氧化,你會覺得這個女人不是一個好女人,這個女人是個扎女。 這個女人幹嘛呢,不是那種對吧,可以談戀愛的,不是可以結婚的女人,你就會覺得這個女孩不是一個好女人,對不對,是不是如此,你們想一下,就是因為你們,這個社會人大部分的男人,會去評價女孩。 所以導致女生沒辦法跟你去說真話,女生只敢跟你去說,我談過一個,我談過三個,我談過兩個女生只敢去這樣給你小,女人不敢跟你去說真話,因為女人呢,因為現在這個女孩,他不知道你是不是一個會評價,這個女人的男人。 很多這個,從來很多這個質難,一聽到女孩談了,比如說談了六個吧,談了七個,或者說談了十個男朋友,就會去評價他,我們這是第一個問題,我們把它講解,講解為什麼女孩不敢去跟你說,就是因為女孩,還把你評價他。 那麼第二個,第二個問題是,假用女孩,問你,假用女孩問你說,問你談過幾個女朋友,假如說你談過十個,你敢不敢入實地去跟你說呢,那麼這件事情呢,是一個成兩面性的,這個問題是成兩面性,成兩面性,首先來說。 可能大部分的男生,你沒有談過十個,對不對,那麼這種情況,很多的,貼難,很多貼難,就是很多這個社會上,沒有談過戀愛的一些男生,你去問他說你談過戀愛嗎,他可能沒有談過戀愛,但是他跟他幹嘛,他會撒謊。 他不是我談過戀愛,我談過三次個,也有談過挺多的呀,那也就是什麼呢,很多男生,很多貼難,會在這件事情上去,誇大自己的數量,假如他談過一個女朋友,他可能會別人問他,說你談過幾段戀愛,他說我談過三段。 我談過四段,他會去幹嘛,他會去誇大自己的這個,談戀愛的這個次數,這就是社會上的一些現狀,但是女人不會這樣去做,女人只會縮減,只會去縮減,女人,幹嘛呢,他要成一二,女人比如說他跟你說,談過三個。 那你可能要成一二,他去把他談過六個,但是他在於這件事情了,在於這件事情,如果說有女孩,真實的去問你說,你談過幾個女朋友,假如說你真的談過十個,你也沒有關係,你也不需要去隱瞞,你也不需要隱瞞。 你也不用誇大去說,你只需要幹嘛,你只需要,如實的據說,你談過幾個女朋友就可以了,你不要去撒謊,因為女孩是能夠,他的社交社區,是能夠很好的去,判斷出來,你是在撒謊,還是在真實的據說。 假如你只談過一個女朋友,你跟這個女孩說,我談過六個,女孩很明顯,就知道你在撒謊,但是你沒有談過六個,你沒有談過十個,你千萬不要撒謊,去誇大你的數量去講,你如實的據說就可以了,但是如果你真的談過十個。 你也可以幹嘛,你也可以如實的去跟女孩講,因為真正一個有魅力的男人幹嘛,你要誠實,你要真實,做真實的你自己,但是你在講的過程當中,你不能讓別人覺得你很炫耀,你是愛我談過十個,我密不厲害。 你不能以這樣炫耀的姿態,炫耀的這種感覺,去傳給女孩,作為一個男人,作為一個有魅力的男人,你要敢於說實話,敢於說真話,你還要讓女人知道,你不會評價女人,你不會再乎女孩,你也要讓女孩知道說。 你不再乎他對你的評價,這個能理解嗎,就是說我們作為男人,作為一個有魅力的男人,你不能再乎女孩,別人對你的評價,就說女人為這種不敢去如說的說,是因為他害怕別人對他評價,但是我們作為一個有魅力的男人。 我們是不害怕,別人對我們的評價的,比如說我去跟女孩說,我談過十個,我是不會害怕這個女孩,對我有評價說我是個砸男,說我是一個不好的男人,我不會害怕這樣的評價了,因為這就是真實的我自己,我就談過十個沒關係。 我願意跟你去講,那麼這是一種非常自信,非常有魅力的感覺,女孩會被你身上這種戲之所吸引,但是她坐奶王,我在這件事情做得非常的失敗,她會去誇大自己的數量去講,那麼女孩她的社交解決,就會很明顯的知道。 這是在吹牛,這是在吹牛,那麼這就會給女生一種不好的感覺,所以大家要記錄一句話是什麼,如果說你想讓女生在你面前,謝謝她的面具,那麼首先你需要在女生面前,做一個真實的你自己,這句話也是之前。 我非常喜歡去講的,我經常去講的,你想一下,我們今天還想讓女孩,在我面前做真實的自己對不對,但是首先你自己,你作為一個男人,都不敢在女生面前做真實的你自己,你覺得這個女孩會在你面前,做真實的她嗎。 我在聊歐洲人家中講過一句話,如果成為一個內心強大的男人,你要幹嘛,首先第一點要幹嘛,你要敢一句說真話,作為一個內心真正強大的男人,是不在乎別人別人,外人對我的自身的一個評價的,我會是敢一句說實話的。 敢一句說真話的,所以這句話真的非常的重要,如果說你想讓女孩在你面前,真實的去表達自己,她喜歡什麼,她不喜歡什麼,她談過幾段年愛,她喜歡等等一些更私密的一些問題,你想讓女孩在你面前,真實的去聊這些東西。 
去討論這些東西,那麼前提是,你自己需要在她面前先做真實的自己,你不能是的吧,你不能是你很這個號色,或者說你很喜歡女孩的身材,但是你去了跟你你嘴上說,你喜歡吃什麼,但是你的眼神卻在表女孩的身材,對不對。 所以這句話我再重複一下,如果說你想讓女生在你面前,摘下面具做真實的自己,那麼首先你需要在女生面前,做真實的自己,作為一個內心強大的男人,你要敢於去說真話,敢於去說實話,把她這個句話,如果說你平價女生。 如果說你是一個平價女生的男人,那麼女人會通過你的言行舉止,去判斷出來,女孩是很容易判斷出來,你會不會平價女孩,因為只要是坐男,只要是吊絲,都會去平價女人,那麼一旦女生知道了,你是一個會平價女人的男人。 那麼女人在你面前,就再也不會說真話了,再也不會說實話了,那麼你也別想和這個女孩在一起了,因為女孩在你面前,會包裹得非常地嚴,然後舉個非常簡單的例子,比如說女孩抽菸,比如說女孩抽菸。 但是你傳遞出來的一種感覺,你覺得你傳遞的感覺,是什麼呢,比如說抽菸的女孩,都是壞女孩,你傳遞出來這麼一種感覺,就是什麼呢,這是在幹嘛,你這是在平價女人,那麼這個女孩,知道了你會平價女人,這個女人。 永遠都不會在你面前再抽菸了,但是相對的,你也沒有辦法在和這個女孩在一起了,因為這個女孩在你面前,沒有辦法真實的,做他自己,所以記住,平價女人會讓你得不到女人,平價女人會讓你得不到女人,那麼這是這一點。 大家懂了嗎,大家呢,不要說通過女孩的某一個行為,就覺得這個女孩,為什麼不去平價女孩呢,你要知道人是有雙面性的,人是有雙面性的,比如說一個歎圖,一個歎圖非常凶狠的一個壞人,非常凶狠的一個壞人,他是個歎圖。 是個很壞的人,但是他也有可能是一個,很慈祥,很有父愛的一個父親,所以人是有雙面性的,大家去看這個,看電影,每個電影,比如說有一個電影,有一個,有一個做小姐的一個女孩,比如說一個做小姐的一個女人。 他可能是一個封城女子,對不對,他是一個封城女子,但是相對,他也可能是一個母親,他也可能是一個,這個非常好,對自己孩子非常好,非常有母愛的一個母席,所以人是有雙面性的,你不要通過一個人。 他的某一個行為去評價他,比如說之前有一部電影,叫做我不是藥神,我不知道有多少人看過,我不是藥神,里邊有一個,里邊有一個女生,叫做他是跳光光舞的,他在那個夜店裡跳光光舞,那麼但是,實際上他的孩子得了。 還要得了這個,白眼病,還要是他需要去振錢,去幹嘛呢,去讓自己的,給自己的孩子去治病,這個你不能說,你不能如果通過判了,就這個女孩,在夜店裡,跳光上舞這個女孩,就是一個鬧女,這個女孩就是一個。 這個不好的女人,說的,不要去評價女孩,評價女人,會讓你得不到女人,那麼這一點,聽懂了,可以敲一個6,那麼如何做到不評價女人,怎麼做到不評價女人,想知道了嗎,想知道了,可以摔一波,我來給她舉個例子。 比如說,你問女生談過幾個男朋友,比如說你問女孩,這個問題,但是同時你要幹嘛,你要傳遞出來,你要表達出來,你不會評價別人的觀點,你要讓女人知道,你可以真實地在我面前做知識,我不會評價你,那麼如何去做呢。 想知道嗎,你問女孩說,你談過幾個男朋友,那麼這時候大部分女孩,他不敢真實地,去對你去表達,為什麼,因為他害怕你評價他,那麼在我們問這個問題的同時,我們要幹嘛,我們要表達出來,我們自己的。 不會評價女人的這種觀點,我們要讓女人去,給女人去傳遞一種,我們不會評價你的觀點,那我們怎麼去做呢,非常簡單,再問這個問題的同時,你可以去說這樣一句話,你說別告訴我,對吧,是三個,你這麼漂亮。 如果說你只談過三個,誰信呢,這是在告訴女孩什麼,在傳遞給女孩一種感覺,我不會評價你,你可以真實地在我面前做知識,對吧,你不用告訴我是三個,因為所有的妹子,你們如果說跟女孩接觸很多,你們會知道。 大部分的女孩,會告訴你,我談過三個男朋友,對不對,你還可以這樣去說,你可以說,你說,哎,如果說你只談過三個男朋友,對吧,我會對你很失望,因為我喜歡經驗,你還豐富一點的女孩子,同時呢,傳遞出來。 自己不會評價別人的觀點,如果說,你可以這樣傳遞,如果說一個女孩子,談的男朋友特別的少,我呢,會覺得這個女孩,可能會有問題,要不然是這個女孩,不夠漂亮,要不然是這個女孩,沒有魅力,你在傳遞出來,這種感覺。 你還可以說,一般男生呢,喜歡吹牛,對吧,說談過十個男朋友的,十個女朋友的一般呢,要做廚法,但是女孩呢,一般要做懲罰,那麼你在同時,把這些觀點,把這些話傳遞出來,那麼女孩就會知道,你不會評價女人。 你不會評價女人,那麼當然這是非常,簡單的一個方法,那麼大家可以通過,這個我上班去給大家,舉的這個例子,去舉一反三,去學會這個觀點,去學會,去做到,當然你自己的內心當中,你自己的內心當中。 一定要真實的做到,不會評價女人,你不要嘴上去這樣,說但是你內心當中,還是會評價女人,那麼女人也不會真實的,在裡面先做她自己,你只想說,對吧,別告訴我這是三個,你那麼漂亮,對不對,你只是說。 你只談過這三個誰信呢,當然你內心當中,比如說女孩跟你說,她談過六個,你就幹嘛呢,你就覺得這個女孩,這個很浪,和水晶淹化,對不對,那麼這種行為,也是非常不好的,那麼這就是,去不評價女人,那麼,這就是句話。 如果說你評價女人,代表什麼,代表你是一個共揚者,代表你是一個共揚者,只有說,路的男人,吊思共揚者會去幹嘛,會去評價男人,是不是如此,你們想一下,你們想一下一個共揚者,有了和一個女孩結婚了。 或者是和一個女孩在一起的,但是這個共揚者,知道了這個女孩,之前談過好幾個男朋友,而且和那些男朋友,還發生了關係,那麼這個共揚者,這位很生氣,很腦路,很生氣很腦路,她會覺得這個女人,不乾淨。 或者這個女人不好,為什麼如此呢,是因為這個共揚者,她從來就沒有得到過女人,她之前,就是說,她之前可能說,除了這個女人以外,她沒有能力,在獲得任何女人對她的喜歡,所以說她的思維,她整個人就是一個共揚者。 女人會非常不喜歡這種男人,那麼女人會給共揚者去帶女貿易,甚至你未來的孩子,都有可能不確定,是不是真實的你自己的,所以千萬不要去評價女人,如果說你不會評價女人,那麼代表什麼,代表你是一個有妹女的男人。 代表你是一個情人士思維的男人,那麼第一點我就講完了,第一點就那麼,不要評價女人,我相信我們在直播間的,我們在直播間的可能有,可能有一半以上,或者說90%的男人,之前都會去評價女人,但是當你聽了。 這個今天這節課中之後,我希望你們不要再去評價女人了,那我們今天再講,那麼下部來講第二點,第二點是什麼呢,第二點是處女情節,這個問題也非常的大,也非常的大,我現在為大家一個問題對吧。 如果說你知道你現在的女朋友,或者說你真的追的一個女孩,她不是一個處女,你會是什麼感覺,你可以把自己的感覺打出來,那可以把自己的感覺打出來,我相信可能有很多男生,就會覺得說,這個女孩不是一個好女孩。 不乾淨,有很多人就會有這樣的觀念,很多共產者,很多表示的男人就會幹嘛,他就會不自覺的很憤怒,他就會不會,他就會不自覺的感產生一種憤怒,他會產生一種被修了的感覺,他會感覺很傷心,因為什麼呢。 因為觸及了這個男人,他的戰友欲,他覺得對方很瘋,對方很瘋,那麼很多男人都有這樣的一個問題,這叫什麼呢,這叫處女情節,很多表示男人都有這樣的問題,如果說你覺得一個女人,跟別的男人發生過關係。 那麼這個女人就被電污了,那麼只能說明嗎,只能說明你是一個貢囊者,因為你自己沒有辦法,去獲得這種關係,去和別的女人去發生關係,所以你會覺得,和別的男人上過床,或者說和別的男人,發生過關係的這種女人。 就很瘋,那只能證明你是一個療斯,你是一個療斯,如果說你覺得,對吧這種不是處女的女朋友,這個女孩,不能當未來結婚,只能幹嘛只能玩一玩,結婚還是要去找這個處,對不對,那只能說明你是一個貢囊。 那麼處女情節本身來說幹嘛,你是你有一種處女情節,本身就是什麼呢,本身就是一種平派,本身就是一種平派的行為,他又回歸了我們今天的第一點,如果說你有處女情節,那麼你就幹嘛,你就是在等於在平派女人。 所以說不要有處女情節,處女情節說白的是什麼,是犧牲心態,是犧牲心態,你想一下,假如說你自己跟十個以上的女孩,對吧,發生過關係,那麼你就不會在乎你的女朋友,是不是衝擊,或者說你這台療子女孩。 是不是無對你的是無所謂,因為什麼,因為你不會在乎這個東西,你不是,你不是什麼,你不是犧牲心態,那麼犧牲心態是什麼呢,就是你只有這一個女孩,所以你會很擔心,所以你會覺得自己有一種,被羞辱的感覺。 
…you will feel an involuntary anger, and you will feel that she is somehow dirty. So, everyone, never carry that virginity complex. Remember: the less you judge a girl, the more she will be her real self in front of you. This is also why our coaches can get a girl to drop her guard after only ten or twenty exchanges — she knows I will not judge her, so whatever she says or does in front of me, she can do honestly. On a date a girl will smoke in front of me, swear in front of me, be her real self, because she knows I will not judge her for it: I do not think a girl who smokes is a bad girl, and I do not think a girl who swears is a bad girl. That is exactly why she is real with me. The less you judge, the more real she becomes — and think about what that means: the more a woman can be herself around you, the more it means you attract her, that she likes you, that you have entered her world.

That finishes the second point, the virginity complex. The third point is the good-girl complex. As I said earlier, many straight guys believe women come in exactly two kinds: good girls and bad girls. A "good girl" does not go to nightclubs, bars or KTV, does not smoke, drink or get tattoos, does not swear, has little dating history, is shy and reserved, and has had little contact with men. A "bad girl" does the opposite: she goes to clubs, bars and KTV, smokes, drinks, swears, likes things they consider improper. That is the good-girl complex, and at bottom it is just another form of judging. If you believe every girl who goes to a bar is a bad girl, you are judging her — and a girl who has been to bars will simply never tell you. Ask her, "Have you ever been to a bar?" and she will say no, even if she goes every night. Ask, "Have you been to a nightclub?" and she will say, "A nightclub? Is that a place where you read books?" — while she may be there every night. She will not admit it to you, because you come across as a judgmental homebody, a loser. So genuinely become an attractive man, one who signals that you will not judge her — no good-girl complex, no virginity complex — and she will be her real self with you: she will invite you out for drinks, go to bars and clubs with you, tell you her real dating history and what is actually going on in her life. But the moment a woman senses you are a man who judges women, the bad outcome begins: she closes the door on you, stops being real, never mentions the nightclubs she has been to, never mentions going out, and instead plays pure in front of you — because she knows you like "good girls", the shy, innocent kind. You reach for her hand and she says, "Don't hold my hand, why are you like this? I see you as a brother." At that point it has already happened: she has filed you away as a provider.

And do not just pretend. Do not merely hide the complex from her — actually stop judging, actually let go of the good-girl complex. If she learns you are a judgmental guy who thinks girls who go to nightclubs are not good girls, not "clean", she will tell you she has never been — when in reality almost every girl has. Except in small third- or fourth-tier cities with no nightclubs, in first- and second-tier cities almost every girl has been; some start in high school. I have met plenty of girls in bars who were still in high school or had just graduated. So when a girl swears she has never been, all it tells you is that she is not being her real self with you. That is the good-girl complex.

And remember: a woman's intuition is the sum of fifty men's. Do not think you can keep the complex inside and simply act as if you do not have it. A woman's intuition, her emotional intelligence, is worth fifty men's; she can effortlessly tell whether you are speaking the truth or performing. If you judge women, they will perform purity in front of you, and as a man you will never see through it. If you do not judge them, they will show you who they really are — and when a woman can show you her real self, things move fast. Many girls have gotten together with me on the very first date; some stayed the night. Hearing that, what do you think? Type your answer. Some men will think: that girl is simply a bad, loose woman; a "pure" girl would never get together with someone on the first date. But the truth is that she can behave that way with me, while on a date with you she will act impossibly pure — you could not even hold her hand. So do not judge women; do not decide what kind of girl someone is from a single behavior or a single incident. This happens constantly: I often meet a girl for the first time and we end up together that same day. Many guys hear that and get angry — "she is definitely not a good woman, a good woman would never do that" — yet in front of you that same woman looks like a perfectly good woman. You simply never learn what she does; she could deceive you for a lifetime, and you would still never truly have her.

Now the fourth point: provider behavior. Provider behavior is another way of judging women wrongly, another garbage dating mindset. The definition of a provider is someone who tries to obtain a woman by supplying her with value and resources: driving her around, treating her to expensive dinners, sending gifts and red envelopes, paying her bills, fussing over her with constant concern, and so on. If you do these things hoping that the resources you provide will make her be with you, that is provider behavior: "I bought you dinner, so shouldn't you be with me now? Shouldn't you like me now?" That is a transaction. Remember: attraction does not work that way — a woman does not like you because you spent money. And the provider problem is not only about money; you can be a provider without spending a cent. Being a provider is above all a mindset. If your thinking is "I will do this for her and then she will like me" — "I care about you so much, I love you so much, so you should love me back" — that is provider thinking. Your liking her has nothing to do with her. Nobody asked you to be good to her; she does not need it; yet you insist that because you are good to her she owes you her affection. She owes you nothing. Now test yourself. A girl tells you, "I'm sick" — what do you do? She tells you, "I feel like ordering takeout, I'm hungry today, I want such-and-such" — what do you do? She tells you, "I've been job hunting lately, I'm pretty busy" — what do you do? Think about these three situations and type out your answers.

Let's go through them one by one. First, "I'm sick." Most provider types will say: take some medicine, where are you, go to the hospital and get checked, I'll buy medicine and bring it over, have some porridge, take care of yourself — a whole pile of it. That is provider thinking. The lover's thinking, the attractive man's thinking, is to lead her out of the bad mood and into a good one. I did a whole episode on what to do when a girl says she is sick; watch that one if you have not. All you need to do is pull her out of the bad emotion she is in and lead her toward a good one. Second, "I want takeout." Many providers will order takeout for her, deliver food to her. I have covered cases like this before — one guy would drive tens of kilometers to bring a girl food. Think about how loser-like that is. You want takeout? Order it yourself; you know how to use a phone, you are not going to starve. Third, "I'm job hunting." Most providers will jump in with "what kind of job are you looking for, I'll ask around for you, let me help", or lecture her on how to find one. There is no need to discuss any of that; just move on to something else. She says, "I've been job hunting lately," and I say, "I found a great restaurant recently — I'll take you there one of these days, since you're working so hard on the job hunt." The topic is led somewhere else immediately.

So, brothers, stop being providers. When is it acceptable to provide? Some of you ask, "So I can never provide anything for the whole time I'm with a girl?" No — there are times when you can. Do you know when? Type your answers. You can provide once you have genuinely become boyfriend and girlfriend, or once you are married. After the relationship is real, you can treat her to meals, buy her gifts, even send red envelopes, take her to the hospital when she is ill, look after her — all of that is fine. Providing is not inherently right or wrong; if you are going to build a family and a marriage, you must provide for that family. Once the girl is truly your girlfriend, or you are married, of course you provide. But remember this sentence: if you play the provider in order to win a girlfriend, she will never be with you. The correct order is to be the lover first. After the relationship is established you can actively provide resources — providing is fine — but you must never grovel. Even after you are a couple, even after you are married, you do not grovel; the man still has to lead the relationship. Establish a real, substantive relationship first; then gifts, dinners, living together, providing for each other are all fine. Groveling, by contrast, only lets her push your bottom line and your boundaries lower and lower until you have neither. Provide, yes; grovel, never.

A woman can tell within a few minutes of talking to you whether you are a provider or a lover. If, before she likes you, before she is your girlfriend, she concludes you are a provider, she will never see you as anything else. So what is a "lover" and what is a "provider"? The lover is simply the type of man women like — the type they want to chat with and want to date; we label that type the lover. The provider is the type women do not like; women treat the provider as a backup, as an ATM. If a woman cannot marry a lover she may well marry a provider — and the likely outcome is that the provider ends up the backup before the marriage and worse off after it. Very bleak. So do not become a provider — specifically, not before you and the girl are together. When a woman first meets a man, her subconscious immediately judges whether this man could become her boyfriend, whether she would ever sleep with him. If you display provider traits before you have attracted her, she will never sleep with you; she will automatically file you under "provider". As a provider you can be her friend, you can even hold the title of boyfriend, but you will never have a real relationship with her — and once she has defined you as the provider type, it is extremely hard to turn that around; even if we step in to help you, it becomes very difficult. So we tell you over and over: do not show provider behavior before you have attracted the girl. If you do become the provider, she drops you into the provider's little box, and whenever your words or actions try to go beyond what that box allows, she will instinctively pull you back into it. We call this "lesser-man shaming"; those who have taken our earlier courses will know the term. Say you are a provider and you try to hold her hand, try to escalate the relationship: she will naturally push back and shove you back into the provider box — "What do you take me for? Behave yourself. Take your hand away. I'm a respectable person. Can you act normal?" She shames you as a lesser man and never gives you a chance to escalate. Remember: do not become the provider.

So today we covered four wrong dating mindsets that may be sitting in your head — mistakes many of you brothers have probably made: first, judging women; second, the virginity complex; third, the good-girl complex; fourth, the provider mindset. How do you attract women the right way and become an attractive man? The simplest way is to sign up for our Attractive Man course. It contains the complete dating process and will take you from an awkward nobody to someone women want: which behaviors attract women, which qualities and traits radiate charm, a module on transforming yourself from the inside out, a chat module that answers every question about talking to women, a dating-master module that answers every question about dates, and a relationship-maintenance module — once a girl is your girlfriend, just follow the maintenance steps and you can keep a long, stable relationship and even move toward marriage. The full course runs to a very large number of episodes. If you want to learn and to change, the simplest way is to sign up. The Attractive Man course is a dating process you can copy: follow the methods taught in the course, reproduce the steps, and you will get a girlfriend, women will like you, you will become attractive. The fastest shortcut out of being single is to join, learn how to read women correctly, how to chat with them, how to run a date, and how to make a woman fall for you. Follow our WeChat official account — the account is 愛上電愛學 — and after following it, reply with the two characters 課程 ("course"); a link to our course shop will pop up.

Once you are in the course shop you can pick the course you want to join — for example, the chat course if you want to learn conversation, or the Attractive Man course for the complete dating process (the Attractive Man course already includes the chat course). To close today's session, a short poem for you: I love you, because you so often pass through my heart, as easily as sunlight passes through crystal; I love you, because when you are here, everything says goodnight to me. That is all for today. Goodbye.
PypiClean
/Orquestra-1.4-py3-none-any.whl/orquestra/static/semantic-ui/components/video.min.js
!function(e,o,t,n){"use strict";e.fn.video=function(t){{var a,i=e(this),r=i.selector||"",l=(new Date).getTime(),c=[],u=arguments[0],s="string"==typeof u,m=[].slice.call(arguments,1);o.requestAnimationFrame||o.mozRequestAnimationFrame||o.webkitRequestAnimationFrame||o.msRequestAnimationFrame||function(e){setTimeout(e,0)}}return i.each(function(){var d,p=e.isPlainObject(t)?e.extend(!0,{},e.fn.video.settings,t):e.extend({},e.fn.video.settings),f=p.selector,g=p.className,h=p.error,v=p.metadata,b=p.namespace,y=p.templates,w="."+b,x="module-"+b,F=(e(o),e(this)),C=F.find(f.placeholder),E=F.find(f.playButton),T=F.find(f.embed),A=this,P=F.data(x);d={initialize:function(){d.debug("Initializing video"),d.create(),F.on("click"+w,f.placeholder,d.play).on("click"+w,f.playButton,d.play),d.instantiate()},instantiate:function(){d.verbose("Storing instance of module",d),P=d,F.data(x,d)},create:function(){var e=F.data(v.image),o=y.video(e);F.html(o),d.refresh(),e||d.play(),d.debug("Creating html for video element",o)},destroy:function(){d.verbose("Destroying previous instance of video"),d.reset(),F.removeData(x).off(w)},refresh:function(){d.verbose("Refreshing selector cache"),C=F.find(f.placeholder),E=F.find(f.playButton),T=F.find(f.embed)},change:function(e,o,t){d.debug("Changing video to ",e,o,t),F.data(v.source,e).data(v.id,o).data(v.url,t),p.onChange()},reset:function(){d.debug("Clearing video embed and showing placeholder"),F.removeClass(g.active),T.html(" "),C.show(),p.onReset()},play:function(){d.debug("Playing video");var e=F.data(v.source)||!1,o=F.data(v.url)||!1,t=F.data(v.id)||!1;T.html(d.generate.html(e,t,o)),F.addClass(g.active),p.onPlay()},get:{source:function(e){return"string"!=typeof e?!1:-1!==e.search("youtube.com")?"youtube":-1!==e.search("vimeo.com")?"vimeo":!1},id:function(e){return e.match(p.regExp.youtube)?e.match(p.regExp.youtube)[1]:e.match(p.regExp.vimeo)?e.match(p.regExp.vimeo)[2]:!1}},generate:{html:function(e,o,t){d.debug("Generating embed html");var n;return e=e||p.source,o=o||p.id,e&&o||t?(e&&o||(e=d.get.source(t),o=d.get.id(t)),"vimeo"==e?n='<iframe src="//player.vimeo.com/video/'+o+"?="+d.generate.url(e)+'" width="100%" height="100%" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>':"youtube"==e&&(n='<iframe src="//www.youtube.com/embed/'+o+"?="+d.generate.url(e)+'" width="100%" height="100%" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>')):d.error(h.noVideo),n},url:function(e){var o=p.api?1:0,t="auto"===p.autoplay?F.data("image")!==n:p.autoplay,a=p.hd?1:0,i=p.showUI?1:0,r=p.showUI?0:1,l="";return"vimeo"==e&&(l="api="+o+"&amp;title="+i+"&amp;byline="+i+"&amp;portrait="+i+"&amp;autoplay="+t,p.color&&(l+="&amp;color="+p.color)),"ustream"==e?(l="autoplay="+t,p.color&&(l+="&amp;color="+p.color)):"youtube"==e&&(l="enablejsapi="+o+"&amp;autoplay="+t+"&amp;autohide="+r+"&amp;hq="+a+"&amp;modestbranding=1",p.color&&(l+="&amp;color="+p.color)),l}},setting:function(o,t){if(d.debug("Changing setting",o,t),e.isPlainObject(o))e.extend(!0,p,o);else{if(t===n)return p[o];p[o]=t}},internal:function(o,t){if(e.isPlainObject(o))e.extend(!0,d,o);else{if(t===n)return 
d[o];d[o]=t}},debug:function(){p.debug&&(p.performance?d.performance.log(arguments):(d.debug=Function.prototype.bind.call(console.info,console,p.name+":"),d.debug.apply(console,arguments)))},verbose:function(){p.verbose&&p.debug&&(p.performance?d.performance.log(arguments):(d.verbose=Function.prototype.bind.call(console.info,console,p.name+":"),d.verbose.apply(console,arguments)))},error:function(){d.error=Function.prototype.bind.call(console.error,console,p.name+":"),d.error.apply(console,arguments)},performance:{log:function(e){var o,t,n;p.performance&&(o=(new Date).getTime(),n=l||o,t=o-n,l=o,c.push({Name:e[0],Arguments:[].slice.call(e,1)||"",Element:A,"Execution Time":t})),clearTimeout(d.performance.timer),d.performance.timer=setTimeout(d.performance.display,500)},display:function(){var o=p.name+":",t=0;l=!1,clearTimeout(d.performance.timer),e.each(c,function(e,o){t+=o["Execution Time"]}),o+=" "+t+"ms",r&&(o+=" '"+r+"'"),i.length>1&&(o+=" ("+i.length+")"),(console.group!==n||console.table!==n)&&c.length>0&&(console.groupCollapsed(o),console.table?console.table(c):e.each(c,function(e,o){console.log(o.Name+": "+o["Execution Time"]+"ms")}),console.groupEnd()),c=[]}},invoke:function(o,t,i){var r,l,c,u=P;return t=t||m,i=A||i,"string"==typeof o&&u!==n&&(o=o.split(/[\. ]/),r=o.length-1,e.each(o,function(t,a){var i=t!=r?a+o[t+1].charAt(0).toUpperCase()+o[t+1].slice(1):o;if(e.isPlainObject(u[i])&&t!=r)u=u[i];else{if(u[i]!==n)return l=u[i],!1;if(!e.isPlainObject(u[a])||t==r)return u[a]!==n?(l=u[a],!1):(d.error(h.method,o),!1);u=u[a]}})),e.isFunction(l)?c=l.apply(i,t):l!==n&&(c=l),e.isArray(a)?a.push(c):a!==n?a=[a,c]:c!==n&&(a=c),l}},s?(P===n&&d.initialize(),d.invoke(u)):(P!==n&&P.invoke("destroy"),d.initialize())}),a!==n?a:this},e.fn.video.settings={name:"Video",namespace:"video",debug:!1,verbose:!1,performance:!0,metadata:{id:"id",image:"image",source:"source",url:"url"},source:!1,url:!1,id:!1,aspectRatio:16/9,onPlay:function(){},onReset:function(){},onChange:function(){},onPause:function(){},onStop:function(){},width:"auto",height:"auto",autoplay:"auto",color:"#442359",hd:!0,showUI:!1,api:!0,regExp:{youtube:/^(?:https?:\/\/)?(?:www\.)?(?:youtu\.be\/|youtube\.com\/(?:embed\/|v\/|watch\?v=|watch\?.+&v=))((\w|-){11})(?:\S+)?$/,vimeo:/http:\/\/(www\.)?vimeo.com\/(\d+)($|\/)/},error:{noVideo:"No video specified",method:"The method you called is not defined"},className:{active:"active"},selector:{embed:".embed",placeholder:".placeholder",playButton:".play"}},e.fn.video.settings.templates={video:function(e){var o="";return e&&(o+='<i class="video play icon"></i><img class="placeholder" src="'+e+'">'),o+='<div class="embed"></div>'}}}(jQuery,window,document);
PypiClean
/sec-certs-0.1.5.tar.gz/sec-certs-0.1.5/src/sec_certs/dataset/fips.py
from __future__ import annotations import datetime import itertools import logging import shutil from pathlib import Path from typing import Final import numpy as np import pandas as pd from bs4 import BeautifulSoup, NavigableString from sec_certs import constants from sec_certs.configuration import config from sec_certs.dataset.cpe import CPEDataset from sec_certs.dataset.cve import CVEDataset from sec_certs.dataset.dataset import AuxiliaryDatasets, Dataset from sec_certs.dataset.fips_algorithm import FIPSAlgorithmDataset from sec_certs.model.reference_finder import ReferenceFinder from sec_certs.model.transitive_vulnerability_finder import TransitiveVulnerabilityFinder from sec_certs.sample.fips import FIPSCertificate from sec_certs.serialization.json import ComplexSerializableType, serialize from sec_certs.utils import helpers from sec_certs.utils import parallel_processing as cert_processing from sec_certs.utils.helpers import fips_dgst logger = logging.getLogger(__name__) class FIPSAuxiliaryDatasets(AuxiliaryDatasets): cpe_dset: CPEDataset | None = None cve_dset: CVEDataset | None = None algorithm_dset: FIPSAlgorithmDataset | None = None class FIPSDataset(Dataset[FIPSCertificate, FIPSAuxiliaryDatasets], ComplexSerializableType): """ Class for processing of FIPSCertificate samples. Inherits from `ComplexSerializableType` and base abstract `Dataset` class. """ def __init__( self, certs: dict[str, FIPSCertificate] = {}, root_dir: str | Path = constants.DUMMY_NONEXISTING_PATH, name: str | None = None, description: str = "", state: Dataset.DatasetInternalState | None = None, auxiliary_datasets: FIPSAuxiliaryDatasets | None = None, ): self.certs = certs self.timestamp = datetime.datetime.now() self.sha256_digest = "not implemented" self.name = name if name else type(self).__name__ + " dataset" self.description = description if description else datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S") self.state = state if state else self.DatasetInternalState() self.auxiliary_datasets: FIPSAuxiliaryDatasets = ( auxiliary_datasets if auxiliary_datasets else FIPSAuxiliaryDatasets() ) self.root_dir = Path(root_dir) LIST_OF_CERTS_HTML: Final[dict[str, str]] = { "fips_modules_active.html": constants.FIPS_ACTIVE_MODULES_URL, "fips_modules_historical.html": constants.FIPS_HISTORICAL_MODULES_URL, "fips_modules_revoked.html": constants.FIPS_REVOKED_MODULES_URL, } @property def policies_dir(self) -> Path: return self.certs_dir / "policies" @property def policies_pdf_dir(self) -> Path: return self.policies_dir / "pdf" @property def policies_txt_dir(self) -> Path: return self.policies_dir / "txt" @property def module_dir(self) -> Path: return self.certs_dir / "modules" @property def algorithm_dataset_path(self) -> Path: return self.auxiliary_datasets_dir / "algorithms.json" def __getitem__(self, item: str) -> FIPSCertificate: try: return super().__getitem__(item) except KeyError: return super().__getitem__(fips_dgst(item)) def _extract_data_from_html_modules(self) -> None: """ Extracts data from html module file :param bool fresh: if all certs should be processed, or only the failed ones. 
Defaults to True """ logger.info("Extracting data from html modules.") certs_to_process = [x for x in self if x.state.module_is_ok_to_analyze()] processed_certs = cert_processing.process_parallel( FIPSCertificate.parse_html_module, certs_to_process, use_threading=False, progress_bar_desc="Extracting data from html modules", ) self.update_with_certs(processed_certs) @serialize def extract_data(self) -> None: logger.info("Extracting various data from certification artifacts.") for cert in self: cert.state.policy_extract_ok = True cert.state.module_extract_ok = True self._extract_data_from_html_modules() self._extract_policy_pdf_metadata() self._extract_policy_pdf_keywords() self._extract_algorithms_from_policy_tables() def _extract_policy_pdf_keywords(self) -> None: logger.info("Extracting keywords from policy pdfs.") certs_to_process = [x for x in self if x.state.policy_is_ok_to_analyze()] processed_certs = cert_processing.process_parallel( FIPSCertificate.extract_policy_pdf_keywords, certs_to_process, use_threading=False, progress_bar_desc="Extracting keywords from policy pdfs", ) self.update_with_certs(processed_certs) def _download_all_artifacts_body(self, fresh: bool = True) -> None: self._download_modules(fresh) self._download_policies(fresh) def _download_modules(self, fresh: bool = True) -> None: self.module_dir.mkdir(parents=True, exist_ok=True) certs_to_process = [x for x in self if x.state.module_is_ok_to_download(fresh)] if fresh: logger.info("Downloading HTML cryptographic modules.") if not fresh and certs_to_process: logger.info(f"Downloading {len(certs_to_process)} HTML modules for which download failed.") cert_processing.process_parallel( FIPSCertificate.download_module, certs_to_process, progress_bar_desc="Downloading HTML modules", ) def _download_policies(self, fresh: bool = True) -> None: self.policies_pdf_dir.mkdir(parents=True, exist_ok=True) certs_to_process = [x for x in self if x.state.policy_is_ok_to_download(fresh)] if fresh: logger.info("Downloading PDF security policies.") if not fresh and certs_to_process: logger.info(f"Downloading {len(certs_to_process)} PDF security policies for which download failed.") cert_processing.process_parallel( FIPSCertificate.download_policy, certs_to_process, progress_bar_desc="Downloading PDF security policies", ) def _convert_all_pdfs_body(self, fresh: bool = True) -> None: self._convert_policies_to_txt(fresh) def _convert_policies_to_txt(self, fresh: bool = True) -> None: self.policies_txt_dir.mkdir(parents=True, exist_ok=True) certs_to_process = [x for x in self if x.state.policy_is_ok_to_convert(fresh)] if fresh: logger.info("Converting FIPS security policies to .txt") if not fresh and certs_to_process: logger.info( f"Converting {len(certs_to_process)} FIPS security polcies to .txt for which previous convert failed." 
) cert_processing.process_parallel( FIPSCertificate.convert_policy_pdf, certs_to_process, progress_bar_desc="Converting policies to pdf", ) def _download_html_resources(self) -> None: logger.info("Downloading HTML files that list FIPS certificates.") html_urls = list(FIPSDataset.LIST_OF_CERTS_HTML.values()) html_paths = [self.web_dir / x for x in FIPSDataset.LIST_OF_CERTS_HTML] helpers.download_parallel(html_urls, html_paths) def _get_all_certs_from_html_sources(self) -> list[FIPSCertificate]: return list( itertools.chain.from_iterable( self._get_certificates_from_html(self.web_dir / x) for x in self.LIST_OF_CERTS_HTML ) ) def _get_certificates_from_html(self, html_file: Path) -> list[FIPSCertificate]: with html_file.open("r", encoding="utf-8") as handle: html = BeautifulSoup(handle.read(), "html5lib") table = [x for x in html.find(id="searchResultsTable").tbody.contents if x != "\n"] cert_ids: set[str] = set() for entry in table: if isinstance(entry, NavigableString): continue cert_id = entry.find("a").text if cert_id not in cert_ids: cert_ids.add(cert_id) return [FIPSCertificate(int(cert_id)) for cert_id in cert_ids] @classmethod def from_web_latest(cls) -> FIPSDataset: """ Fetches the fresh snapshot of FIPSDataset from mirror. """ return cls.from_web(config.fips_latest_snapshot, "Downloading FIPS Dataset", "fips_latest_dataset.json") def _set_local_paths(self) -> None: super()._set_local_paths() if self.auxiliary_datasets.algorithm_dset: self.auxiliary_datasets.algorithm_dset.json_path = self.algorithm_dataset_path cert: FIPSCertificate for cert in self.certs.values(): cert.set_local_paths(self.policies_pdf_dir, self.policies_txt_dir, self.module_dir) @serialize def get_certs_from_web(self, to_download: bool = True, keep_metadata: bool = True) -> None: self.web_dir.mkdir(parents=True, exist_ok=True) if to_download: self._download_html_resources() logger.info("Adding unprocessed FIPS certificates into FIPSDataset.") self.certs = {x.dgst: x for x in self._get_all_certs_from_html_sources()} logger.info(f"The dataset now contains {len(self)} certificates.") if not keep_metadata: shutil.rmtree(self.web_dir) self._set_local_paths() self.state.meta_sources_parsed = True @serialize def process_auxiliary_datasets(self, download_fresh: bool = False) -> None: super().process_auxiliary_datasets(download_fresh) self.auxiliary_datasets.algorithm_dset = self._prepare_algorithm_dataset(download_fresh) def _prepare_algorithm_dataset(self, download_fresh_algs: bool = False) -> FIPSAlgorithmDataset: logger.info("Preparing FIPSAlgorithm dataset.") if not self.algorithm_dataset_path.exists() or download_fresh_algs: alg_dset = FIPSAlgorithmDataset.from_web(self.algorithm_dataset_path) alg_dset.to_json() else: alg_dset = FIPSAlgorithmDataset.from_json(self.algorithm_dataset_path) return alg_dset def _extract_algorithms_from_policy_tables(self): logger.info("Extracting Algorithms from policy tables") certs_to_process = [x for x in self if x.state.policy_is_ok_to_analyze()] cert_processing.process_parallel( FIPSCertificate.get_algorithms_from_policy_tables, certs_to_process, use_threading=False, progress_bar_desc="Extracting Algorithms from policy tables", ) def _extract_policy_pdf_metadata(self) -> None: logger.info("Extracting security policy metadata from the pdfs") certs_to_process = [x for x in self if x.state.policy_is_ok_to_analyze()] processed_certs = cert_processing.process_parallel( FIPSCertificate.extract_policy_pdf_metadata, certs_to_process, use_threading=False, progress_bar_desc="Extracting 
security policy metadata", ) self.update_with_certs(processed_certs) def _compute_transitive_vulnerabilities(self) -> None: logger.info("Computing heuristics: Computing transitive vulnerabilities in referenc(ed/ing) certificates.") transitive_cve_finder = TransitiveVulnerabilityFinder(lambda cert: str(cert.cert_id)) transitive_cve_finder.fit(self.certs, lambda cert: cert.heuristics.policy_processed_references) for dgst in self.certs: transitive_cve = transitive_cve_finder.predict_single_cert(dgst) self.certs[dgst].heuristics.direct_transitive_cves = transitive_cve.direct_transitive_cves self.certs[dgst].heuristics.indirect_transitive_cves = transitive_cve.indirect_transitive_cves def _prune_reference_candidates(self) -> None: for cert in self: cert.prune_referenced_cert_ids() # Previously, a following procedure was used to prune reference_candidates: # - A set of algorithms was obtained via self.auxiliary_datasets.algorithm_dset.get_algorithms_by_id(reference_candidate) # - If any of these algorithms had the same vendor as the reference_candidate, the candidate was rejected # - The rationale is that if an ID appears in a certificate s.t. an algorithm with the same ID was produced by the same vendor, the reference likely refers to alg. # - Such reference should then be discarded. # - We are uncertain of the effectivity of such measure, disabling it for now. def _compute_references(self, keep_unknowns: bool = False) -> None: logger.info("Computing heuristics: Recovering references between certificates") self._prune_reference_candidates() policy_reference_finder = ReferenceFinder() policy_reference_finder.fit( self.certs, lambda cert: str(cert.cert_id), lambda cert: cert.heuristics.policy_prunned_references ) module_reference_finder = ReferenceFinder() module_reference_finder.fit( self.certs, lambda cert: str(cert.cert_id), lambda cert: cert.heuristics.module_prunned_references ) for cert in self: cert.heuristics.policy_processed_references = policy_reference_finder.predict_single_cert( cert.dgst, keep_unknowns ) cert.heuristics.module_processed_references = module_reference_finder.predict_single_cert( cert.dgst, keep_unknowns ) def to_pandas(self) -> pd.DataFrame: df = pd.DataFrame([x.pandas_tuple for x in self.certs.values()], columns=FIPSCertificate.pandas_columns) df = df.set_index("dgst") df.date_validation = pd.to_datetime(df.date_validation, infer_datetime_format=True, errors="coerce") df.date_sunset = pd.to_datetime(df.date_sunset, infer_datetime_format=True, errors="coerce") # Manually delete one certificate with bad embodiment (seems to have many blank fields) df = df.loc[~(df.embodiment == "*")] df = df.astype( {"type": "category", "status": "category", "standard": "category", "embodiment": "category"} ).fillna(value=np.nan) df.level = df.level.fillna(value=np.nan).astype("float") # df.level = pd.Categorical(df.level, categories=sorted(df.level.dropna().unique().tolist()), ordered=True) # Introduce year when cert got valid df["year_from"] = pd.DatetimeIndex(df.date_validation).year return df
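The module above is easier to follow with a small driving script. The sketch below only uses names that appear in the excerpt (`FIPSDataset.from_web_latest`, `get_certs_from_web`, `process_auxiliary_datasets`, `extract_data`, `to_pandas`); the end-to-end pipeline order and the local `root_dir` value are assumptions, not something the excerpt guarantees.

```python
# Minimal usage sketch for the FIPSDataset class above; the pipeline order and
# the root_dir value are assumptions based on the method names in the excerpt.
from sec_certs.dataset.fips import FIPSDataset

# Easiest path: pull the pre-built snapshot from the project mirror.
dset = FIPSDataset.from_web_latest()

# Building from scratch instead (assumed order of the steps shown above):
# dset = FIPSDataset(root_dir="./fips_dataset")
# dset.get_certs_from_web()            # scrape the NIST module listings
# dset.process_auxiliary_datasets()    # algorithm / CPE / CVE helper datasets
# dset.extract_data()                  # parse HTML modules and PDF policies

df = dset.to_pandas()                  # one row per certificate
print(df.head())
```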
PypiClean
/cleanup_sims-0.0.1.tar.gz/cleanup_sims-0.0.1/README.md
[![License: LGPL v3](https://img.shields.io/badge/License-LGPL_v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0) [![Unittests](https://github.com/kevinsawade/cleanup_sims/actions/workflows/CI.yaml/badge.svg)](https://github.com/kevinsawade/cleanup_sims/actions/workflows/CI.yaml) [![codecov](https://codecov.io/gh/kevinsawade/cleanup_sims/branch/main/graph/badge.svg?token=MYMAFVMXZX)](https://codecov.io/gh/kevinsawade/cleanup_sims) # Cleanup Sims Cleans up your messy MD simulations. Visit the documentation under: https://kevinsawade.github.io/cleanup_sims # Installation Run ```bash $ pip install cleanup_sims ``` # Usage This will install a command-line tool called `cleanup_sims` in your python environment. Access the help with ```bash $ cleanup_sims -h ```
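If you prefer to drive the command-line tool from a script rather than a shell, a minimal sanity check that the entry point is on the PATH can be done with the standard library. This is only a sketch: it uses the `-h` flag shown above, and any further options would have to come from the linked documentation.

```python
# Minimal sketch: confirm the cleanup_sims entry point is installed by printing
# its help text. Only the -h flag shown in this README is assumed to exist.
import subprocess

result = subprocess.run(["cleanup_sims", "-h"], capture_output=True, text=True)
print(result.stdout)
```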
PypiClean
/baiduads_sdk_auto-2023.1.0-py3-none-any.whl/baiduads/crowdfeed/model/get_eshop_trade_crowds_response_wrapper_body.py
import re # noqa: F401 import sys # noqa: F401 from baiduads.model_utils import ( # noqa: F401 ApiTypeError, ModelComposed, ModelNormal, ModelSimple, cached_property, change_keys_js_to_python, convert_js_args_to_python_args, date, datetime, file_type, none_type, validate_get_composed_info, OpenApiModel ) from baiduads.exceptions import ApiAttributeError def lazy_import(): from baiduads.crowdfeed.model.eshop_crowd_type import EshopCrowdType globals()['EshopCrowdType'] = EshopCrowdType class GetEshopTradeCrowdsResponseWrapperBody(ModelNormal): """NOTE: This class is auto generated by OpenAPI Generator. Ref: https://openapi-generator.tech Do not edit the class manually. Attributes: allowed_values (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict with a capitalized key describing the allowed value and an allowed value. These dicts store the allowed enum values. attribute_map (dict): The key is attribute name and the value is json key in definition. discriminator_value_class_map (dict): A dict to go from the discriminator variable value to the discriminator class name. validations (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict that stores validations for max_length, min_length, max_items, min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum, inclusive_minimum, and regex. additional_properties_type (tuple): A tuple of classes accepted as additional properties values. """ allowed_values = { } validations = { } @cached_property def additional_properties_type(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded """ lazy_import() return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501 _nullable = False @cached_property def openapi_types(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded Returns openapi_types (dict): The key is attribute name and the value is attribute type. """ lazy_import() return { 'data': ([EshopCrowdType],), # noqa: E501 } @cached_property def discriminator(): return None attribute_map = { 'data': 'data', # noqa: E501 } read_only_vars = { } _composed_schemas = {} @classmethod @convert_js_args_to_python_args def _from_openapi_data(cls, *args, **kwargs): # noqa: E501 """GetEshopTradeCrowdsResponseWrapperBody - a model defined in OpenAPI Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. 
For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) data ([EshopCrowdType]): [optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', False) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) self = super(OpenApiModel, cls).__new__(cls) if args: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) return self required_properties = set([ '_data_store', '_check_type', '_spec_property_naming', '_path_to_item', '_configuration', '_visited_composed_classes', ]) @convert_js_args_to_python_args def __init__(self, *args, **kwargs): # noqa: E501 """GetEshopTradeCrowdsResponseWrapperBody - a model defined in OpenAPI Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. 
Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) data ([EshopCrowdType]): [optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', False) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) if args: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) if var_name in self.read_only_vars: raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate " f"class with read only attributes.")
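A small instantiation sketch for the generated model above, assuming the package is importable under the path implied by the file name and by its own `lazy_import` convention; the empty `data` payload is purely illustrative.

```python
# Minimal sketch: build the wrapper body defined above. The import path follows
# the module's own lazy_import convention; the empty payload is illustrative.
from baiduads.crowdfeed.model.get_eshop_trade_crowds_response_wrapper_body import (
    GetEshopTradeCrowdsResponseWrapperBody,
)

body = GetEshopTradeCrowdsResponseWrapperBody(data=[])  # data: list of EshopCrowdType
print(body.data)
```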
PypiClean
/ipyvuetify-1.8.10.tar.gz/ipyvuetify-1.8.10/generate_source/node_modules/ret/lib/util.js
var types = require('./types'); var sets = require('./sets'); // All of these are private and only used by randexp. // It's assumed that they will always be called with the correct input. var CTRL = '@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^ ?'; var SLSH = { '0': 0, 't': 9, 'n': 10, 'v': 11, 'f': 12, 'r': 13 }; /** * Finds character representations in str and convert all to * their respective characters * * @param {String} str * @return {String} */ exports.strToChars = function(str) { /* jshint maxlen: false */ var chars_regex = /(\[\\b\])|(\\)?\\(?:u([A-F0-9]{4})|x([A-F0-9]{2})|(0?[0-7]{2})|c([@A-Z\[\\\]\^?])|([0tnvfr]))/g; str = str.replace(chars_regex, function(s, b, lbs, a16, b16, c8, dctrl, eslsh) { if (lbs) { return s; } var code = b ? 8 : a16 ? parseInt(a16, 16) : b16 ? parseInt(b16, 16) : c8 ? parseInt(c8, 8) : dctrl ? CTRL.indexOf(dctrl) : SLSH[eslsh]; var c = String.fromCharCode(code); // Escape special regex characters. if (/[\[\]{}\^$.|?*+()]/.test(c)) { c = '\\' + c; } return c; }); return str; }; /** * turns class into tokens * reads str until it encounters a ] not preceeded by a \ * * @param {String} str * @param {String} regexpStr * @return {Array.<Array.<Object>, Number>} */ exports.tokenizeClass = function(str, regexpStr) { /* jshint maxlen: false */ var tokens = []; var regexp = /\\(?:(w)|(d)|(s)|(W)|(D)|(S))|((?:(?:\\)(.)|([^\]\\]))-(?:\\)?([^\]]))|(\])|(?:\\)?(.)/g; var rs, c; while ((rs = regexp.exec(str)) != null) { if (rs[1]) { tokens.push(sets.words()); } else if (rs[2]) { tokens.push(sets.ints()); } else if (rs[3]) { tokens.push(sets.whitespace()); } else if (rs[4]) { tokens.push(sets.notWords()); } else if (rs[5]) { tokens.push(sets.notInts()); } else if (rs[6]) { tokens.push(sets.notWhitespace()); } else if (rs[7]) { tokens.push({ type: types.RANGE, from: (rs[8] || rs[9]).charCodeAt(0), to: rs[10].charCodeAt(0), }); } else if (c = rs[12]) { tokens.push({ type: types.CHAR, value: c.charCodeAt(0), }); } else { return [tokens, regexp.lastIndex]; } } exports.error(regexpStr, 'Unterminated character class'); }; /** * Shortcut to throw errors. * * @param {String} regexp * @param {String} msg */ exports.error = function(regexp, msg) { throw new SyntaxError('Invalid regular expression: /' + regexp + '/: ' + msg); };
PypiClean
/django_handyhelpers-0.3.9-py3-none-any.whl/handyhelpers/static/node_modules/bootstrap-table/dist/locale/bootstrap-table-el-GR.js
(function (global, factory) { typeof exports === 'object' && typeof module !== 'undefined' ? factory(require('jquery')) : typeof define === 'function' && define.amd ? define(['jquery'], factory) : (global = typeof globalThis !== 'undefined' ? globalThis : global || self, factory(global.jQuery)); })(this, (function ($$2) { 'use strict'; var commonjsGlobal = typeof globalThis !== 'undefined' ? globalThis : typeof window !== 'undefined' ? window : typeof global !== 'undefined' ? global : typeof self !== 'undefined' ? self : {}; var check = function (it) { return it && it.Math == Math && it; }; // https://github.com/zloirock/core-js/issues/86#issuecomment-115759028 var global$a = // eslint-disable-next-line es/no-global-this -- safe check(typeof globalThis == 'object' && globalThis) || check(typeof window == 'object' && window) || // eslint-disable-next-line no-restricted-globals -- safe check(typeof self == 'object' && self) || check(typeof commonjsGlobal == 'object' && commonjsGlobal) || // eslint-disable-next-line no-new-func -- fallback (function () { return this; })() || Function('return this')(); var objectGetOwnPropertyDescriptor = {}; var fails$c = function (exec) { try { return !!exec(); } catch (error) { return true; } }; var fails$b = fails$c; // Detect IE8's incomplete defineProperty implementation var descriptors = !fails$b(function () { // eslint-disable-next-line es/no-object-defineproperty -- required for testing return Object.defineProperty({}, 1, { get: function () { return 7; } })[1] != 7; }); var fails$a = fails$c; var functionBindNative = !fails$a(function () { // eslint-disable-next-line es/no-function-prototype-bind -- safe var test = (function () { /* empty */ }).bind(); // eslint-disable-next-line no-prototype-builtins -- safe return typeof test != 'function' || test.hasOwnProperty('prototype'); }); var NATIVE_BIND$1 = functionBindNative; var call$5 = Function.prototype.call; var functionCall = NATIVE_BIND$1 ? call$5.bind(call$5) : function () { return call$5.apply(call$5, arguments); }; var objectPropertyIsEnumerable = {}; var $propertyIsEnumerable = {}.propertyIsEnumerable; // eslint-disable-next-line es/no-object-getownpropertydescriptor -- safe var getOwnPropertyDescriptor$1 = Object.getOwnPropertyDescriptor; // Nashorn ~ JDK8 bug var NASHORN_BUG = getOwnPropertyDescriptor$1 && !$propertyIsEnumerable.call({ 1: 2 }, 1); // `Object.prototype.propertyIsEnumerable` method implementation // https://tc39.es/ecma262/#sec-object.prototype.propertyisenumerable objectPropertyIsEnumerable.f = NASHORN_BUG ? function propertyIsEnumerable(V) { var descriptor = getOwnPropertyDescriptor$1(this, V); return !!descriptor && descriptor.enumerable; } : $propertyIsEnumerable; var createPropertyDescriptor$3 = function (bitmap, value) { return { enumerable: !(bitmap & 1), configurable: !(bitmap & 2), writable: !(bitmap & 4), value: value }; }; var NATIVE_BIND = functionBindNative; var FunctionPrototype$1 = Function.prototype; var call$4 = FunctionPrototype$1.call; var uncurryThisWithBind = NATIVE_BIND && FunctionPrototype$1.bind.bind(call$4, call$4); var functionUncurryThis = NATIVE_BIND ? 
uncurryThisWithBind : function (fn) { return function () { return call$4.apply(fn, arguments); }; }; var uncurryThis$a = functionUncurryThis; var toString$1 = uncurryThis$a({}.toString); var stringSlice$1 = uncurryThis$a(''.slice); var classofRaw$1 = function (it) { return stringSlice$1(toString$1(it), 8, -1); }; var uncurryThis$9 = functionUncurryThis; var fails$9 = fails$c; var classof$3 = classofRaw$1; var $Object$3 = Object; var split = uncurryThis$9(''.split); // fallback for non-array-like ES3 and non-enumerable old V8 strings var indexedObject = fails$9(function () { // throws an error in rhino, see https://github.com/mozilla/rhino/issues/346 // eslint-disable-next-line no-prototype-builtins -- safe return !$Object$3('z').propertyIsEnumerable(0); }) ? function (it) { return classof$3(it) == 'String' ? split(it, '') : $Object$3(it); } : $Object$3; // we can't use just `it == null` since of `document.all` special case // https://tc39.es/ecma262/#sec-IsHTMLDDA-internal-slot-aec var isNullOrUndefined$2 = function (it) { return it === null || it === undefined; }; var isNullOrUndefined$1 = isNullOrUndefined$2; var $TypeError$6 = TypeError; // `RequireObjectCoercible` abstract operation // https://tc39.es/ecma262/#sec-requireobjectcoercible var requireObjectCoercible$2 = function (it) { if (isNullOrUndefined$1(it)) throw $TypeError$6("Can't call method on " + it); return it; }; // toObject with fallback for non-array-like ES3 strings var IndexedObject$1 = indexedObject; var requireObjectCoercible$1 = requireObjectCoercible$2; var toIndexedObject$3 = function (it) { return IndexedObject$1(requireObjectCoercible$1(it)); }; var documentAll$2 = typeof document == 'object' && document.all; // https://tc39.es/ecma262/#sec-IsHTMLDDA-internal-slot // eslint-disable-next-line unicorn/no-typeof-undefined -- required for testing var IS_HTMLDDA = typeof documentAll$2 == 'undefined' && documentAll$2 !== undefined; var documentAll_1 = { all: documentAll$2, IS_HTMLDDA: IS_HTMLDDA }; var $documentAll$1 = documentAll_1; var documentAll$1 = $documentAll$1.all; // `IsCallable` abstract operation // https://tc39.es/ecma262/#sec-iscallable var isCallable$c = $documentAll$1.IS_HTMLDDA ? function (argument) { return typeof argument == 'function' || argument === documentAll$1; } : function (argument) { return typeof argument == 'function'; }; var isCallable$b = isCallable$c; var $documentAll = documentAll_1; var documentAll = $documentAll.all; var isObject$7 = $documentAll.IS_HTMLDDA ? function (it) { return typeof it == 'object' ? it !== null : isCallable$b(it) || it === documentAll; } : function (it) { return typeof it == 'object' ? it !== null : isCallable$b(it); }; var global$9 = global$a; var isCallable$a = isCallable$c; var aFunction = function (argument) { return isCallable$a(argument) ? argument : undefined; }; var getBuiltIn$3 = function (namespace, method) { return arguments.length < 2 ? 
aFunction(global$9[namespace]) : global$9[namespace] && global$9[namespace][method]; }; var uncurryThis$8 = functionUncurryThis; var objectIsPrototypeOf = uncurryThis$8({}.isPrototypeOf); var engineUserAgent = typeof navigator != 'undefined' && String(navigator.userAgent) || ''; var global$8 = global$a; var userAgent = engineUserAgent; var process = global$8.process; var Deno = global$8.Deno; var versions = process && process.versions || Deno && Deno.version; var v8 = versions && versions.v8; var match, version; if (v8) { match = v8.split('.'); // in old Chrome, versions of V8 isn't V8 = Chrome / 10 // but their correct versions are not interesting for us version = match[0] > 0 && match[0] < 4 ? 1 : +(match[0] + match[1]); } // BrowserFS NodeJS `process` polyfill incorrectly set `.v8` to `0.0` // so check `userAgent` even if `.v8` exists, but 0 if (!version && userAgent) { match = userAgent.match(/Edge\/(\d+)/); if (!match || match[1] >= 74) { match = userAgent.match(/Chrome\/(\d+)/); if (match) version = +match[1]; } } var engineV8Version = version; /* eslint-disable es/no-symbol -- required for testing */ var V8_VERSION$2 = engineV8Version; var fails$8 = fails$c; // eslint-disable-next-line es/no-object-getownpropertysymbols -- required for testing var symbolConstructorDetection = !!Object.getOwnPropertySymbols && !fails$8(function () { var symbol = Symbol(); // Chrome 38 Symbol has incorrect toString conversion // `get-own-property-symbols` polyfill symbols converted to object are not Symbol instances return !String(symbol) || !(Object(symbol) instanceof Symbol) || // Chrome 38-40 symbols are not inherited from DOM collections prototypes to instances !Symbol.sham && V8_VERSION$2 && V8_VERSION$2 < 41; }); /* eslint-disable es/no-symbol -- required for testing */ var NATIVE_SYMBOL$1 = symbolConstructorDetection; var useSymbolAsUid = NATIVE_SYMBOL$1 && !Symbol.sham && typeof Symbol.iterator == 'symbol'; var getBuiltIn$2 = getBuiltIn$3; var isCallable$9 = isCallable$c; var isPrototypeOf = objectIsPrototypeOf; var USE_SYMBOL_AS_UID$1 = useSymbolAsUid; var $Object$2 = Object; var isSymbol$2 = USE_SYMBOL_AS_UID$1 ? function (it) { return typeof it == 'symbol'; } : function (it) { var $Symbol = getBuiltIn$2('Symbol'); return isCallable$9($Symbol) && isPrototypeOf($Symbol.prototype, $Object$2(it)); }; var $String$2 = String; var tryToString$1 = function (argument) { try { return $String$2(argument); } catch (error) { return 'Object'; } }; var isCallable$8 = isCallable$c; var tryToString = tryToString$1; var $TypeError$5 = TypeError; // `Assert: IsCallable(argument) is true` var aCallable$1 = function (argument) { if (isCallable$8(argument)) return argument; throw $TypeError$5(tryToString(argument) + ' is not a function'); }; var aCallable = aCallable$1; var isNullOrUndefined = isNullOrUndefined$2; // `GetMethod` abstract operation // https://tc39.es/ecma262/#sec-getmethod var getMethod$1 = function (V, P) { var func = V[P]; return isNullOrUndefined(func) ? 
undefined : aCallable(func); }; var call$3 = functionCall; var isCallable$7 = isCallable$c; var isObject$6 = isObject$7; var $TypeError$4 = TypeError; // `OrdinaryToPrimitive` abstract operation // https://tc39.es/ecma262/#sec-ordinarytoprimitive var ordinaryToPrimitive$1 = function (input, pref) { var fn, val; if (pref === 'string' && isCallable$7(fn = input.toString) && !isObject$6(val = call$3(fn, input))) return val; if (isCallable$7(fn = input.valueOf) && !isObject$6(val = call$3(fn, input))) return val; if (pref !== 'string' && isCallable$7(fn = input.toString) && !isObject$6(val = call$3(fn, input))) return val; throw $TypeError$4("Can't convert object to primitive value"); }; var sharedExports = {}; var shared$3 = { get exports(){ return sharedExports; }, set exports(v){ sharedExports = v; }, }; var global$7 = global$a; // eslint-disable-next-line es/no-object-defineproperty -- safe var defineProperty$2 = Object.defineProperty; var defineGlobalProperty$3 = function (key, value) { try { defineProperty$2(global$7, key, { value: value, configurable: true, writable: true }); } catch (error) { global$7[key] = value; } return value; }; var global$6 = global$a; var defineGlobalProperty$2 = defineGlobalProperty$3; var SHARED = '__core-js_shared__'; var store$3 = global$6[SHARED] || defineGlobalProperty$2(SHARED, {}); var sharedStore = store$3; var store$2 = sharedStore; (shared$3.exports = function (key, value) { return store$2[key] || (store$2[key] = value !== undefined ? value : {}); })('versions', []).push({ version: '3.29.0', mode: 'global', copyright: '© 2014-2023 Denis Pushkarev (zloirock.ru)', license: 'https://github.com/zloirock/core-js/blob/v3.29.0/LICENSE', source: 'https://github.com/zloirock/core-js' }); var requireObjectCoercible = requireObjectCoercible$2; var $Object$1 = Object; // `ToObject` abstract operation // https://tc39.es/ecma262/#sec-toobject var toObject$3 = function (argument) { return $Object$1(requireObjectCoercible(argument)); }; var uncurryThis$7 = functionUncurryThis; var toObject$2 = toObject$3; var hasOwnProperty = uncurryThis$7({}.hasOwnProperty); // `HasOwnProperty` abstract operation // https://tc39.es/ecma262/#sec-hasownproperty // eslint-disable-next-line es/no-object-hasown -- safe var hasOwnProperty_1 = Object.hasOwn || function hasOwn(it, key) { return hasOwnProperty(toObject$2(it), key); }; var uncurryThis$6 = functionUncurryThis; var id = 0; var postfix = Math.random(); var toString = uncurryThis$6(1.0.toString); var uid$2 = function (key) { return 'Symbol(' + (key === undefined ? '' : key) + ')_' + toString(++id + postfix, 36); }; var global$5 = global$a; var shared$2 = sharedExports; var hasOwn$6 = hasOwnProperty_1; var uid$1 = uid$2; var NATIVE_SYMBOL = symbolConstructorDetection; var USE_SYMBOL_AS_UID = useSymbolAsUid; var Symbol$1 = global$5.Symbol; var WellKnownSymbolsStore = shared$2('wks'); var createWellKnownSymbol = USE_SYMBOL_AS_UID ? Symbol$1['for'] || Symbol$1 : Symbol$1 && Symbol$1.withoutSetter || uid$1; var wellKnownSymbol$6 = function (name) { if (!hasOwn$6(WellKnownSymbolsStore, name)) { WellKnownSymbolsStore[name] = NATIVE_SYMBOL && hasOwn$6(Symbol$1, name) ? Symbol$1[name] : createWellKnownSymbol('Symbol.' 
+ name); } return WellKnownSymbolsStore[name]; }; var call$2 = functionCall; var isObject$5 = isObject$7; var isSymbol$1 = isSymbol$2; var getMethod = getMethod$1; var ordinaryToPrimitive = ordinaryToPrimitive$1; var wellKnownSymbol$5 = wellKnownSymbol$6; var $TypeError$3 = TypeError; var TO_PRIMITIVE = wellKnownSymbol$5('toPrimitive'); // `ToPrimitive` abstract operation // https://tc39.es/ecma262/#sec-toprimitive var toPrimitive$1 = function (input, pref) { if (!isObject$5(input) || isSymbol$1(input)) return input; var exoticToPrim = getMethod(input, TO_PRIMITIVE); var result; if (exoticToPrim) { if (pref === undefined) pref = 'default'; result = call$2(exoticToPrim, input, pref); if (!isObject$5(result) || isSymbol$1(result)) return result; throw $TypeError$3("Can't convert object to primitive value"); } if (pref === undefined) pref = 'number'; return ordinaryToPrimitive(input, pref); }; var toPrimitive = toPrimitive$1; var isSymbol = isSymbol$2; // `ToPropertyKey` abstract operation // https://tc39.es/ecma262/#sec-topropertykey var toPropertyKey$3 = function (argument) { var key = toPrimitive(argument, 'string'); return isSymbol(key) ? key : key + ''; }; var global$4 = global$a; var isObject$4 = isObject$7; var document$1 = global$4.document; // typeof document.createElement is 'object' in old IE var EXISTS$1 = isObject$4(document$1) && isObject$4(document$1.createElement); var documentCreateElement = function (it) { return EXISTS$1 ? document$1.createElement(it) : {}; }; var DESCRIPTORS$7 = descriptors; var fails$7 = fails$c; var createElement = documentCreateElement; // Thanks to IE8 for its funny defineProperty var ie8DomDefine = !DESCRIPTORS$7 && !fails$7(function () { // eslint-disable-next-line es/no-object-defineproperty -- required for testing return Object.defineProperty(createElement('div'), 'a', { get: function () { return 7; } }).a != 7; }); var DESCRIPTORS$6 = descriptors; var call$1 = functionCall; var propertyIsEnumerableModule$1 = objectPropertyIsEnumerable; var createPropertyDescriptor$2 = createPropertyDescriptor$3; var toIndexedObject$2 = toIndexedObject$3; var toPropertyKey$2 = toPropertyKey$3; var hasOwn$5 = hasOwnProperty_1; var IE8_DOM_DEFINE$1 = ie8DomDefine; // eslint-disable-next-line es/no-object-getownpropertydescriptor -- safe var $getOwnPropertyDescriptor$1 = Object.getOwnPropertyDescriptor; // `Object.getOwnPropertyDescriptor` method // https://tc39.es/ecma262/#sec-object.getownpropertydescriptor objectGetOwnPropertyDescriptor.f = DESCRIPTORS$6 ? 
$getOwnPropertyDescriptor$1 : function getOwnPropertyDescriptor(O, P) { O = toIndexedObject$2(O); P = toPropertyKey$2(P); if (IE8_DOM_DEFINE$1) try { return $getOwnPropertyDescriptor$1(O, P); } catch (error) { /* empty */ } if (hasOwn$5(O, P)) return createPropertyDescriptor$2(!call$1(propertyIsEnumerableModule$1.f, O, P), O[P]); }; var objectDefineProperty = {}; var DESCRIPTORS$5 = descriptors; var fails$6 = fails$c; // V8 ~ Chrome 36- // https://bugs.chromium.org/p/v8/issues/detail?id=3334 var v8PrototypeDefineBug = DESCRIPTORS$5 && fails$6(function () { // eslint-disable-next-line es/no-object-defineproperty -- required for testing return Object.defineProperty(function () { /* empty */ }, 'prototype', { value: 42, writable: false }).prototype != 42; }); var isObject$3 = isObject$7; var $String$1 = String; var $TypeError$2 = TypeError; // `Assert: Type(argument) is Object` var anObject$2 = function (argument) { if (isObject$3(argument)) return argument; throw $TypeError$2($String$1(argument) + ' is not an object'); }; var DESCRIPTORS$4 = descriptors; var IE8_DOM_DEFINE = ie8DomDefine; var V8_PROTOTYPE_DEFINE_BUG = v8PrototypeDefineBug; var anObject$1 = anObject$2; var toPropertyKey$1 = toPropertyKey$3; var $TypeError$1 = TypeError; // eslint-disable-next-line es/no-object-defineproperty -- safe var $defineProperty = Object.defineProperty; // eslint-disable-next-line es/no-object-getownpropertydescriptor -- safe var $getOwnPropertyDescriptor = Object.getOwnPropertyDescriptor; var ENUMERABLE = 'enumerable'; var CONFIGURABLE$1 = 'configurable'; var WRITABLE = 'writable'; // `Object.defineProperty` method // https://tc39.es/ecma262/#sec-object.defineproperty objectDefineProperty.f = DESCRIPTORS$4 ? V8_PROTOTYPE_DEFINE_BUG ? function defineProperty(O, P, Attributes) { anObject$1(O); P = toPropertyKey$1(P); anObject$1(Attributes); if (typeof O === 'function' && P === 'prototype' && 'value' in Attributes && WRITABLE in Attributes && !Attributes[WRITABLE]) { var current = $getOwnPropertyDescriptor(O, P); if (current && current[WRITABLE]) { O[P] = Attributes.value; Attributes = { configurable: CONFIGURABLE$1 in Attributes ? Attributes[CONFIGURABLE$1] : current[CONFIGURABLE$1], enumerable: ENUMERABLE in Attributes ? Attributes[ENUMERABLE] : current[ENUMERABLE], writable: false }; } } return $defineProperty(O, P, Attributes); } : $defineProperty : function defineProperty(O, P, Attributes) { anObject$1(O); P = toPropertyKey$1(P); anObject$1(Attributes); if (IE8_DOM_DEFINE) try { return $defineProperty(O, P, Attributes); } catch (error) { /* empty */ } if ('get' in Attributes || 'set' in Attributes) throw $TypeError$1('Accessors not supported'); if ('value' in Attributes) O[P] = Attributes.value; return O; }; var DESCRIPTORS$3 = descriptors; var definePropertyModule$3 = objectDefineProperty; var createPropertyDescriptor$1 = createPropertyDescriptor$3; var createNonEnumerableProperty$2 = DESCRIPTORS$3 ? 
function (object, key, value) { return definePropertyModule$3.f(object, key, createPropertyDescriptor$1(1, value)); } : function (object, key, value) { object[key] = value; return object; }; var makeBuiltInExports = {}; var makeBuiltIn$2 = { get exports(){ return makeBuiltInExports; }, set exports(v){ makeBuiltInExports = v; }, }; var DESCRIPTORS$2 = descriptors; var hasOwn$4 = hasOwnProperty_1; var FunctionPrototype = Function.prototype; // eslint-disable-next-line es/no-object-getownpropertydescriptor -- safe var getDescriptor = DESCRIPTORS$2 && Object.getOwnPropertyDescriptor; var EXISTS = hasOwn$4(FunctionPrototype, 'name'); // additional protection from minified / mangled / dropped function names var PROPER = EXISTS && (function something() { /* empty */ }).name === 'something'; var CONFIGURABLE = EXISTS && (!DESCRIPTORS$2 || (DESCRIPTORS$2 && getDescriptor(FunctionPrototype, 'name').configurable)); var functionName = { EXISTS: EXISTS, PROPER: PROPER, CONFIGURABLE: CONFIGURABLE }; var uncurryThis$5 = functionUncurryThis; var isCallable$6 = isCallable$c; var store$1 = sharedStore; var functionToString = uncurryThis$5(Function.toString); // this helper broken in `[email protected]`, so we can't use `shared` helper if (!isCallable$6(store$1.inspectSource)) { store$1.inspectSource = function (it) { return functionToString(it); }; } var inspectSource$2 = store$1.inspectSource; var global$3 = global$a; var isCallable$5 = isCallable$c; var WeakMap$1 = global$3.WeakMap; var weakMapBasicDetection = isCallable$5(WeakMap$1) && /native code/.test(String(WeakMap$1)); var shared$1 = sharedExports; var uid = uid$2; var keys = shared$1('keys'); var sharedKey$1 = function (key) { return keys[key] || (keys[key] = uid(key)); }; var hiddenKeys$3 = {}; var NATIVE_WEAK_MAP = weakMapBasicDetection; var global$2 = global$a; var isObject$2 = isObject$7; var createNonEnumerableProperty$1 = createNonEnumerableProperty$2; var hasOwn$3 = hasOwnProperty_1; var shared = sharedStore; var sharedKey = sharedKey$1; var hiddenKeys$2 = hiddenKeys$3; var OBJECT_ALREADY_INITIALIZED = 'Object already initialized'; var TypeError$1 = global$2.TypeError; var WeakMap = global$2.WeakMap; var set, get, has; var enforce = function (it) { return has(it) ? get(it) : set(it, {}); }; var getterFor = function (TYPE) { return function (it) { var state; if (!isObject$2(it) || (state = get(it)).type !== TYPE) { throw TypeError$1('Incompatible receiver, ' + TYPE + ' required'); } return state; }; }; if (NATIVE_WEAK_MAP || shared.state) { var store = shared.state || (shared.state = new WeakMap()); /* eslint-disable no-self-assign -- prototype methods protection */ store.get = store.get; store.has = store.has; store.set = store.set; /* eslint-enable no-self-assign -- prototype methods protection */ set = function (it, metadata) { if (store.has(it)) throw TypeError$1(OBJECT_ALREADY_INITIALIZED); metadata.facade = it; store.set(it, metadata); return metadata; }; get = function (it) { return store.get(it) || {}; }; has = function (it) { return store.has(it); }; } else { var STATE = sharedKey('state'); hiddenKeys$2[STATE] = true; set = function (it, metadata) { if (hasOwn$3(it, STATE)) throw TypeError$1(OBJECT_ALREADY_INITIALIZED); metadata.facade = it; createNonEnumerableProperty$1(it, STATE, metadata); return metadata; }; get = function (it) { return hasOwn$3(it, STATE) ? 
it[STATE] : {}; }; has = function (it) { return hasOwn$3(it, STATE); }; } var internalState = { set: set, get: get, has: has, enforce: enforce, getterFor: getterFor }; var uncurryThis$4 = functionUncurryThis; var fails$5 = fails$c; var isCallable$4 = isCallable$c; var hasOwn$2 = hasOwnProperty_1; var DESCRIPTORS$1 = descriptors; var CONFIGURABLE_FUNCTION_NAME = functionName.CONFIGURABLE; var inspectSource$1 = inspectSource$2; var InternalStateModule = internalState; var enforceInternalState = InternalStateModule.enforce; var getInternalState = InternalStateModule.get; var $String = String; // eslint-disable-next-line es/no-object-defineproperty -- safe var defineProperty$1 = Object.defineProperty; var stringSlice = uncurryThis$4(''.slice); var replace = uncurryThis$4(''.replace); var join = uncurryThis$4([].join); var CONFIGURABLE_LENGTH = DESCRIPTORS$1 && !fails$5(function () { return defineProperty$1(function () { /* empty */ }, 'length', { value: 8 }).length !== 8; }); var TEMPLATE = String(String).split('String'); var makeBuiltIn$1 = makeBuiltIn$2.exports = function (value, name, options) { if (stringSlice($String(name), 0, 7) === 'Symbol(') { name = '[' + replace($String(name), /^Symbol\(([^)]*)\)/, '$1') + ']'; } if (options && options.getter) name = 'get ' + name; if (options && options.setter) name = 'set ' + name; if (!hasOwn$2(value, 'name') || (CONFIGURABLE_FUNCTION_NAME && value.name !== name)) { if (DESCRIPTORS$1) defineProperty$1(value, 'name', { value: name, configurable: true }); else value.name = name; } if (CONFIGURABLE_LENGTH && options && hasOwn$2(options, 'arity') && value.length !== options.arity) { defineProperty$1(value, 'length', { value: options.arity }); } try { if (options && hasOwn$2(options, 'constructor') && options.constructor) { if (DESCRIPTORS$1) defineProperty$1(value, 'prototype', { writable: false }); // in V8 ~ Chrome 53, prototypes of some methods, like `Array.prototype.values`, are non-writable } else if (value.prototype) value.prototype = undefined; } catch (error) { /* empty */ } var state = enforceInternalState(value); if (!hasOwn$2(state, 'source')) { state.source = join(TEMPLATE, typeof name == 'string' ? name : ''); } return value; }; // add fake Function#toString for correct work wrapped methods / constructors with methods like LoDash isNative // eslint-disable-next-line no-extend-native -- required Function.prototype.toString = makeBuiltIn$1(function toString() { return isCallable$4(this) && getInternalState(this).source || inspectSource$1(this); }, 'toString'); var isCallable$3 = isCallable$c; var definePropertyModule$2 = objectDefineProperty; var makeBuiltIn = makeBuiltInExports; var defineGlobalProperty$1 = defineGlobalProperty$3; var defineBuiltIn$1 = function (O, key, value, options) { if (!options) options = {}; var simple = options.enumerable; var name = options.name !== undefined ? 
options.name : key; if (isCallable$3(value)) makeBuiltIn(value, name, options); if (options.global) { if (simple) O[key] = value; else defineGlobalProperty$1(key, value); } else { try { if (!options.unsafe) delete O[key]; else if (O[key]) simple = true; } catch (error) { /* empty */ } if (simple) O[key] = value; else definePropertyModule$2.f(O, key, { value: value, enumerable: false, configurable: !options.nonConfigurable, writable: !options.nonWritable }); } return O; }; var objectGetOwnPropertyNames = {}; var ceil = Math.ceil; var floor = Math.floor; // `Math.trunc` method // https://tc39.es/ecma262/#sec-math.trunc // eslint-disable-next-line es/no-math-trunc -- safe var mathTrunc = Math.trunc || function trunc(x) { var n = +x; return (n > 0 ? floor : ceil)(n); }; var trunc = mathTrunc; // `ToIntegerOrInfinity` abstract operation // https://tc39.es/ecma262/#sec-tointegerorinfinity var toIntegerOrInfinity$2 = function (argument) { var number = +argument; // eslint-disable-next-line no-self-compare -- NaN check return number !== number || number === 0 ? 0 : trunc(number); }; var toIntegerOrInfinity$1 = toIntegerOrInfinity$2; var max = Math.max; var min$1 = Math.min; // Helper for a popular repeating case of the spec: // Let integer be ? ToInteger(index). // If integer < 0, let result be max((length + integer), 0); else let result be min(integer, length). var toAbsoluteIndex$1 = function (index, length) { var integer = toIntegerOrInfinity$1(index); return integer < 0 ? max(integer + length, 0) : min$1(integer, length); }; var toIntegerOrInfinity = toIntegerOrInfinity$2; var min = Math.min; // `ToLength` abstract operation // https://tc39.es/ecma262/#sec-tolength var toLength$1 = function (argument) { return argument > 0 ? min(toIntegerOrInfinity(argument), 0x1FFFFFFFFFFFFF) : 0; // 2 ** 53 - 1 == 9007199254740991 }; var toLength = toLength$1; // `LengthOfArrayLike` abstract operation // https://tc39.es/ecma262/#sec-lengthofarraylike var lengthOfArrayLike$2 = function (obj) { return toLength(obj.length); }; var toIndexedObject$1 = toIndexedObject$3; var toAbsoluteIndex = toAbsoluteIndex$1; var lengthOfArrayLike$1 = lengthOfArrayLike$2; // `Array.prototype.{ indexOf, includes }` methods implementation var createMethod = function (IS_INCLUDES) { return function ($this, el, fromIndex) { var O = toIndexedObject$1($this); var length = lengthOfArrayLike$1(O); var index = toAbsoluteIndex(fromIndex, length); var value; // Array#includes uses SameValueZero equality algorithm // eslint-disable-next-line no-self-compare -- NaN check if (IS_INCLUDES && el != el) while (length > index) { value = O[index++]; // eslint-disable-next-line no-self-compare -- NaN check if (value != value) return true; // Array#indexOf ignores holes, Array#includes - not } else for (;length > index; index++) { if ((IS_INCLUDES || index in O) && O[index] === el) return IS_INCLUDES || index || 0; } return !IS_INCLUDES && -1; }; }; var arrayIncludes = { // `Array.prototype.includes` method // https://tc39.es/ecma262/#sec-array.prototype.includes includes: createMethod(true), // `Array.prototype.indexOf` method // https://tc39.es/ecma262/#sec-array.prototype.indexof indexOf: createMethod(false) }; var uncurryThis$3 = functionUncurryThis; var hasOwn$1 = hasOwnProperty_1; var toIndexedObject = toIndexedObject$3; var indexOf = arrayIncludes.indexOf; var hiddenKeys$1 = hiddenKeys$3; var push = uncurryThis$3([].push); var objectKeysInternal = function (object, names) { var O = toIndexedObject(object); var i = 0; var result = []; var 
key; for (key in O) !hasOwn$1(hiddenKeys$1, key) && hasOwn$1(O, key) && push(result, key); // Don't enum bug & hidden keys while (names.length > i) if (hasOwn$1(O, key = names[i++])) { ~indexOf(result, key) || push(result, key); } return result; }; // IE8- don't enum bug keys var enumBugKeys$2 = [ 'constructor', 'hasOwnProperty', 'isPrototypeOf', 'propertyIsEnumerable', 'toLocaleString', 'toString', 'valueOf' ]; var internalObjectKeys$1 = objectKeysInternal; var enumBugKeys$1 = enumBugKeys$2; var hiddenKeys = enumBugKeys$1.concat('length', 'prototype'); // `Object.getOwnPropertyNames` method // https://tc39.es/ecma262/#sec-object.getownpropertynames // eslint-disable-next-line es/no-object-getownpropertynames -- safe objectGetOwnPropertyNames.f = Object.getOwnPropertyNames || function getOwnPropertyNames(O) { return internalObjectKeys$1(O, hiddenKeys); }; var objectGetOwnPropertySymbols = {}; // eslint-disable-next-line es/no-object-getownpropertysymbols -- safe objectGetOwnPropertySymbols.f = Object.getOwnPropertySymbols; var getBuiltIn$1 = getBuiltIn$3; var uncurryThis$2 = functionUncurryThis; var getOwnPropertyNamesModule = objectGetOwnPropertyNames; var getOwnPropertySymbolsModule$1 = objectGetOwnPropertySymbols; var anObject = anObject$2; var concat$1 = uncurryThis$2([].concat); // all object keys, includes non-enumerable and symbols var ownKeys$1 = getBuiltIn$1('Reflect', 'ownKeys') || function ownKeys(it) { var keys = getOwnPropertyNamesModule.f(anObject(it)); var getOwnPropertySymbols = getOwnPropertySymbolsModule$1.f; return getOwnPropertySymbols ? concat$1(keys, getOwnPropertySymbols(it)) : keys; }; var hasOwn = hasOwnProperty_1; var ownKeys = ownKeys$1; var getOwnPropertyDescriptorModule = objectGetOwnPropertyDescriptor; var definePropertyModule$1 = objectDefineProperty; var copyConstructorProperties$1 = function (target, source, exceptions) { var keys = ownKeys(source); var defineProperty = definePropertyModule$1.f; var getOwnPropertyDescriptor = getOwnPropertyDescriptorModule.f; for (var i = 0; i < keys.length; i++) { var key = keys[i]; if (!hasOwn(target, key) && !(exceptions && hasOwn(exceptions, key))) { defineProperty(target, key, getOwnPropertyDescriptor(source, key)); } } }; var fails$4 = fails$c; var isCallable$2 = isCallable$c; var replacement = /#|\.prototype\./; var isForced$1 = function (feature, detection) { var value = data[normalize(feature)]; return value == POLYFILL ? true : value == NATIVE ? false : isCallable$2(detection) ? 
fails$4(detection) : !!detection; }; var normalize = isForced$1.normalize = function (string) { return String(string).replace(replacement, '.').toLowerCase(); }; var data = isForced$1.data = {}; var NATIVE = isForced$1.NATIVE = 'N'; var POLYFILL = isForced$1.POLYFILL = 'P'; var isForced_1 = isForced$1; var global$1 = global$a; var getOwnPropertyDescriptor = objectGetOwnPropertyDescriptor.f; var createNonEnumerableProperty = createNonEnumerableProperty$2; var defineBuiltIn = defineBuiltIn$1; var defineGlobalProperty = defineGlobalProperty$3; var copyConstructorProperties = copyConstructorProperties$1; var isForced = isForced_1; /* options.target - name of the target object options.global - target is the global object options.stat - export as static methods of target options.proto - export as prototype methods of target options.real - real prototype method for the `pure` version options.forced - export even if the native feature is available options.bind - bind methods to the target, required for the `pure` version options.wrap - wrap constructors to preventing global pollution, required for the `pure` version options.unsafe - use the simple assignment of property instead of delete + defineProperty options.sham - add a flag to not completely full polyfills options.enumerable - export as enumerable property options.dontCallGetSet - prevent calling a getter on target options.name - the .name of the function if it does not match the key */ var _export = function (options, source) { var TARGET = options.target; var GLOBAL = options.global; var STATIC = options.stat; var FORCED, target, key, targetProperty, sourceProperty, descriptor; if (GLOBAL) { target = global$1; } else if (STATIC) { target = global$1[TARGET] || defineGlobalProperty(TARGET, {}); } else { target = (global$1[TARGET] || {}).prototype; } if (target) for (key in source) { sourceProperty = source[key]; if (options.dontCallGetSet) { descriptor = getOwnPropertyDescriptor(target, key); targetProperty = descriptor && descriptor.value; } else targetProperty = target[key]; FORCED = isForced(GLOBAL ? key : TARGET + (STATIC ? '.' 
: '#') + key, options.forced); // contained in target if (!FORCED && targetProperty !== undefined) { if (typeof sourceProperty == typeof targetProperty) continue; copyConstructorProperties(sourceProperty, targetProperty); } // add a flag to not completely full polyfills if (options.sham || (targetProperty && targetProperty.sham)) { createNonEnumerableProperty(sourceProperty, 'sham', true); } defineBuiltIn(target, key, sourceProperty, options); } }; var classof$2 = classofRaw$1; // `IsArray` abstract operation // https://tc39.es/ecma262/#sec-isarray // eslint-disable-next-line es/no-array-isarray -- safe var isArray$2 = Array.isArray || function isArray(argument) { return classof$2(argument) == 'Array'; }; var $TypeError = TypeError; var MAX_SAFE_INTEGER = 0x1FFFFFFFFFFFFF; // 2 ** 53 - 1 == 9007199254740991 var doesNotExceedSafeInteger$1 = function (it) { if (it > MAX_SAFE_INTEGER) throw $TypeError('Maximum allowed index exceeded'); return it; }; var toPropertyKey = toPropertyKey$3; var definePropertyModule = objectDefineProperty; var createPropertyDescriptor = createPropertyDescriptor$3; var createProperty$1 = function (object, key, value) { var propertyKey = toPropertyKey(key); if (propertyKey in object) definePropertyModule.f(object, propertyKey, createPropertyDescriptor(0, value)); else object[propertyKey] = value; }; var wellKnownSymbol$4 = wellKnownSymbol$6; var TO_STRING_TAG$1 = wellKnownSymbol$4('toStringTag'); var test = {}; test[TO_STRING_TAG$1] = 'z'; var toStringTagSupport = String(test) === '[object z]'; var TO_STRING_TAG_SUPPORT = toStringTagSupport; var isCallable$1 = isCallable$c; var classofRaw = classofRaw$1; var wellKnownSymbol$3 = wellKnownSymbol$6; var TO_STRING_TAG = wellKnownSymbol$3('toStringTag'); var $Object = Object; // ES3 wrong here var CORRECT_ARGUMENTS = classofRaw(function () { return arguments; }()) == 'Arguments'; // fallback for IE11 Script Access Denied error var tryGet = function (it, key) { try { return it[key]; } catch (error) { /* empty */ } }; // getting tag from ES6+ `Object.prototype.toString` var classof$1 = TO_STRING_TAG_SUPPORT ? classofRaw : function (it) { var O, tag, result; return it === undefined ? 'Undefined' : it === null ? 'Null' // @@toStringTag case : typeof (tag = tryGet(O = $Object(it), TO_STRING_TAG)) == 'string' ? tag // builtinTag case : CORRECT_ARGUMENTS ? classofRaw(O) // ES3 arguments fallback : (result = classofRaw(O)) == 'Object' && isCallable$1(O.callee) ? 
'Arguments' : result; }; var uncurryThis$1 = functionUncurryThis; var fails$3 = fails$c; var isCallable = isCallable$c; var classof = classof$1; var getBuiltIn = getBuiltIn$3; var inspectSource = inspectSource$2; var noop = function () { /* empty */ }; var empty = []; var construct = getBuiltIn('Reflect', 'construct'); var constructorRegExp = /^\s*(?:class|function)\b/; var exec = uncurryThis$1(constructorRegExp.exec); var INCORRECT_TO_STRING = !constructorRegExp.exec(noop); var isConstructorModern = function isConstructor(argument) { if (!isCallable(argument)) return false; try { construct(noop, empty, argument); return true; } catch (error) { return false; } }; var isConstructorLegacy = function isConstructor(argument) { if (!isCallable(argument)) return false; switch (classof(argument)) { case 'AsyncFunction': case 'GeneratorFunction': case 'AsyncGeneratorFunction': return false; } try { // we can't check .prototype since constructors produced by .bind haven't it // `Function#toString` throws on some built-it function in some legacy engines // (for example, `DOMQuad` and similar in FF41-) return INCORRECT_TO_STRING || !!exec(constructorRegExp, inspectSource(argument)); } catch (error) { return true; } }; isConstructorLegacy.sham = true; // `IsConstructor` abstract operation // https://tc39.es/ecma262/#sec-isconstructor var isConstructor$1 = !construct || fails$3(function () { var called; return isConstructorModern(isConstructorModern.call) || !isConstructorModern(Object) || !isConstructorModern(function () { called = true; }) || called; }) ? isConstructorLegacy : isConstructorModern; var isArray$1 = isArray$2; var isConstructor = isConstructor$1; var isObject$1 = isObject$7; var wellKnownSymbol$2 = wellKnownSymbol$6; var SPECIES$1 = wellKnownSymbol$2('species'); var $Array = Array; // a part of `ArraySpeciesCreate` abstract operation // https://tc39.es/ecma262/#sec-arrayspeciescreate var arraySpeciesConstructor$1 = function (originalArray) { var C; if (isArray$1(originalArray)) { C = originalArray.constructor; // cross-realm fallback if (isConstructor(C) && (C === $Array || isArray$1(C.prototype))) C = undefined; else if (isObject$1(C)) { C = C[SPECIES$1]; if (C === null) C = undefined; } } return C === undefined ? $Array : C; }; var arraySpeciesConstructor = arraySpeciesConstructor$1; // `ArraySpeciesCreate` abstract operation // https://tc39.es/ecma262/#sec-arrayspeciescreate var arraySpeciesCreate$1 = function (originalArray, length) { return new (arraySpeciesConstructor(originalArray))(length === 0 ? 
0 : length); }; var fails$2 = fails$c; var wellKnownSymbol$1 = wellKnownSymbol$6; var V8_VERSION$1 = engineV8Version; var SPECIES = wellKnownSymbol$1('species'); var arrayMethodHasSpeciesSupport$1 = function (METHOD_NAME) { // We can't use this feature detection in V8 since it causes // deoptimization and serious performance degradation // https://github.com/zloirock/core-js/issues/677 return V8_VERSION$1 >= 51 || !fails$2(function () { var array = []; var constructor = array.constructor = {}; constructor[SPECIES] = function () { return { foo: 1 }; }; return array[METHOD_NAME](Boolean).foo !== 1; }); }; var $$1 = _export; var fails$1 = fails$c; var isArray = isArray$2; var isObject = isObject$7; var toObject$1 = toObject$3; var lengthOfArrayLike = lengthOfArrayLike$2; var doesNotExceedSafeInteger = doesNotExceedSafeInteger$1; var createProperty = createProperty$1; var arraySpeciesCreate = arraySpeciesCreate$1; var arrayMethodHasSpeciesSupport = arrayMethodHasSpeciesSupport$1; var wellKnownSymbol = wellKnownSymbol$6; var V8_VERSION = engineV8Version; var IS_CONCAT_SPREADABLE = wellKnownSymbol('isConcatSpreadable'); // We can't use this feature detection in V8 since it causes // deoptimization and serious performance degradation // https://github.com/zloirock/core-js/issues/679 var IS_CONCAT_SPREADABLE_SUPPORT = V8_VERSION >= 51 || !fails$1(function () { var array = []; array[IS_CONCAT_SPREADABLE] = false; return array.concat()[0] !== array; }); var isConcatSpreadable = function (O) { if (!isObject(O)) return false; var spreadable = O[IS_CONCAT_SPREADABLE]; return spreadable !== undefined ? !!spreadable : isArray(O); }; var FORCED = !IS_CONCAT_SPREADABLE_SUPPORT || !arrayMethodHasSpeciesSupport('concat'); // `Array.prototype.concat` method // https://tc39.es/ecma262/#sec-array.prototype.concat // with adding support of @@isConcatSpreadable and @@species $$1({ target: 'Array', proto: true, arity: 1, forced: FORCED }, { // eslint-disable-next-line no-unused-vars -- required for `.length` concat: function concat(arg) { var O = toObject$1(this); var A = arraySpeciesCreate(O, 0); var n = 0; var i, k, length, len, E; for (i = -1, length = arguments.length; i < length; i++) { E = i === -1 ? 
O : arguments[i]; if (isConcatSpreadable(E)) { len = lengthOfArrayLike(E); doesNotExceedSafeInteger(n + len); for (k = 0; k < len; k++, n++) if (k in E) createProperty(A, n, E[k]); } else { doesNotExceedSafeInteger(n + 1); createProperty(A, n++, E); } } A.length = n; return A; } }); var internalObjectKeys = objectKeysInternal; var enumBugKeys = enumBugKeys$2; // `Object.keys` method // https://tc39.es/ecma262/#sec-object.keys // eslint-disable-next-line es/no-object-keys -- safe var objectKeys$1 = Object.keys || function keys(O) { return internalObjectKeys(O, enumBugKeys); }; var DESCRIPTORS = descriptors; var uncurryThis = functionUncurryThis; var call = functionCall; var fails = fails$c; var objectKeys = objectKeys$1; var getOwnPropertySymbolsModule = objectGetOwnPropertySymbols; var propertyIsEnumerableModule = objectPropertyIsEnumerable; var toObject = toObject$3; var IndexedObject = indexedObject; // eslint-disable-next-line es/no-object-assign -- safe var $assign = Object.assign; // eslint-disable-next-line es/no-object-defineproperty -- required for testing var defineProperty = Object.defineProperty; var concat = uncurryThis([].concat); // `Object.assign` method // https://tc39.es/ecma262/#sec-object.assign var objectAssign = !$assign || fails(function () { // should have correct order of operations (Edge bug) if (DESCRIPTORS && $assign({ b: 1 }, $assign(defineProperty({}, 'a', { enumerable: true, get: function () { defineProperty(this, 'b', { value: 3, enumerable: false }); } }), { b: 2 })).b !== 1) return true; // should work with symbols and should have deterministic property order (V8 bug) var A = {}; var B = {}; // eslint-disable-next-line es/no-symbol -- safe var symbol = Symbol(); var alphabet = 'abcdefghijklmnopqrst'; A[symbol] = 7; alphabet.split('').forEach(function (chr) { B[chr] = chr; }); return $assign({}, A)[symbol] != 7 || objectKeys($assign({}, B)).join('') != alphabet; }) ? function assign(target, source) { // eslint-disable-line no-unused-vars -- required for `.length` var T = toObject(target); var argumentsLength = arguments.length; var index = 1; var getOwnPropertySymbols = getOwnPropertySymbolsModule.f; var propertyIsEnumerable = propertyIsEnumerableModule.f; while (argumentsLength > index) { var S = IndexedObject(arguments[index++]); var keys = getOwnPropertySymbols ? 
concat(objectKeys(S), getOwnPropertySymbols(S)) : objectKeys(S); var length = keys.length; var j = 0; var key; while (length > j) { key = keys[j++]; if (!DESCRIPTORS || call(propertyIsEnumerable, S, key)) T[key] = S[key]; } } return T; } : $assign; var $ = _export; var assign = objectAssign; // `Object.assign` method // https://tc39.es/ecma262/#sec-object.assign // eslint-disable-next-line es/no-object-assign -- required for testing $({ target: 'Object', stat: true, arity: 2, forced: Object.assign !== assign }, { assign: assign }); /** * Bootstrap Table Greek translation * Author: giannisdallas */ $$2.fn.bootstrapTable.locales['el-GR'] = $$2.fn.bootstrapTable.locales['el'] = { formatCopyRows: function formatCopyRows() { return 'Copy Rows'; }, formatPrint: function formatPrint() { return 'Print'; }, formatLoadingMessage: function formatLoadingMessage() { return 'Φορτώνει, παρακαλώ περιμένετε'; }, formatRecordsPerPage: function formatRecordsPerPage(pageNumber) { return "".concat(pageNumber, " \u03B1\u03C0\u03BF\u03C4\u03B5\u03BB\u03AD\u03C3\u03BC\u03B1\u03C4\u03B1 \u03B1\u03BD\u03AC \u03C3\u03B5\u03BB\u03AF\u03B4\u03B1"); }, formatShowingRows: function formatShowingRows(pageFrom, pageTo, totalRows, totalNotFiltered) { if (totalNotFiltered !== undefined && totalNotFiltered > 0 && totalNotFiltered > totalRows) { return "\u0395\u03BC\u03C6\u03B1\u03BD\u03AF\u03B6\u03BF\u03BD\u03C4\u03B1\u03B9 \u03B1\u03C0\u03CC \u03C4\u03B7\u03BD ".concat(pageFrom, " \u03C9\u03C2 \u03C4\u03B7\u03BD ").concat(pageTo, " \u03B1\u03C0\u03CC \u03C3\u03CD\u03BD\u03BF\u03BB\u03BF ").concat(totalRows, " \u03C3\u03B5\u03B9\u03C1\u03CE\u03BD (filtered from ").concat(totalNotFiltered, " total rows)"); } return "\u0395\u03BC\u03C6\u03B1\u03BD\u03AF\u03B6\u03BF\u03BD\u03C4\u03B1\u03B9 \u03B1\u03C0\u03CC \u03C4\u03B7\u03BD ".concat(pageFrom, " \u03C9\u03C2 \u03C4\u03B7\u03BD ").concat(pageTo, " \u03B1\u03C0\u03CC \u03C3\u03CD\u03BD\u03BF\u03BB\u03BF ").concat(totalRows, " \u03C3\u03B5\u03B9\u03C1\u03CE\u03BD"); }, formatSRPaginationPreText: function formatSRPaginationPreText() { return 'previous page'; }, formatSRPaginationPageText: function formatSRPaginationPageText(page) { return "to page ".concat(page); }, formatSRPaginationNextText: function formatSRPaginationNextText() { return 'next page'; }, formatDetailPagination: function formatDetailPagination(totalRows) { return "Showing ".concat(totalRows, " rows"); }, formatClearSearch: function formatClearSearch() { return 'Clear Search'; }, formatSearch: function formatSearch() { return 'Αναζητήστε'; }, formatNoMatches: function formatNoMatches() { return 'Δεν βρέθηκαν αποτελέσματα'; }, formatPaginationSwitch: function formatPaginationSwitch() { return 'Hide/Show pagination'; }, formatPaginationSwitchDown: function formatPaginationSwitchDown() { return 'Show pagination'; }, formatPaginationSwitchUp: function formatPaginationSwitchUp() { return 'Hide pagination'; }, formatRefresh: function formatRefresh() { return 'Refresh'; }, formatToggleOn: function formatToggleOn() { return 'Show card view'; }, formatToggleOff: function formatToggleOff() { return 'Hide card view'; }, formatColumns: function formatColumns() { return 'Columns'; }, formatColumnsToggleAll: function formatColumnsToggleAll() { return 'Toggle all'; }, formatFullscreen: function formatFullscreen() { return 'Fullscreen'; }, formatAllRows: function formatAllRows() { return 'All'; }, formatAutoRefresh: function formatAutoRefresh() { return 'Auto Refresh'; }, formatExport: function formatExport() { return 'Export 
data'; }, formatJumpTo: function formatJumpTo() { return 'GO'; }, formatAdvancedSearch: function formatAdvancedSearch() { return 'Advanced search'; }, formatAdvancedCloseButton: function formatAdvancedCloseButton() { return 'Close'; }, formatFilterControlSwitch: function formatFilterControlSwitch() { return 'Hide/Show controls'; }, formatFilterControlSwitchHide: function formatFilterControlSwitchHide() { return 'Hide controls'; }, formatFilterControlSwitchShow: function formatFilterControlSwitchShow() { return 'Show controls'; } }; Object.assign($$2.fn.bootstrapTable.defaults, $$2.fn.bootstrapTable.locales['el-GR']); }));
PypiClean
/sheetfu-1.6.1.tar.gz/sheetfu-1.6.1/README.rst
Sheetfu
=======

.. image:: https://travis-ci.org/socialpoint-labs/sheetfu.svg?branch=master
    :target: https://travis-ci.org/socialpoint-labs/sheetfu

Sheetfu was built to interact with Google Sheets through a simple, intuitive, and fast API. The
primary goal of this library is to adapt the Google Apps Script API for spreadsheets to Python.
With Sheetfu, you can easily get or set cell values, background colors, font colors or any other
cell attributes.

Installing
----------

Install and update using `pip`_:

.. code-block:: text

    pip install -U Sheetfu

A Simple Example
----------------

.. code-block:: python

    from sheetfu import SpreadsheetApp

    sa = SpreadsheetApp('path/to/secret.json')
    spreadsheet = sa.open_by_id('<insert spreadsheet id here>')
    sheet = spreadsheet.get_sheet_by_name('Sheet1')
    data_range = sheet.get_data_range()         # returns the sheet range that contains data values

    # this is how you get things
    values = data_range.get_values()            # returns a 2D matrix of values
    backgrounds = data_range.get_backgrounds()  # returns a 2D matrix of background colors in hex format

    # this is how you set things
    data_range.set_background('#000000')        # sets every cell background to black
    data_range.set_font_color('#ffffff')        # sets every cell font color to white

You can also create your SpreadsheetApp object with environment variables instead of the
`secrets.json` file. You can refer to `the authentication tutorial`_ for more info.

.. _the authentication tutorial: https://github.com/socialpoint-labs/sheetfu/blob/master/documentation/authentication.rst

Please read the `sheetfu API documentation`_ for a more detailed description.

.. _sheetfu API documentation: https://github.com/socialpoint-labs/sheetfu/blob/master/documentation/usage.rst

The Table module
----------------

Sheetfu also contains a Table module that completely abstracts the coordinate system behind an
ORM-like syntax. The example below is for a sheet with the 3 columns 'name', 'surname' and 'age'.

.. code-block:: python

    from sheetfu import Table

    spreadsheet = SpreadsheetApp('path/to/secret.json').open_by_id('<insert spreadsheet id here>')
    data_range = spreadsheet.get_sheet_by_name('people').get_data_range()
    table = Table(data_range, backgrounds=True)

    for item in table:
        if item.get_field_value('name') == 'foo':
            item.set_field_value('surname', 'bar')       # this sets the surname field value

        age = item.get_field_value('age')
        item.set_field_value('age', age + 1)
        item.set_field_background('age', '#ff0000')      # this sets the 'age' field to red

    # Every set method is batched for speed.
    # To send the batch update of every set request you made,
    # you need to commit the table object as follows.
    table.commit()

You can refer to the `Table API documentation`_ for a more detailed description.

.. _Table API documentation: https://github.com/socialpoint-labs/sheetfu/blob/master/documentation/table.rst

Casting
-------

An effort has been made to shape Sheetfu into a Google Sheets ORM, where any value found in a
spreadsheet is cast to a matching Python object. Since version 1.5.7, Sheetfu returns `DATE` and
`DATE_TIME` cells as Python `datetime` objects. Similarly, setting a cell with a `datetime` object
will do the necessary parsing and casting to reflect those cells as `DATE_TIME` in the sheet.

.. code-block:: python

    from sheetfu import SpreadsheetApp

    sa = SpreadsheetApp('path/to/secret.json')
    spreadsheet = sa.open_by_id('<insert spreadsheet id here>')
    sheet = spreadsheet.get_sheet_by_name('Sheet1')

    # Assuming the cells are in DATE or DATE_TIME format.
    cells_with_dates = sheet.get_range_from_a1("A1:A2")
    print(cells_with_dates.get_values())
    # [
    #     [datetime.datetime(2021, 11, 26, 16, 58, 37, 737940)],
    #     [datetime.datetime(2021, 11, 26, 16, 58, 37, 737940)]
    # ]

This means we can use Python datetime operations in our code very effectively.

.. code-block:: python

    from sheetfu import SpreadsheetApp
    from datetime import datetime

    sa = SpreadsheetApp('path/to/secret.json')
    spreadsheet = sa.open_by_id('<insert spreadsheet id here>')
    sheet = spreadsheet.get_sheet_by_name('Sheet1')

    a1 = sheet.get_range_from_a1("A1")
    # The following will set today's date in the cell, in the right Google Sheets format.
    a1.set_value(datetime.today())
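Combined with the Table module above, this makes date-based row processing straightforward. The
following is a minimal illustrative sketch, not part of the official examples: the sheet name
``tasks`` and the ``due_date``/``status`` columns are hypothetical.

.. code-block:: python

    from datetime import datetime
    from sheetfu import SpreadsheetApp, Table

    data_range = SpreadsheetApp('path/to/secret.json') \
        .open_by_id('<insert spreadsheet id here>') \
        .get_sheet_by_name('tasks') \
        .get_data_range()
    table = Table(data_range)

    for item in table:
        due_date = item.get_field_value('due_date')    # a datetime object for DATE/DATE_TIME cells
        if due_date is not None and due_date < datetime.now():
            item.set_field_value('status', 'overdue')  # flag the row as overdue

    table.commit()  # send all batched updates in one go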
Contributing
------------

For guidance on how to make a contribution to Sheetfu, see the `contributing guidelines`_.

.. _contributing guidelines: https://github.com/socialpoint-labs/sheetfu/blob/master/CONTRIBUTING.rst

Links
-----

* License: `MIT <https://github.com/socialpoint-labs/sheetfu/blob/master/LICENSE>`_
* Releases: https://pypi.org/project/sheetfu/
* Code: https://github.com/socialpoint-labs/sheetfu
* Issue tracker: https://github.com/socialpoint-labs/sheetfu/issues

.. _pip: https://pip.pypa.io/en/stable/quickstart/

If you are looking for the original Sheetfu Google Apps Script library, it has been relocated to `this page`_.

.. _this page: https://github.com/socialpoint-labs/sheetfu-apps-script
PypiClean
/Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/lib/scons-3.1.2/SCons/Subst.py
__revision__ = "src/engine/SCons/Subst.py bee7caf9defd6e108fc2998a2520ddb36a967691 2019-12-17 02:07:09 bdeegan" import collections import re import SCons.Errors from SCons.Util import is_String, is_Sequence # Indexed by the SUBST_* constants below. _strconv = [SCons.Util.to_String_for_subst, SCons.Util.to_String_for_subst, SCons.Util.to_String_for_signature] AllowableExceptions = (IndexError, NameError) def SetAllowableExceptions(*excepts): global AllowableExceptions AllowableExceptions = [_f for _f in excepts if _f] def raise_exception(exception, target, s): name = exception.__class__.__name__ msg = "%s `%s' trying to evaluate `%s'" % (name, exception, s) if target: raise SCons.Errors.BuildError(target[0], msg) else: raise SCons.Errors.UserError(msg) class Literal(object): """A wrapper for a string. If you use this object wrapped around a string, then it will be interpreted as literal. When passed to the command interpreter, all special characters will be escaped.""" def __init__(self, lstr): self.lstr = lstr def __str__(self): return self.lstr def escape(self, escape_func): return escape_func(self.lstr) def for_signature(self): return self.lstr def is_literal(self): return 1 def __eq__(self, other): if not isinstance(other, Literal): return False return self.lstr == other.lstr def __neq__(self, other): return not self.__eq__(other) def __hash__(self): return hash(self.lstr) class SpecialAttrWrapper(object): """This is a wrapper for what we call a 'Node special attribute.' This is any of the attributes of a Node that we can reference from Environment variable substitution, such as $TARGET.abspath or $SOURCES[1].filebase. We implement the same methods as Literal so we can handle special characters, plus a for_signature method, such that we can return some canonical string during signature calculation to avoid unnecessary rebuilds.""" def __init__(self, lstr, for_signature=None): """The for_signature parameter, if supplied, will be the canonical string we return from for_signature(). Else we will simply return lstr.""" self.lstr = lstr if for_signature: self.forsig = for_signature else: self.forsig = lstr def __str__(self): return self.lstr def escape(self, escape_func): return escape_func(self.lstr) def for_signature(self): return self.forsig def is_literal(self): return 1 def quote_spaces(arg): """Generic function for putting double quotes around any string that has white space in it.""" if ' ' in arg or '\t' in arg: return '"%s"' % arg else: return str(arg) class CmdStringHolder(collections.UserString): """This is a special class used to hold strings generated by scons_subst() and scons_subst_list(). It defines a special method escape(). When passed a function with an escape algorithm for a particular platform, it will return the contained string with the proper escape sequences inserted. """ def __init__(self, cmd, literal=None): collections.UserString.__init__(self, cmd) self.literal = literal def is_literal(self): return self.literal def escape(self, escape_func, quote_func=quote_spaces): """Escape the string with the supplied function. The function is expected to take an arbitrary string, then return it with all special characters escaped and ready for passing to the command interpreter. After calling this function, the next call to str() will return the escaped string. 
""" if self.is_literal(): return escape_func(self.data) elif ' ' in self.data or '\t' in self.data: return quote_func(self.data) else: return self.data def escape_list(mylist, escape_func): """Escape a list of arguments by running the specified escape_func on every object in the list that has an escape() method.""" def escape(obj, escape_func=escape_func): try: e = obj.escape except AttributeError: return obj else: return e(escape_func) return list(map(escape, mylist)) class NLWrapper(object): """A wrapper class that delays turning a list of sources or targets into a NodeList until it's needed. The specified function supplied when the object is initialized is responsible for turning raw nodes into proxies that implement the special attributes like .abspath, .source, etc. This way, we avoid creating those proxies just "in case" someone is going to use $TARGET or the like, and only go through the trouble if we really have to. In practice, this might be a wash performance-wise, but it's a little cleaner conceptually... """ def __init__(self, list, func): self.list = list self.func = func def _return_nodelist(self): return self.nodelist def _gen_nodelist(self): mylist = self.list if mylist is None: mylist = [] elif not is_Sequence(mylist): mylist = [mylist] # The map(self.func) call is what actually turns # a list into appropriate proxies. self.nodelist = SCons.Util.NodeList(list(map(self.func, mylist))) self._create_nodelist = self._return_nodelist return self.nodelist _create_nodelist = _gen_nodelist class Targets_or_Sources(collections.UserList): """A class that implements $TARGETS or $SOURCES expansions by in turn wrapping a NLWrapper. This class handles the different methods used to access the list, calling the NLWrapper to create proxies on demand. Note that we subclass collections.UserList purely so that the is_Sequence() function will identify an object of this class as a list during variable expansion. We're not really using any collections.UserList methods in practice. """ def __init__(self, nl): self.nl = nl def __getattr__(self, attr): nl = self.nl._create_nodelist() return getattr(nl, attr) def __getitem__(self, i): nl = self.nl._create_nodelist() return nl[i] def __getslice__(self, i, j): nl = self.nl._create_nodelist() i = max(i, 0); j = max(j, 0) return nl[i:j] def __str__(self): nl = self.nl._create_nodelist() return str(nl) def __repr__(self): nl = self.nl._create_nodelist() return repr(nl) class Target_or_Source(object): """A class that implements $TARGET or $SOURCE expansions by in turn wrapping a NLWrapper. This class handles the different methods used to access an individual proxy Node, calling the NLWrapper to create a proxy on demand. """ def __init__(self, nl): self.nl = nl def __getattr__(self, attr): nl = self.nl._create_nodelist() try: nl0 = nl[0] except IndexError: # If there is nothing in the list, then we have no attributes to # pass through, so raise AttributeError for everything. raise AttributeError("NodeList has no attribute: %s" % attr) return getattr(nl0, attr) def __str__(self): nl = self.nl._create_nodelist() if nl: return str(nl[0]) return '' def __repr__(self): nl = self.nl._create_nodelist() if nl: return repr(nl[0]) return '' class NullNodeList(SCons.Util.NullSeq): def __call__(self, *args, **kwargs): return '' def __str__(self): return '' NullNodesList = NullNodeList() def subst_dict(target, source): """Create a dictionary for substitution of special construction variables. 
This translates the following special arguments: target - the target (object or array of objects), used to generate the TARGET and TARGETS construction variables source - the source (object or array of objects), used to generate the SOURCES and SOURCE construction variables """ dict = {} if target: def get_tgt_subst_proxy(thing): try: subst_proxy = thing.get_subst_proxy() except AttributeError: subst_proxy = thing # probably a string, just return it return subst_proxy tnl = NLWrapper(target, get_tgt_subst_proxy) dict['TARGETS'] = Targets_or_Sources(tnl) dict['TARGET'] = Target_or_Source(tnl) # This is a total cheat, but hopefully this dictionary goes # away soon anyway. We just let these expand to $TARGETS # because that's "good enough" for the use of ToolSurrogates # (see test/ToolSurrogate.py) to generate documentation. dict['CHANGED_TARGETS'] = '$TARGETS' dict['UNCHANGED_TARGETS'] = '$TARGETS' else: dict['TARGETS'] = NullNodesList dict['TARGET'] = NullNodesList if source: def get_src_subst_proxy(node): try: rfile = node.rfile except AttributeError: pass else: node = rfile() try: return node.get_subst_proxy() except AttributeError: return node # probably a String, just return it snl = NLWrapper(source, get_src_subst_proxy) dict['SOURCES'] = Targets_or_Sources(snl) dict['SOURCE'] = Target_or_Source(snl) # This is a total cheat, but hopefully this dictionary goes # away soon anyway. We just let these expand to $TARGETS # because that's "good enough" for the use of ToolSurrogates # (see test/ToolSurrogate.py) to generate documentation. dict['CHANGED_SOURCES'] = '$SOURCES' dict['UNCHANGED_SOURCES'] = '$SOURCES' else: dict['SOURCES'] = NullNodesList dict['SOURCE'] = NullNodesList return dict # Constants for the "mode" parameter to scons_subst_list() and # scons_subst(). SUBST_RAW gives the raw command line. SUBST_CMD # gives a command line suitable for passing to a shell. SUBST_SIG # gives a command line appropriate for calculating the signature # of a command line...if this changes, we should rebuild. SUBST_CMD = 0 SUBST_RAW = 1 SUBST_SIG = 2 _rm = re.compile(r'\$[()]') # Note the pattern below only matches $( or $) when there is no # preceeding $. (Thus the (?<!\$)) _rm_split = re.compile(r'(?<!\$)(\$[()])') # Indexed by the SUBST_* constants above. _regex_remove = [ _rm, None, _rm_split ] def _rm_list(list): return [l for l in list if l not in ('$(', '$)')] def _remove_list(list): result = [] depth = 0 for l in list: if l == '$(': depth += 1 elif l == '$)': depth -= 1 if depth < 0: break elif depth == 0: result.append(l) if depth != 0: return None return result # Indexed by the SUBST_* constants above. _list_remove = [ _rm_list, None, _remove_list ] # Regular expressions for splitting strings and handling substitutions, # for use by the scons_subst() and scons_subst_list() functions: # # The first expression compiled matches all of the $-introduced tokens # that we need to process in some way, and is used for substitutions. 
# The expressions it matches are: # # "$$" # "$(" # "$)" # "$variable" [must begin with alphabetic or underscore] # "${any stuff}" # # The second expression compiled is used for splitting strings into tokens # to be processed, and it matches all of the tokens listed above, plus # the following that affect how arguments do or don't get joined together: # # " " [white space] # "non-white-space" [without any dollar signs] # "$" [single dollar sign] # _dollar_exps_str = r'\$[\$\(\)]|\$[_a-zA-Z][\.\w]*|\${[^}]*}' _dollar_exps = re.compile(r'(%s)' % _dollar_exps_str) _separate_args = re.compile(r'(%s|\s+|[^\s\$]+|\$)' % _dollar_exps_str) # This regular expression is used to replace strings of multiple white # space characters in the string result from the scons_subst() function. _space_sep = re.compile(r'[\t ]+(?![^{]*})') def scons_subst(strSubst, env, mode=SUBST_RAW, target=None, source=None, gvars={}, lvars={}, conv=None): """Expand a string or list containing construction variable substitutions. This is the work-horse function for substitutions in file names and the like. The companion scons_subst_list() function (below) handles separating command lines into lists of arguments, so see that function if that's what you're looking for. """ if (isinstance(strSubst, str) and '$' not in strSubst) or isinstance(strSubst, CmdStringHolder): return strSubst class StringSubber(object): """A class to construct the results of a scons_subst() call. This binds a specific construction environment, mode, target and source with two methods (substitute() and expand()) that handle the expansion. """ def __init__(self, env, mode, conv, gvars): self.env = env self.mode = mode self.conv = conv self.gvars = gvars def expand(self, s, lvars): """Expand a single "token" as necessary, returning an appropriate string containing the expansion. This handles expanding different types of things (strings, lists, callables) appropriately. It calls the wrapper substitute() method to re-expand things as necessary, so that the results of expansions of side-by-side strings still get re-evaluated separately, not smushed together. """ if is_String(s): try: s0, s1 = s[:2] except (IndexError, ValueError): return s if s0 != '$': return s if s1 == '$': # In this case keep the double $'s which we'll later # swap for a single dollar sign as we need to retain # this information to properly avoid matching "$("" when # the actual text was "$$("" (or "$)"" when "$$)"" ) return '$$' elif s1 in '()': return s else: key = s[1:] if key[0] == '{' or '.' in key: if key[0] == '{': key = key[1:-1] try: s = eval(key, self.gvars, lvars) except KeyboardInterrupt: raise except Exception as e: if e.__class__ in AllowableExceptions: return '' raise_exception(e, lvars['TARGETS'], s) else: if key in lvars: s = lvars[key] elif key in self.gvars: s = self.gvars[key] elif NameError not in AllowableExceptions: raise_exception(NameError(key), lvars['TARGETS'], s) else: return '' # Before re-expanding the result, handle # recursive expansion by copying the local # variable dictionary and overwriting a null # string for the value of the variable name # we just expanded. # # This could potentially be optimized by only # copying lvars when s contains more expansions, # but lvars is usually supposed to be pretty # small, and deeply nested variable expansions # are probably more the exception than the norm, # so it should be tolerable for now. 
lv = lvars.copy() var = key.split('.')[0] lv[var] = '' return self.substitute(s, lv) elif is_Sequence(s): def func(l, conv=self.conv, substitute=self.substitute, lvars=lvars): return conv(substitute(l, lvars)) return list(map(func, s)) elif callable(s): try: s = s(target=lvars['TARGETS'], source=lvars['SOURCES'], env=self.env, for_signature=(self.mode != SUBST_CMD)) except TypeError: # This probably indicates that it's a callable # object that doesn't match our calling arguments # (like an Action). if self.mode == SUBST_RAW: return s s = self.conv(s) return self.substitute(s, lvars) elif s is None: return '' else: return s def substitute(self, args, lvars): """Substitute expansions in an argument or list of arguments. This serves as a wrapper for splitting up a string into separate tokens. """ if is_String(args) and not isinstance(args, CmdStringHolder): args = str(args) # In case it's a UserString. try: def sub_match(match): return self.conv(self.expand(match.group(1), lvars)) result = _dollar_exps.sub(sub_match, args) except TypeError: # If the internal conversion routine doesn't return # strings (it could be overridden to return Nodes, for # example), then the 1.5.2 re module will throw this # exception. Back off to a slower, general-purpose # algorithm that works for all data types. args = _separate_args.findall(args) result = [] for a in args: result.append(self.conv(self.expand(a, lvars))) if len(result) == 1: result = result[0] else: result = ''.join(map(str, result)) return result else: return self.expand(args, lvars) if conv is None: conv = _strconv[mode] # Doing this every time is a bit of a waste, since the Executor # has typically already populated the OverrideEnvironment with # $TARGET/$SOURCE variables. We're keeping this (for now), though, # because it supports existing behavior that allows us to call # an Action directly with an arbitrary target+source pair, which # we use in Tool/tex.py to handle calling $BIBTEX when necessary. # If we dropped that behavior (or found another way to cover it), # we could get rid of this call completely and just rely on the # Executor setting the variables. if 'TARGET' not in lvars: d = subst_dict(target, source) if d: lvars = lvars.copy() lvars.update(d) # We're (most likely) going to eval() things. If Python doesn't # find a __builtins__ value in the global dictionary used for eval(), # it copies the current global values for you. Avoid this by # setting it explicitly and then deleting, so we don't pollute the # construction environment Dictionary(ies) that are typically used # for expansion. gvars['__builtins__'] = __builtins__ ss = StringSubber(env, mode, conv, gvars) result = ss.substitute(strSubst, lvars) try: del gvars['__builtins__'] except KeyError: pass res = result if is_String(result): # Remove $(-$) pairs and any stuff in between, # if that's appropriate. remove = _regex_remove[mode] if remove: if mode == SUBST_SIG: result = _list_remove[mode](remove.split(result)) if result is None: raise SCons.Errors.UserError("Unbalanced $(/$) in: " + res) result = ' '.join(result) else: result = remove.sub('', result) if mode != SUBST_RAW: # Compress strings of white space characters into # a single space. 
result = _space_sep.sub(' ', result).strip() # Now replace escaped $'s currently "$$" # This is needed because we now retain $$ instead of # replacing them during substition to avoid # improperly trying to escape "$$(" as being "$(" result = result.replace('$$','$') elif is_Sequence(result): remove = _list_remove[mode] if remove: result = remove(result) if result is None: raise SCons.Errors.UserError("Unbalanced $(/$) in: " + str(res)) return result def scons_subst_list(strSubst, env, mode=SUBST_RAW, target=None, source=None, gvars={}, lvars={}, conv=None): """Substitute construction variables in a string (or list or other object) and separate the arguments into a command list. The companion scons_subst() function (above) handles basic substitutions within strings, so see that function instead if that's what you're looking for. """ class ListSubber(collections.UserList): """A class to construct the results of a scons_subst_list() call. Like StringSubber, this class binds a specific construction environment, mode, target and source with two methods (substitute() and expand()) that handle the expansion. In addition, however, this class is used to track the state of the result(s) we're gathering so we can do the appropriate thing whenever we have to append another word to the result--start a new line, start a new word, append to the current word, etc. We do this by setting the "append" attribute to the right method so that our wrapper methods only need ever call ListSubber.append(), and the rest of the object takes care of doing the right thing internally. """ def __init__(self, env, mode, conv, gvars): collections.UserList.__init__(self, []) self.env = env self.mode = mode self.conv = conv self.gvars = gvars if self.mode == SUBST_RAW: self.add_strip = lambda x: self.append(x) else: self.add_strip = lambda x: None self.in_strip = None self.next_line() def expand(self, s, lvars, within_list): """Expand a single "token" as necessary, appending the expansion to the current result. This handles expanding different types of things (strings, lists, callables) appropriately. It calls the wrapper substitute() method to re-expand things as necessary, so that the results of expansions of side-by-side strings still get re-evaluated separately, not smushed together. """ if is_String(s): try: s0, s1 = s[:2] except (IndexError, ValueError): self.append(s) return if s0 != '$': self.append(s) return if s1 == '$': self.append('$') elif s1 == '(': self.open_strip('$(') elif s1 == ')': self.close_strip('$)') else: key = s[1:] if key[0] == '{' or key.find('.') >= 0: if key[0] == '{': key = key[1:-1] try: s = eval(key, self.gvars, lvars) except KeyboardInterrupt: raise except Exception as e: if e.__class__ in AllowableExceptions: return raise_exception(e, lvars['TARGETS'], s) else: if key in lvars: s = lvars[key] elif key in self.gvars: s = self.gvars[key] elif NameError not in AllowableExceptions: raise_exception(NameError(), lvars['TARGETS'], s) else: return # Before re-expanding the result, handle # recursive expansion by copying the local # variable dictionary and overwriting a null # string for the value of the variable name # we just expanded. 
lv = lvars.copy() var = key.split('.')[0] lv[var] = '' self.substitute(s, lv, 0) self.this_word() elif is_Sequence(s): for a in s: self.substitute(a, lvars, 1) self.next_word() elif callable(s): try: s = s(target=lvars['TARGETS'], source=lvars['SOURCES'], env=self.env, for_signature=(self.mode != SUBST_CMD)) except TypeError: # This probably indicates that it's a callable # object that doesn't match our calling arguments # (like an Action). if self.mode == SUBST_RAW: self.append(s) return s = self.conv(s) self.substitute(s, lvars, within_list) elif s is None: self.this_word() else: self.append(s) def substitute(self, args, lvars, within_list): """Substitute expansions in an argument or list of arguments. This serves as a wrapper for splitting up a string into separate tokens. """ if is_String(args) and not isinstance(args, CmdStringHolder): args = str(args) # In case it's a UserString. args = _separate_args.findall(args) for a in args: if a[0] in ' \t\n\r\f\v': if '\n' in a: self.next_line() elif within_list: self.append(a) else: self.next_word() else: self.expand(a, lvars, within_list) else: self.expand(args, lvars, within_list) def next_line(self): """Arrange for the next word to start a new line. This is like starting a new word, except that we have to append another line to the result.""" collections.UserList.append(self, []) self.next_word() def this_word(self): """Arrange for the next word to append to the end of the current last word in the result.""" self.append = self.add_to_current_word def next_word(self): """Arrange for the next word to start a new word.""" self.append = self.add_new_word def add_to_current_word(self, x): """Append the string x to the end of the current last word in the result. If that is not possible, then just add it as a new word. Make sure the entire concatenated string inherits the object attributes of x (in particular, the escape function) by wrapping it as CmdStringHolder.""" if not self.in_strip or self.mode != SUBST_SIG: try: current_word = self[-1][-1] except IndexError: self.add_new_word(x) else: # All right, this is a hack and it should probably # be refactored out of existence in the future. # The issue is that we want to smoosh words together # and make one file name that gets escaped if # we're expanding something like foo$EXTENSION, # but we don't want to smoosh them together if # it's something like >$TARGET, because then we'll # treat the '>' like it's part of the file name. # So for now, just hard-code looking for the special # command-line redirection characters... try: last_char = str(current_word)[-1] except IndexError: last_char = '\0' if last_char in '<>|': self.add_new_word(x) else: y = current_word + x # We used to treat a word appended to a literal # as a literal itself, but this caused problems # with interpreting quotes around space-separated # targets on command lines. Removing this makes # none of the "substantive" end-to-end tests fail, # so we'll take this out but leave it commented # for now in case there's a problem not covered # by the test cases and we need to resurrect this. 
#literal1 = self.literal(self[-1][-1]) #literal2 = self.literal(x) y = self.conv(y) if is_String(y): #y = CmdStringHolder(y, literal1 or literal2) y = CmdStringHolder(y, None) self[-1][-1] = y def add_new_word(self, x): if not self.in_strip or self.mode != SUBST_SIG: literal = self.literal(x) x = self.conv(x) if is_String(x): x = CmdStringHolder(x, literal) self[-1].append(x) self.append = self.add_to_current_word def literal(self, x): try: l = x.is_literal except AttributeError: return None else: return l() def open_strip(self, x): """Handle the "open strip" $( token.""" self.add_strip(x) self.in_strip = 1 def close_strip(self, x): """Handle the "close strip" $) token.""" self.add_strip(x) self.in_strip = None if conv is None: conv = _strconv[mode] # Doing this every time is a bit of a waste, since the Executor # has typically already populated the OverrideEnvironment with # $TARGET/$SOURCE variables. We're keeping this (for now), though, # because it supports existing behavior that allows us to call # an Action directly with an arbitrary target+source pair, which # we use in Tool/tex.py to handle calling $BIBTEX when necessary. # If we dropped that behavior (or found another way to cover it), # we could get rid of this call completely and just rely on the # Executor setting the variables. if 'TARGET' not in lvars: d = subst_dict(target, source) if d: lvars = lvars.copy() lvars.update(d) # We're (most likely) going to eval() things. If Python doesn't # find a __builtins__ value in the global dictionary used for eval(), # it copies the current global values for you. Avoid this by # setting it explicitly and then deleting, so we don't pollute the # construction environment Dictionary(ies) that are typically used # for expansion. gvars['__builtins__'] = __builtins__ ls = ListSubber(env, mode, conv, gvars) ls.substitute(strSubst, lvars, 0) try: del gvars['__builtins__'] except KeyError: pass return ls.data def scons_subst_once(strSubst, env, key): """Perform single (non-recursive) substitution of a single construction variable keyword. This is used when setting a variable when copying or overriding values in an Environment. We want to capture (expand) the old value before we override it, so people can do things like: env2 = env.Clone(CCFLAGS = '$CCFLAGS -g') We do this with some straightforward, brute-force code here... """ if isinstance(strSubst, str) and strSubst.find('$') < 0: return strSubst matchlist = ['$' + key, '${' + key + '}'] val = env.get(key, '') def sub_match(match, val=val, matchlist=matchlist): a = match.group(1) if a in matchlist: a = val if is_Sequence(a): return ' '.join(map(str, a)) else: return str(a) if is_Sequence(strSubst): result = [] for arg in strSubst: if is_String(arg): if arg in matchlist: arg = val if is_Sequence(arg): result.extend(arg) else: result.append(arg) else: result.append(_dollar_exps.sub(sub_match, arg)) else: result.append(arg) return result elif is_String(strSubst): return _dollar_exps.sub(sub_match, strSubst) else: return strSubst # Local Variables: # tab-width:4 # indent-tabs-mode:nil # End: # vim: set expandtab tabstop=4 shiftwidth=4:
PypiClean
/django-odoo-auth-1.0.4.1.tar.gz/django-odoo-auth-1.0.4.1/README.md
# django-odoo-auth

[![Upload Python Package](https://github.com/w0rng/django-odoo-auth/actions/workflows/python-publish.yml/badge.svg)](https://github.com/w0rng/django-odoo-auth/actions/workflows/python-publish.yml)

A custom Django authentication backend that authenticates users against an Odoo server.

## Quick start

1. Install the module: `pip install django-odoo-auth`

2. Add `odoo_auth` to your `INSTALLED_APPS` setting like this:

    ```python
    INSTALLED_APPS = [
        # ...
        'odoo_auth',
    ]
    ```

3. Add the backend to your `AUTHENTICATION_BACKENDS` setting like this:

    ```python
    AUTHENTICATION_BACKENDS = (
        # ...
        'odoo_auth.backends.OdooBackend',
    )
    ```

4. Edit the information needed to connect to your Odoo server in `settings.py`:

    ```python
    ODOO_SERVER_URL = 'https://example.com'
    ODOO_SERVER_PORT = 443  # this is optional
    ODOO_SERVER_DBNAME = 'database'
    ```

5. Run `python manage.py migrate` to create the odoo_auth models.

6. To authenticate through odoo_auth, use Django's standard `authenticate` function (a fuller login-view sketch follows below):

    ```python
    from django.contrib.auth import authenticate

    user = authenticate(username='username', password='password')
    ```
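Once the backend is configured, it plugs into the normal Django session machinery. The snippet below is only an illustrative sketch: the view name, URL wiring, and form field names are assumptions of this example, not part of the package.

```python
# Hypothetical login view built on top of the Odoo backend.
from django.contrib.auth import authenticate, login
from django.http import HttpResponse, HttpResponseForbidden
from django.views.decorators.http import require_POST


@require_POST
def odoo_login(request):
    # Credentials are checked against the configured Odoo server.
    user = authenticate(
        username=request.POST.get('username'),
        password=request.POST.get('password'),
    )
    if user is None:
        # The Odoo server rejected the credentials (or was unreachable).
        return HttpResponseForbidden('Invalid Odoo credentials')
    login(request, user)  # start a regular Django session for the Odoo-backed user
    return HttpResponse('Logged in as {}'.format(user.username))
```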
PypiClean
/msgraph-sdk-1.0.0a3.tar.gz/msgraph-sdk-1.0.0a3/msgraph/generated/teams/item/schedule/shifts/count/count_request_builder.py
from __future__ import annotations
from dataclasses import dataclass
from kiota_abstractions.get_path_parameters import get_path_parameters
from kiota_abstractions.method import Method
from kiota_abstractions.request_adapter import RequestAdapter
from kiota_abstractions.request_information import RequestInformation
from kiota_abstractions.request_option import RequestOption
from kiota_abstractions.response_handler import ResponseHandler
from kiota_abstractions.serialization import Parsable, ParsableFactory
from typing import Any, Callable, Dict, List, Optional, Union

from ......models.o_data_errors import o_data_error


class CountRequestBuilder():
    """
    Provides operations to count the resources in the collection.
    """
    def __init__(self, request_adapter: RequestAdapter, path_parameters: Optional[Union[Dict[str, Any], str]] = None) -> None:
        """
        Instantiates a new CountRequestBuilder and sets the default values.
        Args:
            pathParameters: The raw url or the Url template parameters for the request.
            requestAdapter: The request adapter to use to execute the requests.
        """
        if path_parameters is None:
            raise Exception("path_parameters cannot be undefined")
        if request_adapter is None:
            raise Exception("request_adapter cannot be undefined")
        # Url template to use to build the URL for the current request builder
        self.url_template: str = "{+baseurl}/teams/{team%2Did}/schedule/shifts/$count"

        url_tpl_params = get_path_parameters(path_parameters)
        self.path_parameters = url_tpl_params
        self.request_adapter = request_adapter

    def create_get_request_information(self, request_configuration: Optional[CountRequestBuilderGetRequestConfiguration] = None) -> RequestInformation:
        """
        Get the number of the resource
        Args:
            requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options.
        Returns: RequestInformation
        """
        request_info = RequestInformation()
        request_info.url_template = self.url_template
        request_info.path_parameters = self.path_parameters
        request_info.http_method = Method.GET
        request_info.headers["Accept"] = "text/plain"
        if request_configuration:
            request_info.add_request_headers(request_configuration.headers)
            request_info.add_request_options(request_configuration.options)
        return request_info

    async def get(self, request_configuration: Optional[CountRequestBuilderGetRequestConfiguration] = None, response_handler: Optional[ResponseHandler] = None) -> Optional[int]:
        """
        Get the number of the resource
        Args:
            requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options.
            responseHandler: Response handler to use in place of the default response handling provided by the core service
        Returns: Optional[int]
        """
        request_info = self.create_get_request_information(
            request_configuration
        )
        error_mapping: Dict[str, ParsableFactory] = {
            "4XX": o_data_error.ODataError,
            "5XX": o_data_error.ODataError,
        }
        if not self.request_adapter:
            raise Exception("Http core is null")
        return await self.request_adapter.send_primitive_async(request_info, "int", response_handler, error_mapping)

    @dataclass
    class CountRequestBuilderGetRequestConfiguration():
        """
        Configuration for the request such as headers, query parameters, and middleware options.
        """
        # Request headers
        headers: Optional[Dict[str, str]] = None

        # Request options
        options: Optional[List[RequestOption]] = None
PypiClean
/veikkaaja-0.1.3.tar.gz/veikkaaja-0.1.3/test/mock_client.py
import json
from pathlib import Path
from typing import Any, Dict, Union

import requests
import responses

from veikkaaja.endpoints import EndPoint
from veikkaaja.veikkaus_client import VeikkausClient


class MockClient(VeikkausClient):
    """Do not try to actually log in."""

    def __init__(self):  # pylint: disable=super-init-not-called
        """Replace the original initialization that requires account information
        to be given as arguments or as environment variables.

        Do not try to log in to the API.

        Register all saved API responses with 'responses' as available endpoints.
        """
        saved_responses = (Path(__file__).parent / 'api_responses').glob('*.json')

        # For each saved actual json response from the real API
        # register a callback with responses that would return
        # the same json content that the real query would have returned.
        for response in saved_responses:
            # TODO: We just assume that we do not POST
            # and GET same endpoints.
            responses.add(
                responses.POST,
                EndPoint.API_ENDPOINT + "/" + response.stem.replace('.', '/'),
                match_querystring=False,
                json=json.loads(response.read_text()))
            responses.add(
                responses.GET,
                EndPoint.API_ENDPOINT + "/" + response.stem.replace('.', '/'),
                match_querystring=False,
                json=json.loads(response.read_text()))

    @responses.activate
    def _access_endpoint(self, endpoint: EndPoint, payload: Dict[str, Any] = None, method="GET"):
        """
        Override the common entrypoint that sends out requests.

        Check if we have stored a correct response from the API for this request
        and return that instead of trying to actually query a response from the API.
        """
        # check if we have the corresponding request/response files available
        request_file = Path(__file__).parent / 'api_requests' / endpoint.endpoint.replace(
            '/', '.')
        if request_file.exists():
            print("Found target request for {}".format(endpoint.endpoint))
            print(request_file.read_text())
            # TODO: Found a saved request, compare this request to it

        # we have registered the response with 'responses'
        # Now we just go and get it
        if method == "GET":
            response = requests.get(
                endpoint.url, headers=self.API_HEADERS, params=payload)
        elif method == "POST":
            response = requests.post(endpoint.url, headers=self.API_HEADERS, json=payload)
        else:
            raise RuntimeError("Unsupported method {}".format(method))

        if response.status_code != 200:
            return None
        return response
PypiClean
/urnote-0.4-py3-none-any.whl/note/controller.py
import sys

import note
from note.infrastructure import config
from note.infrastructure.error import UserError
from note.utils.pattern import Singleton


class Controller(metaclass=Singleton):
    def __init__(self, view, logger, parser, get_runner, get_initializer,
                 get_purger, get_status_result_visitor,
                 get_commit_result_visitor):
        self.view = view
        self.logger = logger
        self.parser = parser

        self.get_runner = get_runner
        self.get_initializer = get_initializer
        self.get_purger = get_purger
        self.get_commit_result_visitor = get_commit_result_visitor
        self.get_status_result_visitor = get_status_result_visitor

    def run(self, args=None):
        try:
            self._run(args)
        except UserError as err:
            self.view.show_error(exc=err)
        except SystemExit:
            raise
        except:
            self.logger.critical('', exc_info=True)
            self.view.show_error(msg='sorry,程序发生了内部错误')  # "sorry, an internal error occurred"
            raise SystemExit

    def _run(self, args):
        # If called with --help, argparse prints the help text to stdout here and exits.
        args = self.parser.parse_args(args or sys.argv[1:])

        if args.doc:
            self._show_doc()
        elif args.version:
            self.view.show(note.__version__)
        elif args.cmd:
            self._handle_sub_command(args)
        else:
            self.view.show(
                "see '{} --help'".format(config.APP_NAME))

    @staticmethod
    def _show_doc():
        import webbrowser
        webbrowser.open(config.DOC_URL)

    def _handle_sub_command(self, args):
        if args.cmd == 'init':
            initializer = self.get_initializer()
            initializer.init()
        elif args.cmd == 'status':
            runner = self.get_runner()
            result = runner.run(commit=False, use_link=not args.not_link,
                                short=args.short, pattern=args.pattern,
                                default=args.default_score)
            report = result.accept(self.get_status_result_visitor())
            self.view.show_report_after_status(report)
        elif args.cmd == 'commit':
            runner = self.get_runner()
            result = runner.run(commit=True, time=args.time,
                                use_link=not args.not_link, short=args.short,
                                pattern=args.pattern,
                                default=args.default_score)
            report = result.accept(self.get_commit_result_visitor())
            self.view.show_report_after_commit(report)
        elif args.cmd == 'purge':
            purger = self.get_purger()
            path = args.path
            purger.purge(path)
        else:
            raise RuntimeError
PypiClean
/palanteer_scripting-0.6-cp38-cp38-win_amd64.whl/palanteer_scripting/_scripting.py
import os import sys import re import platform import subprocess import threading import datetime import atexit import time import struct import palanteer_scripting._cextension # Constants # ========= _PL_FLAG_TYPE_DATA_NONE = 0 _PL_FLAG_TYPE_DATA_S32 = 1 _PL_FLAG_TYPE_DATA_U32 = 2 _PL_FLAG_TYPE_DATA_S64 = 3 _PL_FLAG_TYPE_DATA_U64 = 4 _PL_FLAG_TYPE_DATA_FLOAT = 5 _PL_FLAG_TYPE_DATA_DOUBLE = 6 _PL_FLAG_TYPE_DATA_STRING = 7 _PL_FLAG_TYPE_DATA_TIMESTAMP = 8 _PL_FLAG_TYPE_LOCK_WAIT = 16 _PL_FLAG_TYPE_LOCK_ACQUIRED = 17 _PL_FLAG_TYPE_LOCK_RELEASED = 18 _PL_FLAG_TYPE_LOCK_NOTIFIED = 19 _PL_FLAG_TYPE_LOG = 20 _PL_FLAG_TYPE_MASK = 0x1F # Default: "data" _flag_to_kind = { _PL_FLAG_TYPE_LOCK_WAIT: "lock wait", _PL_FLAG_TYPE_LOCK_ACQUIRED: "lock use", _PL_FLAG_TYPE_LOCK_RELEASED: "lock use", _PL_FLAG_TYPE_LOCK_NOTIFIED: "lock notified", _PL_FLAG_TYPE_LOG: "log", } MATCH_EXT_STRING_LOOKUP = re.compile("@@([0-9A-F]{16})@@(.*)$") # Exceptions # ========== class InitializationError(Exception): """The Palanteer library has not been initialized. Use the function 'initialize_scripting'.""" class ConnectionError(Exception): """There is no connection to the program.""" class UnknownThreadError(Exception): """The thread with the provided name is unknown.""" # Structures # ========== class Evt: """Represents a received event from the program under test""" def __init__(self, thread_name, kind, path, date_ns, value, spec_id): self.thread = thread_name self.kind = kind self.path = path # List is easier to work with (checking parent, popping one level, etc...) self.date_ns = date_ns self.value = value # Type depends on the event self.spec_id = spec_id self.children = [] def __str__(self): s = [ "%10.6f ms kind=%-9s thread=%s path=%s" % (0.000001 * self.date_ns, self.kind, self.thread, self.path) ] if len(self.children) > 0: s[-1] += " children=%d" % len(self.children) if len(str(self.value)): s[-1] += " value=%s" % self.value for c in self.children: s.append(" | %s" % str(c)) return "\n".join(s) class _Elem: def __init__(self, nameHash, prevElemIdx, threadId, flags): self.nameHash = nameHash self.prevElemIdx = prevElemIdx self.threadId = threadId self.threadName = _event_ctx.db_thread_names[threadId] self.valueDecodeFormat = { _PL_FLAG_TYPE_DATA_S32: "i", _PL_FLAG_TYPE_DATA_U32: "I", _PL_FLAG_TYPE_DATA_S64: "q", _PL_FLAG_TYPE_DATA_FLOAT: "f", _PL_FLAG_TYPE_DATA_DOUBLE: "d", _PL_FLAG_TYPE_DATA_STRING: "i", }.get(flags & _PL_FLAG_TYPE_MASK, None) self.valueDecodeLength = ( struct.calcsize(self.valueDecodeFormat) if self.valueDecodeFormat else None ) eType = flags & _PL_FLAG_TYPE_MASK self.valueIsString = eType in (_PL_FLAG_TYPE_DATA_STRING, _PL_FLAG_TYPE_LOG) self.kind = _flag_to_kind.get(eType, "data") # Build the path self.path, elemIdx = [ _event_ctx.lkup_hash_to_string_value[nameHash] ], prevElemIdx while elemIdx >= 0: elem = _event_ctx.db_elems[elemIdx] self.path.append(_event_ctx.lkup_hash_to_string_value[elem.nameHash]) elemIdx = elem.prevElemIdx self.path.reverse() class EvtSpec: """Describes a subset of events to capture""" def __init__(self, events, thread=None, parent=None): # Hash will be overwritten before sending to the extension # In the common case, a null hash is sent as we cannot compute it from the string content without knowing # the hash size and salt. These null hashes will be recomputed once this info is known. 
# Only the strings mentioned in the external strings lookup will be non-zero and used as is (no recomputation) self.threadName = thread if thread else "" # Parent self.parentSpec = self._extractElemPath(parent) # Elems self.elemSpec = [] if type(events) == type(""): events = [events] for e in events: ep = self._extractElemPath(e) if not ep: continue self.elemSpec.append(ep) def _extractElemPath(self, elemSpec): if elemSpec == None: elemSpec = "" result = [] for t in [t.strip() for t in elemSpec.split("/") if t]: doStore = False if t == "*": doStore = True elif t == "**": doStore = ( result and result[-1] != "**" ) # Super wildcard in front is useless, same for consecutive ones elif t == ".": doStore = not result # Must be in front to be meaningful else: doStore = True if doStore: result.append(t) # Beginning with "./**" voids these 2 terms while len(result) > 2 and result[0] == "." and result[1] == "**": result = result[2:] return tuple(result) # Local context # ============= # Library context _is_initialized = False _log_func = None _old_excepthook = None # Program control context class _ProgramContext: def __init__(self): self.freeze_mode = False self.connection = threading.Event() self.lock = threading.Lock() self.reset() def reset(self, cli_to_quit=None): self.process = None self.std_out, self.std_err = None, None self.stdout_buffer, self.stderr_buffer = [], [] self.cli_to_quit = cli_to_quit def _tee_pipe(self, is_stderr): # Loop on the selected readable stream (until closed) stdpipe = self.process.stderr if is_stderr else self.process.stdout for lines in stdpipe: # Get non empty lines filtered_lines = [l for l in lines.split("\n") if l.strip()] # Store in our buffer self.lock.acquire() if is_stderr: self.stderr_buffer.extend(filtered_lines) else: self.stdout_buffer.extend(filtered_lines) self.lock.release() _program_ctx = _ProgramContext() # CLI context class _CommandContext: def __init__(self): self.lock = threading.Lock() # Field protection self.answer = threading.Event() self.reset() def reset(self): self.answer_status = None self.answer_text = "" self.is_control_enabled = True _command_ctx = _CommandContext() # Evt sniffing context class _EvtContext: def __init__(self): self.lock = threading.Lock() self.wake_from_events = threading.Event() self.specs = [] # Persistent self.lkup_hash_to_external_string_value = {} # Recording string decoding lookup self.lkup_external_string_value_to_hash = {} # Spec string to hash lookup self.reset() def reset(self): self.db_thread_names = [] self.db_elems = [] self.db_clis = [] self.lkup_hash_to_string_value = {} self.lkup_hash_to_thread_id = {} self.string_values = [] self.are_strings_external = False self.is_short_hash = False self.events = [] self.frozen_thread_bitmap = 0 self.frozen_thread_bitmap_change = 0 self.collection_ticks = 0 _event_ctx = _EvtContext() # ======================= # Internal notifications # ======================= def _notify_record_started( app_name, build_name, are_strings_external, is_short_hash, is_control_enabled ): global _program_ctx, _event_ctx, _command_ctx _program_ctx.lock.acquire() _event_ctx.are_strings_external = are_strings_external _event_ctx.is_short_hash = is_short_hash _command_ctx.is_control_enabled = is_control_enabled _program_ctx.connection.set() _program_ctx.lock.release() def _notify_record_ended(): global _command_ctx, _program_ctx _program_ctx.connection.clear() _command_ctx.answer.set() def _notify_log(level, msg): global _log_func if _log_func: _log_func(level, msg) def 
_notify_command_answer(status, answer): global _command_ctx _command_ctx.lock.acquire() _command_ctx.answer_status, _command_ctx.answer_text = status, answer _command_ctx.answer.set() _command_ctx.lock.release() def _notify_new_frozen_thread_state(frozenThreadBitmap): global _event_ctx _event_ctx.lock.acquire() _event_ctx.frozen_thread_bitmap_change |= ( _event_ctx.frozen_thread_bitmap ^ frozenThreadBitmap ) # Track the changes _event_ctx.frozen_thread_bitmap = frozenThreadBitmap _event_ctx.wake_from_events.set() _event_ctx.lock.release() def _notify_new_strings(strings): global _event_ctx _event_ctx.lock.acquire() for h, s in strings: if not s and _event_ctx.are_strings_external: s = _event_ctx.lkup_hash_to_external_string_value.get(h, "@@%016X@@" % h) _event_ctx.lkup_hash_to_string_value[h] = s _event_ctx.string_values.append(s) _event_ctx.lock.release() def _notify_new_collection_tick(): global _event_ctx _event_ctx.lock.acquire() _event_ctx.collection_ticks += 1 _event_ctx.wake_from_events.set() _event_ctx.lock.release() def _notify_new_threads(threads): global _event_ctx _event_ctx.lock.acquire() for name_hash, thread_idx in threads: while thread_idx >= len(_event_ctx.db_thread_names): _event_ctx.db_thread_names.append(None) _event_ctx.db_thread_names[thread_idx] = _event_ctx.lkup_hash_to_string_value[ name_hash ] _event_ctx.lkup_hash_to_thread_id[name_hash] = thread_idx _event_ctx.lock.release() def _notify_new_elems(elems): global _event_ctx _event_ctx.lock.acquire() for name_hash, elem_idx, prev_elem_idx, thread_idx, flags in elems: while elem_idx >= len(_event_ctx.db_elems): _event_ctx.db_elems.append(None) while thread_idx >= len(_event_ctx.db_thread_names): _event_ctx.db_thread_names.append(None) _event_ctx.db_elems[elem_idx] = _Elem( name_hash, prev_elem_idx, thread_idx, flags ) _event_ctx.lock.release() def _notify_new_clis(clis): _event_ctx.lock.acquire() for name, param_spec, description in clis: _event_ctx.db_clis.append((name, param_spec, description)) _event_ctx.lock.release() def _notify_new_events(events): global _event_ctx nestedQty = 0 _event_ctx.lock.acquire() for spec_id, elem_id, children_qty, name_hash, date_ns, raw_value in events: elem = _event_ctx.db_elems[elem_id] if elem.valueDecodeFormat: value = struct.pack("Q", raw_value)[: elem.valueDecodeLength] value = struct.unpack(elem.valueDecodeFormat, value)[0] else: value = raw_value if elem.valueIsString: value = _event_ctx.string_values[value] evt = Evt(elem.threadName, elem.kind, elem.path, date_ns, value, spec_id) if nestedQty == 0: _event_ctx.events.append(evt) nestedQty = children_qty else: _event_ctx.events[-1].children.append(evt) nestedQty = nestedQty - 1 _event_ctx.wake_from_events.set() _event_ctx.lock.release() # Public initialization API # ========================= def _cleanup_at_exit(): global _program_ctx if _program_ctx.process: process_stop() def _cleanup_at_uncaught_exception(exc_type, exc_value, exc_traceback): _cleanup_at_exit() _old_excepthook(exc_type, exc_value, exc_traceback) default_log_min_level = 2 def _default_log_func(level, msg): # Filtering if level < default_log_min_level: return date_str = datetime.datetime.today().strftime("%H:%M:%S.%f")[ :-3 ] # [:-3] to remove the microseconds level_str = "[%s]" % {0: "detail ", 1: "info ", 2: "warning", 3: "error "}.get( level, "unknown" ) print("%s %-9s %s" % (date_str, level_str, msg)) def initialize_scripting(port=59059, log_func=None): """Initialize the Palanteer module. 
It shall be called once before using any function.""" global _is_initialized, _log_func, _old_excepthook if _is_initialized: return _log_func = log_func if log_func else _default_log_func # Initialize the extension library palanteer_scripting._cextension.server_start(port) # Register the exit function, to clean/kill all sub processes at exit atexit.register(_cleanup_at_exit) _old_excepthook = sys.excepthook sys.excepthook = _cleanup_at_uncaught_exception # Finalized _is_initialized = True def uninitialize_scripting(): """Uninitialize the Palanteer module.""" global _is_initialized sys.excepthook = _old_excepthook _cleanup_at_exit() palanteer_scripting._cextension.server_stop() _is_initialized = False def set_external_strings(filename=None, lkup={}): """ Function to set the lookup which resolves the external strings. The final lookup is the contatenation of the file content and the provided 'dict'. """ global _event_ctx _event_ctx.lkup_hash_to_external_string_value = {} if lkup: _event_ctx.lkup_hash_to_external_string_value.update(lkup) if filename: lkup = {} with open(filename, "r") as fHandle: for l in fHandle.readlines(): m = MATCH_EXT_STRING_LOOKUP.match(l) if m: hash_value, str_value = int(m.group(1), 16), m.group(2) lkup[hash_value] = str_value _event_ctx.lkup_hash_to_external_string_value.update(lkup) # Compute the inverse lookup (to convert the event specification) _event_ctx.lkup_external_string_value_to_hash = {} for k, v in _event_ctx.lkup_hash_to_external_string_value.items(): _event_ctx.lkup_external_string_value_to_hash[v] = k def hash_string(s, is_short_hash=False): """Fowler–Noll–Vo hash function""" if not is_short_hash: h = 14695981039346656037 for c in s: h = ((h ^ ord(c)) * 1099511628211) & 0xFFFFFFFFFFFFFFFF if h == 0: h = 1 # Special case for our application (0 is reserved internally) return h else: h = 2166136261 for c in s: h = ((h ^ ord(c)) * 16777619) & 0xFFFFFFFF if h == 0: h = 1 # Special case for our application (0 is reserved internally) return h # Program control API # =================== def _remote_call(func, detail="", timeout_sec=5.0): global _command_ctx if not _is_initialized: raise InitializationError( "The Palanteer library has not been initialized. Use the function 'initialize_scripting'." ) _command_ctx.lock.acquire() _command_ctx.answer.clear() _command_ctx.answer_status = None _command_ctx.answer_text = "" func() _command_ctx.lock.release() if not _command_ctx.answer.wait(timeout_sec): raise ConnectionError return _command_ctx.answer_status, _command_ctx.answer_text def program_cli(command_str, timeout_sec=5.0): """ Calls synchronously a remote command on the program under test. If there is no answer before the timeout expires, a ConnectionError exception is raised. The output is a tuple (status, text). A null status means success, else an error occurs and the text provides some explanation. """ return _remote_call( lambda x=command_str: palanteer_scripting._cextension.send_cli_request(x), " when calling command '%s'" % command_str, ) def program_set_freeze_mode(state): """Set the 'freeze' mode on the program under test. 
If true, it will pause on the freeze point, else they will be ignored.""" global _program_ctx _program_ctx.freeze_mode = state try: _remote_call(lambda x=state: palanteer_scripting._cextension.set_freeze_mode(x)) except ConnectionError: pass # If no connection, the state will be propagated to the process at the future creation time def program_get_frozen_threads(): """Returns the list of currently frozen threads.""" global _event_ctx frozen_threads = [] _event_ctx.lock.acquire() for h, bit in _event_ctx.lkup_hash_to_thread_id.items(): if (1 << bit) & _event_ctx.frozen_thread_bitmap: frozen_threads.append(_event_ctx.lkup_hash_to_string_value[h]) _event_ctx.lock.release() return frozen_threads def program_wait_freeze(thread_names, timeout_sec=3.0): """Waits that all provided threads are frozen or the timeout expires.""" global _event_ctx if type(thread_names) != type([]): thread_names = [thread_names] hashed_thread_names = [ (t, hash_string(t, _event_ctx.is_short_hash)) for t in thread_names ] end_time_sec = time.time() + timeout_sec wait_timeout_sec = max(0.1 * timeout_sec, 0.2) frozen_threads = [] while time.time() < end_time_sec: _event_ctx.lock.acquire() _event_ctx.wake_from_events.clear() frozen_threads = [] for t, h in hashed_thread_names: bit = _event_ctx.lkup_hash_to_thread_id.get(h, None) if bit != None and ((1 << bit) & _event_ctx.frozen_thread_bitmap): frozen_threads.append(t) _event_ctx.lock.release() if len(frozen_threads) == len(thread_names): break # All are frozen _event_ctx.wake_from_events.wait(wait_timeout_sec) return frozen_threads def program_step_continue(thread_names, timeout_sec=1.0): """ This function unblocks all provided threads "frozen" on a freeze point. Before returning, it also waits that all of them effectively change their frozen. It returns True if it is the case, else False after the timeout expires. """ global _event_ctx if type(thread_names) != type([]): thread_names = [thread_names] hashed_thread_names = [ (t, hash_string(t, _event_ctx.is_short_hash)) for t in thread_names ] end_time_sec = time.time() + timeout_sec wait_timeout_sec = max(0.1 * timeout_sec, 0.2) # Send the "step continue" command to the remote program _event_ctx.lock.acquire() thread_bitmap = 0 for t, h in hashed_thread_names: bit = _event_ctx.lkup_hash_to_thread_id.get(h, None) if bit == None: raise UnknownThreadError("The thread '%s' is unknown" % t) thread_bitmap |= 1 << bit _event_ctx.frozen_thread_bitmap_change = 0 # Reset the changes _event_ctx.lock.release() _remote_call( lambda x=thread_bitmap: palanteer_scripting._cextension.step_continue(x) ) # Wait for an effective change before leaving this function while time.time() < end_time_sec: _event_ctx.lock.acquire() _event_ctx.wake_from_events.clear() bitmap_change = _event_ctx.frozen_thread_bitmap_change _event_ctx.lock.release() if (thread_bitmap & bitmap_change) == thread_bitmap: return True # All threads have changed frozen state _event_ctx.wake_from_events.wait(wait_timeout_sec) return False def _setup_process_initialization( record_filename, pass_first_freeze_point, cli_to_quit, ): global _program_ctx, _command_ctx, _event_ctx # Sanity if not _is_initialized: raise InitializationError( "The Palanteer library has not been initialized. Use the function 'initialize_scripting'." ) if _program_ctx.connection.is_set(): raise ConnectionError( "Only one program at a time can be controlled and a program is already connected." 
) _program_ctx.reset(cli_to_quit) _command_ctx.reset() _event_ctx.reset() # Set the recording state if not record_filename: record_filename = "" if record_filename and not record_filename.endswith(".plt"): record_filename = record_filename + ".plt" palanteer_scripting._cextension.set_record_filename(record_filename) # Manage the synchronization freeze point if pass_first_freeze_point: program_set_freeze_mode(True) def _connect_to_process( pass_first_freeze_point, connection_timeout_sec, previous_freeze_state ): global _program_ctx, _event_ctx end_synch_time_sec = time.time() + connection_timeout_sec # Wait the connection to Palanteer if not _program_ctx.connection.wait(connection_timeout_sec): raise ConnectionError( "No program connected during the timeout (%f s)." % connection_timeout_sec ) # Set a small max latency, as we want script reactivity try: _remote_call(lambda: palanteer_scripting._cextension.set_max_latency_ms(10)) except ConnectionError: pass # Release the reception thread with the first call to "set freeze mode" program_set_freeze_mode(_program_ctx.freeze_mode) # Synchronization, if required if pass_first_freeze_point: # Wait one frozen thread wait_timeout_sec = max(0.1 * connection_timeout_sec, 0.2) while time.time() < end_synch_time_sec: _event_ctx.lock.acquire() _event_ctx.wake_from_events.clear() frozen_thread_bitmap = _event_ctx.frozen_thread_bitmap _event_ctx.lock.release() if frozen_thread_bitmap: break # At least one thread is frozen so this includes "the first one" _event_ctx.wake_from_events.wait(wait_timeout_sec) if time.time() >= end_synch_time_sec: raise ConnectionError( "Connected to the program but unable to synch on a freeze point during the timeout (%f s)." % connection_timeout_sec ) # Put back the previous freeze state program_set_freeze_mode(previous_freeze_state) def process_connect( record_filename="", pass_first_freeze_point=False, cli_to_quit=None, connection_timeout_sec=5.0, ): """ This function connects to an already running process and waits the connection the Palanteer remote module. If no connection is established before the timeout, a ConnectionError exception is raised. :record_filename: name of the record file. Default is no record file. :pass_first_freeze_point: if True, returns only after first freeze point is met and released (kind of synchro) :cli_to_quit: command line to call to stop the program. Default: terminate signal, then kill signal :connection_timeout_sec: timeout for the connection with the program """ global _program_ctx previous_freeze_state = _program_ctx.freeze_mode _setup_process_initialization(record_filename, pass_first_freeze_point, cli_to_quit) _connect_to_process( pass_first_freeze_point, connection_timeout_sec, previous_freeze_state ) def process_launch( program_path, args=[], record_filename="", pass_first_freeze_point=False, capture_output=False, cli_to_quit=None, connection_timeout_sec=5.0, ): """ This function launches a program and waits the connection the Palanteer remote module. If no connection is established before the timeout, a ConnectionError exception is raised. :record_filename: name of the record file. Default is no record file. :pass_first_freeze_point: if True, returns only after first freeze point is met and released (kind of synchro) :capture_output: boolean. If True, the stdout and stderr are captured and accessible (see 'process_get_stderr_lines' and 'process_get_stdout_lines'). :cli_to_quit: command line to call to stop the program. 
Default: terminate signal, then kill signal :connection_timeout_sec: timeout for the connection with the program """ global _program_ctx previous_freeze_state = _program_ctx.freeze_mode _setup_process_initialization(record_filename, pass_first_freeze_point, cli_to_quit) # Launch the process with or without collecting the standard outputs if capture_output: _program_ctx.process = subprocess.Popen( [program_path] + args, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) _program_ctx.std_out = threading.Thread( target=_program_ctx._tee_pipe, args=(False,) ) _program_ctx.std_err = threading.Thread( target=_program_ctx._tee_pipe, args=(True,) ) _program_ctx.std_out.start() _program_ctx.std_err.start() else: _program_ctx.process = subprocess.Popen( [program_path] + args, universal_newlines=True ) _connect_to_process( pass_first_freeze_point, connection_timeout_sec, previous_freeze_state ) def process_is_running(): """This function returns True if the launched program is still running""" global _program_ctx if not _program_ctx.process: return False _program_ctx.process.poll() # Query the process state return _program_ctx.process.returncode == None def process_get_returncode(): """ This function returns the exit code the launched program. The returns value is 'None' in case it is still running or no program has been launched.""" global _program_ctx if not _program_ctx.process: return None _program_ctx.process.poll() # Query the process state return _program_ctx.process.returncode def process_get_stderr_lines(): """ This function returns an array with the collected text lines from the 'stderr' output. Applicable only if capture_output=True has been used when launching the program """ global _program_ctx _program_ctx.lock.acquire() lines = _program_ctx.stderr_buffer _program_ctx.stderr_buffer = [] _program_ctx.lock.release() return lines def process_get_stdout_lines(): """ This function returns an array with the collected text lines from the 'stdout' output. Applicable only if capture_output=True has been used when launching the program """ global _program_ctx _program_ctx.lock.acquire() lines = _program_ctx.stdout_buffer _program_ctx.stdout_buffer = [] _program_ctx.lock.release() return lines def process_stop(): """This function stops the launched program""" global _program_ctx if not _program_ctx.process: return # Graceful stop: call custom command to quit, if any if _program_ctx.cli_to_quit: try: answer = program_cli(_program_ctx.cli_to_quit) except Exception: pass else: # A bit less graceful stop: OS terminate signal try: _program_ctx.process.terminate() except Exception: pass # Ensure that the process is really stopped try: _program_ctx.process.wait(timeout=0.5) except Exception: # No... we unleashed our fury now try: _program_ctx.process.kill() _program_ctx.process.wait(timeout=0.5) except Exception: pass # Process is over _program_ctx.process = None _program_ctx.connection.clear() # Evt sniffing API # ================== def data_configure_events(specs): """ This function configures the capture of the events. It replaces any previous configuration. It can be called before or while a program is running. It flushes all previously received events that have not been collected. 
:specs: one or a list of EvtSpec objects """ global _event_ctx # Reset specs and buffered events _event_ctx.specs = [] palanteer_scripting._cextension.clear_all_specs() data_clear_buffered_events() if type(specs) != type([]): # Handle the single spec case specs = [specs] for specId, spec in enumerate(specs): # Associate the string hashes from the external string lookup, or 0 to compute then later # In the common case, a null hash is sent as we cannot compute it from the string content without knowing # the hash size and salt. These null hashes will be recomputed once this info is known # Only the strings mentioned in the external strings lookup will be non-zero and used as is (no recomputation) updatedParentSpec = tuple( [ (ps, _event_ctx.lkup_external_string_value_to_hash.get(ps, 0)) for ps in spec.parentSpec ] ) updatedElemSpec = [] for ec in spec.elemSpec: updatedElemSpec.append( tuple( [ (es, _event_ctx.lkup_external_string_value_to_hash.get(es, 0)) for es in ec ] ) ) # Send the spec to the extension (which handles the recording and event flow) palanteer_scripting._cextension.add_spec( spec.threadName, _event_ctx.lkup_external_string_value_to_hash.get(spec.threadName, 0), updatedParentSpec, updatedElemSpec, ) _event_ctx.specs.append(spec) # Collect a slice of selected events until one of the exit condition is met def data_collect_events( wanted=[], unwanted=[], frozen_threads=[], max_event_qty=None, timeout_sec=1.0 ): """ This function collects the received events from the program under control The selection of the events is specified with 'data_configure_events'. It returns an array of Evt objects if one of the condition below is fullfilled. :wanted: a list of event that are expected. If all of them are collected at least once, the collection is stopped. :unwanted: a list of event that are not expected. If one of them is collected, the collection is stopped. :frozen_threads: a list of thread names. If all the threads are frozen, the collection is stopped. :max_event_qty: integer. If the quantity of collected events reaches this value, the collection is stopped. :timeout_sec: float. The collection is stopped after the timeout. 
""" global _event_ctx timeout_sec = max(0.01, timeout_sec) end_time_sec = time.time() + timeout_sec wait_timeout_sec = max(0.1 * timeout_sec, 0.2) # Internal polling period if wanted and type(wanted) == type(""): wanted = [wanted] if unwanted and type(unwanted) == type(""): unwanted = [unwanted] if frozen_threads and type(frozen_threads) == type(""): frozen_threads = [frozen_threads] exit_loop_count, events = None, [] while time.time() < end_time_sec: _event_ctx.lock.acquire() # Get newly received events new_events, _event_ctx.events = _event_ctx.events, [] _event_ctx.wake_from_events.clear() # Check the exit conditions if exit_loop_count == None and max_event_qty and len(events) >= max_event_qty: exit_loop_count = ( _event_ctx.collection_ticks ) # Exit without any additional tick if exit_loop_count == None and frozen_threads: thread_bitmap, areAllThreadsKnown = 0, True for t in frozen_threads: bit = _event_ctx.lkup_hash_to_thread_id.get( hash_string(t, _event_ctx.is_short_hash), None ) if bit != None: thread_bitmap |= 1 << bit else: areAllThreadsKnown = False if ( areAllThreadsKnown and (_event_ctx.frozen_thread_bitmap & thread_bitmap) == thread_bitmap ): exit_loop_count = ( _event_ctx.collection_ticks + 2 ) # +2 ticks for the double bank collection mechanism if exit_loop_count == None and wanted: for e in new_events: if e.path[-1] in wanted: wanted.remove(e.path[-1]) if not wanted: # All wanted are found, so exit without any additional tick exit_loop_count = _event_ctx.collection_ticks if exit_loop_count == None and unwanted: for e in new_events: if e.path[-1] in unwanted: exit_loop_count = ( _event_ctx.collection_ticks ) # One unwanted is enough to exit without additional ticks break if exit_loop_count == None and not process_is_running(): exit_loop_count = ( _event_ctx.collection_ticks + 2 ) # +2 ticks for the double bank collection mechanism # End the iteration current_collection_tick = _event_ctx.collection_ticks _event_ctx.lock.release() events.extend(new_events) if exit_loop_count != None and exit_loop_count <= current_collection_tick: break _event_ctx.wake_from_events.wait(wait_timeout_sec) # Return the collected events return events # Clean received but not collected yet events def data_clear_buffered_events(): """This function clears all buffered events that have not been collected""" global _event_ctx _event_ctx.lock.acquire() _event_ctx.events = [] palanteer_scripting._cextension.clear_buffered_events() _event_ctx.lock.release() # Output is a list of (spec id, event spec, unresolved explanation message) def data_get_unresolved_events(): """ This function returns a list of unresolved specified events. 
The tuples in this list are: (spec index, event specification, unresolved explanation message) """ global _event_ctx # Get the infos ueiList = palanteer_scripting._cextension.get_unresolved_elem_infos() # Format the output outputInfos = [] for spec_id, elem_id, error_msg in ueiList: f = _event_ctx.specs[spec_id] outputInfos.append( (spec_id, f.threadName, f.parentSpec, f.elemSpec[elem_id], error_msg) ) return outputInfos # Output is a list of thread names def data_get_known_threads(): """This function returns a list containing the names of the known threads.""" global _event_ctx _event_ctx.lock.acquire() known_threads = [ _event_ctx.lkup_hash_to_string_value[h] for h in _event_ctx.lkup_hash_to_thread_id.keys() ] _event_ctx.lock.release() return known_threads # Output is a list of (path, kind, thread name) def data_get_known_event_kinds(): """ This function returns a list containing the known event kinds. The tuples in this list are: (path of the event kind, kind of event, name of the thread) """ global _event_ctx _event_ctx.lock.acquire() ec = _event_ctx known_event_kinds = [ (e.path, e.kind, ec.db_thread_names[e.threadId]) for e in ec.db_elems if e != None ] _event_ctx.lock.release() return known_event_kinds # Output is the list of known CLI def data_get_known_clis(): """ This function returns a list containing the known CLIs. The tuples in this list are: (CLI name, parameter specification, description) """ global _event_ctx _event_ctx.lock.acquire() ec = _event_ctx known_clis = _event_ctx.db_clis[:] _event_ctx.lock.release() return known_clis # Debug API # ========= def debug_print_unresolved_events(output_file=sys.stdout): """This function displays the list of the unresolved specified events""" unresolved_event = data_get_unresolved_events() print("Unresolved events (%d):" % len(unresolved_event), file=output_file) for spec_id, thread_name, parent_spec, event_spec, msg in unresolved_event: print( " - From spec #%d, %s for event '%s'%s%s" % ( spec_id, msg, "/".join(event_spec), (" with parent '%s'" % "/".join(parent_spec)) if parent_spec else "", (" with thread '%s'" % thread_name) if thread_name else "", ), file=output_file, ) def debug_print_known_threads(output_file=sys.stdout): """This function displays the list of names of the known threads""" thread_names = data_get_known_threads() print("Known threads (%d):" % len(thread_names), file=output_file) for t in thread_names: print(" - %s" % t, file=output_file) def debug_print_known_event_kinds(output_file=sys.stdout): """This function displays the list of the known event kinds""" event_kinds = data_get_known_event_kinds() print("Known event kinds (%d):" % len(event_kinds), file=output_file) event_kinds.sort(key=lambda x: (x[2].lower(), x[1].lower(), x[0])) for path, kind, thread_name in event_kinds: print( " - %-11s %-24s : %s" % ("[%s]" % kind, thread_name, "/".join(path)), file=output_file, ) def debug_print_known_clis(output_file=sys.stdout): """This function displays the list of the known CLIs""" clis = data_get_known_clis() print("Known CLIs (%d):" % len(clis), file=output_file) clis.sort(key=lambda x: x[0].lower()) for name, param_spec, description in clis: print( " - %s %s\n %s" % (name, param_spec, description), file=output_file )
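

# Minimal usage sketch of the scripting API defined above. The program path,
# its arguments and the event names are hypothetical and only illustrate the
# expected call sequence; they are not part of the library.
if __name__ == "__main__":
    initialize_scripting()
    try:
        process_launch("./example_program", args=["--iterations", "10"])
        data_configure_events(EvtSpec(events=["iteration"], thread="Main"))
        for collected_event in data_collect_events(timeout_sec=2.0):
            print(collected_event)
    finally:
        process_stop()
        uninitialize_scripting()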
PypiClean
/logwrap-11.0.0.tar.gz/logwrap-11.0.0/doc/source/LogOnAccess.rst
.. LogOnAccess API: LogOnAccess ======================== .. py:module:: logwrap .. py:currentmodule:: logwrap .. py:class:: LogOnAccess(property) Property with logging on successful get/set/delete or failure. .. versionadded:: 6.1.0 .. py:method:: __init__(fget=None, fset=None, fdel=None, doc=None, *, logger=None, log_object_repr=True, log_level=logging.DEBUG, exc_level=logging.DEBUG, log_before=True, log_success=True, log_failure=True, log_traceback=True, override_name=None) :param fget: normal getter. :type fget: None | Callable[[typing.Any, ], typing.Any] :param fset: normal setter. :type fset: None | Callable[[typing.Any, typing.Any], None] :param fdel: normal deleter. :type fdel: None | Callable[[typing.Any, ], None] :param doc: docstring override :type doc: None | str :param logger: logger instance or name to use as override :type logger: None | logging.Logger | str :param log_object_repr: use `repr` over object to describe owner if True else owner class name and id :type log_object_repr: bool :param log_level: log level for successful operations :type log_level: int :param exc_level: log level for exceptions :type exc_level: int :param log_before: log before operation :type log_before: bool :param log_success: log successful operations :type log_success: bool :param log_failure: log exceptions :type log_failure: bool :param log_traceback: Log traceback on exceptions :type log_traceback: bool :param override_name: override property name if not None else use getter/setter/deleter name :type override_name: None | str .. py:method:: getter(fget) Descriptor to change the getter on a property. :param fget: new normal getter. :type fget: ``None | Callable[[typing.Any, ], typing.Any]`` :rtype: ``AdvancedProperty`` .. py:method:: setter(fset) Descriptor to change the setter on a property. :param fset: new setter. :type fset: ``None | Callable[[typing.Any, typing.Any], None]`` :rtype: ``AdvancedProperty`` .. py:method:: deleter(fdel) Descriptor to change the deleter on a property. :param fdel: New deleter. :type fdel: ``None | Callable[[typing.Any, ], None]`` :rtype: ``AdvancedProperty`` .. py:attribute:: fget ``None | Callable[[typing.Any, ], typing.Any]`` Getter instance. .. py:attribute:: fset ``None | Callable[[typing.Any, typing.Any], None]`` Setter instance. .. py:attribute:: fdel ``None | Callable[[typing.Any, ], None]`` Deleter instance. .. py:attribute:: logger ``None | logging.Logger`` Logger instance to use as override. .. py:attribute:: log_object_repr ``bool`` Use `repr` over object to describe owner if True else owner class name and id. .. py:attribute:: log_level ``int`` Log level for successful operations. .. py:attribute:: exc_level ``int`` Log level for exceptions. .. py:attribute:: log_before ``bool`` Log before operation .. py:attribute:: log_success ``bool`` Log successful operations. .. py:attribute:: log_failure ``bool`` Log exceptions. .. py:attribute:: log_traceback ``bool`` Log traceback on exceptions. .. py:attribute:: override_name ``None | str`` Override property name if not None else use getter/setter/deleter name.
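
A minimal usage sketch (the ``Example`` class and its attribute are illustrative,
not part of the library):

.. code-block:: python

    import logging

    from logwrap import LogOnAccess

    logging.basicConfig(level=logging.DEBUG)


    class Example:
        def __init__(self):
            self._val = 42

        @LogOnAccess
        def val(self):
            return self._val

        @val.setter
        def val(self, new_val):
            self._val = new_val

        @val.deleter
        def val(self):
            self._val = None


    obj = Example()
    obj.val          # get is logged
    obj.val = 21     # set is logged
    del obj.val      # delete is logged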
PypiClean
/elastic2-doc-manager-transaction-1.2.0.tar.gz/elastic2-doc-manager-transaction-1.2.0/mongo_connector/doc_managers/elastic2_doc_manager.py
import base64 import logging import threading import time import warnings import bson.json_util try: __import__("elasticsearch") except ImportError: raise ImportError( "Error: elasticsearch (https://pypi.python.org/pypi/elasticsearch) " "version 2.x or 5.x is not installed.\n" "Install with:\n" " pip install elastic2-doc-manager[elastic2]\n" "or:\n" " pip install elastic2-doc-manager[elastic5]\n" ) from elasticsearch import ( Elasticsearch, exceptions as es_exceptions, connection as es_connection, ) from elasticsearch.helpers import bulk, scan, streaming_bulk, BulkIndexError import importlib_metadata from mongo_connector import errors from mongo_connector.constants import DEFAULT_COMMIT_INTERVAL, DEFAULT_MAX_BULK from mongo_connector.util import exception_wrapper, retry_until_ok from mongo_connector.doc_managers.doc_manager_base import DocManagerBase from mongo_connector.doc_managers.formatters import DefaultDocumentFormatter _HAS_AWS = True try: from boto3 import session from requests_aws_sign import AWSV4Sign except ImportError: _HAS_AWS = False wrap_exceptions = exception_wrapper( { BulkIndexError: errors.OperationFailed, es_exceptions.ConnectionError: errors.ConnectionFailed, es_exceptions.TransportError: errors.OperationFailed, es_exceptions.NotFoundError: errors.OperationFailed, es_exceptions.RequestError: errors.OperationFailed, } ) LOG = logging.getLogger(__name__) DEFAULT_SEND_INTERVAL = 5 """The default interval in seconds to send buffered operations.""" DEFAULT_AWS_REGION = "us-east-1" __version__ = importlib_metadata.version("elastic2_doc_manager-transaction") def convert_aws_args(aws_args): """Convert old style options into arguments to boto3.session.Session.""" if not isinstance(aws_args, dict): raise errors.InvalidConfiguration( 'Elastic DocManager config option "aws" must be a dict' ) old_session_kwargs = dict( region="region_name", access_id="aws_access_key_id", secret_key="aws_secret_access_key", ) new_kwargs = {} for arg in aws_args: if arg in old_session_kwargs: new_kwargs[old_session_kwargs[arg]] = aws_args[arg] else: new_kwargs[arg] = aws_args[arg] return new_kwargs def create_aws_auth(aws_args): try: aws_session = session.Session(**convert_aws_args(aws_args)) except TypeError as exc: raise errors.InvalidConfiguration( "Elastic DocManager unknown aws config option: %s" % (exc,) ) return AWSV4Sign( aws_session.get_credentials(), aws_session.region_name or DEFAULT_AWS_REGION, "es", ) class AutoCommiter(threading.Thread): """Thread that periodically sends buffered operations to Elastic. :Parameters: - `docman`: The Elasticsearch DocManager. - `send_interval`: Number of seconds to wait before sending buffered operations to Elasticsearch. Set to None or 0 to disable. - `commit_interval`: Number of seconds to wait before committing buffered operations to Elasticsearch. Set to None or 0 to disable. - `sleep_interval`: Number of seconds to sleep. 
""" def __init__(self, docman, send_interval, commit_interval, sleep_interval=1): super(AutoCommiter, self).__init__() self._docman = docman # Change `None` intervals to 0 self._send_interval = send_interval if send_interval else 0 self._commit_interval = commit_interval if commit_interval else 0 self._should_auto_send = self._send_interval > 0 self._should_auto_commit = self._commit_interval > 0 self._sleep_interval = max(sleep_interval, 1) self._stopped = False self.daemon = True def join(self, timeout=None): self._stopped = True super(AutoCommiter, self).join(timeout=timeout) def run(self): """Periodically sends buffered operations and/or commit. """ if not self._should_auto_commit and not self._should_auto_send: return last_send, last_commit = 0, 0 while not self._stopped: if self._should_auto_commit: if last_commit > self._commit_interval: self._docman.commit() # commit also sends so reset both last_send, last_commit = 0, 0 # Give a chance to exit the loop if self._stopped: break if self._should_auto_send: if last_send > self._send_interval: self._docman.send_buffered_operations() last_send = 0 time.sleep(self._sleep_interval) last_send += self._sleep_interval last_commit += self._sleep_interval class DocManager(DocManagerBase): """Elasticsearch implementation of the DocManager interface. Receives documents from an OplogThread and takes the appropriate actions on Elasticsearch. """ def __init__( self, url, auto_commit_interval=DEFAULT_COMMIT_INTERVAL, unique_key="_id", chunk_size=DEFAULT_MAX_BULK, meta_index_name="mongodb_meta", meta_type="mongodb_meta", attachment_field="content", **kwargs ): client_options = kwargs.get("clientOptions", {}) if "aws" in kwargs: if not _HAS_AWS: raise errors.InvalidConfiguration( "aws extras must be installed to sign Elasticsearch " "requests. 
Install with: " "pip install elastic2-doc-manager[aws]" ) client_options["http_auth"] = create_aws_auth(kwargs["aws"]) client_options["use_ssl"] = True client_options["verify_certs"] = True client_options["connection_class"] = es_connection.RequestsHttpConnection if type(url) is not list: url = [url] self.elastic = Elasticsearch(hosts=url, **client_options) self._formatter = DefaultDocumentFormatter() self.BulkBuffer = BulkBuffer(self) # As bulk operation can be done in another thread # lock is needed to prevent access to BulkBuffer # while commiting documents to Elasticsearch # It is because BulkBuffer might get outdated # docs from Elasticsearch if bulk is still ongoing self.lock = threading.Lock() self.auto_commit_interval = auto_commit_interval self.auto_send_interval = kwargs.get("autoSendInterval", DEFAULT_SEND_INTERVAL) self.meta_index_name = meta_index_name self.meta_type = meta_type self.unique_key = unique_key self.chunk_size = chunk_size self.has_attachment_mapping = False self.attachment_field = attachment_field self.auto_commiter = AutoCommiter( self, self.auto_send_interval, self.auto_commit_interval ) self.auto_commiter.start() def _index_and_mapping(self, namespace): """Helper method for getting the index and type from a namespace.""" index, doc_type = namespace.split(".", 1) return index.lower(), doc_type def stop(self): """Stop the auto-commit thread.""" self.auto_commiter.join() self.auto_commit_interval = 0 # Commit any remaining docs from buffer self.commit() def apply_update(self, doc, update_spec): if "$set" not in update_spec and "$unset" not in update_spec: # Don't try to add ns and _ts fields back in from doc return update_spec return super(DocManager, self).apply_update(doc, update_spec) @wrap_exceptions def handle_command(self, doc, namespace, timestamp): # Flush buffer before handle command self.commit() db = namespace.split(".", 1)[0] if doc.get("dropDatabase"): dbs = self.command_helper.map_db(db) for _db in dbs: self.elastic.indices.delete(index=_db.lower()) if doc.get("renameCollection"): raise errors.OperationFailed( "elastic_doc_manager does not support renaming a mapping." ) if doc.get("create"): db, coll = self.command_helper.map_collection(db, doc["create"]) if db and coll: self.elastic.indices.put_mapping( index=db.lower(), doc_type=coll, body={"_source": {"enabled": True}} ) if doc.get("drop"): db, coll = self.command_helper.map_collection(db, doc["drop"]) if db and coll: # This will delete the items in coll, but not get rid of the # mapping. warnings.warn( "Deleting all documents of type %s on index %s." "The mapping definition will persist and must be" "removed manually." % (coll, db) ) responses = streaming_bulk( self.elastic, ( dict(result, _op_type="delete") for result in scan( self.elastic, index=db.lower(), doc_type=coll ) ), ) for ok, resp in responses: if not ok: LOG.error( "Error occurred while deleting ElasticSearch docum" "ent during handling of 'drop' command: %r" % resp ) @wrap_exceptions def update(self, document_id, update_spec, namespace, timestamp): """Apply updates given in update_spec to the document whose id matches that of doc. 
""" index, doc_type = self._index_and_mapping(namespace) with self.lock: # Check if document source is stored in local buffer document = self.BulkBuffer.get_from_sources( index, doc_type, str(document_id) ) if document: # Document source collected from local buffer # Perform apply_update on it and then it will be # ready for commiting to Elasticsearch updated = self.apply_update(document, update_spec) # _id is immutable in MongoDB, so won't have changed in update updated["_id"] = document_id self.upsert(updated, namespace, timestamp) else: # Document source needs to be retrieved from Elasticsearch # before performing update. Pass update_spec to upsert function updated = {"_id": document_id} self.upsert(updated, namespace, timestamp, update_spec) # upsert() strips metadata, so only _id + fields in _source still here return updated @wrap_exceptions def upsert(self, doc, namespace, timestamp, update_spec=None): """Insert a document into Elasticsearch.""" index, doc_type = self._index_and_mapping(namespace) # No need to duplicate '_id' in source document doc_id = str(doc.pop("_id")) metadata = {"ns": namespace, "_ts": timestamp} # Index the source document, using lowercase namespace as index name. action = { "_op_type": "index", "_index": index, "_type": doc_type, "_id": doc_id, "_source": self._formatter.format_document(doc), } # Index document metadata with original namespace (mixed upper/lower). meta_action = { "_op_type": "index", "_index": self.meta_index_name, "_type": self.meta_type, "_id": doc_id, "_source": bson.json_util.dumps(metadata), } self.index(action, meta_action, doc, update_spec) # Leave _id, since it's part of the original document doc["_id"] = doc_id @wrap_exceptions def bulk_upsert(self, docs, namespace, timestamp): """Insert multiple documents into Elasticsearch.""" def docs_to_upsert(): doc = None for doc in docs: # Remove metadata and redundant _id index, doc_type = self._index_and_mapping(namespace) doc_id = str(doc.pop("_id")) document_action = { "_index": index, "_type": doc_type, "_id": doc_id, "_source": self._formatter.format_document(doc), } document_meta = { "_index": self.meta_index_name, "_type": self.meta_type, "_id": doc_id, "_source": {"ns": namespace, "_ts": timestamp}, } yield document_action yield document_meta if doc is None: raise errors.EmptyDocsError( "Cannot upsert an empty sequence of " "documents into Elastic Search" ) try: kw = {} if self.chunk_size > 0: kw["chunk_size"] = self.chunk_size responses = streaming_bulk( client=self.elastic, actions=docs_to_upsert(), **kw ) for ok, resp in responses: if not ok: LOG.error( "Could not bulk-upsert document " "into ElasticSearch: %r" % resp ) if self.auto_commit_interval == 0: self.commit() except errors.EmptyDocsError: # This can happen when mongo-connector starts up, there is no # config file, but nothing to dump pass @wrap_exceptions def insert_file(self, f, namespace, timestamp): doc = f.get_metadata() doc_id = str(doc.pop("_id")) index, doc_type = self._index_and_mapping(namespace) # make sure that elasticsearch treats it like a file if not self.has_attachment_mapping: body = {"properties": {self.attachment_field: {"type": "attachment"}}} self.elastic.indices.put_mapping(index=index, doc_type=doc_type, body=body) self.has_attachment_mapping = True metadata = {"ns": namespace, "_ts": timestamp} doc = self._formatter.format_document(doc) doc[self.attachment_field] = base64.b64encode(f.read()).decode() action = { "_op_type": "index", "_index": index, "_type": doc_type, "_id": doc_id, "_source": doc, } 
meta_action = { "_op_type": "index", "_index": self.meta_index_name, "_type": self.meta_type, "_id": doc_id, "_source": bson.json_util.dumps(metadata), } self.index(action, meta_action) @wrap_exceptions def remove(self, document_id, namespace, timestamp): """Remove a document from Elasticsearch.""" index, doc_type = self._index_and_mapping(namespace) action = { "_op_type": "delete", "_index": index, "_type": doc_type, "_id": str(document_id), } meta_action = { "_op_type": "delete", "_index": self.meta_index_name, "_type": self.meta_type, "_id": str(document_id), } self.index(action, meta_action) @wrap_exceptions def _stream_search(self, *args, **kwargs): """Helper method for iterating over ES search results.""" for hit in scan( self.elastic, query=kwargs.pop("body", None), scroll="10m", **kwargs ): hit["_source"]["_id"] = hit["_id"] yield hit["_source"] def search(self, start_ts, end_ts): """Query Elasticsearch for documents in a time range. This method is used to find documents that may be in conflict during a rollback event in MongoDB. """ return self._stream_search( index=self.meta_index_name, body={"query": {"range": {"_ts": {"gte": start_ts, "lte": end_ts}}}}, ) def index(self, action, meta_action, doc_source=None, update_spec=None): with self.lock: self.BulkBuffer.add_upsert(action, meta_action, doc_source, update_spec) # Divide by two to account for meta actions if ( len(self.BulkBuffer.action_buffer) / 2 >= self.chunk_size or self.auto_commit_interval == 0 ): self.commit() def send_buffered_operations(self): """Send buffered operations to Elasticsearch. This method is periodically called by the AutoCommitThread. """ with self.lock: try: action_buffer = self.BulkBuffer.get_buffer() if action_buffer: successes, errors = bulk(self.elastic, action_buffer) LOG.debug( "Bulk request finished, successfully sent %d " "operations", successes, ) if errors: LOG.error("Bulk request finished with errors: %r", errors) except es_exceptions.ElasticsearchException: LOG.exception("Bulk request failed with exception") def commit(self): """Send buffered requests and refresh all indexes.""" self.send_buffered_operations() retry_until_ok(self.elastic.indices.refresh, index="") @wrap_exceptions def get_last_doc(self): """Get the most recently modified document from Elasticsearch. This method is used to help define a time window within which documents may be in conflict after a MongoDB rollback. 
""" try: result = self.elastic.search( index=self.meta_index_name, body={"query": {"match_all": {}}, "sort": [{"_ts": "desc"}]}, size=1, )["hits"]["hits"] for r in result: r["_source"]["_id"] = r["_id"] return r["_source"] except es_exceptions.RequestError: # no documents so ES returns 400 because of undefined _ts mapping return None class BulkBuffer(object): def __init__(self, docman): # Parent object self.docman = docman # Action buffer for bulk indexing self.action_buffer = [] # Docs to update # Dict stores all documents for which firstly # source has to be retrieved from Elasticsearch # and then apply_update needs to be performed # Format: [ (doc, update_spec, action_buffer_index, get_from_ES) ] self.doc_to_update = [] # Below dictionary contains ids of documents # which need to be retrieved from Elasticsearch # It prevents from getting same document multiple times from ES # Format: {"_index": {"_type": {"_id": True}}} self.doc_to_get = {} # Dictionary of sources # Format: {"_index": {"_type": {"_id": {"_source": actual_source}}}} self.sources = {} def add_upsert(self, action, meta_action, doc_source, update_spec): """ Function which stores sources for "insert" actions and decide if for "update" action has to add docs to get source buffer """ # Whenever update_spec is provided to this method # it means that doc source needs to be retrieved # from Elasticsearch. It means also that source # is not stored in local buffer if update_spec: self.bulk_index(action, meta_action) # -1 -> to get latest index number # -1 -> to get action instead of meta_action # Update document based on source retrieved from ES self.add_doc_to_update(action, update_spec, len(self.action_buffer) - 2) else: # Insert and update operations provide source # Store it in local buffer and use for comming updates # inside same buffer # add_to_sources will not be called for delete operation # as it does not provide doc_source if doc_source: self.add_to_sources(action, doc_source) self.bulk_index(action, meta_action) def add_doc_to_update(self, action, update_spec, action_buffer_index): """ Prepare document for update based on Elasticsearch response. Set flag if document needs to be retrieved from Elasticsearch """ doc = { "_index": action["_index"], "_type": action["_type"], "_id": action["_id"], } # If get_from_ES == True -> get document's source from Elasticsearch get_from_ES = self.should_get_id(action) self.doc_to_update.append((doc, update_spec, action_buffer_index, get_from_ES)) def should_get_id(self, action): """ Mark document to retrieve its source from Elasticsearch. 
Returns: True - if marking document for the first time in this bulk False - if document has been already marked """ mapping_ids = self.doc_to_get.setdefault(action["_index"], {}).setdefault( action["_type"], set() ) if action["_id"] in mapping_ids: # There is an update on this id already return False else: mapping_ids.add(action["_id"]) return True def get_docs_sources_from_ES(self): """Get document sources using MGET elasticsearch API""" docs = [doc for doc, _, _, get_from_ES in self.doc_to_update if get_from_ES] if docs: documents = self.docman.elastic.mget(body={"docs": docs}, realtime=True) return iter(documents["docs"]) else: return iter([]) @wrap_exceptions def update_sources(self): """Update local sources based on response from Elasticsearch""" ES_documents = self.get_docs_sources_from_ES() for doc, update_spec, action_buffer_index, get_from_ES in self.doc_to_update: if get_from_ES: # Update source based on response from ES ES_doc = next(ES_documents) if ES_doc["found"]: source = ES_doc["_source"] else: # Document not found in elasticsearch, # Seems like something went wrong during replication LOG.error( "mGET: Document id: %s has not been found " "in Elasticsearch. Due to that " "following update failed: %s", doc["_id"], update_spec, ) self.reset_action(action_buffer_index) continue else: # Get source stored locally before applying update # as it is up-to-date source = self.get_from_sources(doc["_index"], doc["_type"], doc["_id"]) if not source: LOG.error( "mGET: Document id: %s has not been found " "in local sources. Due to that following " "update failed: %s", doc["_id"], update_spec, ) self.reset_action(action_buffer_index) continue updated = self.docman.apply_update(source, update_spec) # Remove _id field from source if "_id" in updated: del updated["_id"] # Everytime update locally stored sources to keep them up-to-date self.add_to_sources(doc, updated) self.action_buffer[action_buffer_index][ "_source" ] = self.docman._formatter.format_document(updated) # Remove empty actions if there were errors self.action_buffer = [ each_action for each_action in self.action_buffer if each_action ] def reset_action(self, action_buffer_index): """Reset specific action as update failed""" self.action_buffer[action_buffer_index] = {} self.action_buffer[action_buffer_index + 1] = {} def add_to_sources(self, action, doc_source): """Store sources locally""" mapping = self.sources.setdefault(action["_index"], {}).setdefault( action["_type"], {} ) mapping[action["_id"]] = doc_source def get_from_sources(self, index, doc_type, document_id): """Get source stored locally""" return self.sources.get(index, {}).get(doc_type, {}).get(document_id, {}) def bulk_index(self, action, meta_action): self.action_buffer.append(action) self.action_buffer.append(meta_action) def clean_up(self): """Do clean-up before returning buffer""" self.action_buffer = [] self.sources = {} self.doc_to_get = {} self.doc_to_update = [] def get_buffer(self): """Get buffer which needs to be bulked to elasticsearch""" # Get sources for documents which are in Elasticsearch # and they are not in local buffer if self.doc_to_update: self.update_sources() ES_buffer = self.action_buffer self.clean_up() return ES_buffer
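

# Minimal usage sketch for this DocManager. It is normally instantiated and
# driven by mongo-connector itself; the URL, namespace and document below are
# illustrative only:
#
#   dm = DocManager("http://localhost:9200", auto_commit_interval=0)
#   dm.upsert({"_id": "1", "name": "example"}, "testdb.testcoll", 1)
#   dm.commit()
#   dm.stop()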
PypiClean
/parallel_utils-1.2-py3-none-any.whl/parallel_utils/process/decorators.py
from functools import wraps
from multiprocessing import Manager
from typing import Union

from parallel_utils.process import Monitor


def synchronized(max_threads: int = 1):
    '''
    This decorator allows at most max_threads processes to run the decorated
    function simultaneously.

    :param max_threads: Maximum number of processes allowed to run the function at once.
    '''
    m = Manager()
    s = m.Semaphore(max_threads)

    def locked(func):
        @wraps(func)
        def locked_func(*args, **kw_args):
            exceptions = []
            s.acquire()
            try:
                res = func(*args, **kw_args)
            except Exception as e:
                exceptions.append(e)
            s.release()
            if len(exceptions):
                raise exceptions[0]
            return res

        return locked_func

    return locked


def synchronized_priority(*args, **kwargs):
    m = Monitor()

    def synchronized_priority(uid: Union[str, int], order: int = 1, total: int = None):
        '''
        This decorator synchronizes different processes so that they execute a set
        of functions in a specific order, avoiding race conditions.

        :param uid: Unique identifier for the set of code snippets.
        :param order: The priority of the function protected with this function's uid.
        :param total: The total number of functions to synchronize using this function's uid.
        '''

        def locked(func):
            @wraps(func)
            def locked_func(*args, **kw_args):
                exceptions = []
                m.lock_priority_code(uid=uid, order=order, total=total)
                try:
                    res = func(*args, **kw_args)
                except Exception as e:
                    exceptions.append(e)
                m.unlock_code(uid=uid)
                if len(exceptions):
                    raise exceptions[0]
                return res

            return locked_func

        return locked

    return synchronized_priority


synchronized_priority = synchronized_priority()
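

# Minimal usage sketch of the decorators above (the worker functions and the
# process count are illustrative; assumes the usual fork start method on POSIX):
#
#   from multiprocessing import Process
#
#   @synchronized(2)                                   # at most 2 processes at once
#   def work(i):
#       print("working on", i)
#
#   @synchronized_priority('startup', order=1, total=2)
#   def first():
#       print("always runs first")
#
#   @synchronized_priority('startup', order=2)
#   def second():
#       print("always runs second")
#
#   if __name__ == '__main__':
#       procs = [Process(target=work, args=(i,)) for i in range(4)]
#       for p in procs:
#           p.start()
#       for p in procs:
#           p.join()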
PypiClean
/learned_optimization-0.0.1.tar.gz/learned_optimization-0.0.1/learned_optimization/continuous_eval/run_eval_worker.py
"""Worker for continuous evaluation.""" import os import time from typing import Any, Callable, Mapping, Optional, TypeVar, Union from absl import app from absl import flags from absl import logging import courier import gin import jax import jax.numpy as jnp from learned_optimization import checkpoints from learned_optimization import distributed from learned_optimization import eval_training from learned_optimization import filesystem from learned_optimization import profile from learned_optimization import setup_experiment from learned_optimization.continuous_eval import run_eval_chief from learned_optimization.continuous_eval import task_group_server from learned_optimization.learned_optimizers import base as lopt_base from learned_optimization.outer_trainers import gradient_learner from learned_optimization.tasks import base as tasks_base import numpy as onp FLAGS = flags.FLAGS PRNGKey = jnp.ndarray _cache = [] T = TypeVar("T") def _cache_load_state(path: str, theta: T) -> T: """Load values matching the theta pytree (meta-parameters) from the path. Often this worker is tasked with running different tasks for the same checkpoint. This caches that load. Args: path: path of the checkpoint theta: Structue with which to load the values into. Returns: A pytree of the same structure as theta but with the loaded values. """ global _cache paths = [x[0] for x in _cache] if path in paths: return _cache[paths.index(path)][1] else: val = checkpoints.load_state(path, theta) _cache.append((path, val)) _cache = _cache[-5:] return val _task_family_cache = {} @gin.configurable def get_task_family( task: Optional[tasks_base.Task] = None, task_family: Optional[tasks_base.TaskFamily] = None, task_family_seed: Optional[int] = None, sample_task_family_fn: Optional[Callable[[PRNGKey], tasks_base.TaskFamily]] = None, sample_task_family_fn_seed: Optional[int] = None) -> tasks_base.TaskFamily: """Load the task family. This function is to be overloaded by gin. Only pass one of either task or task_family, or sample_task_family_fn. Args: task: Task to use task_family: Task family to use task_family_seed: seed to use when sampling from a task_family. This is useful to reduce eval variance if the task family has a wide variety of tasks. sample_task_family_fn: A callable that samples a task_family sample_task_family_fn_seed: The seed used when drawing the sample from sample_task_family_fn. Returns: TaskFamily instance containing either the task, or the task_family. 
""" if sum([x is not None for x in [task, task_family, sample_task_family_fn] ]) != 1: raise ValueError( "Must set only a single kind of task config in gin.\n" f"Passed in: task: {task}\n" f"Passed in: task_family: {task_family}\n" f"Passed in: sample_task_family_fn: {sample_task_family_fn}\n") if sample_task_family_fn: if sample_task_family_fn_seed is None: sample_task_family_fn_seed = onp.random.randint(0, 100000) task_family = sample_task_family_fn( jax.random.PRNGKey(sample_task_family_fn_seed)) if task_family: if task_family_seed is not None: class _TaskFamily(tasks_base.TaskFamily): def __init__(self): self.datasets = task_family.datasets def sample(self, key: PRNGKey) -> Any: return task_family.sample(jax.random.PRNGKey(task_family_seed)) def task_fn(self, cfg: Any) -> Any: return task_family.task_fn(cfg) return _TaskFamily() else: return task_family if task: return tasks_base.single_task_to_family(task) raise NotImplementedError() def load_gin_and_run( train_log_dir: str, task: task_group_server.EvalTask, learned_optimizer: lopt_base.LearnedOptimizer ) -> Mapping[str, Union[float, onp.ndarray, str]]: """Load the configuration for task then compute values.""" task_idx, saved_paths = task.task_group task_id = task.task_index # TODO(lmetz) decide of we should pass an eval name here. (eval_cfg, unused_eval_name) = task.task_content with profile.Profile("loading gin config"): # Here we do a series of steps to load a configuration to do the eval under. # This means we first clear gin, load the config file from the current # directory, load the gin_bindings flag, then finally load the configs # specified by the task queue. # This is bit of a misuse / overuse of gin, but I find it is quite # convinent to have this much controll when configuring evaluation. # Clear, and then overwrite the configuration for the current task. gin.clear_config(clear_constants=True) config_file = os.path.join(train_log_dir, "config.gin") if not filesystem.exists(config_file): logging.info("Found directory, but config file missing. Sleeping 10 sec.") time.sleep(10) gin.parse_config_file(config_file, skip_unknown=True) logging.info("Gin bindings:") if FLAGS.gin_bindings: for g in FLAGS.gin_bindings: logging.info(g) gin.parse_config(FLAGS.gin_bindings, skip_unknown=True) logging.info("Parsed Gin bindings:") for g in eval_cfg: logging.info(g) gin.parse_config(eval_cfg, skip_unknown=True) with profile.Profile("initial_learned_opt_state"): key = jax.random.PRNGKey(0) theta = learned_optimizer.init(key) with profile.Profile("loading_state"): param_checkpoint = gradient_learner.ParameterCheckpoint(theta, "gen_id", 0) load_path = saved_paths["params_"] param_checkpoint = _cache_load_state(load_path, param_checkpoint) theta, gen_id, step = (param_checkpoint.params, param_checkpoint.gen_id, param_checkpoint.step) # Our goal here is to avoid needing to recompile for every new task family. # By default, when we construct a new task family instance, jax has no way # of knowing this was already used. # Instead of reloading a new task family everytime, we cache based on the # gin config received from the task queue. # This causes the same instance of the task family to be returned, and thus # we get less compiles. gin_key = hash(tuple(eval_cfg)) if gin_key in _task_family_cache: task_family = _task_family_cache[gin_key] else: task_family = get_task_family() _task_family_cache[gin_key] = task_family # Finally, we do the actual training! 
with profile.Profile("inner_train"): stime = time.time() losses = eval_training.multi_task_training_curves( task_family, learned_optimizer, theta=theta) total_time = time.time() - stime result = {"total_time": total_time, "gen_id": gen_id, "step": step} for k, v in losses.items(): result[k] = v return result def connect_to_server_and_do_tasks(train_log_dir: str): """Main worker loop. Pull jobs from the task queue, run them, and report back. Args: train_log_dir: Experiment directory (used to find correct server address). """ chief_name, unused_num_workers, lopt = run_eval_chief.eval_chief_config() server_name = distributed.uniquify_server_name( chief_name, os.path.join(train_log_dir, chief_name)) logging.info("Connecting to client [[%s]]", server_name) client = courier.Client(str(server_name)) while True: logging.info("trying to get work") with profile.Profile("get_work"): task = client.get_work(FLAGS.task) if task is None: with profile.Profile("task_sleep"): time.sleep(0.5) # not too many workers, so this can be agressive. continue logging.info("Got a task! %s", str(task)) with profile.Profile("load_gin_and_run"): result = load_gin_and_run(train_log_dir, task, learned_optimizer=lopt) logging.info("Finished the task with val %s", str(result)) with profile.Profile("finish_work"): client.finish_work(FLAGS.task, result) def main(_): train_log_dir = setup_experiment.setup_experiment(gin_finalize=False) connect_to_server_and_do_tasks(train_log_dir) if __name__ == "__main__": app.run(main)
PypiClean
/autorec-0.0.2.tar.gz/autorec-0.0.2/autorecsys/pipeline/node.py
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.python.util import nest

from autorecsys.utils.common import dataset_shape
from autorecsys.pipeline import base


class Input(base.Node):
    """Input node for tensor data.

    The data should be numpy.ndarray or tf.data.Dataset.
    """

    def _check(self, x):
        """Record any information needed by transform."""
        if not isinstance(x, (np.ndarray, tf.data.Dataset)):
            raise TypeError('Expect the data to Input to be numpy.ndarray or '
                            'tf.data.Dataset, but got {type}.'.format(type=type(x)))
        if isinstance(x, np.ndarray) and not np.issubdtype(x.dtype, np.number):
            raise TypeError('Expect the data to Input to be numerical, but got '
                            '{type}.'.format(type=x.dtype))

    def _convert_to_dataset(self, x):
        if isinstance(x, tf.data.Dataset):
            return x
        if isinstance(x, np.ndarray):
            x = x.astype(np.float32)
            return tf.data.Dataset.from_tensor_slices(x)

    def _record_dataset_shape(self, dataset):
        self.shape = dataset_shape(dataset)

    def fit_transform(self, x):
        dataset = self.transform(x)
        self._record_dataset_shape(dataset)
        return dataset

    def transform(self, x):
        """Transform x into a compatible type (tf.data.Dataset)."""
        self._check(x)
        dataset = self._convert_to_dataset(x)
        return dataset


class StructuredDataInput(Input):
    """Input node for structured data.

    The input data should be numpy.ndarray, pandas.DataFrame or tensorflow.Dataset.

    # Arguments
        column_names: A list of strings specifying the names of the columns. The
            length of the list should be equal to the number of columns of the
            data. Defaults to None. If None, it will be obtained from the header
            of the csv file or the pandas.DataFrame.
        column_types: Dict. The keys are the column names. The values should
            either be 'numerical' or 'categorical', indicating the type of that
            column. Defaults to None. If not None, the column_names need to be
            specified. If None, it will be inferred from the data. A column will
            be judged as categorical if the number of different values is less
            than 5% of the number of instances.
    """

    def __init__(self, column_names=None, column_types=None, **kwargs):
        super().__init__(**kwargs)
        self.column_names = column_names
        self.column_types = column_types
        # Variables for inferring column types.
        self.count_nan = None
        self.count_numerical = None
        self.count_categorical = None
        self.count_unique_numerical = []
        self.num_col = None

    def get_state(self):
        state = super().get_state()
        state.update({
            'column_names': self.column_names,
            'column_types': self.column_types,
            'count_nan': self.count_nan,
            'count_numerical': self.count_numerical,
            'count_categorical': self.count_categorical,
            'count_unique_numerical': self.count_unique_numerical,
            'num_col': self.num_col
        })
        return state

    def set_state(self, state):
        super().set_state(state)
        self.column_names = state['column_names']
        self.column_types = state['column_types']
        self.count_nan = state['count_nan']
        self.count_numerical = state['count_numerical']
        self.count_categorical = state['count_categorical']
        self.count_unique_numerical = state['count_unique_numerical']
        self.num_col = state['num_col']

    def _check(self, x):
        if not isinstance(x, (pd.DataFrame, np.ndarray)):
            raise TypeError('Unsupported type {type} for '
                            '{name}.'.format(type=type(x),
                                             name=self.__class__.__name__))

        # Extract column_names from pd.DataFrame.
        if isinstance(x, pd.DataFrame) and self.column_names is None:
            self.column_names = list(x.columns)
            # column_types is provided by user
            if self.column_types:
                for column_name in self.column_types:
                    if column_name not in self.column_names:
                        raise ValueError('Column_names and column_types are '
                                         'mismatched. Cannot find column name '
                                         '{name} in the data.'.format(
                                             name=column_name))

        # Generate column_names.
        if self.column_names is None:
            if self.column_types:
                raise ValueError('Column names must be specified.')
            self.column_names = [index for index in range(x.shape[1])]

        # Check if column_names has the correct length.
        if len(self.column_names) != x.shape[1]:
            raise ValueError('Expect column_names to have length {expect} '
                             'but got {actual}.'.format(
                                 expect=x.shape[1],
                                 actual=len(self.column_names)))

    def _convert_to_dataset(self, x):
        if isinstance(x, pd.DataFrame):
            # Convert x, y, validation_data to tf.Dataset.
            x = tf.data.Dataset.from_tensor_slices(
                x.values.astype(np.unicode))
        if isinstance(x, np.ndarray):
            x = tf.data.Dataset.from_tensor_slices(x.astype(np.unicode))
        dataset = super()._convert_to_dataset(x)
        for x in dataset:
            self.update(x)
        self.infer_column_types()
        return dataset

    def update(self, x):
        # Calculate the statistics.
        x = nest.flatten(x)[0].numpy()
        if self.num_col is None:
            self.num_col = len(x)
            self.count_nan = np.zeros(self.num_col)
            self.count_numerical = np.zeros(self.num_col)
            self.count_categorical = np.zeros(self.num_col)
            for i in range(len(x)):
                self.count_unique_numerical.append({})
        for i in range(self.num_col):
            x[i] = x[i].decode('utf-8')
            if x[i] == 'nan':
                self.count_nan[i] += 1
            elif x[i] == 'True':
                self.count_categorical[i] += 1
            elif x[i] == 'False':
                self.count_categorical[i] += 1
            else:
                try:
                    tmp_num = float(x[i])
                    self.count_numerical[i] += 1
                    if tmp_num not in self.count_unique_numerical[i]:
                        self.count_unique_numerical[i][tmp_num] = 1
                    else:
                        self.count_unique_numerical[i][tmp_num] += 1
                except ValueError:
                    self.count_categorical[i] += 1

    def infer_column_types(self):
        column_types = {}
        for i in range(self.num_col):
            if self.count_categorical[i] > 0:
                column_types[self.column_names[i]] = 'categorical'
            elif len(self.count_unique_numerical[i]) / self.count_numerical[i] < 0.05:
                column_types[self.column_names[i]] = 'categorical'
            else:
                column_types[self.column_names[i]] = 'numerical'
        # Partial column_types is provided.
        if self.column_types is None:
            self.column_types = {}
        for key, value in column_types.items():
            if key not in self.column_types:
                self.column_types[key] = value
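
# --- Illustrative usage sketch (not part of the original module) ---
# A minimal, hedged example of running a pandas DataFrame through
# StructuredDataInput so that column names and types are inferred. The
# DataFrame contents are made up for illustration, and the example assumes
# base.Node requires no constructor arguments beyond the ones used here.
if __name__ == '__main__':
    df = pd.DataFrame({
        'user_id': [1, 2, 3, 4],
        'rating': [3.5, 4.0, 2.0, 5.0],
    })
    node = StructuredDataInput()
    dataset = node.transform(df)    # tf.data.Dataset of stringified rows
    print(node.column_names)        # ['user_id', 'rating']
    print(node.column_types)        # column name -> 'numerical'/'categorical'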
PypiClean
/caighdean-0.0.4.tar.gz/caighdean-0.0.4/README.rst
python-caighdean
================

Python client for the Caighdean Machine Translation service -
https://github.com/kscanne/caighdean

|build| |coverage|

Install
-------

.. code-block:: console

   $ pip install caighdean

For development install

.. code-block:: console

   $ pip install -e [email protected]:translate/python-caighdean#egg=caighdean
   $ pip install caighdean[test]

Run
---

.. code-block:: python

   >>> import caighdean
   >>> source = u'Agus thubhairt e, \n "Iongantach!" an dèidh sin.'
   >>> source
   u'Agus thubhairt e, \n "Iongantach!" an d\xc3\xa8idh sin.'
   >>> caighdean.Translator().translate(source)
   u'Agus d\xfairt s\xe9, \n "Iontach!" ina dhiaidh sin.'
   >>> print(caighdean.Translator().translate(source))
   Agus dúirt sé, "Iontach!" ina dhiaidh sin.


.. |build| image:: https://img.shields.io/travis/translate/python-caighdean/master.svg?style=flat-square
     :alt: Build Status
     :target: https://travis-ci.org/translate/python-caighdean/branches

.. |coverage| image:: https://img.shields.io/codecov/c/github/translate/python-caighdean/master.svg?style=flat-square
     :target: https://codecov.io/gh/translate/python-caighdean/branch/master
     :alt: Test Coverage
PypiClean
/cow-framework-1.0.5.tar.gz/cow-framework-1.0.5/cow/plugins/mongoengine_plugin.py
import logging
import sys

import mongoengine
import mongoengine.connection
from pymongo.errors import AutoReconnect

from cow.plugins import BasePlugin


class MongoEnginePlugin(BasePlugin):
    @classmethod
    def after_start(cls, application, io_loop=None, *args, **kw):
        databases = application.config.get('MONGO_DATABASES')

        if not databases or not isinstance(databases, (dict,)):
            raise RuntimeError("MONGO_DATABASES configuration is required and should be a dictionary.")

        items = databases.items()
        for index, (key, value) in enumerate(items):
            host = value['host']
            port = int(value['port'])
            db = value['database']
            username = value.get('username', None)
            password = value.get('password', None)

            conn_str = "mongodb://%s:%d/%s" % (host, port, db)
            if username is not None:
                if password is not None:
                    conn_str = "mongodb://%s:%s@%s:%d/%s" % (username, password, host, port, db)
                else:
                    conn_str = "mongodb://%s@%s:%d/%s" % (username, host, port, db)

            arguments = dict(
                host=conn_str,
            )
            arguments['alias'] = key

            # Read the optional replica set name from the per-database config
            # (reading it from `arguments` would always return None).
            replica_set = value.get('replica_set', None)
            if replica_set is not None:
                arguments['replicaSet'] = replica_set

            logging.info("Connecting to mongodb at %s" % conn_str)
            mongoengine.connect(db, **arguments)

            if index == 0:
                arguments.pop('alias')
                mongoengine.connect(db, **arguments)

    @classmethod
    def before_end(cls, application, *args, **kw):
        pass
        #databases = application.config.get('MONGO_DATABASES')

        #for key in databases.keys():
            #logging.info("Disconnecting from mongodb[%s]..." % key)
            #mongoengine.disconnect(alias=key)

    @classmethod
    def before_healthcheck(cls, application, callback, *args, **kw):
        databases = application.config.get('MONGO_DATABASES')

        for key in databases.keys():
            conn = mongoengine.connection.get_connection(alias=key).connection

            try:
                callback(conn.command('ping'))
            except AutoReconnect:
                logging.exception(sys.exc_info()[1])
                callback({})

    @classmethod
    def validate(cls, result, *args, **kw):
        return result.get('ok', 0) == 1.0

    @classmethod
    def define_configurations(cls, config):
        config.define('MONGO_DATABASES', None, "Dictionary holding all the mongodb connections to be made.", "MotorEngine")
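
# --- Illustrative configuration sketch (not part of the original module) ---
# after_start() expects the application's MONGO_DATABASES setting to be a
# dict of named connections. The names and values below are hypothetical;
# only the 'host', 'port' and 'database' keys are required by the code above,
# with 'username', 'password' and 'replica_set' used when present.
#
#   MONGO_DATABASES = {
#       'default': {
#           'host': 'localhost',
#           'port': 27017,
#           'database': 'cowdb',
#       },
#       'analytics': {
#           'host': 'db.example.com',
#           'port': 27017,
#           'database': 'analytics',
#           'username': 'cow',
#           'password': 'secret',
#           'replica_set': 'rs0',
#       },
#   }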
PypiClean
/mis_modulos-0.1.tar.gz/mis_modulos-0.1/pip/_vendor/chardet/sjisprober.py
from .chardistribution import SJISDistributionAnalysis
from .codingstatemachine import CodingStateMachine
from .enums import MachineState, ProbingState
from .jpcntx import SJISContextAnalysis
from .mbcharsetprober import MultiByteCharSetProber
from .mbcssm import SJIS_SM_MODEL


class SJISProber(MultiByteCharSetProber):
    def __init__(self):
        super().__init__()
        self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)
        self.distribution_analyzer = SJISDistributionAnalysis()
        self.context_analyzer = SJISContextAnalysis()
        self.reset()

    def reset(self):
        super().reset()
        self.context_analyzer.reset()

    @property
    def charset_name(self):
        return self.context_analyzer.charset_name

    @property
    def language(self):
        return "Japanese"

    def feed(self, byte_str):
        for i, byte in enumerate(byte_str):
            coding_state = self.coding_sm.next_state(byte)
            if coding_state == MachineState.ERROR:
                self.logger.debug(
                    "%s %s prober hit error at byte %s",
                    self.charset_name,
                    self.language,
                    i,
                )
                self._state = ProbingState.NOT_ME
                break
            if coding_state == MachineState.ITS_ME:
                self._state = ProbingState.FOUND_IT
                break
            if coding_state == MachineState.START:
                char_len = self.coding_sm.get_current_charlen()
                if i == 0:
                    self._last_char[1] = byte
                    self.context_analyzer.feed(
                        self._last_char[2 - char_len :], char_len
                    )
                    self.distribution_analyzer.feed(self._last_char, char_len)
                else:
                    self.context_analyzer.feed(
                        byte_str[i + 1 - char_len : i + 3 - char_len], char_len
                    )
                    self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len)

        self._last_char[0] = byte_str[-1]

        if self.state == ProbingState.DETECTING:
            if self.context_analyzer.got_enough_data() and (
                self.get_confidence() > self.SHORTCUT_THRESHOLD
            ):
                self._state = ProbingState.FOUND_IT

        return self.state

    def get_confidence(self):
        context_conf = self.context_analyzer.get_confidence()
        distrib_conf = self.distribution_analyzer.get_confidence()
        return max(context_conf, distrib_conf)
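
# --- Illustrative usage sketch (not part of the original module) ---
# A minimal, hedged example of driving this prober directly; in normal use
# chardet's UniversalDetector manages probers for you. The sample text is an
# arbitrary Japanese sentence encoded as Shift_JIS. Because this module uses
# relative imports, run it with `python -m <package>.sjisprober` rather than
# as a plain script.
if __name__ == "__main__":
    prober = SJISProber()
    data = "こんにちは、世界。これは文字コード判定のテストです。".encode("shift_jis")
    prober.feed(data)
    print(prober.charset_name, prober.get_confidence())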
PypiClean
/msgraph_beta_sdk-1.0.0a9-py3-none-any.whl/msgraph/generated/teamwork/team_templates/item/definitions/item/team_definition/permission_grants/item/check_member_objects/check_member_objects_post_request_body.py
from __future__ import annotations
from kiota_abstractions.serialization import AdditionalDataHolder, Parsable, ParseNode, SerializationWriter
from typing import Any, Callable, Dict, List, Optional, TYPE_CHECKING, Union

class CheckMemberObjectsPostRequestBody(AdditionalDataHolder, Parsable):
    def __init__(self,) -> None:
        """
        Instantiates a new checkMemberObjectsPostRequestBody and sets the default values.
        """
        # Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
        self._additional_data: Dict[str, Any] = {}
        # The ids property
        self._ids: Optional[List[str]] = None

    @property
    def additional_data(self,) -> Dict[str, Any]:
        """
        Gets the additionalData property value. Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
        Returns: Dict[str, Any]
        """
        return self._additional_data

    @additional_data.setter
    def additional_data(self,value: Dict[str, Any]) -> None:
        """
        Sets the additionalData property value. Stores additional data not described in the OpenAPI description found when deserializing. Can be used for serialization as well.
        Args:
            value: Value to set for the AdditionalData property.
        """
        self._additional_data = value

    @staticmethod
    def create_from_discriminator_value(parse_node: Optional[ParseNode] = None) -> CheckMemberObjectsPostRequestBody:
        """
        Creates a new instance of the appropriate class based on discriminator value
        Args:
            parseNode: The parse node to use to read the discriminator value and create the object
        Returns: CheckMemberObjectsPostRequestBody
        """
        if parse_node is None:
            raise Exception("parse_node cannot be undefined")
        return CheckMemberObjectsPostRequestBody()

    def get_field_deserializers(self,) -> Dict[str, Callable[[ParseNode], None]]:
        """
        The deserialization information for the current model
        Returns: Dict[str, Callable[[ParseNode], None]]
        """
        fields: Dict[str, Callable[[Any], None]] = {
            "ids": lambda n : setattr(self, 'ids', n.get_collection_of_primitive_values(str)),
        }
        return fields

    @property
    def ids(self,) -> Optional[List[str]]:
        """
        Gets the ids property value. The ids property
        Returns: Optional[List[str]]
        """
        return self._ids

    @ids.setter
    def ids(self,value: Optional[List[str]] = None) -> None:
        """
        Sets the ids property value. The ids property
        Args:
            value: Value to set for the ids property.
        """
        self._ids = value

    def serialize(self,writer: SerializationWriter) -> None:
        """
        Serializes information the current object
        Args:
            writer: Serialization writer to use to serialize this model
        """
        if writer is None:
            raise Exception("writer cannot be undefined")
        writer.write_collection_of_primitive_values("ids", self.ids)
        writer.write_additional_data_value(self.additional_data)
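
# --- Illustrative usage sketch (not part of the original module) ---
# A minimal, hedged example of building the request body for a
# checkMemberObjects call; the directory-object ids below are placeholders.
if __name__ == "__main__":
    body = CheckMemberObjectsPostRequestBody()
    body.ids = [
        "00000000-0000-0000-0000-000000000001",
        "00000000-0000-0000-0000-000000000002",
    ]
    # The populated body would then be passed to the generated request
    # builder's post() method for the checkMemberObjects action.
    print(body.ids)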
PypiClean
/dsin100daysv29-6.0.1.tar.gz/dsin100daysv29-6.0.1/notebook/static/notebook/js/actions.js
// How to pick action names: // // * First pick a noun and a verb for the action. For example, if the action is "restart kernel," the verb is // "restart" and the noun is "kernel". // * Omit terms like "selected" and "active" by default, so "delete-cell", rather than "delete-selected-cell". // Only provide a scope like "-all-" if it is other than the default "selected" or "active" scope. // * If an action has a secondary action, separate the secondary action with "-and-", so // "restart-kernel-and-clear-output". // * Don't ever use before/after as they have a temporal connotation that is confusing when used in a spatial // context. // * Use above/below or previous/next to indicate spacial and sequential relationships. // * For dialogs, use a verb that indicates what the dialog will accomplish, such as "confirm-restart-kernel". define([ 'base/js/i18n', ], function(i18n){ "use strict"; var warn_bad_name = function(name){ if(name !== "" && !name.match(/:/)){ console.warn('You are trying to use an action/command name, where the separator between prefix and name is not `:`\n'+ '"'+name+'"\n'+ 'You are likely to not use the API in a correct way. Typically use the following:\n'+ '`var key = actions.register(<object>, "<name>", "<prefix>");` and reuse the `key` variable'+ 'instead of re-generating the key yourself.' ); } }; var ActionHandler = function (env) { this.env = env || {}; Object.seal(this); }; var $ = requirejs('jquery'); var events = requirejs('base/js/events'); /** * A bunch of predefined `Simple Actions` used by Jupyter. * `Simple Actions` have the following keys: * help (optional): a short string the describe the action. * will be used in various context, like as menu name, tool tips on buttons, * and short description in help menu. * help_index (optional): a string used to sort action in help menu. * icon (optional): a short string that represent the icon that have to be used with this * action. this should mainly correspond to a Font_awesome class. * handler : a function which is called when the action is activated. It will receive at first parameter * a dictionary containing various handle to element of the notebook. * * action need to be registered with a **name** that can be use to refer to this action. * * if `help` is not provided it will be derived by replacing any dash by space * in the **name** of the action. It is advised to provide a prefix to action name to * avoid conflict the prefix should be all lowercase and end with a dot `.` * in the absence of a prefix the behavior of the action is undefined. * * All action provided by the Jupyter notebook are prefixed with `jupyter-notebook:`. * * One can register extra actions or replace an existing action with another one is possible * but is considered undefined behavior. * **/ var _actions = { 'toggle-rtl-layout': { cmd: i18n.msg._('toggle rtl layout'), help: i18n.msg._('Toggle the screen directionality between left-to-right and right-to-left'), handler: function () { (document.body.getAttribute('dir')=='rtl') ? 
document.body.setAttribute('dir','ltr') : document.body.setAttribute('dir','rtl'); } }, 'edit-command-mode-keyboard-shortcuts': { cmd: i18n.msg._('edit command mode keyboard shortcuts'), help: i18n.msg._('Open a dialog to edit the command mode keyboard shortcuts'), handler: function (env) { env.notebook.show_shortcuts_editor(); } }, 'shutdown-kernel': { help: 'Shutdown the kernel (no confirmation dialog)', handler: function (env) { env.notebook.shutdown_kernel({confirm: false}); } }, 'confirm-shutdown-kernel':{ icon: 'fa-repeat', help_index : 'hb', help: 'Shutdown the kernel (with confirmation dialog)', handler : function (env) { env.notebook.shutdown_kernel(); } }, 'restart-kernel': { cmd: i18n.msg._('restart kernel'), help: i18n.msg._('restart the kernel (no confirmation dialog)'), handler: function (env) { env.notebook.restart_kernel({confirm: false}); }, }, 'confirm-restart-kernel':{ icon: 'fa-repeat', help_index : 'hb', cmd: i18n.msg._('confirm restart kernel'), help: i18n.msg._('restart the kernel (with dialog)'), handler : function (env) { env.notebook.restart_kernel(); } }, 'restart-kernel-and-run-all-cells': { cmd: i18n.msg._('restart kernel and run all cells'), help: i18n.msg._('restart the kernel, then re-run the whole notebook (no confirmation dialog)'), handler: function (env) { env.notebook.restart_run_all({confirm: false}); } }, 'confirm-restart-kernel-and-run-all-cells': { icon: 'fa-forward', cmd: i18n.msg._('confirm restart kernel and run all cells'), help: i18n.msg._('restart the kernel, then re-run the whole notebook (with dialog)'), handler: function (env) { env.notebook.restart_run_all(); } }, 'restart-kernel-and-clear-output': { cmd: i18n.msg._('restart kernel and clear output'), help: i18n.msg._('restart the kernel and clear all output (no confirmation dialog)'), handler: function (env) { env.notebook.restart_clear_output({confirm: false}); } }, 'confirm-restart-kernel-and-clear-output': { cmd: i18n.msg._('confirm restart kernel and clear output'), help: i18n.msg._('restart the kernel and clear all output (with dialog)'), handler: function (env) { env.notebook.restart_clear_output(); } }, 'interrupt-kernel':{ icon: 'fa-stop', cmd: i18n.msg._('interrupt the kernel'), help: i18n.msg._('interrupt the kernel'), help_index : 'ha', handler : function (env) { env.notebook.kernel.interrupt(); } }, 'run-cell-and-select-next': { cmd: i18n.msg._('run cell and select next'), icon: 'fa-step-forward', help: i18n.msg._('run cell, select below'), help_index : 'ba', handler : function (env) { env.notebook.execute_cell_and_select_below(); } }, 'run-cell':{ cmd: i18n.msg._('run selected cells'), help : i18n.msg._('run selected cells'), help_index : 'bb', handler : function (env) { env.notebook.execute_selected_cells(); } }, 'run-cell-and-insert-below':{ cmd: i18n.msg._('run cell and insert below'), help : i18n.msg._('run cell and insert below'), help_index : 'bc', handler : function (env) { env.notebook.execute_cell_and_insert_below(); } }, 'run-all-cells': { cmd: i18n.msg._('run all cells'), help: i18n.msg._('run all cells'), help_index: 'bd', handler: function (env) { env.notebook.execute_all_cells(); } }, 'run-all-cells-above':{ cmd: i18n.msg._('run all cells above'), help: i18n.msg._('run all cells above'), handler : function (env) { env.notebook.execute_cells_above(); } }, 'run-all-cells-below':{ cmd: i18n.msg._('run all cells below'), help: i18n.msg._('run all cells below'), handler : function (env) { env.notebook.execute_cells_below(); } }, 'enter-command-mode': { cmd: 
i18n.msg._('enter command mode'), help : i18n.msg._('enter command mode'), help_index : 'aa', handler : function (env) { env.notebook.command_mode(); } }, 'insert-image': { cmd: i18n.msg._('insert image'), help : i18n.msg._('insert image'), help_index : 'dz', handler : function (env) { env.notebook.insert_image(); } }, 'cut-cell-attachments': { cmd: i18n.msg._('cut cell attachments'), help : i18n.msg._('cut cell attachments'), help_index : 'dza', handler: function (env) { env.notebook.cut_cell_attachments(); } }, 'copy-cell-attachments': { cmd: i18n.msg._('copy cell attachments'), help : i18n.msg._('copy cell attachments'), help_index: 'dzb', handler: function (env) { env.notebook.copy_cell_attachments(); } }, 'paste-cell-attachments': { cmd: i18n.msg._('paste cell attachments'), help : i18n.msg._('paste cell attachments'), help_index: 'dzc', handler: function (env) { env.notebook.paste_cell_attachments(); } }, 'split-cell-at-cursor': { cmd: i18n.msg._('split cell at cursor'), help : i18n.msg._('split cell at cursor'), help_index : 'ea', handler : function (env) { env.notebook.split_cell(); } }, 'enter-edit-mode' : { cmd: i18n.msg._('enter edit mode'), help : i18n.msg._('enter edit mode'), help_index : 'aa', handler : function (env) { env.notebook.edit_mode(); } }, 'select-previous-cell' : { cmd: i18n.msg._('select previous cell'), help: i18n.msg._('select cell above'), help_index : 'da', handler : function (env) { var index = env.notebook.get_selected_index(); if (index !== 0 && index !== null) { env.notebook.select_prev(true); env.notebook.focus_cell(); } } }, 'select-next-cell' : { cmd: i18n.msg._('select next cell'), help: i18n.msg._('select cell below'), help_index : 'db', handler : function (env) { var index = env.notebook.get_selected_index(); if (index !== (env.notebook.ncells()-1) && index !== null) { env.notebook.select_next(true); env.notebook.focus_cell(); } } }, 'extend-selection-above' : { cmd: i18n.msg._('extend selection above'), help: i18n.msg._('extend selected cells above'), help_index : 'dc', handler : function (env) { env.notebook.extend_selection_by(-1); // scroll into view, // do not call notebook.focus_cell(), or // all the selection get thrown away env.notebook.get_selected_cell().element.focus(); } }, 'extend-selection-below' : { cmd: i18n.msg._('extend selection below'), help: i18n.msg._('extend selected cells below'), help_index : 'dd', handler : function (env) { env.notebook.extend_selection_by(1); // scroll into view, // do not call notebook.focus_cell(), or // all the selection get thrown away env.notebook.get_selected_cell().element.focus(); } }, 'cut-cell' : { cmd: i18n.msg._('cut selected cells'), help: i18n.msg._('cut selected cells'), icon: 'fa-cut', help_index : 'ee', handler : function (env) { var index = env.notebook.get_selected_index(); env.notebook.cut_cell(); env.notebook.select(index); } }, 'copy-cell' : { cmd: i18n.msg._('copy selected cells'), help: i18n.msg._('copy selected cells'), icon: 'fa-copy', help_index : 'ef', handler : function (env) { env.notebook.copy_cell(); } }, 'paste-cell-replace' : { help: 'paste cells replace', handler : function (env) { env.notebook.paste_cell_replace(); } }, 'paste-cell-above' : { cmd: i18n.msg._('paste cells above'), help: i18n.msg._('paste cells above'), help_index : 'eg', handler : function (env) { env.notebook.paste_cell_above(); } }, 'paste-cell-below' : { cmd: i18n.msg._('paste cells below'), help: i18n.msg._('paste cells below'), icon: 'fa-paste', help_index : 'eh', handler : function (env) { 
env.notebook.paste_cell_below(); } }, 'insert-cell-above' : { cmd: i18n.msg._('insert cell above'), help: i18n.msg._('insert cell above'), help_index : 'ec', handler : function (env) { env.notebook.insert_cell_above(); env.notebook.select_prev(true); env.notebook.focus_cell(); } }, 'insert-cell-below' : { cmd: i18n.msg._('insert cell below'), help: i18n.msg._('insert cell below'), icon : 'fa-plus', help_index : 'ed', handler : function (env) { env.notebook.insert_cell_below(); env.notebook.select_next(true); env.notebook.focus_cell(); } }, 'change-cell-to-code' : { cmd: i18n.msg._('change cell to code'), help : i18n.msg._('change cell to code'), help_index : 'ca', handler : function (env) { env.notebook.cells_to_code(); } }, 'change-cell-to-markdown' : { cmd: i18n.msg._('change cell to markdown'), help : i18n.msg._('change cell to markdown'), help_index : 'cb', handler : function (env) { env.notebook.cells_to_markdown(); } }, 'change-cell-to-raw' : { cmd: i18n.msg._('change cell to raw'), help : i18n.msg._('change cell to raw'), help_index : 'cc', handler : function (env) { env.notebook.cells_to_raw(); } }, 'change-cell-to-heading-1' : { cmd: i18n.msg._('change cell to heading 1'), help : i18n.msg._('change cell to heading 1'), help_index : 'cd', handler : function (env) { env.notebook.to_heading(undefined, 1); } }, 'change-cell-to-heading-2' : { cmd: i18n.msg._('change cell to heading 2'), help : i18n.msg._('change cell to heading 2'), help_index : 'ce', handler : function (env) { env.notebook.to_heading(undefined, 2); } }, 'change-cell-to-heading-3' : { cmd: i18n.msg._('change cell to heading 3'), help : i18n.msg._('change cell to heading 3'), help_index : 'cf', handler : function (env) { env.notebook.to_heading(undefined, 3); } }, 'change-cell-to-heading-4' : { cmd: i18n.msg._('change cell to heading 4'), help : i18n.msg._('change cell to heading 4'), help_index : 'cg', handler : function (env) { env.notebook.to_heading(undefined, 4); } }, 'change-cell-to-heading-5' : { cmd: i18n.msg._('change cell to heading 5'), help : i18n.msg._('change cell to heading 5'), help_index : 'ch', handler : function (env) { env.notebook.to_heading(undefined, 5); } }, 'change-cell-to-heading-6' : { cmd: i18n.msg._('change cell to heading 6'), help : i18n.msg._('change cell to heading 6'), help_index : 'ci', handler : function (env) { env.notebook.to_heading(undefined, 6); } }, 'toggle-cell-output-collapsed' : { cmd: i18n.msg._('toggle cell output'), help : i18n.msg._('toggle output of selected cells'), help_index : 'gb', handler : function (env) { env.notebook.toggle_cells_outputs(); } }, 'toggle-cell-output-scrolled' : { cmd: i18n.msg._('toggle cell scrolling'), help : i18n.msg._('toggle output scrolling of selected cells'), help_index : 'gc', handler : function (env) { env.notebook.toggle_cells_outputs_scroll(); } }, 'clear-cell-output' : { cmd: i18n.msg._('clear cell output'), help : i18n.msg._('clear output of selected cells'), handler : function (env) { env.notebook.clear_cells_outputs(); } }, 'move-cell-down' : { cmd: i18n.msg._('move cells down'), help: i18n.msg._('move selected cells down'), icon: 'fa-arrow-down', help_index : 'eb', handler : function (env) { env.notebook.move_cell_down(); } }, 'move-cell-up' : { cmd: i18n.msg._('move cells up'), help: i18n.msg._('move selected cells up'), icon: 'fa-arrow-up', help_index : 'ea', handler : function (env) { env.notebook.move_cell_up(); } }, 'toggle-cell-line-numbers' : { cmd: i18n.msg._('toggle line numbers'), help : i18n.msg._('toggle line 
numbers'), help_index : 'ga', handler : function (env) { env.notebook.cell_toggle_line_numbers(); } }, 'show-keyboard-shortcuts' : { cmd: i18n.msg._('show keyboard shortcuts'), help : i18n.msg._('show keyboard shortcuts'), help_index : 'ge', handler : function (env) { env.quick_help.show_keyboard_shortcuts(); } }, 'delete-cell': { cmd: i18n.msg._('delete cells'), help: i18n.msg._('delete selected cells'), help_index : 'ej', handler : function (env) { env.notebook.delete_cell(); } }, 'undo-cell-deletion' : { cmd: i18n.msg._('undo cell deletion'), help: i18n.msg._('undo cell deletion'), help_index : 'ei', handler : function (env) { env.notebook.undelete_cell(); } }, // TODO reminder // open an issue, merge with above merge with last cell of notebook if at top. 'merge-cell-with-previous-cell' : { cmd: i18n.msg._('merge cell with previous cell'), help : i18n.msg._('merge cell above'), handler : function (env) { env.notebook.merge_cell_above(); } }, 'merge-cell-with-next-cell' : { cmd: i18n.msg._('merge cell with next cell'), help : i18n.msg._('merge cell below'), help_index : 'ek', handler : function (env) { env.notebook.merge_cell_below(); } }, 'merge-selected-cells' : { cmd: i18n.msg._('merge selected cells'), help : i18n.msg._('merge selected cells'), help_index: 'el', handler: function(env) { env.notebook.merge_selected_cells(); } }, 'merge-cells' : { cmd: i18n.msg._('merge cells'), help : i18n.msg._('merge selected cells, or current cell with cell below if only one cell is selected'), help_index: 'el', handler: function(env) { var l = env.notebook.get_selected_cells_indices().length; if(l == 1){ env.notebook.merge_cell_below(); } else { env.notebook.merge_selected_cells(); } } }, 'show-command-palette': { help_index : 'aa', cmd: i18n.msg._('show command pallette'), help: i18n.msg._('open the command palette'), icon: 'fa-keyboard-o', handler : function(env){ env.notebook.show_command_palette(); } }, 'toggle-all-line-numbers': { cmd: i18n.msg._('toggle all line numbers'), help : i18n.msg._('toggles line numbers in all cells, and persist the setting'), icon: 'fa-list-ol', handler: function(env) { var value = !env.notebook.line_numbers; env.notebook.get_cells().map(function(c) { c.code_mirror.setOption('lineNumbers', value); }); env.notebook.line_numbers = value; } }, 'show-all-line-numbers': { cmd: i18n.msg._('show all line numbers'), help : i18n.msg._('show line numbers in all cells, and persist the setting'), handler: function(env) { env.notebook.get_cells().map(function(c) { c.code_mirror.setOption('lineNumbers', true); }); env.notebook.line_numbers = true; } }, 'hide-all-line-numbers': { cmd: i18n.msg._('hide all line numbers'), help : i18n.msg._('hide line numbers in all cells, and persist the setting'), handler: function(env) { env.notebook.get_cells().map(function(c) { c.code_mirror.setOption('lineNumbers', false); }); env.notebook.line_numbers = false; } }, 'toggle-header':{ cmd: i18n.msg._('toggle header'), help: i18n.msg._('switch between showing and hiding the header'), handler : function(env) { var value = !env.notebook.header; if (value === true) { $('#header-container').show(); $('.header-bar').show(); } else if (value === false) { $('#header-container').hide(); $('.header-bar').hide(); } events.trigger('resize-header.Page'); env.notebook.header = value; } }, 'show-header':{ cmd: i18n.msg._('show the header'), help: i18n.msg._('show the header'), handler : function(env) { $('#header-container').show(); $('.header-bar').show(); events.trigger('resize-header.Page'); 
env.notebook.header = true; } }, 'hide-header':{ cmd: i18n.msg._('hide the header'), help: i18n.msg._('hide the header'), handler : function(env) { $('#header-container').hide(); $('.header-bar').hide(); events.trigger('resize-header.Page'); env.notebook.header = false; } }, 'toggle-menubar':{ help: 'hide/show the menu bar', handler : function(env) { $('#menubar-container').toggle(); events.trigger('resize-header.Page'); } }, 'show-menubar':{ help: 'show the menu bar', handler : function(env) { $('#menubar-container').show(); events.trigger('resize-header.Page'); } }, 'hide-menubar':{ help: 'hide the menu bar', handler : function(env) { $('#menubar-container').hide(); events.trigger('resize-header.Page'); } }, 'toggle-toolbar':{ cmd: i18n.msg._('toggle toolbar'), help: i18n.msg._('switch between showing and hiding the toolbar'), handler : function(env) { var value = !env.notebook.toolbar; if (value === true) { $('div#maintoolbar').show(); } else if (value === false) { $('div#maintoolbar').hide(); } events.trigger('resize-header.Page'); env.notebook.toolbar = value; } }, 'show-toolbar':{ cmd: i18n.msg._('show the toolbar'), help: i18n.msg._('show the toolbar'), handler : function(env) { $('div#maintoolbar').show(); events.trigger('resize-header.Page'); env.notebook.toolbar = true; } }, 'hide-toolbar':{ cmd: i18n.msg._('hide the toolbar'), help: i18n.msg._('hide the toolbar'), handler : function(env) { $('div#maintoolbar').hide(); events.trigger('resize-header.Page'); env.notebook.toolbar = false; } }, 'close-pager': { cmd: i18n.msg._('close the pager'), help : i18n.msg._('close the pager'), handler : function(env) { // Collapse the page if it is open if (env.pager && env.pager.expanded) { env.pager.collapse(); } } }, 'auto-indent': { cmd: i18n.msg._('automatically indent selection'), help : i18n.msg._('automatically indent selection'), handler : function(env) { // Get selected cell var selected_cell = env.notebook.get_selected_cell(); // Execute a CM command selected_cell.code_mirror.execCommand('indentAuto'); } }, 'close-and-halt': { cmd: i18n.msg._('shutdown kernel and close window'), help : i18n.msg._('shutdown kernel and close window'), handler : function(env) { env.notebook.close_and_halt(); } } }; /** * A bunch of `Advance actions` for Jupyter. * Cf `Simple Action` plus the following properties. * * handler: first argument of the handler is the event that triggered the action * (typically keypress). The handler is responsible for any modification of the * event and event propagation. * Is also responsible for returning false if the event have to be further ignored, * true, to tell keyboard manager that it ignored the event. 
* * the second parameter of the handler is the environment passed to Simple Actions * **/ var custom_ignore = { 'ignore':{ cmd: i18n.msg._('ignore'), handler : function () { return true; } }, 'move-cursor-up':{ cmd: i18n.msg._('move cursor up'), help: i18n.msg._("move cursor up"), handler : function (env, event) { var index = env.notebook.get_selected_index(); var cell = env.notebook.get_cell(index); var cm = env.notebook.get_selected_cell().code_mirror; var cur = cm.getCursor(); if (cell && cell.at_top() && index !== 0 && cur.ch === 0) { if(event){ event.preventDefault(); } env.notebook.command_mode(); env.notebook.select_prev(true); env.notebook.edit_mode(); cm = env.notebook.get_selected_cell().code_mirror; cm.setCursor(cm.lastLine(), 0); } return false; } }, 'move-cursor-down':{ cmd: i18n.msg._('move cursor down'), help: i18n.msg._("move cursor down"), handler : function (env, event) { var index = env.notebook.get_selected_index(); var cell = env.notebook.get_cell(index); if (cell.at_bottom() && index !== (env.notebook.ncells()-1)) { if(event){ event.preventDefault(); } env.notebook.command_mode(); env.notebook.select_next(true); env.notebook.edit_mode(); var cm = env.notebook.get_selected_cell().code_mirror; cm.setCursor(0, 0); } return false; } }, 'scroll-notebook-down': { cmd: i18n.msg._('scroll notebook down'), help: i18n.msg._("scroll notebook down"), handler: function(env, event) { if(event){ event.preventDefault(); } return env.notebook.scroll_manager.scroll(1); }, }, 'scroll-notebook-up': { cmd: i18n.msg._('scroll notebook up'), help: i18n.msg._("scroll notebook up"), handler: function(env, event) { if(event){ event.preventDefault(); } return env.notebook.scroll_manager.scroll(-1); }, }, 'scroll-cell-center': { cmd: i18n.msg._('scroll cell center'), help: i18n.msg._("Scroll the current cell to the center"), handler: function (env, event) { if(event){ event.preventDefault(); } var cell = env.notebook.get_selected_index(); return env.notebook.scroll_cell_percent(cell, 50, 0); } }, 'scroll-cell-top': { cmd: i18n.msg._('scroll cell top'), help: i18n.msg._("Scroll the current cell to the top"), handler: function (env, event) { if(event){ event.preventDefault(); } var cell = env.notebook.get_selected_index(); return env.notebook.scroll_cell_percent(cell, 0, 0); } }, 'duplicate-notebook':{ cmd: i18n.msg._('duplicate notebook'), help: i18n.msg._("Create and open a copy of the current notebook"), handler : function (env, event) { env.notebook.copy_notebook(); } }, 'trust-notebook':{ cmd: i18n.msg._('trust notebook'), help: i18n.msg._("Trust the current notebook"), handler : function (env, event) { env.notebook.trust_notebook(); } }, 'rename-notebook':{ cmd: i18n.msg._('rename notebook'), help: i18n.msg._("Rename the current notebook"), handler : function (env, event) { env.notebook.save_widget.rename_notebook({notebook: env.notebook}); } }, 'toggle-all-cells-output-collapsed':{ cmd: i18n.msg._('toggle all cells output collapsed'), help: i18n.msg._("Toggle the hidden state of all output areas"), handler : function (env, event) { env.notebook.toggle_all_output(); } }, 'toggle-all-cells-output-scrolled':{ cmd: i18n.msg._('toggle all cells output scrolled'), help: i18n.msg._("Toggle the scrolling state of all output areas"), handler : function (env, event) { env.notebook.toggle_all_output_scroll(); } }, 'clear-all-cells-output':{ cmd: i18n.msg._('clear all cells output'), help: i18n.msg._("Clear the content of all the outputs"), handler : function (env, event) { 
env.notebook.clear_all_output(); } }, 'save-notebook':{ cmd: i18n.msg._('save notebook'), help: i18n.msg._("Save and Checkpoint"), help_index : 'fb', icon: 'fa-save', handler : function (env, event) { env.notebook.save_checkpoint(); if(event){ event.preventDefault(); } return false; } }, }; // private stuff that prepend `jupyter-notebook:` to actions names // and uniformize/fill in missing pieces in of an action. var _prepare_handler = function(registry, subkey, source){ registry['jupyter-notebook:'+subkey] = {}; registry['jupyter-notebook:'+subkey].cmd = source[subkey].cmd; registry['jupyter-notebook:'+subkey].help = source[subkey].help||subkey.replace(/-/g,' '); registry['jupyter-notebook:'+subkey].help_index = source[subkey].help_index; registry['jupyter-notebook:'+subkey].icon = source[subkey].icon; return source[subkey].handler; }; // Will actually generate/register all the Jupyter actions var fun = function(){ var final_actions = {}; var k; for(k in _actions){ if(_actions.hasOwnProperty(k)){ // Js closure are function level not block level need to wrap in a IIFE // and append jupyter-notebook: to event name these things do intercept event so are wrapped // in a function that return false. var handler = _prepare_handler(final_actions, k, _actions); (function(key, handler){ final_actions['jupyter-notebook:'+key].handler = function(env, event){ handler(env); if(event){ event.preventDefault(); } return false; }; })(k, handler); } } for(k in custom_ignore){ // Js closure are function level not block level need to wrap in a IIFE // same as above, but decide for themselves whether or not they intercept events. if(custom_ignore.hasOwnProperty(k)){ handler = _prepare_handler(final_actions, k, custom_ignore); (function(key, handler){ final_actions['jupyter-notebook:'+key].handler = function(env, event){ return handler(env, event); }; })(k, handler); } } return final_actions; }; ActionHandler.prototype._actions = fun(); /** * extend the environment variable that will be pass to handlers **/ ActionHandler.prototype.extend_env = function(env){ for(var k in env){ this.env[k] = env[k]; } }; ActionHandler.prototype.register = function(action, name, prefix){ /** * Register an `action` with an optional name and prefix. * * if name and prefix are not given they will be determined automatically. * if action if just a `function` it will be wrapped in an anonymous action. * * @return the full name to access this action . **/ action = this.normalise(action); if( !name ){ name = 'autogenerated-'+String(action.handler); } prefix = prefix || 'auto'; var full_name = prefix+':'+name; this._actions[full_name] = action; return full_name; }; ActionHandler.prototype.normalise = function(data){ /** * given an `action` or `function`, return a normalised `action` * by setting all known attributes and removing unknown attributes; **/ if(typeof(data) === 'function'){ data = {handler:data}; } if(typeof(data.handler) !== 'function'){ throw new Error('unknown datatype, cannot register'); } var _data = data; data = {}; data.handler = _data.handler; data.help = _data.help || ''; data.icon = _data.icon || ''; data.help_index = _data.help_index || ''; return data; }; ActionHandler.prototype.get_name = function(name_or_data){ /** * given an `action` or `name` of an action, return the name attached to this action. * if given the name of and corresponding actions does not exist in registry, return `null`. 
**/ if(typeof(name_or_data) === 'string'){ warn_bad_name(name_or_data); if(this.exists(name_or_data)){ return name_or_data; } else { return null; } } else { return this.register(name_or_data); } }; ActionHandler.prototype.get = function(name){ warn_bad_name(name); return this._actions[name]; }; ActionHandler.prototype.call = function(name, event, env){ return this._actions[name].handler(env|| this.env, event); }; ActionHandler.prototype.exists = function(name){ return (typeof(this._actions[name]) !== 'undefined'); }; return {init:ActionHandler}; });
PypiClean
/dosage-3.0.tar.gz/dosage-3.0/dosagelib/plugins/comicfury.py
import os from ..scraper import ParserScraper from ..helpers import bounceStarter XPATH_LINK = '//a[d:class("%s") and contains(text(), "%s")]' XPATH_IMG = '//div[d:class("comicnav")]//a[img[contains(@alt, "%s")]]' class ComicFury(ParserScraper): imageSearch = ( '//img[@id="comicimage"]', '//div[@id="comicimagewrap"]//embed', '//div[@id="comicimagewrap"]//img', ) prevSearch = ( '//link[@rel="prev"]', # 137 (needs to be before the generic a@rel, because layout is wrong) '//a[contains(@title, "previous")]', '//a[@rel="prev"]', XPATH_LINK % ('comicnavlink', 'Previous'), XPATH_IMG % ('Previous'), # Art, ConsolersDLC, etc. u'//nav//a[contains(text(), "\u2039")]', # LatchkeyKingdom '//a[d:class("navi") and img[contains(@src, "Previous")]]', # KATRAN '//a[contains(text(), "Previous")]', # MansionofE '//a[img[contains(@alt, "PREVIOUS")]]', # RedSpot '//a[contains(text(), "Back")]', ) nextSearch = ( '//link[@rel="next"]', # 137 (see above) '//a[contains(@title, "next")]', '//a[@rel="next"]', XPATH_LINK % ('comicnavlink', 'Next'), XPATH_IMG % ('Next'), # Art, ConsolersDLC, etc. u'//nav//a[contains(text(), "\u203A")]', # LatchkeyKingdom '//a[d:class("navi") and img[contains(@src, "Next")]]', # RedSpot, KATRAN '//a[contains(text(), "Next")]', # MansionofE '//a[img[contains(@alt, "NEXT")]]', ) help = 'Index format: n' starter = bounceStarter def __init__(self, name, sub, lang=None, adult=False, endOfLife=False, segmented=False): super().__init__('ComicFury/' + name) self.prefix = name self.url = 'https://%s.webcomic.ws/comics/' % sub self.stripUrl = self.url + '%s' self.firstStripUrl = self.stripUrl % '1' if lang: self.lang = lang if adult: self.adult = adult if endOfLife: self.endOfLife = endOfLife if segmented: self.multipleImagesPerStrip = True self.imageSearch = self.imageSearch + ( '//img[d:class("comicsegmentimage")]', ) def namer(self, image_url, page_url): parts = page_url.split('/') path, ext = os.path.splitext(image_url) num = parts[-1] return "%s_%s%s" % (self.prefix, num, ext) def shouldSkipUrl(self, url, data): """Skip pages without images.""" # Videos on Underverse return (data.xpath('//div[@id="comicimagewrap"]//video') and not data.xpath('//div[@id="comicimagewrap"]//img')) @classmethod def getmodules(cls): # noqa: Allowed to be long return ( # These were once in the list below, but fell out from the index... 
cls('BadassologyByMichaelBay', 'strudelology'), cls('DandyAndCompany', 'dandyandcompany'), cls('DeadAtNight', 'deadnight'), cls('Shatterrealm', 'shatterrealm'), # do not edit anything below since these entries are generated from # scripts/comicfury.py # START AUTOUPDATE cls('0Eight', '0eight'), cls('1000', '1000'), cls('12YearsLater', '12yearslater'), cls('137', '137'), cls('20', 'two-over-zero'), cls('20QuidAmusements', 'twentyquidamusements'), cls('30', '30years'), cls('30DaysOfCharacters', '30days'), cls('3DGlasses', '3dglasses'), cls('60SecondComics', '6tsc'), cls('6ColorStories', '6colorstories'), cls('6Tales', 'sixtales'), cls('933Dollars', '933dollars'), cls('_Thetest_', 'thetest'), cls('AbbyComics', 'abbycomics'), cls('ABrickishSpaceComic', 'abrickishspacecomic'), cls('AbsentMindedTheatre', 'amtheatre'), cls('Absurd', 'absurd'), cls('ACannonadeOfHogwash', 'cannonadeofhogwash'), cls('AccidentallyOnPurpose', 'accidentally-on-purpose'), cls('ACelestialStory', 'acelestialstory'), cls('AComicExistense', 'acomicexistense'), cls('Acroalis', 'acroalis'), cls('ActingOut', 'actingout'), cls('ActionLand', 'actionland'), cls('Advent', 'advent'), cls('AdventuresInJetpacks', 'adventuresinjetpacks'), cls('AdventuresInTanoshii', 'adventuresintanoshii'), cls('AdventuresInTrueLove', 'advtl'), cls('Aerosol', 'aerosol'), cls('AetherEarthAndSun', 'aether'), cls('AForeverQuest', 'aforeverquest'), cls('Afterdead', 'afterdead'), cls('AGame', 'kirahitogame'), cls('Agency', 'agency-comic'), cls('AgentBishop', 'agentbishop'), cls('AHappierKindOfSad', 'ahappierkindofsad'), cls('AlbinoBrothers', 'albinobros'), cls('Alderwood', 'alderwood'), cls('AlexanderAndLucasRebooted', 'alexanderandlucas'), cls('AliaTerra', 'alia-terra'), cls('AlienIrony', 'alien-irony'), cls('AlienSpike', 'alienspike'), cls('Alignment', 'alignment'), cls('AllTheBbqSauce', 'allthebbqsauce'), cls('Alone', 'alone'), cls('ALoonaticsTale', 'aloonaticstale'), cls('ALoveStorydraft', 'alovestory'), cls('AlyaTheLastChildOfLight', 'alya'), cls('Amara', 'amara'), cls('Ampre', 'ampere'), cls('AmyOok', 'amyook'), cls('AndroidFiles', 'androidfiles'), cls('AngelGuardianEnEspanol', 'angelguardianespanol', 'es'), cls('AngelsOfIblis', 'angelsofiblis'), cls('AngryFaerie', 'angryfaerie'), cls('AnimalInstinct', 'fur-realanimalinstinct'), cls('Animangitis', 'animangitis'), cls('AnK', 'ank'), cls('Anne', 'anne'), cls('AntarcticBroadcasting', 'antarcticbroadcasting'), cls('AntaresComplex', 'antarescomplex'), cls('Antcomics', 'antcomics'), cls('Anthology', 'strudelology'), cls('AnthologyOfAnfer', 'anfer'), cls('AnthrosAndDungeons', 'anthrosanddungeons'), cls('AntiqueTimeMachine', 'atm'), cls('APiratesLife', 'pirateslife'), cls('ApocalypsoAdventure', 'thewriter13'), cls('ApplepineMonkeyAndFriends', 'applepine'), cls('AquazoneBreakfastNews', 'aqbn'), cls('ArachnidGoddess', 'arachnidgoddess'), cls('Arcane', 'rbsarcane'), cls('Archibald', 'archibald'), cls('ArchiNinja', 'archininja'), cls('AreYouDoneYet', 'areyoudoneyet'), cls('ArmlessAmy', 'armlessamy'), cls('ArmlessAmyExtraEdition', 'armlessamyextraedition'), cls('ArmyBrat', 'armybrat'), cls('Art', 'art'), cls('ArtificialStorm', 'artificialstorm'), cls('ArtisticAdventuresInBoredom', 'aab'), cls('ARVEYToonz', 'arveytoonz'), cls('Ashes', 'ashescomic'), cls('Asperchu', 'asperchu'), cls('AsperitasAstraalia', 'asperitasastraalia'), cls('AssholeAndDouchebag', 'aaanddb'), cls('AstralAves', 'astralaves'), cls('ASTRAYCATS', 'astraycats'), cls('Astronautical', 'astronautical'), cls('AtomicMonkeyComics', 'atomicmonkey'), 
cls('ATownCalledAlandale', 'atowncalledalandale'), cls('AttackOfTheRobofemoids', 'attack-of-the-robofemoids'), cls('AugustosClassic', 'augustos-classic'), cls('AuntieClara', 'auntieclara'), cls('Auriga', 'auriga'), cls('Auster', 'auster'), cls('AutumnBay', 'autumnbay'), cls('AutumnBayExtraEdition', 'autumnbayextra'), cls('Avatars', 'avatars'), cls('AvengersRollInitiative', 'avengersrollinitiative'), cls('AwkwardPaws', 'awkwardpaws'), cls('AwkwardShelby', 'awkwardshelby'), cls('BabesOfDongaria', 'dongaria'), cls('Baby001', 'baby001'), cls('BabyBatman', 'babybatman'), cls('BackToTheRefridgerator', 'bttf'), cls('BadAdjectives', 'badadjectives'), cls('BananaCreamCake', 'bananacreamcake'), cls('BarkingCrayon', 'barkingcrayon'), cls('BASKERVILLE', 'baskerville'), cls('BASO', 'baso'), cls('BattleOfTheRobofemoids', 'battle-of-the-robofemoids'), cls('BeatStuffUpMan', 'beatstuffupman'), cls('BeepClub', 'beepclub'), cls('BeePolice', 'beepolice'), cls('Beezwax', 'beezwax'), cls('BeforeAndAfter', 'beforeandafter'), cls('Being', 'being'), cls('BELECOMICS', 'belecomics'), cls('BentElbows', 'bentelbows'), cls('BetaParticles', 'betaparticles'), cls('BetweenTheFrames', 'betweentheframes'), cls('BetweenTheInterval', 'betweentheinterval'), cls('BibleBelt', 'biblebelt'), cls('BilateralComics', 'bilateralcomics'), cls('BionicleTales', 'bionicletales'), cls('BioSyte', 'biosyte'), cls('Birdman', 'birdman'), cls('BlankLifeInsertPlayerRokulily', 'blanklife'), cls('BlackTapestries', 'blacktapestries', adult=True), cls('BlitzPhoenix', 'blinix'), cls('BlobWorld', 'blobworld'), cls('BlueBloodHeroes', 'bluebloodheroes'), cls('BoatcrashChronicles', 'boatcrash'), cls('BobbyTheFetus', 'bobbythefetus'), cls('Boobgirl', 'boobgirl'), cls('BookOfThree', 'bookofthree'), cls('BooksDontWorkHere', 'booksdontworkhere'), cls('BorisAndBjorn', 'borisandbjorn'), cls('Boritom', 'boritom'), cls('BrainFood', 'brainfood'), cls('BrainTeaser', 'brainteaser'), cls('BritarsesHashHymnal', 'hashhymnal'), cls('BroadoakPeople', 'broadoakpeople'), cls('BrokenWings', 'brokenwingscomic'), cls('BromosWorld', 'bromosworld'), cls('Brujagh', 'brujagh'), cls('BubbleFox', 'bubblefox'), cls('Bulletproof', 'bulletproof'), cls('BunnyGoreJustice', 'bunny-gore-justice'), cls('BustySolar', 'bustysolar'), cls('ButterflyAFortuitousTale', 'butterfly'), cls('ButterflyEffect', 'thebutterflyeffect'), cls('BUXYAndDave', 'buxy'), cls('BuyingTime', 'buyingtime'), cls('CACKLENCOMICS', 'cacklencomics'), cls('CactusCanyon', 'cactuscanyon'), cls('CAFEGRUESOME', 'cafegruesome'), cls('Cagegirl', 'cagegirl'), cls('CastOfMadness', 'castofmadness'), cls('CatHerosEpicCatventuresAsAnHero', 'cathero'), cls('CatosApprenticeship', 'cato'), cls('CattDogg', 'cattdogg'), cls('Cattic', 'cattic'), cls('CattusesChristmasCalendar', 'xmascattuses'), cls('CatWithGoggles', 'catwithgoggles'), cls('CautionaryTales', 'cautionarytales'), cls('CazTheComicStrip', 'cazthecomicstrip'), cls('CelticShaman', 'celticshaman'), cls('Chainbreaker', 'chainbreaker'), cls('ChamberOfTheArcanum', 'cofthea'), cls('ChampionOfKatara', 'championofkatara'), cls('ChanpuruSaga', 'chanpuru'), cls('CharacterBattleBetweenRounds', 'between-rounds'), cls('CHLOE', 'chloe'), cls('ChocoLavaCOMICScom', 'chocolava'), cls('Chosen', 'chosentheultimatecliche'), cls('CHRISTMASEVETheFirstLadyOfYuletideCheer', 'coolyulecomics'), cls('ChristmasWithMadDog', 'christmas-with-maddog'), cls('ChronoRedux', 'chronoredux'), cls('Cinder', 'cinder'), cls('CircusJaxs', 'circusjaxs', segmented=True), cls('CityFolk', 'cityfolkwebcomics'), 
cls('CityOfDream', 'cityofdream'), cls('CKarrus', 'ckarrus'), cls('ClassicElsewhere', 'classicelsewhere'), cls('ClassicMissJAndTheAmComics19842006', 'missjandtheam'), cls('ClydeNOwen', 'clydenowen'), cls('COCHLEAAndEUSTACHIA', 'chromefetus'), cls('CockeyedComix', 'cockeyed'), cls('Code', 'code'), cls('CollegeMunchies', 'collegemunchies'), cls('Colorforce', 'colorforce'), cls('ComicFuryFanArtExchanges', 'cfexchanges'), cls('ComicShopOfHorror', 'comicshop'), cls('ComicShortsTheMainSeries', 'comicshortsmain'), cls('ComingApartments', 'comingapartments'), cls('COMIXTURE', 'comixture'), cls('CommonReadComicAdaptions', 'slucommonread'), cls('CompanyManComic', 'companyman'), cls('ConcerningJustice', 'concerningjustice'), cls('CONIES', 'conies'), cls('ConradTheCaterpillar', 'conradthecaterpillar'), cls('Consolers', 'consolers'), cls('ConsolersDLC', 'consolers-dlc'), cls('ContestedTerritory', 'contestedterritory'), cls('CoolstarComicsMasterFiles', 'coolstarcomicsmasterfiles'), cls('CopyPasteAndMrBenjy', 'copypasteandmrbenjy'), cls('Corpses', 'corpses'), cls('Cosmos', 'planetcosmos'), # CourageousManAdventures has a duplicate in ComicSherpa/CourageousManAdventures cls('CowboysAndCrossovers', 'cowboysandcrossovers'), cls('Cowtoon', 'cowtoon'), cls('CrackPutty', 'crackputty'), cls('CRashCourse', 'crashcourse'), cls('Crawlers', 'crawlers'), cls('CrimsonPixelComics', 'crimsonpixel'), cls('Critters', 'critters'), cls('CrossoverChampionship', 'crossoverchampionship'), cls('CrossoverExchange', 'crossoverexchange'), cls('CrossoverlordAndCrossoverkill', 'crossoverlordkill'), cls('CrossWorld', 'crossworld'), cls('CrowbarASciFiAdventure', 'crowbar'), cls('CrowbarsDontKillPeopleCROWBARSDo', 'crowbars'), cls('Cryptida', 'cryptida', 'de'), cls('CryptidaEnglish', 'cryptida-eng'), cls('CrystalBall', 'crystalball'), cls('CtrlZ', 'ctrlz'), cls('CubeCows', 'cubecows'), cls('CupcakeGraffiti', 'cupcakegraffiti'), cls('CYXLOSISM', 'cyxlocistic'), cls('DailyDoodle', 'dailydoodle'), cls('DailyOneLiner', 'daily1l'), cls('DamaclesAndKenjall', 'wowwithatwist-damaclesandkejallcomic'), cls('DamnHipsters', 'damnhipsters'), cls('DAndDAangvanced', 'danddaangvanced'), cls('Daredoers', 'daredoers'), cls('DarkHorse', 'darkhorse'), cls('Darklings', 'darklings'), cls('DarkSisters', 'darksisters'), cls('DarVal', 'murghcomics'), cls('Datachasers', 'datachasers'), cls('DaughterOfDarkness', 'honeyvenom'), cls('DaxTapu', 'daxtapu'), cls('DDSR', 'ddsr'), cls('DEAD', 'dead'), cls('DeadDucks', 'deadducks'), cls('DeadFingers', 'deadfingers'), cls('DeadRabbitCa', 'afairtrade'), cls('DeepBlue', 'deepblue'), cls('DefineHero', 'definehero'), cls('DELIA', 'delia'), cls('DemasPokmonAdventure', 'nuzlocke-dema'), cls('DesertGrey', 'desertgrey'), cls('DesertShark', 'desertshark'), cls('Dictatorship', 'dictatorship'), cls('DieRabbitDie', 'dierabbitdie'), cls('DimensioNoir', 'dimensionoir'), cls('DivinaFortuna', 'divinafortuna'), cls('DNA', 'd-n-a'), cls('DoffeEllende', 'doffeellende'), cls('Dogstar', 'dogstar'), cls('Domain', 'domain'), cls('DonutsForSharks', 'donutsforsharks'), cls('DotComic', 'dotcomic'), cls('DotX', 'dotx'), cls('DoubleJumpGameComics', 'doublejump'), cls('Draginbeard', 'draginbeard'), cls('DragonballZElsewhere', 'dbzelsewhere'), cls('DragonCity', 'dragoncity'), cls('DragonsAndSilk', 'dragonsandsilk'), cls('DragonsOfAzuma', 'dragonsofazuma'), cls('DrApocalyptosSurvivorama', 'docapoc'), cls('DressedForSuccess', 'dressedforsuccess'), cls('Drettaville', 'drettaville'), cls('DrifterJournalsOfAHero', 'drifterjournalsofahero'), 
cls('Drifting', 'drifting'), cls('Droned', 'droned'), cls('DRouggs', 'drouggs'), cls('DrugsAndKisses', 'd-and-k'), cls('Druids', 'druids', adult=True), cls('DubCity', 'dubcity'), cls('DueEast', 'dueeast'), cls('DuelingHeroes', 'duelingheroes'), # DungeonHordes has a duplicate in ComicSherpa/DungeonHordes cls('DungeonMasterEffect', 'dungeonmastereffect'), cls('DyerinsLine', 'dyerinsline'), cls('EclipseLegend', 'eclipselegend'), cls('Educomix', 'educomix'), cls('EffinguKookoo', 'effingukookoo'), cls('EightBitAdventuresOfCaptainA', 'eightbitadventures'), cls('ElektrosComicAnthology', 'elektroanthology'), cls('Element8', 'element8'), cls('ElementsOfEve', 'elementsofeve'), cls('Elf', 'elf-comic'), cls('Elsewhere', 'elsewhere'), cls('EmpiresOfSteam', 'empiresofsteam'), cls('Energize', 'energize'), cls('enoZone', 'xenozone'), cls('EpicsOfNoche', 'epicsofnoche'), cls('Equilibrium', 'equilibrists'), cls('Ergosphere', 'ergosphereworld'), cls('Eros', 'eros'), cls('ErraticElegance', 'erratice'), cls('EscapeVelocity', 'escapevelocity'), cls('EternalNight', 'eternalnight'), cls('EternityComplex', 'eternityc'), cls('EverydayAbnormal', 'everydayabnormal'), cls('EvilRising', 'evilrising'), cls('EWMIC', 'ewmic'), cls('ExperiMentalTheatre', 'emt'), cls('FacesOfFire', 'facesofire'), cls('Fallacy', 'fallacy-harha'), cls('Fannicklas', 'fannicklas'), cls('FatalExpression', 'fexpression'), cls('FBHNKAG', 'fbhnk-ag'), cls('FeliciaSorceressOfKatara', 'felicia'), cls('FEZ', 'fez'), cls('FiendishFellowship', 'fiendishfellowship'), cls('FighterDan', 'fighterdan'), cls('FingerPuppetShow', 'fingerpuppetshow'), cls('FireBorn', 'fireborn2'), cls('Fishbowl', 'fishbowl'), cls('FishfaceAndBirdbrain', 'ahtiventures'), cls('Flickwit', 'flickwit'), cls('FlintlockesGuideToAzeroth', 'flintlocke'), cls('FlintlockeVsTheHorde', 'flintlockevshorde'), cls('ForeignTerritory', 'foreignterritory'), cls('ForNathaniel', 'fornathaniel'), cls('FoxyFlavoredCookie', 'pobrepucho'), cls('FracturedTea', 'fracturedtea'), cls('Frames', 'frames'), cls('FraterniT', 'fraterni-t'), cls('FraternityOfEvil', 'foe'), cls('FreeLancer', 'freelancer'), cls('FreQuency', 'frequency'), cls('FridayAndGrover', 'fridayandgrover'), cls('FriendshipIsDragons', 'friendshipisdragons'), cls('FromDustToRuination', 'fromdust2ruination'), cls('Frontier2170', 'frontier2170'), cls('FrostFire', 'frostfire'), cls('FullmetalBrothers', 'fullmetalbrothers', 'es'), cls('FurAndN3rdy', 'furnerdy'), cls('FurryExperience', 'furryexperience'), cls('Fusion', 'fusion'), cls('FutureRegrets', 'futureregrets'), cls('FuzzballAndScuzzball', 'fuzzballandscuzzball'), cls('GalbertOfBruges', 'galbertofbruges'), cls('GarfieldMinusJon', 'garfieldminusjon'), cls('Gatito', 'gatito'), cls('GenjiGami', 'genjigami'), cls('Ghelis', 'ghelis'), cls('GhostGirlsClubZero', 'ghostgirlsclubzero'), cls('GhostSyndrome', 'ghostsyndrome'), cls('GiantQueenSakura', 'giantqueensakura'), cls('GillimurphyStories', 'gillimurphy'), cls('GillimurphyStoriesorig', 'gillimurphy-orig'), cls('GlomshireKnights', 'glomshire'), cls('Glorianna', 'glorianna'), cls('GnomereganForever', 'gnomereganforever'), cls('GODHATESDADS', 'godhatesdads'), cls('GoldBlood', 'goldblood'), cls('Goldrush', 'goldrush-dynllewcomics'), cls('GrandfathersTale', 'grandfatherstale'), cls('Grandify', 'grandify'), cls('Gratz', 'gratz'), cls('Grayling', 'grayling'), cls('GreenEyes', 'greeneyes'), cls('GreysterJemp', 'greysterjemp'), cls('GrimReaperSchool', 'grimreaperschool'), cls('GrippsBrain', 'grippsbrain'), cls('GrokBoop', 'grokboop'), 
cls('GrowingTroubles', 'growingtroubles'), cls('Guardia', 'guardia-tales-of-halgeis'), cls('GUS', 'gus'), cls('HalloweenCameoCaper2012', 'halloween2012'), cls('HalloweenCameoCaper2013', 'halloween2013'), cls('HalloweenCameoCaper2014', 'halloween2014'), cls('HARDLUCK', 'hardluck'), cls('HAYWIRE', 'haywire'), cls('HazardousScience', 'hazsci'), cls('HazardsWake', 'hazardswake'), cls('HazyDaze', 'hazydaze'), cls('HeadRoom', 'headroom'), cls('HeadWound', 'headwound'), cls('HeartOfKeol', 'keol', segmented=True), cls('HeavyLittlePeople', 'heavylittlepeople'), cls('HeavyMetalSailorMoon', 'hmsm'), cls('Hellbent', 'hellbent'), cls('Hellbound', 'hellboundarchive'), cls('HellCar', 'hellcar'), cls('HenriettaLamb', 'henriettalamb'), cls('HeraclesKnot', 'heraclesknot'), cls('HeroesAtWork', 'heroesatwork'), cls('HeroesOfPower', 'myhorriblesite'), cls('HitmanPiranha', 'hitmanpiranha'), cls('HitmenForDestiny', 'hitmen'), cls('HobGoblinAdventure', 'hobgoblin'), cls('Holon', 'holon'), cls('HolyBibble', 'holy-bibble'), cls('HolyCowComics', 'holycowcomics'), cls('HomeOfTheSpaceWalnut', 'hotsw'), cls('HoodzAndCaperz', 'hoodzandcaperz'), cls('HorizonGakuen', 'horizongakuen'), cls('HourlyKelly', 'hourlykelly'), cls('HouseOnWritersBlock', 'houseonwritersblock'), cls('Housepets1X', 'housepets1x'), cls('HowIRememberIt', 'hiri'), cls('HowToRaiseYourTeenageDragon', 'teenagedragon'), cls('HowWeStaySaneAtWork', 'howwestaysaneatwork'), cls('HumanCookies', 'humancookies'), cls('HurfanosOrphans', 'huerfanos'), cls('HUSH', 'hush'), cls('HyperactiveComics', 'hyperactivecomics'), cls('ICryWhileYouSleep', 'icrywhileusleep'), cls('IDGet', 'idget'), cls('IFSU', 'ifsused'), cls('IgnitionZero', 'ignitionzero'), cls('IlusionOfTime', 'illusionoftime'), cls('Immigrant', 'immigrant'), cls('ImNotYourFriend', 'imnotyourfriend'), cls('ImperialEntanglements', 'imperialentanglements'), cls('Imperium', 'imperium'), cls('IMPERIVM', 'imperivmgalactica'), cls('Impisha', 'impisha'), cls('InBloodOfColour', 'inbloodofcolour'), cls('Indexmancave', 'indexmancave'), cls('InfraCityTheComic', 'infracity'), cls('InkLaRue', 'inkalarue'), cls('Inorganic', 'disturbingcomics'), cls('InsanityCorpV22', 'insanitycorp'), cls('Insectia', 'insectia'), cls('InsideOuT', 'insideout'), cls('InstantGraphicNovel', 'ign'), cls('IntergalacticTruckstop', 'its'), cls('InternetSuperbuddies', 'isb'), cls('Invicta', 'invicta'), cls('IsaacAndFriends', 'isaacandfriends'), cls('IslandOfTheMoths', 'moths'), cls('Isonacia', 'isonacia'), cls('ItsComplicated', 'itscomplicated'), cls('ItsJustAnotherDay', 'itsjustanotherday'), cls('ItsNEWDAY', 'itsnewday'), cls('Jack', 'jackrabbit', adult=True), cls('JackAndTheBeanstalk', 'jackandthebeanstalk'), cls('JackFrostDoujin', 'jfdoujin'), cls('JackitAndFriends', 'jackitandfriends'), cls('JakeBone', 'jakebone'), cls('JamieJupiter', 'jamiejupiter'), cls('JaquieNovemberAndTheSpookiness', 'november-spookiness'), cls('JaysInternetFightClub', 'jaysinternetfightclub'), cls('JellyfishStew', 'yppcomic'), cls('JenffersShowsFanArtPage', 'jenffersshowsfanartpage'), cls('JenffersShowsMissJAndJensPhotoAlbum', 'missjandjensphotoalbum'), cls('JenffersShowTheNewStoriesOfMissJAndJen', 'thenewstoriesofmissjandjen'), cls('Jericho', 'jericho'), cls('JillpokeBohemia', 'jillpokebohemia'), cls('Jix', 'jix'), cls('JohnnyBullet', 'johnnybullet'), cls('JonathinQuackupOfThePlanetWeralt', 'quackup'), cls('JoostsDailyDealings', 'joostdailies'), cls('JournalComics', 'jordansjournal'), cls('JourneyToRaifina', 'journeytoraifina'), cls('JudeAndMaria', 'judeandmaria'), 
cls('Jump', 'jump2'), cls('Junk', 'junk'), cls('Jupiter', 'jupiter'), cls('JustPeachy', 'justpeachy'), cls('KarensEdge', 'karensedge'), cls('Katastrophe', 'katastrophe'), cls('KATRAN', 'katran'), cls('KayAndP', 'kayandp'), cls('KazasMateGwenna', 'kaza-and-gwenna'), cls('KAZE', 'kaze'), cls('KeepingThePeace', 'keepingthepeace'), cls('KeepingUpWithThursday', 'keepingupwiththursday'), cls('KetsuekiDoku', 'ketsuekidoku'), cls('KevinWatch', 'kevinwatch'), cls('KevinWatchTheMovie', 'kevinwatchthemovie'), cls('KiasComic', 'kiascomic'), cls('KiasOTHERComic', 'kiasothercomic'), cls('KiLAILO', 'kilailo'), cls('KingdomOfTheDinosaurs', 'dinosaurkingdom'), cls('KingdomPrettyCure', 'kingdomprettycure'), cls('KirbyVsShyGuy', 'kvsg'), cls('KMLsSticks', 'kmlssticks'), cls('KnavesEnd', 'knavesend'), cls('KnightGuy', 'knightguy'), cls('Kordinar25000', 'kordinar'), cls('KougarStreetTheHumiliationOfLisaRumpson', 'kougarstreet'), cls('KronosWoWComics', 'kronoswowcomics'), cls('KyoniWanderer', 'kyoniwanderer'), cls('LaceyInvestigations', 'lacey-investigations'), cls('LadySpectraAndSparky', 'ladyspectra'), cls('Lambo', 'lambo'), cls('LandOfTheEverYoung', 'landoftheeveryoung'), cls('LaserBrigade', 'laserbrigade'), cls('LastCall', 'lastcallcomic'), cls('LastTaxi', 'lasttaxi'), cls('Latchkey', 'latchkey'), cls('LatchkeyKingdom', 'latchkeykingdom'), cls('Lately', 'lately'), cls('Lauras24HourComics', 'lauras24hourcomics'), cls('LazyComics', 'lazy'), cls('LeahClearwaterFancomic', 'leahclearwaterfancomic'), cls('LegendOfPaean', 'legend-of-paean'), cls('LegendOfTheRedPhantom', 'legendoftheredphantom'), cls('LegendOfZeldaOcarinaOfTim', 'ocarinaoftim'), cls('LethargicMisanthropy', 'lethargicmisanthropy'), cls('LetsCelebrate', 'letscelebrate'), cls('Level30Psychiatry', 'lvl30psy'), cls('LifeExplained', 'lifeexplained'), cls('LightBulbs', 'lightbulbs'), cls('LightningProphetess', 'lp'), cls('LilHeroArtists', 'lilheroartists'), # LimboRoad has a duplicate in ComicSherpa/LimboRoad cls('Lint', 'lint'), cls('Lintier', 'lintier'), cls('LiquidLunch', 'liquidlunch'), cls('LiteBites', 'litebites'), cls('LittleBlackDress', 'little-black-dress'), cls('LittleJacquie', 'littlejacquie'), cls('LittleRedRobo', 'littleredrobo'), cls('LivingInACloud', 'livinginacloud'), # Lola has a duplicate in GoComics/Lola cls('LongDistanceChargesApply', 'zacharybinks'), cls('Longhike', 'longhike'), cls('LookStraightAhead', 'lookstraightahead'), cls('LOSTLOVE', 'lostlove'), cls('LoveIsConplicated', 'conplicated'), cls('LoveKillsSlowly', 'lovekillsslowly'), cls('LOVETriologyExtraArt', 'mlextralove'), cls('LuckyHazard', 'luckyhazard'), cls('Lukewarm', 'lukewarm'), cls('LunaStar', 'lunastar'), cls('LustAndIre', 'lustandire', adult=True), cls('MadGirl', 'madgirl'), cls('MagicElDesencuentro', 'magiceldesencuentro', 'es'), cls('MagicTheScattering', 'magicthescattering'), cls('Magience', 'magience'), cls('MAGISAPARASAYOupdatesMonFri', 'mag-isa'), cls('MagnaComica', 'magnacomica'), cls('ManChildren', 'manchildren'), cls('MariosCastleTales', 'mariocastletales', 'it'), cls('MarriedToATransformersFan', 'marriedtoatransformersfan'), cls('MARS', 'mars'), cls('MaskOfTheAryans', 'mask-of-the-aryans'), cls('MassEffectMinarga', 'minarga'), cls('Mateys', 'mateys'), cls('MaxFuture', 'maxfuture'), cls('MAYBELOVE', 'emmacomics'), cls('MayonakaDensha', 'mayonakadensha'), cls('MayTheRainCome', 'maytheraincome', endOfLife=True), cls('MegaMaidenVSTheChopChopPrincess', 'megamaiden'), cls('MeganKearneysBeautyAndTheBeast', 'batb'), cls('MelancholyGoRound', 'melancholygoround'), 
cls('MerelyMortal', 'merelymortal'), cls('Messenger', 'messenger'), cls('MichaelTDesingsArmyAnts', 'armyants'), cls('MichellesUniverseScrapbook', 'michellesuniversescrapbook'), cls('MidnightMoon', 'midnightmoonrp'), cls('MidnightRUN', 'midnight-run'), cls('MIGHTYRACCOON', 'starraccoon'), cls('MildlyAmusing', 'mildlyamusing'), cls('Minecraft2b2tnet', 'minecraft2b2t'), cls('MiraclesOfNeksenziPoint', 'neksenzi-miracles'), cls('MirroredConversations', 'mirroredconversations'), cls('MiscellaneousMadness', 'rangerrandom'), cls('MissingDream', 'missingdream'), cls('MissionMars', 'missionmars'), cls('MithrilRavens', 'mithril-ravens'), cls('MiVidaSinUnJetpack', 'sinjetpack', 'es'), cls('MobiusAdventures', 'mobiusadventures'), cls('Mohyla', 'mohyla'), cls('Molasses', 'molasses'), cls('MondayMonday', 'mondaymonday'), cls('MonochromeRainbow', 'monobow'), cls('MonsterBait', 'deadnight'), cls('MonsterInTheKingdom', 'monster'), cls('MonstersWithBenefits', 'failmonsters'), cls('MonstroniverseAdventures', 'monstroniverse'), cls('MoonlitBrew', 'moonlitbrew'), cls('MoonWraith', 'moonwraith'), cls('MorningSquirtz', 'morningsquirtz'), cls('MotherOfAllMonsters', 'moam'), cls('MousebearComedy', 'mousebearcomedy'), cls('MrCow', 'mrcow'), cls('MrPunchAndProfRatbaggyEmeritus', 'punch'), cls('MudCompany', 'mudcompany'), cls('Mudskipper', 'mudskipper'), cls('Muscleheart', 'muscleheart'), cls('MushroomGo', 'mushroomgo'), cls('MutantElf', 'mutantelf'), cls('Mutigenx', 'mutigenx'), cls('MVPL', 'mvpl'), cls('MyForgottenPast', 'myforgottenpast'), cls('MyGirlfriendTheSecretAgent', 'mygfthesecagent'), cls('MyLifeWithoutAJetpack', 'nojetpack'), cls('MyLittlePonyFriendshipIsBetrayal', 'mlp-fib'), cls('MysteriousManOfSkull', 'mysteriousmanofskull'), cls('MyTVIsEvil', 'mytvisevil'), cls('NA', 'noche'), cls('NamcoWars', 'namcowars'), cls('NarutoJutsuAndJinchuriki', 'jutsuandjinchuriki'), cls('NatureDEEP', 'naturedeep'), cls('Necreshaw', 'nartopia'), cls('Neighbors', 'neighborscomic'), cls('NeverMindTheGap', 'nmg'), cls('Newheimburg', 'newheimburg'), cls('NEXGEN', 'nexgentheseries'), cls('NightmareNauts', 'nightmarenauts'), cls('NightshadeTheMerryWidow', 'lorddarke'), cls('NinthLife', 'ninthlife'), cls('Nocturne21', 'nocturne21'), cls('NoFuture', 'nofuturevit'), cls('NoKeys', 'nokeys'), cls('Noprrkele', 'noprrkele'), cls('NothingMen', 'nothing-men'), cls('NoTitleRequired', 'ntr'), cls('NotSinceYou', 'notsinceyou'), cls('NyxInTheOverworld', 'nyx'), cls('Oeight', 'oeight'), cls('OffCentaured', 'offcentaured'), cls('OfficeLogic', 'office-logic'), cls('OffSeason', 'offseasoncomic'), cls('OffWorldTheCrease', 'thecrease'), cls('OldFiyoraNya', 'retrofiyora'), cls('OldHumanCookies', 'oldhumancookies'), cls('OmegaChronicles', 'omegachronicles', 'es'), cls('OnceStung', 'oncestung'), cls('OnePageComicCollection', 'onepagecomiccollection'), cls('OnePieceGrandLine3Point5', 'grandline3point5'), cls('OneSided', 'one-sided'), cls('OrbFragmentSlim', 'orbfragment'), cls('OrganizedMess', 'organizedmess'), cls('Otherworldly', 'otherworldly-comics'), cls('OutFerASmoke', 'outferasmoke'), cls('Outletting', 'outletting'), cls('OutsideIn', 'outside-in'), cls('Palindrome', 'palindrome'), cls('PANAPANSTRAKOVI', 'strakovi'), cls('PaperStreamerAtDefCon5', 'paperstreamer'), cls('ParaFrenic', 'parafrenic'), cls('ParasiteGalaxy', 'parasitegalaxy'), cls('Parisel313', 'parisel313'), cls('PARKER', 'parker', segmented=True), cls('Parmeshen', 'parmeshen'), cls('ParoxysmTemporal', 'pt'), cls('PateEmpire', 'pateempire'), cls('PCMS20', 'pcms'), 
cls('PeeInTheMorningREBOOTED', 'holy-hecking-balls-rebooted', 'pt'), cls('PeepsAndPerks', 'peepsnperks'), cls('Pegwarmers', 'pegwarmers'), cls('PenguinCapers', 'penguin-capers'), cls('PerceivablyHuman', 'perceivablyhuman'), cls('PersonaForTheWin', 'personaftw'), cls('Perspectives', 'perspectives'), cls('PhantomsTrail', 'phantomstrail'), cls('Phoenix', 'phoenix'), cls('Pilgrim', 'pilgrimsprogress'), cls('PilgrimEnEspanol', 'pilgrimenespanol', 'es'), cls('PITCHBLACK', 'pitchblack'), cls('PlasticBulletsMayhemUnloaded', 'plasticbulletsmayhemunloaded'), cls('Poharex', 'poharex'), cls('PokemonWarpers', 'pokemonwarpers'), cls('PokmonOurStory', 'pokemonos'), cls('PokmonShadowStories', 'shadowstories'), cls('PoldaAPolda', 'poldove'), cls('PopCulturesKids', 'pop-cultures-kids'), cls('Powertrip', 'powertrip'), cls('POWRightInTheNostalgia', 'powrightinthenostalgia'), cls('PrimalWarsAftermath', 'primalwars'), cls('PrinceOfCats', 'princeofcats'), cls('ProfessorAstonishing', 'professorastonishing'), cls('ProfessorAmazingAndTheIncredibleGoldenFox', 'paigf'), cls('ProjectArc', 'projectarc'), cls('ProjectGTH', 'projectgth'), cls('ProjectJikoku', 'projectjikoku'), cls('ProjectSternenlicht', 'projectsternenlicht'), cls('PromiseList', 'promiselist'), cls('ProportionalExcitability', 'proportionalexcitability'), cls('Prosopopoeia', 'prosopopoeia'), cls('Pulse', 'pulse'), cls('PureHavoc', 'pure-havoc'), cls('Queenie', 'queenie'), cls('QuestCorporeal', 'questcorporeal'), cls('Rain', 'rain'), cls('RandomlyAssembled', 'randomlyassembled'), cls('RandomThoughts', 'randomthoughts'), cls('RapturousArcane', 'rapturousarcane'), cls('RawLatex', 'rawlatex'), cls('ReadershipOfOne', 'readershipofone'), cls('RebelYell', 'rebelyell'), cls('RebuildOfGenericMangaShippuden', 'rebuildofgenericmanga'), cls('RecklessComix', 'recklesscomix'), cls('RedSpot', 'redspot'), cls('RegardingDandelions', 'regardingdandelions'), cls('Remedy', 'remedy'), cls('RememberBedlam', 'bedlam'), cls('RequiemsGate', 'requiemsgate'), cls('ReSetArt', 'resetfanarts'), cls('ResidentWeirdo', 'residentweirdo'), cls('ReturnOfWonderland', 'returnofwonderland'), cls('Revive', 'revive'), cls('RexAfterDark', 'rexafterdark'), cls('RexfordAvenue', 'rexfordavenue'), # Ringers has a duplicate in ComicSherpa/Ringers cls('RockGardenComics', 'rockgardencomics'), cls('RoguesOfClwydRhan', 'rocr'), cls('RoleplayingPartyTales', 'rpt'), cls('RoomOfMirrors', 'room-of-mirrors'), cls('RootBeers', 'root-beers'), cls('Rozak', 'rozak'), cls('RPSLARPComic', 'rps'), cls('RumfAdventures', 'rumfadventures'), cls('RunningRiot', 'runningriot'), cls('SailorMoonTheEnemyNextDoor', 'sailormoontheenemynextdoor'), cls('Saluna', 'saluna'), cls('SanctaTerra', 'sanctaterra'), cls('SanityProtectionFactor', 'spf1337'), cls('SaraAndKleeyo', 'sarakleeyo'), cls('SaveMeGebus', 'savemegebus'), cls('SawbladersBlackNuzlockeChallenge', 'sawbladersblacknuzlocke'), cls('ScottieRoad', 'scottieroad'), cls('Scoundrels', 'scoundrels'), cls('ScrubDiving', 'scrubdiving'), cls('Scuvener', 'scuvener'), cls('SEAAOMSagaArchive', 'seaaom'), cls('SECRETLOVE', 'secretlove'), cls('SecretSanta2013', 'secretsanta2013'), cls('SeeYourFeels', 'seeyourfeels'), cls('SenatorSurprise', 'senatorsurprise'), cls('Sentiments', 'sentiments'), cls('SerengettiDreams', 'serengetti'), cls('SeriousEngineering', 'seriousengineering'), cls('SerpamiaFlare', 'serpamiaflare'), cls('SerpentsOfOld', 'serpentsofold'), cls('SerpentsOfOldFanArt', 'soofans'), cls('Shades', 'shades'), cls('ShadesOfGray', 'fuzzylittleninjas'), cls('ShaiAway', 'shaiaway'), 
cls('ShakingOffSorcery', 'shakingoffsorcery'), cls('ShakingOffSorceryPL', 'shakingoffsorcery-pl'), cls('ShamanQuest', 'shamanquest'), cls('ShatteredSkies', 'shatteredskies'), cls('Sharak', 'sharak'), cls('Shenanigans', 's'), cls('ShenaniganSquares', 'ss-comic'), cls('ShikuTheFirstAndFinal', 'shiku'), cls('ShiroAndKuro', 'shiroandkuro'), cls('ShutUpDiarybyBarbaraHolm', 'shutupdiary'), cls('Sigh', 'sigh'), cls('Silver', 'sil-ver'), cls('SilverNights', 'silvernights'), cls('Skeeter', 'herecomesskeeter'), cls('Sketchy', 'sketchy'), cls('Skylords', 'skylords'), cls('SlugMan', 'slug-man'), cls('SmallTownValues', 'smalltownvalues'), cls('SmitheeZombieHunter', 'smitheezombiehunter'), cls('SneakersUForce', 'sneakers'), cls('Snowfall', 'snowfall'), cls('SoFunnyIForgotToLaugh', 'sofunnyiforgottolaugh'), cls('SonichuREDone', 'sonichuredone'), cls('SonichuREDoneJ', 'sonichuredonejapanese', 'ja'), cls('Soulsworn', 'soulsworn'), cls('SpacedOutTheBeginning', 'spacedoutthebeginning'), cls('SpaceFarmer', 'spacefarmer'), cls('SpacePiratesOfTheBlackQuarter', 'spacepirates'), cls('SpacePulp', 'spacepulp'), cls('Spades', 'spades'), cls('SpicyDesu', 'desu'), cls('SpiderManShadowsOfNight', 'shadowsofnight'), cls('SpiritSquireTheQuestForTheUltimateKnight', 'spiritsquire-1'), cls('Spooky', 'spooky'), cls('SPOON', 'spooncomic'), cls('StampedeJessicasStory', 'stampedegirl'), cls('Starcrossed', 'starcrossed'), cls('StarPunchGirl', 'starpunchgirl'), cls('STARWARSXWingAlliance', 'x-wingalliance'), cls('STASonicTheAdventure', 'sta'), cls('StereotyPixs', 'stereotypixs'), cls('StevenAndTheCrystalGMs', 'crystalgms'), cls('StickLife', 'sticklife'), cls('StickMisadventures', 'stick-misadventures'), cls('StinkomanFatChickenQuest', 'stinkoman'), cls('StonedHeroes', 'stonedheroes'), cls('StrangeAttractors', 'strangeattractors', segmented=True), cls('Streamo', 'streamo'), cls('SundaySmash', 'sundaysmash'), cls('Sunray', 'sunray'), cls('SuperGalaxyKnightsDeluxeR', 'sgkdr'), cls('SuperheroTales', 'superherobeingsuper'), cls('SuperShashi', 'supershashi'), cls('Supervillainous', 'supervillainous'), cls('SurrealScience', 'surrealscience'), cls('Swashbuckled', 'swashbuckled'), cls('Swazzyknocks', 'swazzyknocks'), cls('Synapticisms', 'synapticisms'), cls('TalesFromRiota', 'ganold'), cls('TalesOfBrickland', 'brickland'), cls('TalesOfMiddar', 'talesofmiddar'), cls('TalesOfSpoons', 'talesofspoons'), cls('TalesOfTheGalli', 'totg-mirror'), cls('TamTeamAdventures', 'tamteam'), cls('TangledMessTheGirlyNerdyTerriblyStrangeJournalComi', 'tangledmess'), cls('TangledRiver', 'tangled-river', segmented=True), cls('TBA', 'tba'), cls('TBAold', 'tba-old'), cls('TerwilligersCafe', 'terwilligers'), cls('TheAccidentalSpaceSpy', 'spacespy'), cls('TheAccidentalWitch', 'theaccidentalwitch'), cls('TheAdventuresOfAquilaAndTeren', 'aquilateren'), cls('TheAdventuresOfBaldy', 'adventuresofbaldy'), cls('TheAdventuresOfBidoof', 'bidoof'), cls('TheAdventuresOfCarrotKnight', 'carrotknight'), cls('TheAdventuresOfGrumpyBearAndMrGoose', 'grumpyandgoose'), cls('TheAdventuresOfMechaSmiles', 'mechasmiles'), cls('TheAdventuresOfSherilynAndEmma', 'taosae'), cls('TheAdventuresOfTheLadySkylark', 'ladyskylark'), cls('TheBarrowHill', 'thebarrowhill'), cls('TheBellInTheOcean', 'bellintheocean'), cls('TheBend', 'thebend'), cls('TheBigFoldy', 'bigfoldy'), cls('THEBIGSCIFIMISHMASH', 'thebigsci-fimish-mash'), cls('TheBlackPrincess', 'theblackprincess'), cls('THEBOOKOFLIES', 'bookofliescomic'), cls('TheBookOfThree', 'thebookofthree', segmented=True), cls('TheChanterelleAndMayLife', 
'chanterelleandmay'), cls('TheChroniclesOfBuckyONeill', 'buckyoneill'), cls('TheChroniclesOfDrew', 'thechroniclesofdrew'), cls('TheChroniclesOfLillian', 'chroniclesoflillian'), cls('TheChroniclesOfLoth', 'chroniclesofloth'), cls('TheCompozerz', 'compozerz'), cls('TheContinentals', 'continentals'), cls('TheCrepusculars', 'crepusculars'), cls('TheCrumpletonExperiments', 'thecrumpletonexperiments'), cls('TheDailyDoodle', 'tdd'), cls('TheDevilsHorn', 'thedevilshorn'), cls('TheDragonFistsOfSmortySmythe', 'thedragonfistsofsmortysmythe'), cls('TheDrongos', 'thedrongos'), cls('TheEpicEpic', 'theepicepic'), cls('TheFaithful', 'thefaithful'), cls('TheFeloranChronicles', 'felora'), cls('TheFunnyZone', 'thefunnyzone'), cls('TheGalleryOfFreaks', 'galleryoffreaks'), cls('TheGarage', 'thegarage'), cls('TheGarden', 'thegarden'), cls('TheGingerbreadManChronicles', 'gingerbreadmanchronicles'), cls('TheGrazingMongrel', 'grazingmongrel'), cls('TheGuardian', 'theguardian'), cls('TheHarriopulate', 'theharriopulate'), cls('TheHighestBet', 'thehighestbet'), cls('TheHighestBetITA', 'thehighestbet-ita', 'it'), cls('TheHobbit', 'hobbit'), cls('TheHolidayDoctor', 'holidaydoctor'), cls('TheHorrifyingExperimentsOfDrPleasant', 'thehorrifyingexperimentsofdrpleasant'), cls('TheHub', 'cbbrthehub'), cls('TheHubBook', 'thehubbook'), cls('TheHumanBattery', 'thehumanbattery'), cls('TheHundredsUprising', 'thehundredsuprising'), cls('TheILL', 'theill'), cls('TheIntrovertManifesto', 'introvert'), cls('TheJabbercrow', 'jabbercrow'), cls('TheKeepOnTheBorderlands', 'thekeepontheborderlands'), cls('TheLamp', 'thelamp'), cls('TheLastHope', 'tlhcomic'), cls('TheLawOfPurple', 'lawofpurple'), cls('TheLeagueOfExtraordinaryRoleplayers', 'lxgrpg'), cls('TheLeapfrogTeam', 'leapfrogteam'), cls('TheLegendaryPixelCrew', 'thelegendarypixelcrew'), cls('TheLegendOfLink', 'legendoflink'), cls('TheLozoyas', 'thelozoyas'), cls('TheMansionofE', 'mansionofe'), cls('TheMates', 'themates'), cls('TheMatesPortugus', 'matespt', 'pt'), cls('TheMeaningOfLife', 'themeaningoflife'), cls('TheMightyBlue', 'themightyblue'), cls('TheMightyMeteorite', 'mightymeteorite'), cls('TheMisadventuresOfDexterTheAlien', 'dexterthealien'), cls('TheMisadventuresOfTheTrailerParkTrio', 'tmaottpt'), cls('TheMitchellEffect', 'themitchelleffect'), cls('TheMoonValley', 'moonvalley'), cls('TheNew30DaysOfCharacters', '30l30characters'), cls('TheNewAdventuresOfFelicity', 'felicity'), cls('TheNineteenthCenturyIndustrialist', 'thebaron'), cls('TheNonesuchTales', 'thenonesuchtales'), cls('TheORIGINALShonenPunk', 'shonenpunk'), cls('TheOtherGreyMeat', 'togm'), cls('TheOverture', 'theoverture'), cls('ThePresident', 'president'), cls('TheQuantumKid', 'thequantumkid'), cls('TheQuestForCoitus', 'acomicstudios'), cls('TheRathNexus', 'rath'), cls('TheRealmOfKaerwyn', 'kaerwyn'), cls('TheRebels', 'rebels'), cls('TheRedeemers', 'theredeemers'), cls('TheRestlessDead', 'therestlessdead'), cls('TheRidiculousPushyReeder', 'pushy'), cls('TheRoseKiller', 'therosekiller'), cls('TheRubyNation', 'rubynation'), cls('TheScienceOfCookies', 'cookiescience', 'fr'), cls('TheSecondCrimeanWar', 'secondcrimeanwar'), cls('TheSettlers', 'thesettlers'), cls('TheSketchyAdventuresOfKyoAndMatt', 'kyoandmatt'), cls('TheSkybox', 'skybox'), cls('TheSpecialCASE', 'thespecialcase'), cls('THESTORMRUNNERS', 'thestormrunners'), cls('TheStoryOfBobChapter1Part1', 'thebobstory'), cls('TheStoryOfSaliria', 'saliria'), cls('TheSupernaturalsEpisode4', 'thesupernaturals4'), cls('TheSurface', 'thesurface'), cls('TheTempleAtFiftyFathoms', 
'the-temple-at-fifty-fathoms'), cls('TheTenTailorsOfWestonCourt', 'tentailors'), cls('TheTrialsOfMannack', 'mannack'), cls('TheUnclean', 'theunclean'), cls('TheWayOfTheMetagamer', 'wayofthemetagamer'), cls('TheWellkeeper', 'thewellkeeper'), cls('TheWesternGang', 'thewesterngang'), cls('TheWhizzkids', 'whizzkids'), cls('TheWolfAtWestonCourt', 'thewolfatwestoncourt'), cls('TheWorldJumper', 'theworldjumper'), cls('TheWorldOfUh', 'theworldofuh'), cls('TheWrongTree', 'thewrongtree'), cls('TheWWord', 'thewword'), cls('ThisIsNormal', 'thisisnormal'), cls('ThisIsTheLife', 'thisisthelifecomic'), cls('ThomasAndZacharyArchives', 'thomasandzachary'), cls('Thornwar', 'thornwar'), cls('ThoseUnknowableTheShadowsOverInnsmouth', 'tsoi'), cls('Threan', 'threan'), cls('ThreeFreeFrikis', 'tff', 'es'), cls('TickTock', 'tick-tock'), cls('TigerWrestling', 'anybodythere'), cls('Timezone', 'timezone'), cls('Tinytown', 'tinytown'), cls('TM47', 'tm47'), cls('TohvelinTuhinoita', 'tuhinaloota'), cls('TOLVA', 'tolva'), cls('TombOfTheKing', 'tomboftheking'), cls('TomorrowsGirls', 'tomorrowsgirls'), cls('ToneOutComics', 'toneout'), cls('TonyComics', 'tonycomics'), cls('Toontown', 'toontowncomics'), cls('TotalChaos', 'totalchaos'), cls('TotallyKaimera', 'totallykaimera'), cls('TotallyKaimeraBackgroundStory2', 'totallykaimerabackgroundstory2'), cls('TotallyKaimeraPart2', 'totallykaimerapart2'), cls('TotallyKaimeraPart3', 'totallykaimerapart3'), cls('TracyAndTristan', 'tandt'), cls('TradScribbles', 'tradscribbles'), cls('TrAgEdY', 'tragedy'), cls('TransdimensionalBrainChip', 'brainchip'), cls('TransformersNexus', 'tfnexus'), cls('TransientPulseNotIntentionallyObsessive', 'niotp'), cls('Transmission', 'transmission'), cls('TransUmanSUbterran', 'sub-terran'), cls('Traveler', 'clioyorokobi'), cls('TreeScratches', 'treescratches'), cls('Treeville', 'treeville'), cls('TriforceOfPower', 'triforceofpower'), cls('Trigonometry', 'trigonometry'), cls('Trinity', 'trinity'), cls('TrixieSlaughteraxeForPresident', 'trixie'), cls('TrollGirl', 'trollgirl'), cls('TrueFist', 'true-fist'), cls('TruFax', 'trufax'), cls('TSAndTJ', 'tsandtj'), cls('TsuyuSociety', 'tsuyusociety'), cls('TurnerAndHercules', 'turnerandhercules'), cls('TussenKatersEnSpraakwater', 'tussenkatersenspraakwater'), cls('TvQuest', 'tvquest'), cls('TwilightTrust', 'twilighttrust'), cls('TwinsAgony', 'twinsagony'), cls('TwistedPeel', 'twistedpeel'), cls('TwoFaced', 'twofaced'), cls('TwoHearts', 'twohearts'), cls('TWTWE', 'twtwe'), cls('TypicalStrange', 'typicalstrange'), cls('UglyBookCovers', 'uglybookcovers'), cls('UltimateSwordsSummoner', 'uss'), cls('UltraViresEnglish', 'ultravires-eng'), cls('UltraViresesky', 'ultravires'), cls('Unconventional', 'unconventional', adult=True), cls('Underverse', 'underverse'), cls('UnfortunateCircumstances', 'unfortunatecircumstances'), cls('UniversityOfSpeed', 'u-speed'), cls('UnknownLands', 'unknownlands'), cls('UNPROFESSIONAL', 'unprofessional'), cls('Unreliable', 'unreliable'), cls('V4', 'v4'), cls('ValeOfDemons', 'valeofdemons'), cls('VampireBites', 'vampirebites'), cls('VampireCatgirlPart2', 'vampirecatgirl2'), cls('VeldaGirlDetective', 'veldagirldetective'), cls('Verboten', 'verboten'), cls('VHV', 'vhv'), cls('Victory', 'victoryadventures'), cls('ViewHub', 'viewhub'), cls('ViolentBlue', 'violentblue'), cls('Virtrena', 'virtrena'), cls('VisualDiaryOfMyLife', 'visualdiary'), cls('VOE', 'voe'), cls('Voidchild', 'voidchild'), cls('WaitWhat', 'waitwhatcomic'), cls('WARG', 'warg'), cls('Wargyrl', 'wargyrl'), cls('WarriorTwentySeven', 
'warrior27'), cls('WastedAway', 'wastedaway'), cls('WastedPotential', 'wastedpotential'), cls('WastelandersAnonymous', 'wastelanders'), cls('WasteOfTime', 'wasteoftime'), cls('WayTooOffensive', 'waytooffensive'), cls('WeAreTheLosers', 'thelosers'), cls('WeeabooIsland', 'weeabooisland'), cls('WestTreeAcademyOfHeroes', 'westtree'), cls('WhatIDontEven', 'idonteven'), cls('WHATSERP', 'whatserp'), cls('WhiskeyAndMelancholy', 'whiskeyandmelancholy'), cls('WhiteOut', 'whiteout'), cls('WhiteSpace', 'whitespace'), cls('WhoseLineIsItAnyhoo', 'whoseline'), cls('WilfordTheWalrus', 'wilfordthewalrus'), cls('Willem', 'willem'), cls('WindRiders', 'windriders'), cls('WinstonsWorld', 'winstonsworld'), cls('WitchesTeaParty', 'witchesteaparty'), cls('WithoutMoonlight', 'withoutmoonlight'), cls('WonderTeam', 'wonderteam'), cls('WoodsForTheTrees', 'woodsforthetrees'), cls('WoodsOfEvil', 'woodsofevil'), cls('Woohooligan', 'woohooligan'), cls('WordsToLiveBy', 'wordstoliveby'), cls('WORMCURSE', 'wormcurse'), cls('WrightAsRayne', 'wrightasrayne'), cls('WrongNumber', 'wrongnumber'), cls('WYIHN', 'wyihn'), cls('Xibalba', 'xibalba'), cls('Xit', 'x-it'), cls('YesterdayBound', 'yesterdaybound'), cls('YouAreNow', 'yan'), cls('YOURCHOICE', 'yourchoice'), cls('ZackDragonbladeAndTheExcalites', 'zackdragonblade'), cls('ZebraGirl', 'zebragirl'), cls('Zelfia', 'zelfia'), cls('ZeroEffortFantasy', 'zeroeffort'), cls('ZombieZoup', 'zombiezoup'), cls('ZwergElf', 'zwergelf', 'de'), # END AUTOUPDATE )
PypiClean
/en_aesops_sentiment-0.0.0.tar.gz/en_aesops_sentiment-0.0.0/README.md
| Feature | Description | | --- | --- | | **Name** | `en_aesops_sentiment` | | **Version** | `0.0.0` | | **spaCy** | `>=3.5.4,<3.6.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `Negative`, `Positive` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 69.95 | | `CATS_MICRO_P` | 73.34 | | `CATS_MICRO_R` | 73.34 | | `CATS_MICRO_F` | 73.34 | | `CATS_MACRO_P` | 69.52 | | `CATS_MACRO_R` | 70.61 | | `CATS_MACRO_F` | 69.95 | | `CATS_MACRO_AUC` | 77.81 | | `TEXTCAT_LOSS` | 1320.07 |
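### Example usage

A minimal, hedged usage sketch for the packaged pipeline. Only the package name and the `Negative`/`Positive` labels come from the tables above; the install command and the example sentence are illustrative assumptions.

```python
# Assumes the built package has been installed first, for example:
#   pip install en_aesops_sentiment-0.0.0.tar.gz
import spacy

# Load the installed pipeline by its package name.
nlp = spacy.load("en_aesops_sentiment")

# Run the text categorizer on a short fable-style sentence (illustrative only).
doc = nlp("The tortoise kept a steady pace and won the race in the end.")

# doc.cats maps each textcat label to a score; take the highest-scoring label.
print(doc.cats)                            # {'Negative': ..., 'Positive': ...}
print(max(doc.cats, key=doc.cats.get))     # predicted sentiment label
```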
PypiClean
/exactly-0.15.0-py3-none-any.whl/exactly_lib/help/entities/types/objects/logic_types.py
from exactly_lib.definitions import misc_texts, matcher_model from exactly_lib.definitions.cross_ref.concrete_cross_refs import PredefinedHelpContentsPartReference, \ HelpPredefinedContentsPart from exactly_lib.definitions.entity import types, syntax_elements, concepts from exactly_lib.definitions.test_case import phase_infos from exactly_lib.definitions.test_case.instructions import instruction_names from exactly_lib.help.entities.types.contents_structure import TypeDocumentation from exactly_lib.type_val_prims.matcher import line_matcher from exactly_lib.util.textformat.structure.document import SectionContents from exactly_lib.util.textformat.textformat_parser import TextParser _TP = TextParser({ 'os_process': misc_texts.OS_PROCESS_NAME, 'program_type': types.PROGRAM_TYPE_INFO.name, 'conf_param': concepts.CONFIGURATION_PARAMETER_CONCEPT_INFO.name, 'os_proc_env_section_header': misc_texts.OS_PROCESS_ENVIRONMENT_SECTION_HEADER, 'test_case_spec_title': misc_texts.TEST_CASE_SPEC_TITLE, 'current_OS': misc_texts.CURRENT_OS, 'First_line_number': line_matcher.FIRST_LINE_NUMBER_DESCRIPTION, 'Line_separator_description': line_matcher.LINE_SEPARATOR_DESCRIPTION, 'text_model': matcher_model.TEXT_MODEL, 'string_source_type': types.STRING_SOURCE_TYPE_INFO.name, }) _LINE_MATCHER_DESCRIPTION = """\ A line is represented by its * line number * text contents {First_line_number} The line separator is not included in the text contents. {Line_separator_description} """ INTEGER_MATCHER_DOCUMENTATION = TypeDocumentation( types.INTEGER_MATCHER_TYPE_INFO, syntax_elements.INTEGER_MATCHER_SYNTAX_ELEMENT, ) LINE_MATCHER_DOCUMENTATION = TypeDocumentation( types.LINE_MATCHER_TYPE_INFO, syntax_elements.LINE_MATCHER_SYNTAX_ELEMENT, _TP.section_contents(_LINE_MATCHER_DESCRIPTION), ) FILE_MATCHER_DOCUMENTATION = TypeDocumentation( types.FILE_MATCHER_TYPE_INFO, syntax_elements.FILE_MATCHER_SYNTAX_ELEMENT, ) STRING_TRANSFORMER_DOCUMENTATION = TypeDocumentation( types.STRING_TRANSFORMER_TYPE_INFO, syntax_elements.STRING_TRANSFORMER_SYNTAX_ELEMENT, SectionContents.empty(), (), ) STRING_MATCHER_DOCUMENTATION = TypeDocumentation( types.STRING_MATCHER_TYPE_INFO, syntax_elements.STRING_MATCHER_SYNTAX_ELEMENT, SectionContents.empty(), (), ) FILES_MATCHER_DOCUMENTATION = TypeDocumentation( types.FILES_MATCHER_TYPE_INFO, syntax_elements.FILES_MATCHER_SYNTAX_ELEMENT, ) FILES_CONDITION_DOCUMENTATION = TypeDocumentation( types.FILES_CONDITION_TYPE_INFO, syntax_elements.FILES_CONDITION_SYNTAX_ELEMENT, ) FILES_SOURCE_DOCUMENTATION = TypeDocumentation( types.FILES_SOURCE_TYPE_INFO, syntax_elements.FILES_SOURCE_SYNTAX_ELEMENT, ) _PROGRAM_DESCRIPTION_REST = """\ {program_type:a/uq} is executed as an {os_process}. The environment in which a {program_type} is executed is described in "{test_case_spec_title}" / "{os_proc_env_section_header}". """ PROGRAM_DOCUMENTATION = TypeDocumentation( types.PROGRAM_TYPE_INFO, syntax_elements.PROGRAM_SYNTAX_ELEMENT, _TP.section_contents(_PROGRAM_DESCRIPTION_REST), [ PredefinedHelpContentsPartReference( HelpPredefinedContentsPart.TEST_CASE_SPEC), phase_infos.SETUP.instruction_cross_reference_target(instruction_names.ENV_VAR_INSTRUCTION_NAME), phase_infos.SETUP.instruction_cross_reference_target(instruction_names.TIMEOUT_INSTRUCTION_NAME), ]) _STRING_SOURCE_DESCRIPTION_REST = """\ Produces {text_model:a}, when referenced. The produced {text_model} may differ when {string_source_type:a/q} is referenced from different locations. 
One such example is {string_source_type:a/q} that is the output from {program_type:a/q} that produces different output on different executions. """ STRING_SOURCE_DOCUMENTATION = TypeDocumentation( types.STRING_SOURCE_TYPE_INFO, syntax_elements.STRING_SOURCE_SYNTAX_ELEMENT, _TP.section_contents(_STRING_SOURCE_DESCRIPTION_REST), )
PypiClean
/nni_upload_test-0.7.1904290925-py3-none-win_amd64.whl/nni_upload_test-0.7.1904290925.data/data/nni/node_modules/joi/test/errors.js
var Lab = require('lab'); var Code = require('code'); var Joi = require('../lib'); // Declare internals var internals = {}; // Test shortcuts var lab = exports.lab = Lab.script(); var describe = lab.describe; var it = lab.it; var expect = Code.expect; describe('errors', function () { it('supports custom errors when validating types', function (done) { var schema = Joi.object({ email: Joi.string().email(), date: Joi.date(), alphanum: Joi.string().alphanum(), min: Joi.string().min(3), max: Joi.string().max(3), required: Joi.string().required(), xor: Joi.string(), renamed: Joi.string().valid('456'), notEmpty: Joi.string().required() }).rename('renamed', 'required').without('required', 'xor').without('xor', 'required'); var input = { email: 'invalid-email', date: 'invalid-date', alphanum: '\b\n\f\r\t', min: 'ab', max: 'abcd', required: 'hello', xor: '123', renamed: '456', notEmpty: '' }; var lang = { any: { empty: '3' }, date: { base: '18' }, string: { base: '13', min: '14', max: '15', alphanum: '16', email: '19' }, object: { without: '7', rename: { override: '11' } } }; Joi.validate(input, schema, { abortEarly: false, language: lang }, function (err, value) { expect(err).to.exist(); expect(err.name).to.equal('ValidationError'); expect(err.message).to.equal('"value" 11. child "email" fails because ["email" 19]. child "date" fails because ["date" 18]. child "alphanum" fails because ["alphanum" 16]. child "min" fails because ["min" 14]. child "max" fails because ["max" 15]. child "notEmpty" fails because ["notEmpty" 3]. "required" 7. "xor" 7'); done(); }); }); it('does not prefix with key when language uses context.key', function (done) { Joi.valid('sad').options({ language: { any: { allowOnly: 'my hero "{{key}}" is not {{valids}}' } } }).validate(5, function (err, value) { expect(err.message).to.equal('my hero "value" is not [sad]'); done(); }); }); it('escapes unsafe keys', function (done) { var schema = { 'a()': Joi.number() }; Joi.validate({ 'a()': 'x' }, schema, function (err, value) { expect(err.message).to.equal('child "a&#x28;&#x29;" fails because ["a&#x28;&#x29;" must be a number]'); Joi.validate({ 'b()': 'x' }, schema, function (err2, value2) { expect(err2.message).to.equal('"b&#x28;&#x29;" is not allowed'); done(); }); }); }); it('returns error type in validation error', function (done) { var input = { notNumber: '', notString: true, notBoolean: 9 }; var schema = { notNumber: Joi.number().required(), notString: Joi.string().required(), notBoolean: Joi.boolean().required() }; Joi.validate(input, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.details).to.have.length(3); expect(err.details[0].type).to.equal('number.base'); expect(err.details[1].type).to.equal('string.base'); expect(err.details[2].type).to.equal('boolean.base'); done(); }); }); it('returns a full path to an error value on an array (items)', function (done) { var schema = Joi.array().items(Joi.array().items({ x: Joi.number() })); var input = [ [{ x: 1 }], [{ x: 1 }, { x: 'a' }] ]; schema.validate(input, function (err, value) { expect(err).to.exist(); expect(err.details[0].path).to.equal('1.1.x'); done(); }); }); it('returns a full path to an error value on an array (items forbidden)', function (done) { var schema = Joi.array().items(Joi.array().items(Joi.object({ x: Joi.string() }).forbidden())); var input = [ [{ x: 1 }], [{ x: 1 }, { x: 'a' }] ]; schema.validate(input, function (err, value) { expect(err).to.exist(); expect(err.details[0].path).to.equal('1.1'); done(); }); }); 
it('returns a full path to an error value on an object', function (done) { var schema = { x: Joi.array().items({ x: Joi.number() }) }; var input = { x: [{ x: 1 }, { x: 'a' }] }; Joi.validate(input, schema, function (err, value) { expect(err).to.exist(); expect(err.details[0].path).to.equal('x.1.x'); done(); }); }); it('overrides root key language', function (done) { Joi.string().options({ language: { root: 'blah' } }).validate(4, function (err, value) { expect(err.message).to.equal('"blah" must be a string'); done(); }); }); it('overrides label key language', function (done) { Joi.string().options({ language: { key: 'my own {{!key}} ' } }).validate(4, function (err, value) { expect(err.message).to.equal('my own value must be a string'); done(); }); }); it('overrides wrapArrays', function (done) { Joi.array().items(Joi.boolean()).options({ language: { messages: { wrapArrays: false } } }).validate([4], function (err, value) { expect(err.message).to.equal('"value" at position 0 fails because "0" must be a boolean'); done(); }); }); it('allows html escaping', function (done) { Joi.string().options({ language: { root: 'blah', label: 'bleh' } }).validate(4, function (err, value) { expect(err.message).to.equal('"bleh" must be a string'); done(); }); }); it('provides context with the error', function (done) { Joi.object({ length: Joi.number().min(3).required() }).validate({ length: 1 }, function (err) { expect(err.details).to.deep.equal([{ message: '"length" must be larger than or equal to 3', path: 'length', type: 'number.min', context: { limit: 3, key: 'length', value: 1 } }]); done(); }); }); it('has a name that is ValidationError', function (done) { var schema = Joi.number(); schema.validate('a', function (validateErr) { expect(validateErr).to.exist(); expect(validateErr.name).to.be.equal('ValidationError'); try { Joi.assert('a', schema); throw new Error('should not reach that'); } catch (assertErr) { expect(assertErr.name).to.be.equal('ValidationError'); } try { Joi.assert('a', schema, 'foo'); throw new Error('should not reach that'); } catch (assertErr) { expect(assertErr.name).to.be.equal('ValidationError'); } try { Joi.assert('a', schema, new Error('foo')); throw new Error('should not reach that'); } catch (assertErr) { expect(assertErr.name).to.equal('Error'); done(); } }); }); describe('#annotate', function () { it('annotates error', function (done) { var object = { a: 'm', y: { b: { c: 10 } } }; var schema = { a: Joi.string().valid('a', 'b', 'c', 'd'), y: Joi.object({ u: Joi.string().valid(['e', 'f', 'g', 'h']).required(), b: Joi.string().valid('i', 'j').allow(false), d: Joi.object({ x: Joi.string().valid('k', 'l').required(), c: Joi.number() }) }) }; Joi.validate(object, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"y\": {\n \"b\" \u001b[31m[1]\u001b[0m: {\n \"c\": 10\n },\n \u001b[41m\"u\"\u001b[0m\u001b[31m [2]: -- missing --\u001b[0m\n },\n "a" \u001b[31m[3]\u001b[0m: \"m\"\n}\n\u001b[31m\n[1] "a" must be one of [a, b, c, d]\n[2] "u" is required\n[3] "b" must be a string\u001b[0m'); done(); }); }); it('annotates error within array', function (done) { var object = { a: [1, 2, 3, 4, 2, 5] }; var schema = { a: Joi.array().items(Joi.valid(1, 2)) }; Joi.validate(object, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"a\": [\n 1,\n 2,\n 3, \u001b[31m[1]\u001b[0m\n 4, \u001b[31m[2]\u001b[0m\n 2,\n 5 \u001b[31m[3]\u001b[0m\n ]\n}\n\u001b[31m\n[1] 
\"2\" must be one of [1, 2]\n[2] \"3\" must be one of [1, 2]\n[3] \"5\" must be one of [1, 2]\u001b[0m'); done(); }); }); it('annotates error within array multiple times on the same element', function (done) { var object = { a: [2, 3, 4] }; var schema = { a: Joi.array().items(Joi.number().min(4).max(2)) }; Joi.validate(object, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"a\": [\n 2, \u001b[31m[1]\u001b[0m\n 3, \u001b[31m[3, 2]\u001b[0m\n 4 \u001b[31m[4]\u001b[0m\n ]\n}\n\u001b[31m\n[1] \"0\" must be larger than or equal to 4\n[2] \"1\" must be larger than or equal to 4\n[3] \"1\" must be less than or equal to 2\n[4] \"2\" must be less than or equal to 2\u001b[0m'); done(); }); }); it('annotates error within array when it is an object', function (done) { var object = { a: [{ b: 2 }] }; var schema = { a: Joi.array().items(Joi.number()) }; Joi.validate(object, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"a\": [\n { \u001b[31m[1]\u001b[0m\n \"b\": 2\n }\n ]\n}\n\u001b[31m\n[1] \"0\" must be a number\u001b[0m'); done(); }); }); it('annotates error within multiple arrays and multiple times on the same element', function (done) { var object = { a: [2, 3, 4], b: [2, 3, 4] }; var schema = { a: Joi.array().items(Joi.number().min(4).max(2)), b: Joi.array().items(Joi.number().min(4).max(2)) }; Joi.validate(object, schema, { abortEarly: false }, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"a\": [\n 2, \u001b[31m[1]\u001b[0m\n 3, \u001b[31m[3, 2]\u001b[0m\n 4 \u001b[31m[4]\u001b[0m\n ],\n \"b\": [\n 2, \u001b[31m[5]\u001b[0m\n 3, \u001b[31m[7, 6]\u001b[0m\n 4 \u001b[31m[8]\u001b[0m\n ]\n}\n\u001b[31m\n[1] \"0\" must be larger than or equal to 4\n[2] \"1\" must be larger than or equal to 4\n[3] \"1\" must be less than or equal to 2\n[4] \"2\" must be less than or equal to 2\n[5] \"0\" must be larger than or equal to 4\n[6] \"1\" must be larger than or equal to 4\n[7] \"1\" must be less than or equal to 2\n[8] \"2\" must be less than or equal to 2\u001b[0m'); done(); }); }); it('displays alternatives fail as a single line', function (done) { var schema = { x: [ Joi.string(), Joi.number(), Joi.date() ] }; Joi.validate({ x: true }, schema, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"x\" \u001b[31m[1, 2, 3]\u001b[0m: true\n}\n\u001b[31m\n[1] "x" must be a string\n[2] "x" must be a number\n[3] "x" must be a number of milliseconds or valid date string\u001b[0m'); done(); }); }); it('annotates circular input', function (done) { var schema = { x: Joi.object({ y: Joi.object({ z: Joi.number() }) }) }; var input = {}; input.x = input; Joi.validate(input, schema, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"x\" \u001b[31m[1]\u001b[0m: \"[Circular ~]\"\n}\n\u001b[31m\n[1] \"x\" is not allowed\u001b[0m'); done(); }); }); it('annotates deep circular input', function (done) { var schema = { x: Joi.object({ y: Joi.object({ z: Joi.number() }) }) }; var input = { x: { y: {} } }; input.x.y.z = input.x.y; Joi.validate(input, schema, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"x\": {\n \"y\": {\n \"z\" \u001b[31m[1]\u001b[0m: \"[Circular ~.x.y]\"\n }\n }\n}\n\u001b[31m\n[1] \"z\" must be a number\u001b[0m'); done(); }); }); it('annotates deep circular input with extra keys', function (done) { var schema = { x: 
Joi.object({ y: Joi.object({ z: Joi.number() }) }) }; var input = { x: { y: { z: 1, foo: {} } } }; input.x.y.foo = input.x.y; Joi.validate(input, schema, function (err, value) { expect(err).to.exist(); expect(err.annotate()).to.equal('{\n \"x\": {\n \"y\": {\n \"z\": 1,\n \"foo\" \u001b[31m[1]\u001b[0m: \"[Circular ~.x.y]\"\n }\n }\n}\n\u001b[31m\n[1] \"foo\" is not allowed\u001b[0m'); done(); }); }); }); });
PypiClean
/plugins/koji.py
import koji import did.base from did.stats import Stats, StatsGroup from did.utils import log # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Stats # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class KojiBuilds(Stats): """ Finished koji builds """ def __init__(self, option, name=None, parent=None, user=None, options=None, server=None, userinfo=None): Stats.__init__(self, option, name, parent, userinfo, options) self.server = server self.userinfo = userinfo def fetch(self): log.info("Searching for builds by {0}".format(self.user)) builds = self.server.listBuilds( userID=self.user['id'], completeAfter=str(self.options.since), completeBefore=str(self.options.until)) self.stats = [build['nvr'] for build in builds] # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Stats Group # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class KojiStats(StatsGroup): """ Koji work """ # Default order order = 420 def __init__(self, option, name=None, parent=None, user=None): StatsGroup.__init__(self, option, name, parent, user) config = dict(did.base.Config().section(option)) try: url = config['url'] except KeyError: raise did.base.ReportError( "No koji url set in the [{0}] section".format(option)) server = koji.ClientSession(url) try: user = server.getUser(config['login'], strict=True) except KeyError: raise did.base.ReportError( "No koji user set in the [{0}] section".format(option)) except koji.GenericError: raise did.base.ReportError( "Non-existent koji user set in the [{0}] section".format(option)) name = config.get('name', url) self.stats = [ KojiBuilds(option=option + "-builds", name='Completed builds in {0}'.format(name), server=server, userinfo=user, parent=self) ]
PypiClean
/macrobond-data-api-2.0.1.tar.gz/macrobond-data-api-2.0.1/macrobond_data_api/web/_web_api_in_house_series.py
from datetime import datetime from typing import TYPE_CHECKING, Optional, Sequence, Union, Any, List from macrobond_data_api.common.enums import SeriesWeekdays, SeriesFrequency if TYPE_CHECKING: # pragma: no cover from .web_api import WebApi __pdoc__ = { "WebApi.__init__": False, } def _set_metadata(metadata: dict, name: str, val: Any) -> None: if name in metadata: if metadata[name] != val: raise ValueError(f"{name} in metadata does not match {name}") else: metadata[name] = val def upload_series( self: "WebApi", name: str, description: str, region: str, category: str, frequency: SeriesFrequency, values: Sequence[Optional[float]], start_date_or_dates: Union[datetime, Sequence[datetime]], dayMask: SeriesWeekdays = SeriesWeekdays.MONDAY_TO_FRIDAY, metadata: Optional[dict] = None, forecast_flags: Optional[Sequence[bool]] = None, ) -> None: if not isinstance(values, list): values = list(values) if forecast_flags is not None and not isinstance(forecast_flags, list): forecast_flags = list(forecast_flags) if metadata is None: metadata = {} else: metadata = {k: v.isoformat() if isinstance(v, datetime) else v for k, v in metadata.items()} _set_metadata(metadata, "PrimName", name) _set_metadata(metadata, "Description", description) _set_metadata(metadata, "Region", region) _set_metadata(metadata, "IHCategory", category) _set_metadata(metadata, "Frequency", frequency.name.lower()) _set_metadata(metadata, "DayMask", dayMask.value) dates: Optional[List[str]] = None if isinstance(start_date_or_dates, datetime): if start_date_or_dates.tzinfo is None: raise ValueError("start_date_or_dates must have a timezone") _set_metadata(metadata, "StartDate", start_date_or_dates.isoformat()) else: if any(x.tzinfo is None for x in start_date_or_dates): raise ValueError("start_date_or_dates must have a timezone") dates = [x.isoformat() for x in start_date_or_dates] self.session.in_house_series.upload_series( {"forecastFlags": forecast_flags, "metadata": metadata, "values": values, "dates": dates} ) def delete_serie(self: "WebApi", series_name: str) -> None: self.session.in_house_series.delete_series(series_name)
PypiClean
/odoo12_addon_l10n_es_aeat_mod115-12.0.1.4.0-py3-none-any.whl/odoo/addons/l10n_es_aeat_mod115/readme/USAGE.rst
To create a declaration (modelo), for example for one quarter of the year:

1. Go to Accounting > Reporting > Legal reports > AEAT declarations > Modelo 115
2. Click the "Create" button
3. Select the fiscal year and the period type; the included periods are computed automatically
4. Select the declaration type
5. Fill in the phone number, which is required for the BOE export
6. Save and click the "Calculate" button
7. Fill in (if necessary) the fields that Odoo does not compute automatically:

   * Previous results to pay: box [04]

8. Once the values are correct, click the "Confirm" button
9. The declaration can be exported in BOE format for electronic filing on the AEAT portal
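For reference, the same workflow can in principle be scripted from an Odoo shell instead of the web UI. This is only a sketch: the model name and the ``button_*`` methods follow the usual OCA ``l10n_es_aeat`` conventions, but they are assumptions here and may differ between versions, so check them against your installation.

.. code-block:: python

   # Hypothetical sketch of the steps above, run from `odoo shell -d <database>`.
   # The model name "l10n.es.aeat.mod115.report" and the button_* methods are
   # assumptions based on OCA l10n_es_aeat conventions; verify for your version.
   Mod115 = env["l10n.es.aeat.mod115.report"]

   # Steps 1-5: pick up a declaration already created and filled in via the UI
   # (fiscal year, period type, declaration type and phone already set).
   report = Mod115.search([], limit=1)

   report.button_calculate()   # step 6: "Calculate"
   # Step 7: review/fill any fields Odoo does not compute, e.g. box [04].
   report.button_confirm()     # step 8: "Confirm"
   env.cr.commit()             # persist changes made from the shell session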
PypiClean
/netbluemind4-4.9.2993.tar.gz/netbluemind4-4.9.2993/netbluemind/directory/api/DirEntryQuery.py
import requests from netbluemind.python import serder class DirEntryQuery: def __init__(self): self.order = None self.nameFilter = None self.hiddenFilter = None self.emailFilter = None self.nameOrEmailFilter = None self.stateFilter = None self.systemFilter = None self.kindsFilter = None self.entries = None self.orgUnitIds = None self.accountTypeFilter = None self.from_ = None self.size = None self.entryUidFilter = None self.onlyManagable = None self.dataLocation = None pass class __DirEntryQuerySerDer__: def __init__(self): pass def parse(self, value): if (value == None): return None instance = DirEntryQuery() self.parseInternal(value, instance) return instance def parseInternal(self, value, instance): from netbluemind.directory.api.DirEntryQueryOrder import DirEntryQueryOrder from netbluemind.directory.api.DirEntryQueryOrder import __DirEntryQueryOrderSerDer__ orderValue = value['order'] instance.order = __DirEntryQueryOrderSerDer__().parse(orderValue) nameFilterValue = value['nameFilter'] instance.nameFilter = serder.STRING.parse(nameFilterValue) hiddenFilterValue = value['hiddenFilter'] instance.hiddenFilter = serder.BOOLEAN.parse(hiddenFilterValue) emailFilterValue = value['emailFilter'] instance.emailFilter = serder.STRING.parse(emailFilterValue) nameOrEmailFilterValue = value['nameOrEmailFilter'] instance.nameOrEmailFilter = serder.STRING.parse( nameOrEmailFilterValue) from netbluemind.directory.api.DirEntryQueryStateFilter import DirEntryQueryStateFilter from netbluemind.directory.api.DirEntryQueryStateFilter import __DirEntryQueryStateFilterSerDer__ stateFilterValue = value['stateFilter'] instance.stateFilter = __DirEntryQueryStateFilterSerDer__().parse(stateFilterValue) systemFilterValue = value['systemFilter'] instance.systemFilter = serder.BOOLEAN.parse(systemFilterValue) from netbluemind.directory.api.BaseDirEntryKind import BaseDirEntryKind from netbluemind.directory.api.BaseDirEntryKind import __BaseDirEntryKindSerDer__ kindsFilterValue = value['kindsFilter'] instance.kindsFilter = serder.ListSerDer( __BaseDirEntryKindSerDer__()).parse(kindsFilterValue) entriesValue = value['entries'] instance.entries = serder.ListSerDer(serder.STRING).parse(entriesValue) orgUnitIdsValue = value['orgUnitIds'] instance.orgUnitIds = serder.ListSerDer( serder.LONG).parse(orgUnitIdsValue) from netbluemind.directory.api.BaseDirEntryAccountType import BaseDirEntryAccountType from netbluemind.directory.api.BaseDirEntryAccountType import __BaseDirEntryAccountTypeSerDer__ accountTypeFilterValue = value['accountTypeFilter'] instance.accountTypeFilter = __BaseDirEntryAccountTypeSerDer__().parse( accountTypeFilterValue) from_Value = value['from'] instance.from_ = serder.INT.parse(from_Value) sizeValue = value['size'] instance.size = serder.INT.parse(sizeValue) entryUidFilterValue = value['entryUidFilter'] instance.entryUidFilter = serder.ListSerDer( serder.STRING).parse(entryUidFilterValue) onlyManagableValue = value['onlyManagable'] instance.onlyManagable = serder.BOOLEAN.parse(onlyManagableValue) dataLocationValue = value['dataLocation'] instance.dataLocation = serder.STRING.parse(dataLocationValue) return instance def encode(self, value): if (value == None): return None instance = dict() self.encodeInternal(value, instance) return instance def encodeInternal(self, value, instance): from netbluemind.directory.api.DirEntryQueryOrder import DirEntryQueryOrder from netbluemind.directory.api.DirEntryQueryOrder import __DirEntryQueryOrderSerDer__ orderValue = value.order instance["order"] = 
__DirEntryQueryOrderSerDer__().encode(orderValue) nameFilterValue = value.nameFilter instance["nameFilter"] = serder.STRING.encode(nameFilterValue) hiddenFilterValue = value.hiddenFilter instance["hiddenFilter"] = serder.BOOLEAN.encode(hiddenFilterValue) emailFilterValue = value.emailFilter instance["emailFilter"] = serder.STRING.encode(emailFilterValue) nameOrEmailFilterValue = value.nameOrEmailFilter instance["nameOrEmailFilter"] = serder.STRING.encode( nameOrEmailFilterValue) from netbluemind.directory.api.DirEntryQueryStateFilter import DirEntryQueryStateFilter from netbluemind.directory.api.DirEntryQueryStateFilter import __DirEntryQueryStateFilterSerDer__ stateFilterValue = value.stateFilter instance["stateFilter"] = __DirEntryQueryStateFilterSerDer__().encode( stateFilterValue) systemFilterValue = value.systemFilter instance["systemFilter"] = serder.BOOLEAN.encode(systemFilterValue) from netbluemind.directory.api.BaseDirEntryKind import BaseDirEntryKind from netbluemind.directory.api.BaseDirEntryKind import __BaseDirEntryKindSerDer__ kindsFilterValue = value.kindsFilter instance["kindsFilter"] = serder.ListSerDer( __BaseDirEntryKindSerDer__()).encode(kindsFilterValue) entriesValue = value.entries instance["entries"] = serder.ListSerDer( serder.STRING).encode(entriesValue) orgUnitIdsValue = value.orgUnitIds instance["orgUnitIds"] = serder.ListSerDer( serder.LONG).encode(orgUnitIdsValue) from netbluemind.directory.api.BaseDirEntryAccountType import BaseDirEntryAccountType from netbluemind.directory.api.BaseDirEntryAccountType import __BaseDirEntryAccountTypeSerDer__ accountTypeFilterValue = value.accountTypeFilter instance["accountTypeFilter"] = __BaseDirEntryAccountTypeSerDer__().encode( accountTypeFilterValue) from_Value = value.from_ instance["from"] = serder.INT.encode(from_Value) sizeValue = value.size instance["size"] = serder.INT.encode(sizeValue) entryUidFilterValue = value.entryUidFilter instance["entryUidFilter"] = serder.ListSerDer( serder.STRING).encode(entryUidFilterValue) onlyManagableValue = value.onlyManagable instance["onlyManagable"] = serder.BOOLEAN.encode(onlyManagableValue) dataLocationValue = value.dataLocation instance["dataLocation"] = serder.STRING.encode(dataLocationValue) return instance
PypiClean
/fake_bpy_module_2.93-20230117-py3-none-any.whl/bl_operators/mesh.py
import sys import typing import bpy_types GenericType = typing.TypeVar("GenericType") class MeshMirrorUV(bpy_types.Operator): bl_idname = None ''' ''' bl_label = None ''' ''' bl_options = None ''' ''' bl_rna = None ''' ''' id_data = None ''' ''' def as_keywords(self, ignore): ''' ''' pass def as_pointer(self): ''' ''' pass def bl_rna_get_subclass(self): ''' ''' pass def bl_rna_get_subclass_py(self): ''' ''' pass def do_mesh_mirror_UV(self, mesh, DIR): ''' ''' pass def driver_add(self): ''' ''' pass def driver_remove(self): ''' ''' pass def execute(self, context): ''' ''' pass def get(self): ''' ''' pass def is_property_hidden(self): ''' ''' pass def is_property_overridable_library(self): ''' ''' pass def is_property_readonly(self): ''' ''' pass def is_property_set(self): ''' ''' pass def items(self): ''' ''' pass def keyframe_delete(self): ''' ''' pass def keyframe_insert(self): ''' ''' pass def keys(self): ''' ''' pass def path_from_id(self): ''' ''' pass def path_resolve(self): ''' ''' pass def poll(self, context): ''' ''' pass def pop(self): ''' ''' pass def property_overridable_library_set(self): ''' ''' pass def property_unset(self): ''' ''' pass def type_recast(self): ''' ''' pass def values(self): ''' ''' pass class MeshSelectNext(bpy_types.Operator): bl_idname = None ''' ''' bl_label = None ''' ''' bl_options = None ''' ''' bl_rna = None ''' ''' id_data = None ''' ''' def as_keywords(self, ignore): ''' ''' pass def as_pointer(self): ''' ''' pass def bl_rna_get_subclass(self): ''' ''' pass def bl_rna_get_subclass_py(self): ''' ''' pass def driver_add(self): ''' ''' pass def driver_remove(self): ''' ''' pass def execute(self, context): ''' ''' pass def get(self): ''' ''' pass def is_property_hidden(self): ''' ''' pass def is_property_overridable_library(self): ''' ''' pass def is_property_readonly(self): ''' ''' pass def is_property_set(self): ''' ''' pass def items(self): ''' ''' pass def keyframe_delete(self): ''' ''' pass def keyframe_insert(self): ''' ''' pass def keys(self): ''' ''' pass def path_from_id(self): ''' ''' pass def path_resolve(self): ''' ''' pass def poll(self, context): ''' ''' pass def pop(self): ''' ''' pass def property_overridable_library_set(self): ''' ''' pass def property_unset(self): ''' ''' pass def type_recast(self): ''' ''' pass def values(self): ''' ''' pass class MeshSelectPrev(bpy_types.Operator): bl_idname = None ''' ''' bl_label = None ''' ''' bl_options = None ''' ''' bl_rna = None ''' ''' id_data = None ''' ''' def as_keywords(self, ignore): ''' ''' pass def as_pointer(self): ''' ''' pass def bl_rna_get_subclass(self): ''' ''' pass def bl_rna_get_subclass_py(self): ''' ''' pass def driver_add(self): ''' ''' pass def driver_remove(self): ''' ''' pass def execute(self, context): ''' ''' pass def get(self): ''' ''' pass def is_property_hidden(self): ''' ''' pass def is_property_overridable_library(self): ''' ''' pass def is_property_readonly(self): ''' ''' pass def is_property_set(self): ''' ''' pass def items(self): ''' ''' pass def keyframe_delete(self): ''' ''' pass def keyframe_insert(self): ''' ''' pass def keys(self): ''' ''' pass def path_from_id(self): ''' ''' pass def path_resolve(self): ''' ''' pass def poll(self, context): ''' ''' pass def pop(self): ''' ''' pass def property_overridable_library_set(self): ''' ''' pass def property_unset(self): ''' ''' pass def type_recast(self): ''' ''' pass def values(self): ''' ''' pass
PypiClean
/hikari-lightbulb-2.3.3.tar.gz/hikari-lightbulb-2.3.3/lightbulb/commands/slash.py
from __future__ import annotations

__all__ = ["SlashCommand", "SlashCommandGroup", "SlashGroupMixin", "SlashSubGroup", "SlashSubCommand"]

import abc
import re
import typing as t

import hikari

from lightbulb import context as context_
from lightbulb import errors
from lightbulb.commands import base

if t.TYPE_CHECKING:
    from lightbulb import app as app_
    from lightbulb import plugins

COMMAND_NAME_REGEX: re.Pattern[str] = re.compile(r"^[\w-]{1,32}$", re.U)


class SlashGroupMixin(abc.ABC):
    __slots__ = ()

    _plugin: t.Optional[plugins.Plugin]
    _subcommands: t.Dict[str, t.Union[SlashSubGroup, SlashSubCommand]]

    @property
    @abc.abstractmethod
    def name(self) -> str:
        ...

    def create_subcommands(
        self,
        raw_cmds: t.Sequence[base.CommandLike],
        app: app_.BotApp,
        allowed_types: t.Union[t.Tuple[t.Type[SlashSubCommand], t.Type[SlashSubGroup]], t.Type[SlashSubCommand]],
    ) -> None:
        for raw_cmd in raw_cmds:
            impls: t.List[t.Type[base.Command]] = getattr(raw_cmd.callback, "__cmd_types__", [])
            for impl in impls:
                if issubclass(impl, allowed_types):
                    cmd = impl(app, raw_cmd)
                    assert isinstance(cmd, (SlashSubCommand, SlashSubGroup))
                    cmd.parent = self  # type: ignore
                    cmd.plugin = self.plugin  # type: ignore
                    if cmd.name in self._subcommands:
                        raise errors.CommandAlreadyExists(
                            f"A prefix subcommand with name or alias {cmd.name!r} already exists for group {self.name!r}"
                        )
                    self._subcommands[cmd.name] = cmd

    def recreate_subcommands(self, raw_cmds: t.Sequence[base.CommandLike], app: app_.BotApp) -> None:
        self._subcommands.clear()
        self.create_subcommands(
            raw_cmds, app, SlashSubCommand if isinstance(self, SlashSubGroup) else (SlashSubCommand, SlashSubGroup)
        )

    async def _invoke_subcommand(self, context: context_.base.Context) -> None:
        assert isinstance(context, context_.slash.SlashContext)
        cmd_option = context._raw_options[0]
        context._raw_options = cmd_option.options or []
        # Replace the invoked command prematurely so that _parse_options uses the correct command options
        context._invoked = self._subcommands[cmd_option.name]
        # Reparse the options for the subcommand
        context._parse_options(cmd_option.options)
        # Ensure we call _maybe_defer
        await context._maybe_defer()
        # Invoke the subcommand
        await context._invoked.invoke(context)

    def get_subcommand(self, name: str) -> t.Optional[t.Union[SlashSubGroup, SlashSubCommand]]:
        """Get the group's subcommand with the given name."""
        return self._subcommands.get(name)

    @property
    def subcommands(self) -> t.Dict[str, t.Union[SlashSubGroup, SlashSubCommand]]:
        """Mapping of command name to command object containing the group's subcommands."""
        return self._subcommands

    def _set_plugin(self, pl: plugins.Plugin) -> None:
        self._plugin = pl
        for command in self._subcommands.values():
            if isinstance(command, SlashGroupMixin):
                command._set_plugin(pl)
            else:
                command.plugin = pl


class SlashCommand(base.ApplicationCommand):
    """
    An implementation of :obj:`~.commands.base.Command` representing a slash command.

    See the `API Documentation <https://discord.com/developers/docs/interactions/application-commands#slash-commands>`_.
    """

    __slots__ = ()

    def as_create_kwargs(self) -> t.Dict[str, t.Any]:
        sorted_opts = sorted(self.options.values(), key=lambda o: int(o.required), reverse=True)
        return {
            "type": hikari.CommandType.SLASH,
            "name": self.name,
            "description": self.description,
            "options": [o.as_application_command_option() for o in sorted_opts],
        }

    def _validate_attributes(self) -> None:
        if not COMMAND_NAME_REGEX.fullmatch(self.name) or self.name != self.name.lower():
            raise ValueError(
                f"Slash command {self.name!r}: name must match regex '^[\\w-]{1,32}$' and be all lowercase"
            ) from None
        if len(self.description) < 1 or len(self.description) > 100:
            raise ValueError(f"Slash command {self.name!r}: description must be from 1-100 characters long") from None
        if len(self.options) > 25:
            raise ValueError(f"Slash command {self.name!r}: can at most have 25 options") from None


class SlashSubCommand(SlashCommand, base.SubCommandTrait):
    """
    Class representing a slash subcommand.
    """

    __slots__ = ()

    @property
    def qualname(self) -> str:
        assert self.parent is not None
        return f"{self.parent.qualname} {self.name}"

    def as_option(self) -> hikari.CommandOption:
        sorted_opts = sorted(self.options.values(), key=lambda o: int(o.required), reverse=True)
        return hikari.CommandOption(
            type=hikari.OptionType.SUB_COMMAND,
            name=self.name,
            description=self.description,
            is_required=False,
            options=[o.as_application_command_option() for o in sorted_opts],
        )


class SlashSubGroup(SlashCommand, SlashGroupMixin, base.SubCommandTrait):
    """
    Class representing a slash subgroup of commands.
    """

    __slots__ = ("_raw_subcommands", "_subcommands")

    def __init__(self, app: app_.BotApp, initialiser: base.CommandLike) -> None:
        super().__init__(app, initialiser)
        self._raw_subcommands = initialiser.subcommands
        initialiser.subcommands = (
            initialiser.subcommands.add_parent(self)  # type: ignore
            if isinstance(initialiser.subcommands, base._SubcommandListProxy)  # type: ignore
            else base._SubcommandListProxy(initialiser.subcommands, parent=self)
        )
        # Just to keep mypy happy we leave SlashSubGroup here
        self._subcommands: t.Dict[str, t.Union[SlashSubGroup, SlashSubCommand]] = {}
        self.create_subcommands(self._raw_subcommands, app, SlashSubCommand)

    @property
    def qualname(self) -> str:
        assert self.parent is not None
        return f"{self.parent.qualname} {self.name}"

    def as_option(self) -> hikari.CommandOption:
        return hikari.CommandOption(
            type=hikari.OptionType.SUB_COMMAND_GROUP,
            name=self.name,
            description=self.description,
            is_required=False,
            options=[c.as_option() for c in self._subcommands.values()],
        )

    async def invoke(self, context: context_.base.Context, **_: t.Any) -> None:
        await self._invoke_subcommand(context)

    def _validate_attributes(self) -> None:
        super()._validate_attributes()
        if len(self._subcommands) > 25:
            raise ValueError(f"Slash command {self.name!r}: group can have at most 25 subcommands") from None

    def _set_plugin(self, pl: plugins.Plugin) -> None:
        SlashGroupMixin._set_plugin(self, pl)


class SlashCommandGroup(SlashCommand, SlashGroupMixin):
    """
    Class representing a slash command group.
    """

    __slots__ = ("_raw_subcommands", "_subcommands")

    def __init__(self, app: app_.BotApp, initialiser: base.CommandLike) -> None:
        super().__init__(app, initialiser)
        self._raw_subcommands = initialiser.subcommands
        initialiser.subcommands = (
            initialiser.subcommands.add_parent(self)  # type: ignore
            if isinstance(initialiser.subcommands, base._SubcommandListProxy)  # type: ignore
            else base._SubcommandListProxy(initialiser.subcommands, parent=self)
        )
        self._subcommands: t.Dict[str, t.Union[SlashSubGroup, SlashSubCommand]] = {}
        self.create_subcommands(self._raw_subcommands, app, (SlashSubCommand, SlashSubGroup))

    async def invoke(self, context: context_.base.Context, **_: t.Any) -> None:
        await self._invoke_subcommand(context)

    def as_create_kwargs(self) -> t.Dict[str, t.Any]:
        return {
            "type": hikari.CommandType.SLASH,
            "name": self.name,
            "description": self.description,
            "options": [c.as_option() for c in self._subcommands.values()],
        }

    def _validate_attributes(self) -> None:
        super()._validate_attributes()
        if len(self._subcommands) > 25:
            raise ValueError(f"Slash command {self.name!r}: group can have at most 25 subcommands") from None

    def _set_plugin(self, pl: plugins.Plugin) -> None:
        SlashGroupMixin._set_plugin(self, pl)
PypiClean
/alipay-python-3.3.17.tar.gz/alipay-python-3.3.17/alipay/aop/api/domain/AlipayBossProdSubmerchantCreateModel.py
import json

from alipay.aop.api.constant.ParamConstants import *


class AlipayBossProdSubmerchantCreateModel(object):

    def __init__(self):
        self._address = None
        self._alias_name = None
        self._business_license = None
        self._category_id = None
        self._city_code = None
        self._contact_email = None
        self._contact_mobile = None
        self._contact_name = None
        self._contact_phone = None
        self._district_code = None
        self._external_id = None
        self._id_card = None
        self._memo = None
        self._name = None
        self._province_code = None
        self._service_phone = None
        self._source = None

    @property
    def address(self):
        return self._address

    @address.setter
    def address(self, value):
        self._address = value

    @property
    def alias_name(self):
        return self._alias_name

    @alias_name.setter
    def alias_name(self, value):
        self._alias_name = value

    @property
    def business_license(self):
        return self._business_license

    @business_license.setter
    def business_license(self, value):
        self._business_license = value

    @property
    def category_id(self):
        return self._category_id

    @category_id.setter
    def category_id(self, value):
        self._category_id = value

    @property
    def city_code(self):
        return self._city_code

    @city_code.setter
    def city_code(self, value):
        self._city_code = value

    @property
    def contact_email(self):
        return self._contact_email

    @contact_email.setter
    def contact_email(self, value):
        self._contact_email = value

    @property
    def contact_mobile(self):
        return self._contact_mobile

    @contact_mobile.setter
    def contact_mobile(self, value):
        self._contact_mobile = value

    @property
    def contact_name(self):
        return self._contact_name

    @contact_name.setter
    def contact_name(self, value):
        self._contact_name = value

    @property
    def contact_phone(self):
        return self._contact_phone

    @contact_phone.setter
    def contact_phone(self, value):
        self._contact_phone = value

    @property
    def district_code(self):
        return self._district_code

    @district_code.setter
    def district_code(self, value):
        self._district_code = value

    @property
    def external_id(self):
        return self._external_id

    @external_id.setter
    def external_id(self, value):
        self._external_id = value

    @property
    def id_card(self):
        return self._id_card

    @id_card.setter
    def id_card(self, value):
        self._id_card = value

    @property
    def memo(self):
        return self._memo

    @memo.setter
    def memo(self, value):
        self._memo = value

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

    @property
    def province_code(self):
        return self._province_code

    @province_code.setter
    def province_code(self, value):
        self._province_code = value

    @property
    def service_phone(self):
        return self._service_phone

    @service_phone.setter
    def service_phone(self, value):
        self._service_phone = value

    @property
    def source(self):
        return self._source

    @source.setter
    def source(self, value):
        self._source = value

    def to_alipay_dict(self):
        params = dict()
        if self.address:
            if hasattr(self.address, 'to_alipay_dict'):
                params['address'] = self.address.to_alipay_dict()
            else:
                params['address'] = self.address
        if self.alias_name:
            if hasattr(self.alias_name, 'to_alipay_dict'):
                params['alias_name'] = self.alias_name.to_alipay_dict()
            else:
                params['alias_name'] = self.alias_name
        if self.business_license:
            if hasattr(self.business_license, 'to_alipay_dict'):
                params['business_license'] = self.business_license.to_alipay_dict()
            else:
                params['business_license'] = self.business_license
        if self.category_id:
            if hasattr(self.category_id, 'to_alipay_dict'):
                params['category_id'] = self.category_id.to_alipay_dict()
            else:
                params['category_id'] = self.category_id
        if self.city_code:
            if hasattr(self.city_code, 'to_alipay_dict'):
                params['city_code'] = self.city_code.to_alipay_dict()
            else:
                params['city_code'] = self.city_code
        if self.contact_email:
            if hasattr(self.contact_email, 'to_alipay_dict'):
                params['contact_email'] = self.contact_email.to_alipay_dict()
            else:
                params['contact_email'] = self.contact_email
        if self.contact_mobile:
            if hasattr(self.contact_mobile, 'to_alipay_dict'):
                params['contact_mobile'] = self.contact_mobile.to_alipay_dict()
            else:
                params['contact_mobile'] = self.contact_mobile
        if self.contact_name:
            if hasattr(self.contact_name, 'to_alipay_dict'):
                params['contact_name'] = self.contact_name.to_alipay_dict()
            else:
                params['contact_name'] = self.contact_name
        if self.contact_phone:
            if hasattr(self.contact_phone, 'to_alipay_dict'):
                params['contact_phone'] = self.contact_phone.to_alipay_dict()
            else:
                params['contact_phone'] = self.contact_phone
        if self.district_code:
            if hasattr(self.district_code, 'to_alipay_dict'):
                params['district_code'] = self.district_code.to_alipay_dict()
            else:
                params['district_code'] = self.district_code
        if self.external_id:
            if hasattr(self.external_id, 'to_alipay_dict'):
                params['external_id'] = self.external_id.to_alipay_dict()
            else:
                params['external_id'] = self.external_id
        if self.id_card:
            if hasattr(self.id_card, 'to_alipay_dict'):
                params['id_card'] = self.id_card.to_alipay_dict()
            else:
                params['id_card'] = self.id_card
        if self.memo:
            if hasattr(self.memo, 'to_alipay_dict'):
                params['memo'] = self.memo.to_alipay_dict()
            else:
                params['memo'] = self.memo
        if self.name:
            if hasattr(self.name, 'to_alipay_dict'):
                params['name'] = self.name.to_alipay_dict()
            else:
                params['name'] = self.name
        if self.province_code:
            if hasattr(self.province_code, 'to_alipay_dict'):
                params['province_code'] = self.province_code.to_alipay_dict()
            else:
                params['province_code'] = self.province_code
        if self.service_phone:
            if hasattr(self.service_phone, 'to_alipay_dict'):
                params['service_phone'] = self.service_phone.to_alipay_dict()
            else:
                params['service_phone'] = self.service_phone
        if self.source:
            if hasattr(self.source, 'to_alipay_dict'):
                params['source'] = self.source.to_alipay_dict()
            else:
                params['source'] = self.source
        return params

    @staticmethod
    def from_alipay_dict(d):
        if not d:
            return None
        o = AlipayBossProdSubmerchantCreateModel()
        if 'address' in d:
            o.address = d['address']
        if 'alias_name' in d:
            o.alias_name = d['alias_name']
        if 'business_license' in d:
            o.business_license = d['business_license']
        if 'category_id' in d:
            o.category_id = d['category_id']
        if 'city_code' in d:
            o.city_code = d['city_code']
        if 'contact_email' in d:
            o.contact_email = d['contact_email']
        if 'contact_mobile' in d:
            o.contact_mobile = d['contact_mobile']
        if 'contact_name' in d:
            o.contact_name = d['contact_name']
        if 'contact_phone' in d:
            o.contact_phone = d['contact_phone']
        if 'district_code' in d:
            o.district_code = d['district_code']
        if 'external_id' in d:
            o.external_id = d['external_id']
        if 'id_card' in d:
            o.id_card = d['id_card']
        if 'memo' in d:
            o.memo = d['memo']
        if 'name' in d:
            o.name = d['name']
        if 'province_code' in d:
            o.province_code = d['province_code']
        if 'service_phone' in d:
            o.service_phone = d['service_phone']
        if 'source' in d:
            o.source = d['source']
        return o
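
# --- Illustrative usage sketch (not part of the generated SDK module) ---
# A minimal example of the round trip these generated models support, using
# only the methods defined above: populate the model, flatten it with
# to_alipay_dict() for the request payload, and rebuild it from a dict with
# from_alipay_dict(). The field values are placeholders.

model = AlipayBossProdSubmerchantCreateModel()
model.name = "Example Shop"
model.contact_name = "Zhang San"
model.contact_mobile = "13800000000"
model.category_id = "2015050700000000"

payload = model.to_alipay_dict()            # plain dict, ready for json.dumps()
restored = AlipayBossProdSubmerchantCreateModel.from_alipay_dict(payload)
assert restored.name == model.name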
PypiClean
/sdksio_juniper_mist_sdk-1.0.0-py3-none-any.whl/mistapi/controllers/sites_assets_controller.py
from mistapi.api_helper import APIHelper from mistapi.configuration import Server from mistapi.controllers.base_controller import BaseController from apimatic_core.request_builder import RequestBuilder from apimatic_core.response_handler import ResponseHandler from apimatic_core.types.parameter import Parameter from mistapi.http.http_method_enum import HttpMethodEnum from apimatic_core.authentication.multiple.single_auth import Single from apimatic_core.authentication.multiple.and_auth_group import And from apimatic_core.authentication.multiple.or_auth_group import Or from mistapi.models.asset import Asset from mistapi.models.count import Count from mistapi.models.assets_array_stats_search import AssetsArrayStatsSearch from mistapi.models.asset_of_interest import AssetOfInterest from mistapi.exceptions.api_exception import APIException from mistapi.exceptions.api_v_1_sites_assets_401_error_exception import ApiV1SitesAssets401ErrorException from mistapi.exceptions.api_v_1_sites_assets_403_error_exception import ApiV1SitesAssets403ErrorException from mistapi.exceptions.api_v_1_sites_assets_404_error_exception import ApiV1SitesAssets404ErrorException from mistapi.exceptions.api_v_1_sites_assets_import_401_error_exception import ApiV1SitesAssetsImport401ErrorException from mistapi.exceptions.api_v_1_sites_assets_import_403_error_exception import ApiV1SitesAssetsImport403ErrorException from mistapi.exceptions.api_v_1_sites_assets_import_404_error_exception import ApiV1SitesAssetsImport404ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_count_401_error_exception import ApiV1SitesStatsAssetsCount401ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_count_403_error_exception import ApiV1SitesStatsAssetsCount403ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_count_404_error_exception import ApiV1SitesStatsAssetsCount404ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_search_401_error_exception import ApiV1SitesStatsAssetsSearch401ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_search_403_error_exception import ApiV1SitesStatsAssetsSearch403ErrorException from mistapi.exceptions.api_v_1_sites_stats_assets_search_404_error_exception import ApiV1SitesStatsAssetsSearch404ErrorException from mistapi.exceptions.api_v_1_sites_stats_filtered_assets_401_error_exception import ApiV1SitesStatsFilteredAssets401ErrorException from mistapi.exceptions.api_v_1_sites_stats_filtered_assets_403_error_exception import ApiV1SitesStatsFilteredAssets403ErrorException from mistapi.exceptions.api_v_1_sites_stats_filtered_assets_404_error_exception import ApiV1SitesStatsFilteredAssets404ErrorException class SitesAssetsController(BaseController): """A Controller to access Endpoints in the mistapi API.""" def __init__(self, config): super(SitesAssetsController, self).__init__(config) def list_site_assets(self, site_id): """Does a GET request to /api/v1/sites/{site_id}/assets. Get List of Site Assets Args: site_id (uuid|string): TODO: type description here. Returns: list of Asset: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets') .http_method(HttpMethodEnum.GET) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(Asset.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssets404ErrorException) ).execute() def create_site_asset(self, site_id, body=None): """Does a POST request to /api/v1/sites/{site_id}/assets. Create Site Asset Args: site_id (uuid|string): TODO: type description here. body (Asset, optional): Request Body Returns: Asset: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. """ return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets') .http_method(HttpMethodEnum.POST) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .header_param(Parameter() .key('Content-Type') .value('application/json')) .body_param(Parameter() .value(body)) .header_param(Parameter() .key('accept') .value('application/json')) .body_serializer(APIHelper.json_serialize) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(Asset.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssets404ErrorException) ).execute() def import_site_assets(self, site_id, upsert='False', file=None): """Does a POST request to /api/v1/sites/{site_id}/assets/import. Impert Site Assets. It can be done via a CSV file or a JSON payload. ## CSV File Format ```csv name,mac "asset_name",5c5b53010101 ``` Args: site_id (uuid|string): TODO: type description here. upsert (UpsertEnum, optional): API will replace the assets with same mac if provided `upsert`==`True`, otherwise will report in errors in response. file (string, optional): CSV file Returns: object: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets/import') .http_method(HttpMethodEnum.POST) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .query_param(Parameter() .key('upsert') .value(upsert)) .form_param(Parameter() .key('file') .value(file)) .header_param(Parameter() .key('content-type') .value('application/x-www-form-urlencoded')) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssetsImport401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssetsImport403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssetsImport404ErrorException) ).execute() def delete_site_asset(self, site_id, asset_id): """Does a DELETE request to /api/v1/sites/{site_id}/assets/{asset_id}. Delete Site Asset Args: site_id (uuid|string): TODO: type description here. asset_id (uuid|string): TODO: type description here. Returns: object: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. """ return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets/{asset_id}') .http_method(HttpMethodEnum.DELETE) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .template_param(Parameter() .key('asset_id') .value(asset_id) .should_encode(True)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssets404ErrorException) ).execute() def get_site_asset(self, site_id, asset_id): """Does a GET request to /api/v1/sites/{site_id}/assets/{asset_id}. Get Site Asset Details Args: site_id (uuid|string): TODO: type description here. asset_id (uuid|string): TODO: type description here. Returns: Asset: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets/{asset_id}') .http_method(HttpMethodEnum.GET) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .template_param(Parameter() .key('asset_id') .value(asset_id) .should_encode(True)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(Asset.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssets404ErrorException) ).execute() def update_site_asset(self, site_id, asset_id, body=None): """Does a PUT request to /api/v1/sites/{site_id}/assets/{asset_id}. Update Site Asset Args: site_id (uuid|string): TODO: type description here. asset_id (uuid|string): TODO: type description here. body (Asset, optional): Request Body Returns: Asset: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. """ return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/assets/{asset_id}') .http_method(HttpMethodEnum.PUT) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .template_param(Parameter() .key('asset_id') .value(asset_id) .should_encode(True)) .header_param(Parameter() .key('Content-Type') .value('application/json')) .body_param(Parameter() .value(body)) .header_param(Parameter() .key('accept') .value('application/json')) .body_serializer(APIHelper.json_serialize) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(Asset.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesAssets404ErrorException) ).execute() def count_site_assets(self, site_id, distinct='map_id'): """Does a GET request to /api/v1/sites/{site_id}/stats/assets/count. Count Asset by distinct field Args: site_id (uuid|string): TODO: type description here. distinct (Distinct17Enum, optional): TODO: type description here. Example: map_id Returns: Count: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/stats/assets/count') .http_method(HttpMethodEnum.GET) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .query_param(Parameter() .key('distinct') .value(distinct)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(Count.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesStatsAssetsCount401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesStatsAssetsCount403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesStatsAssetsCount404ErrorException) ).execute() def search_site_assets(self, site_id, mac=None, map_id=None, ibeacon_uuid=None, ibeacon_major=None, ibeacon_minor=None, eddystone_uid_namespace=None, eddystone_uid_instance=None, eddystone_url=None, device_name=None, by=None, name=None, ap_mac=None, beam=None, rssi=None, limit=100, start=0, end=0, duration='1d'): """Does a GET request to /api/v1/sites/{site_id}/stats/assets/search. Assets Search Args: site_id (uuid|string): TODO: type description here. mac (string, optional): TODO: type description here. map_id (uuid|string, optional): TODO: type description here. ibeacon_uuid (uuid|string, optional): TODO: type description here. ibeacon_major (int, optional): TODO: type description here. ibeacon_minor (int, optional): TODO: type description here. eddystone_uid_namespace (string, optional): TODO: type description here. eddystone_uid_instance (string, optional): TODO: type description here. eddystone_url (string, optional): TODO: type description here. device_name (string, optional): TODO: type description here. by (string, optional): TODO: type description here. name (string, optional): TODO: type description here. ap_mac (string, optional): TODO: type description here. beam (string, optional): TODO: type description here. rssi (string, optional): TODO: type description here. limit (int, optional): TODO: type description here. Example: 100 start (int, optional): TODO: type description here. Example: 0 end (int, optional): TODO: type description here. Example: 0 duration (string, optional): For historical stats and/or logs where time range is needed, you can specify the time range in a few different ways: * ?start=1430000000&end=1430864000 specify the start / end * ?end=1430864000&duration=1d specify end time and duration * ?duration=1d specify duration, end will be now() in seconds Returns: AssetsArrayStatsSearch: Response from the API. OK Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/stats/assets/search') .http_method(HttpMethodEnum.GET) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .query_param(Parameter() .key('mac') .value(mac)) .query_param(Parameter() .key('map_id') .value(map_id)) .query_param(Parameter() .key('ibeacon_uuid') .value(ibeacon_uuid)) .query_param(Parameter() .key('ibeacon_major') .value(ibeacon_major)) .query_param(Parameter() .key('ibeacon_minor') .value(ibeacon_minor)) .query_param(Parameter() .key('eddystone_uid_namespace') .value(eddystone_uid_namespace)) .query_param(Parameter() .key('eddystone_uid_instance') .value(eddystone_uid_instance)) .query_param(Parameter() .key('eddystone_url') .value(eddystone_url)) .query_param(Parameter() .key('device_name') .value(device_name)) .query_param(Parameter() .key('by') .value(by)) .query_param(Parameter() .key('name') .value(name)) .query_param(Parameter() .key('ap_mac') .value(ap_mac)) .query_param(Parameter() .key('beam') .value(beam)) .query_param(Parameter() .key('rssi') .value(rssi)) .query_param(Parameter() .key('limit') .value(limit)) .query_param(Parameter() .key('start') .value(start)) .query_param(Parameter() .key('end') .value(end)) .query_param(Parameter() .key('duration') .value(duration)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(AssetsArrayStatsSearch.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesStatsAssetsSearch401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesStatsAssetsSearch403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesStatsAssetsSearch404ErrorException) ).execute() def get_site_assets_of_interest(self, site_id, duration='1d', start=0, end=0, page=1, limit=100): """Does a GET request to /api/v1/sites/{site_id}/stats/filtered_assets. Get a list of BLE beacons that matches Asset or AssetFilter Args: site_id (uuid|string): TODO: type description here. duration (string, optional): For historical stats and/or logs where time range is needed, you can specify the time range in a few different ways: * ?start=1430000000&end=1430864000 specify the start / end * ?end=1430864000&duration=1d specify end time and duration * ?duration=1d specify duration, end will be now() in seconds start (int, optional): TODO: type description here. Example: 0 end (int, optional): TODO: type description here. Example: 0 page (int, optional): TODO: type description here. Example: 1 limit (int, optional): TODO: type description here. Example: 100 Returns: list of AssetOfInterest: Response from the API. Example response Raises: APIException: When an error occurs while fetching the data from the remote API. This exception includes the HTTP Response code, an error message, and the HTTP body that was received in the request. 
""" return super().new_api_call_builder.request( RequestBuilder().server(Server.DEFAULT) .path('/api/v1/sites/{site_id}/stats/filtered_assets') .http_method(HttpMethodEnum.GET) .template_param(Parameter() .key('site_id') .value(site_id) .should_encode(True)) .query_param(Parameter() .key('duration') .value(duration)) .query_param(Parameter() .key('start') .value(start)) .query_param(Parameter() .key('end') .value(end)) .query_param(Parameter() .key('page') .value(page)) .query_param(Parameter() .key('limit') .value(limit)) .header_param(Parameter() .key('accept') .value('application/json')) .auth(Single('global')) ).response( ResponseHandler() .deserializer(APIHelper.json_deserialize) .deserialize_into(AssetOfInterest.from_dictionary) .local_error('400', 'The API endpoint exists but its syntax/payload is incorrect, detail may be given', APIException) .local_error('401', 'Unauthorized', ApiV1SitesStatsFilteredAssets401ErrorException) .local_error('403', 'Permission Denied', ApiV1SitesStatsFilteredAssets403ErrorException) .local_error('404', 'Not found. The API endpoint doesn\'t exist or resource doesn\'t exist', ApiV1SitesStatsFilteredAssets404ErrorException) ).execute()
PypiClean
/realms-wiki-0.9.3.tar.gz/realms-wiki-0.9.3/realms/static/vendor/ace-builds/src-noconflict/mode-mask.js
ace.define("ace/mode/doc_comment_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var DocCommentHighlightRules = function() { this.$rules = { "start" : [ { token : "comment.doc.tag", regex : "@[\\w\\d_]+" // TODO: fix email addresses }, DocCommentHighlightRules.getTagRule(), { defaultToken : "comment.doc", caseInsensitive: true }] }; }; oop.inherits(DocCommentHighlightRules, TextHighlightRules); DocCommentHighlightRules.getTagRule = function(start) { return { token : "comment.doc.tag.storage.type", regex : "\\b(?:TODO|FIXME|XXX|HACK)\\b" }; } DocCommentHighlightRules.getStartRule = function(start) { return { token : "comment.doc", // doc comment regex : "\\/\\*(?=\\*)", next : start }; }; DocCommentHighlightRules.getEndRule = function (start) { return { token : "comment.doc", // closing comment regex : "\\*\\/", next : start }; }; exports.DocCommentHighlightRules = DocCommentHighlightRules; }); ace.define("ace/mode/javascript_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/doc_comment_highlight_rules","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var DocCommentHighlightRules = require("./doc_comment_highlight_rules").DocCommentHighlightRules; var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var identifierRe = "[a-zA-Z\\$_\u00a1-\uffff][a-zA-Z\\d\\$_\u00a1-\uffff]*"; var JavaScriptHighlightRules = function(options) { var keywordMapper = this.createKeywordMapper({ "variable.language": "Array|Boolean|Date|Function|Iterator|Number|Object|RegExp|String|Proxy|" + // Constructors "Namespace|QName|XML|XMLList|" + // E4X "ArrayBuffer|Float32Array|Float64Array|Int16Array|Int32Array|Int8Array|" + "Uint16Array|Uint32Array|Uint8Array|Uint8ClampedArray|" + "Error|EvalError|InternalError|RangeError|ReferenceError|StopIteration|" + // Errors "SyntaxError|TypeError|URIError|" + "decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|eval|isFinite|" + // Non-constructor functions "isNaN|parseFloat|parseInt|" + "JSON|Math|" + // Other "this|arguments|prototype|window|document" , // Pseudo "keyword": "const|yield|import|get|set|async|await|" + "break|case|catch|continue|default|delete|do|else|finally|for|function|" + "if|in|instanceof|new|return|switch|throw|try|typeof|let|var|while|with|debugger|" + "__parent__|__count__|escape|unescape|with|__proto__|" + "class|enum|extends|super|export|implements|private|public|interface|package|protected|static", "storage.type": "const|let|var|function", "constant.language": "null|Infinity|NaN|undefined", "support.function": "alert", "constant.language.boolean": "true|false" }, "identifier"); var kwBeforeRe = "case|do|else|finally|in|instanceof|return|throw|try|typeof|yield|void"; var escapedRe = "\\\\(?:x[0-9a-fA-F]{2}|" + // hex "u[0-9a-fA-F]{4}|" + // unicode "u{[0-9a-fA-F]{1,6}}|" + // es6 unicode "[0-2][0-7]{0,2}|" + // oct "3[0-7][0-7]?|" + // oct "[4-7][0-7]?|" + //oct ".)"; this.$rules = { "no_regex" : [ DocCommentHighlightRules.getStartRule("doc-start"), comments("no_regex"), { token : "string", regex : "'(?=.)", next : "qstring" }, { token : "string", regex : '"(?=.)', next : "qqstring" }, { token : "constant.numeric", // hex regex : /0(?:[xX][0-9a-fA-F]+|[bB][01]+)\b/ }, { token : "constant.numeric", // float regex : 
/[+-]?\d[\d_]*(?:(?:\.\d*)?(?:[eE][+-]?\d+)?)?\b/ }, { token : [ "storage.type", "punctuation.operator", "support.function", "punctuation.operator", "entity.name.function", "text","keyword.operator" ], regex : "(" + identifierRe + ")(\\.)(prototype)(\\.)(" + identifierRe +")(\\s*)(=)", next: "function_arguments" }, { token : [ "storage.type", "punctuation.operator", "entity.name.function", "text", "keyword.operator", "text", "storage.type", "text", "paren.lparen" ], regex : "(" + identifierRe + ")(\\.)(" + identifierRe +")(\\s*)(=)(\\s*)(function)(\\s*)(\\()", next: "function_arguments" }, { token : [ "entity.name.function", "text", "keyword.operator", "text", "storage.type", "text", "paren.lparen" ], regex : "(" + identifierRe +")(\\s*)(=)(\\s*)(function)(\\s*)(\\()", next: "function_arguments" }, { token : [ "storage.type", "punctuation.operator", "entity.name.function", "text", "keyword.operator", "text", "storage.type", "text", "entity.name.function", "text", "paren.lparen" ], regex : "(" + identifierRe + ")(\\.)(" + identifierRe +")(\\s*)(=)(\\s*)(function)(\\s+)(\\w+)(\\s*)(\\()", next: "function_arguments" }, { token : [ "storage.type", "text", "entity.name.function", "text", "paren.lparen" ], regex : "(function)(\\s+)(" + identifierRe + ")(\\s*)(\\()", next: "function_arguments" }, { token : [ "entity.name.function", "text", "punctuation.operator", "text", "storage.type", "text", "paren.lparen" ], regex : "(" + identifierRe + ")(\\s*)(:)(\\s*)(function)(\\s*)(\\()", next: "function_arguments" }, { token : [ "text", "text", "storage.type", "text", "paren.lparen" ], regex : "(:)(\\s*)(function)(\\s*)(\\()", next: "function_arguments" }, { token : "keyword", regex : "(?:" + kwBeforeRe + ")\\b", next : "start" }, { token : ["support.constant"], regex : /that\b/ }, { token : ["storage.type", "punctuation.operator", "support.function.firebug"], regex : /(console)(\.)(warn|info|log|error|time|trace|timeEnd|assert)\b/ }, { token : keywordMapper, regex : identifierRe }, { token : "punctuation.operator", regex : /[.](?![.])/, next : "property" }, { token : "keyword.operator", regex : /--|\+\+|\.{3}|===|==|=|!=|!==|<+=?|>+=?|!|&&|\|\||\?:|[!$%&*+\-~\/^]=?/, next : "start" }, { token : "punctuation.operator", regex : /[?:,;.]/, next : "start" }, { token : "paren.lparen", regex : /[\[({]/, next : "start" }, { token : "paren.rparen", regex : /[\])}]/ }, { token: "comment", regex: /^#!.*$/ } ], property: [{ token : "text", regex : "\\s+" }, { token : [ "storage.type", "punctuation.operator", "entity.name.function", "text", "keyword.operator", "text", "storage.type", "text", "entity.name.function", "text", "paren.lparen" ], regex : "(" + identifierRe + ")(\\.)(" + identifierRe +")(\\s*)(=)(\\s*)(function)(?:(\\s+)(\\w+))?(\\s*)(\\()", next: "function_arguments" }, { token : "punctuation.operator", regex : /[.](?![.])/ }, { token : "support.function", regex : 
/(s(?:h(?:ift|ow(?:Mod(?:elessDialog|alDialog)|Help))|croll(?:X|By(?:Pages|Lines)?|Y|To)?|t(?:op|rike)|i(?:n|zeToContent|debar|gnText)|ort|u(?:p|b(?:str(?:ing)?)?)|pli(?:ce|t)|e(?:nd|t(?:Re(?:sizable|questHeader)|M(?:i(?:nutes|lliseconds)|onth)|Seconds|Ho(?:tKeys|urs)|Year|Cursor|Time(?:out)?|Interval|ZOptions|Date|UTC(?:M(?:i(?:nutes|lliseconds)|onth)|Seconds|Hours|Date|FullYear)|FullYear|Active)|arch)|qrt|lice|avePreferences|mall)|h(?:ome|andleEvent)|navigate|c(?:har(?:CodeAt|At)|o(?:s|n(?:cat|textual|firm)|mpile)|eil|lear(?:Timeout|Interval)?|a(?:ptureEvents|ll)|reate(?:StyleSheet|Popup|EventObject))|t(?:o(?:GMTString|S(?:tring|ource)|U(?:TCString|pperCase)|Lo(?:caleString|werCase))|est|a(?:n|int(?:Enabled)?))|i(?:s(?:NaN|Finite)|ndexOf|talics)|d(?:isableExternalCapture|ump|etachEvent)|u(?:n(?:shift|taint|escape|watch)|pdateCommands)|j(?:oin|avaEnabled)|p(?:o(?:p|w)|ush|lugins.refresh|a(?:ddings|rse(?:Int|Float)?)|r(?:int|ompt|eference))|e(?:scape|nableExternalCapture|val|lementFromPoint|x(?:p|ec(?:Script|Command)?))|valueOf|UTC|queryCommand(?:State|Indeterm|Enabled|Value)|f(?:i(?:nd|le(?:ModifiedDate|Size|CreatedDate|UpdatedDate)|xed)|o(?:nt(?:size|color)|rward)|loor|romCharCode)|watch|l(?:ink|o(?:ad|g)|astIndexOf)|a(?:sin|nchor|cos|t(?:tachEvent|ob|an(?:2)?)|pply|lert|b(?:s|ort))|r(?:ou(?:nd|teEvents)|e(?:size(?:By|To)|calc|turnValue|place|verse|l(?:oad|ease(?:Capture|Events)))|andom)|g(?:o|et(?:ResponseHeader|M(?:i(?:nutes|lliseconds)|onth)|Se(?:conds|lection)|Hours|Year|Time(?:zoneOffset)?|Da(?:y|te)|UTC(?:M(?:i(?:nutes|lliseconds)|onth)|Seconds|Hours|Da(?:y|te)|FullYear)|FullYear|A(?:ttention|llResponseHeaders)))|m(?:in|ove(?:B(?:y|elow)|To(?:Absolute)?|Above)|ergeAttributes|a(?:tch|rgins|x))|b(?:toa|ig|o(?:ld|rderWidths)|link|ack))\b(?=\()/ }, { token : "support.function.dom", regex : /(s(?:ub(?:stringData|mit)|plitText|e(?:t(?:NamedItem|Attribute(?:Node)?)|lect))|has(?:ChildNodes|Feature)|namedItem|c(?:l(?:ick|o(?:se|neNode))|reate(?:C(?:omment|DATASection|aption)|T(?:Head|extNode|Foot)|DocumentFragment|ProcessingInstruction|E(?:ntityReference|lement)|Attribute))|tabIndex|i(?:nsert(?:Row|Before|Cell|Data)|tem)|open|delete(?:Row|C(?:ell|aption)|T(?:Head|Foot)|Data)|focus|write(?:ln)?|a(?:dd|ppend(?:Child|Data))|re(?:set|place(?:Child|Data)|move(?:NamedItem|Child|Attribute(?:Node)?)?)|get(?:NamedItem|Element(?:sBy(?:Name|TagName|ClassName)|ById)|Attribute(?:Node)?)|blur)\b(?=\()/ }, { token : "support.constant", regex : 
/(s(?:ystemLanguage|cr(?:ipts|ollbars|een(?:X|Y|Top|Left))|t(?:yle(?:Sheets)?|atus(?:Text|bar)?)|ibling(?:Below|Above)|ource|uffixes|e(?:curity(?:Policy)?|l(?:ection|f)))|h(?:istory|ost(?:name)?|as(?:h|Focus))|y|X(?:MLDocument|SLDocument)|n(?:ext|ame(?:space(?:s|URI)|Prop))|M(?:IN_VALUE|AX_VALUE)|c(?:haracterSet|o(?:n(?:structor|trollers)|okieEnabled|lorDepth|mp(?:onents|lete))|urrent|puClass|l(?:i(?:p(?:boardData)?|entInformation)|osed|asses)|alle(?:e|r)|rypto)|t(?:o(?:olbar|p)|ext(?:Transform|Indent|Decoration|Align)|ags)|SQRT(?:1_2|2)|i(?:n(?:ner(?:Height|Width)|put)|ds|gnoreCase)|zIndex|o(?:scpu|n(?:readystatechange|Line)|uter(?:Height|Width)|p(?:sProfile|ener)|ffscreenBuffering)|NEGATIVE_INFINITY|d(?:i(?:splay|alog(?:Height|Top|Width|Left|Arguments)|rectories)|e(?:scription|fault(?:Status|Ch(?:ecked|arset)|View)))|u(?:ser(?:Profile|Language|Agent)|n(?:iqueID|defined)|pdateInterval)|_content|p(?:ixelDepth|ort|ersonalbar|kcs11|l(?:ugins|atform)|a(?:thname|dding(?:Right|Bottom|Top|Left)|rent(?:Window|Layer)?|ge(?:X(?:Offset)?|Y(?:Offset)?))|r(?:o(?:to(?:col|type)|duct(?:Sub)?|mpter)|e(?:vious|fix)))|e(?:n(?:coding|abledPlugin)|x(?:ternal|pando)|mbeds)|v(?:isibility|endor(?:Sub)?|Linkcolor)|URLUnencoded|P(?:I|OSITIVE_INFINITY)|f(?:ilename|o(?:nt(?:Size|Family|Weight)|rmName)|rame(?:s|Element)|gColor)|E|whiteSpace|l(?:i(?:stStyleType|n(?:eHeight|kColor))|o(?:ca(?:tion(?:bar)?|lName)|wsrc)|e(?:ngth|ft(?:Context)?)|a(?:st(?:M(?:odified|atch)|Index|Paren)|yer(?:s|X)|nguage))|a(?:pp(?:MinorVersion|Name|Co(?:deName|re)|Version)|vail(?:Height|Top|Width|Left)|ll|r(?:ity|guments)|Linkcolor|bove)|r(?:ight(?:Context)?|e(?:sponse(?:XML|Text)|adyState))|global|x|m(?:imeTypes|ultiline|enubar|argin(?:Right|Bottom|Top|Left))|L(?:N(?:10|2)|OG(?:10E|2E))|b(?:o(?:ttom|rder(?:Width|RightWidth|BottomWidth|Style|Color|TopWidth|LeftWidth))|ufferDepth|elow|ackground(?:Color|Image)))\b/ }, { token : "identifier", regex : identifierRe }, { regex: "", token: "empty", next: "no_regex" } ], "start": [ DocCommentHighlightRules.getStartRule("doc-start"), comments("start"), { token: "string.regexp", regex: "\\/", next: "regex" }, { token : "text", regex : "\\s+|^$", next : "start" }, { token: "empty", regex: "", next: "no_regex" } ], "regex": [ { token: "regexp.keyword.operator", regex: "\\\\(?:u[\\da-fA-F]{4}|x[\\da-fA-F]{2}|.)" }, { token: "string.regexp", regex: "/[sxngimy]*", next: "no_regex" }, { token : "invalid", regex: /\{\d+\b,?\d*\}[+*]|[+*$^?][+*]|[$^][?]|\?{3,}/ }, { token : "constant.language.escape", regex: /\(\?[:=!]|\)|\{\d+\b,?\d*\}|[+*]\?|[()$^+*?.]/ }, { token : "constant.language.delimiter", regex: /\|/ }, { token: "constant.language.escape", regex: /\[\^?/, next: "regex_character_class" }, { token: "empty", regex: "$", next: "no_regex" }, { defaultToken: "string.regexp" } ], "regex_character_class": [ { token: "regexp.charclass.keyword.operator", regex: "\\\\(?:u[\\da-fA-F]{4}|x[\\da-fA-F]{2}|.)" }, { token: "constant.language.escape", regex: "]", next: "regex" }, { token: "constant.language.escape", regex: "-" }, { token: "empty", regex: "$", next: "no_regex" }, { defaultToken: "string.regexp.charachterclass" } ], "function_arguments": [ { token: "variable.parameter", regex: identifierRe }, { token: "punctuation.operator", regex: "[, ]+" }, { token: "punctuation.operator", regex: "$" }, { token: "empty", regex: "", next: "no_regex" } ], "qqstring" : [ { token : "constant.language.escape", regex : escapedRe }, { token : "string", regex : "\\\\$", next : "qqstring" }, { token : "string", regex : 
'"|$', next : "no_regex" }, { defaultToken: "string" } ], "qstring" : [ { token : "constant.language.escape", regex : escapedRe }, { token : "string", regex : "\\\\$", next : "qstring" }, { token : "string", regex : "'|$", next : "no_regex" }, { defaultToken: "string" } ] }; if (!options || !options.noES6) { this.$rules.no_regex.unshift({ regex: "[{}]", onMatch: function(val, state, stack) { this.next = val == "{" ? this.nextState : ""; if (val == "{" && stack.length) { stack.unshift("start", state); } else if (val == "}" && stack.length) { stack.shift(); this.next = stack.shift(); if (this.next.indexOf("string") != -1 || this.next.indexOf("jsx") != -1) return "paren.quasi.end"; } return val == "{" ? "paren.lparen" : "paren.rparen"; }, nextState: "start" }, { token : "string.quasi.start", regex : /`/, push : [{ token : "constant.language.escape", regex : escapedRe }, { token : "paren.quasi.start", regex : /\${/, push : "start" }, { token : "string.quasi.end", regex : /`/, next : "pop" }, { defaultToken: "string.quasi" }] }); if (!options || options.jsx != false) JSX.call(this); } this.embedRules(DocCommentHighlightRules, "doc-", [ DocCommentHighlightRules.getEndRule("no_regex") ]); this.normalizeRules(); }; oop.inherits(JavaScriptHighlightRules, TextHighlightRules); function JSX() { var tagRegex = identifierRe.replace("\\d", "\\d\\-"); var jsxTag = { onMatch : function(val, state, stack) { var offset = val.charAt(1) == "/" ? 2 : 1; if (offset == 1) { if (state != this.nextState) stack.unshift(this.next, this.nextState, 0); else stack.unshift(this.next); stack[2]++; } else if (offset == 2) { if (state == this.nextState) { stack[1]--; if (!stack[1] || stack[1] < 0) { stack.shift(); stack.shift(); } } } return [{ type: "meta.tag.punctuation." + (offset == 1 ? "" : "end-") + "tag-open.xml", value: val.slice(0, offset) }, { type: "meta.tag.tag-name.xml", value: val.substr(offset) }]; }, regex : "</?" 
+ tagRegex + "", next: "jsxAttributes", nextState: "jsx" }; this.$rules.start.unshift(jsxTag); var jsxJsRule = { regex: "{", token: "paren.quasi.start", push: "start" }; this.$rules.jsx = [ jsxJsRule, jsxTag, {include : "reference"}, {defaultToken: "string"} ]; this.$rules.jsxAttributes = [{ token : "meta.tag.punctuation.tag-close.xml", regex : "/?>", onMatch : function(value, currentState, stack) { if (currentState == stack[0]) stack.shift(); if (value.length == 2) { if (stack[0] == this.nextState) stack[1]--; if (!stack[1] || stack[1] < 0) { stack.splice(0, 2); } } this.next = stack[0] || "start"; return [{type: this.token, value: value}]; }, nextState: "jsx" }, jsxJsRule, comments("jsxAttributes"), { token : "entity.other.attribute-name.xml", regex : tagRegex }, { token : "keyword.operator.attribute-equals.xml", regex : "=" }, { token : "text.tag-whitespace.xml", regex : "\\s+" }, { token : "string.attribute-value.xml", regex : "'", stateName : "jsx_attr_q", push : [ {token : "string.attribute-value.xml", regex: "'", next: "pop"}, {include : "reference"}, {defaultToken : "string.attribute-value.xml"} ] }, { token : "string.attribute-value.xml", regex : '"', stateName : "jsx_attr_qq", push : [ {token : "string.attribute-value.xml", regex: '"', next: "pop"}, {include : "reference"}, {defaultToken : "string.attribute-value.xml"} ] }, jsxTag ]; this.$rules.reference = [{ token : "constant.language.escape.reference.xml", regex : "(?:&#[0-9]+;)|(?:&#x[0-9a-fA-F]+;)|(?:&[a-zA-Z0-9_:\\.-]+;)" }]; } function comments(next) { return [ { token : "comment", // multi line comment regex : /\/\*/, next: [ DocCommentHighlightRules.getTagRule(), {token : "comment", regex : "\\*\\/", next : next || "pop"}, {defaultToken : "comment", caseInsensitive: true} ] }, { token : "comment", regex : "\\/\\/", next: [ DocCommentHighlightRules.getTagRule(), {token : "comment", regex : "$|^", next : next || "pop"}, {defaultToken : "comment", caseInsensitive: true} ] } ]; } exports.JavaScriptHighlightRules = JavaScriptHighlightRules; }); ace.define("ace/mode/css_highlight_rules",["require","exports","module","ace/lib/oop","ace/lib/lang","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var lang = require("../lib/lang"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var supportType = exports.supportType = 
"align-content|align-items|align-self|all|animation|animation-delay|animation-direction|animation-duration|animation-fill-mode|animation-iteration-count|animation-name|animation-play-state|animation-timing-function|backface-visibility|background|background-attachment|background-blend-mode|background-clip|background-color|background-image|background-origin|background-position|background-repeat|background-size|border|border-bottom|border-bottom-color|border-bottom-left-radius|border-bottom-right-radius|border-bottom-style|border-bottom-width|border-collapse|border-color|border-image|border-image-outset|border-image-repeat|border-image-slice|border-image-source|border-image-width|border-left|border-left-color|border-left-style|border-left-width|border-radius|border-right|border-right-color|border-right-style|border-right-width|border-spacing|border-style|border-top|border-top-color|border-top-left-radius|border-top-right-radius|border-top-style|border-top-width|border-width|bottom|box-shadow|box-sizing|caption-side|clear|clip|color|column-count|column-fill|column-gap|column-rule|column-rule-color|column-rule-style|column-rule-width|column-span|column-width|columns|content|counter-increment|counter-reset|cursor|direction|display|empty-cells|filter|flex|flex-basis|flex-direction|flex-flow|flex-grow|flex-shrink|flex-wrap|float|font|font-family|font-size|font-size-adjust|font-stretch|font-style|font-variant|font-weight|hanging-punctuation|height|justify-content|left|letter-spacing|line-height|list-style|list-style-image|list-style-position|list-style-type|margin|margin-bottom|margin-left|margin-right|margin-top|max-height|max-width|min-height|min-width|nav-down|nav-index|nav-left|nav-right|nav-up|opacity|order|outline|outline-color|outline-offset|outline-style|outline-width|overflow|overflow-x|overflow-y|padding|padding-bottom|padding-left|padding-right|padding-top|page-break-after|page-break-before|page-break-inside|perspective|perspective-origin|position|quotes|resize|right|tab-size|table-layout|text-align|text-align-last|text-decoration|text-decoration-color|text-decoration-line|text-decoration-style|text-indent|text-justify|text-overflow|text-shadow|text-transform|top|transform|transform-origin|transform-style|transition|transition-delay|transition-duration|transition-property|transition-timing-function|unicode-bidi|vertical-align|visibility|white-space|width|word-break|word-spacing|word-wrap|z-index"; var supportFunction = exports.supportFunction = "rgb|rgba|url|attr|counter|counters"; var supportConstant = exports.supportConstant = 
"absolute|after-edge|after|all-scroll|all|alphabetic|always|antialiased|armenian|auto|avoid-column|avoid-page|avoid|balance|baseline|before-edge|before|below|bidi-override|block-line-height|block|bold|bolder|border-box|both|bottom|box|break-all|break-word|capitalize|caps-height|caption|center|central|char|circle|cjk-ideographic|clone|close-quote|col-resize|collapse|column|consider-shifts|contain|content-box|cover|crosshair|cubic-bezier|dashed|decimal-leading-zero|decimal|default|disabled|disc|disregard-shifts|distribute-all-lines|distribute-letter|distribute-space|distribute|dotted|double|e-resize|ease-in|ease-in-out|ease-out|ease|ellipsis|end|exclude-ruby|fill|fixed|georgian|glyphs|grid-height|groove|hand|hanging|hebrew|help|hidden|hiragana-iroha|hiragana|horizontal|icon|ideograph-alpha|ideograph-numeric|ideograph-parenthesis|ideograph-space|ideographic|inactive|include-ruby|inherit|initial|inline-block|inline-box|inline-line-height|inline-table|inline|inset|inside|inter-ideograph|inter-word|invert|italic|justify|katakana-iroha|katakana|keep-all|last|left|lighter|line-edge|line-through|line|linear|list-item|local|loose|lower-alpha|lower-greek|lower-latin|lower-roman|lowercase|lr-tb|ltr|mathematical|max-height|max-size|medium|menu|message-box|middle|move|n-resize|ne-resize|newspaper|no-change|no-close-quote|no-drop|no-open-quote|no-repeat|none|normal|not-allowed|nowrap|nw-resize|oblique|open-quote|outset|outside|overline|padding-box|page|pointer|pre-line|pre-wrap|pre|preserve-3d|progress|relative|repeat-x|repeat-y|repeat|replaced|reset-size|ridge|right|round|row-resize|rtl|s-resize|scroll|se-resize|separate|slice|small-caps|small-caption|solid|space|square|start|static|status-bar|step-end|step-start|steps|stretch|strict|sub|super|sw-resize|table-caption|table-cell|table-column-group|table-column|table-footer-group|table-header-group|table-row-group|table-row|table|tb-rl|text-after-edge|text-before-edge|text-bottom|text-size|text-top|text|thick|thin|transparent|underline|upper-alpha|upper-latin|upper-roman|uppercase|use-script|vertical-ideographic|vertical-text|visible|w-resize|wait|whitespace|z-index|zero"; var supportConstantColor = exports.supportConstantColor = "aqua|black|blue|fuchsia|gray|green|lime|maroon|navy|olive|orange|purple|red|silver|teal|white|yellow"; var supportConstantFonts = exports.supportConstantFonts = "arial|century|comic|courier|cursive|fantasy|garamond|georgia|helvetica|impact|lucida|symbol|system|tahoma|times|trebuchet|utopia|verdana|webdings|sans-serif|serif|monospace"; var numRe = exports.numRe = "\\-?(?:(?:[0-9]+)|(?:[0-9]*\\.[0-9]+))"; var pseudoElements = exports.pseudoElements = "(\\:+)\\b(after|before|first-letter|first-line|moz-selection|selection)\\b"; var pseudoClasses = exports.pseudoClasses = "(:)\\b(active|checked|disabled|empty|enabled|first-child|first-of-type|focus|hover|indeterminate|invalid|last-child|last-of-type|link|not|nth-child|nth-last-child|nth-last-of-type|nth-of-type|only-child|only-of-type|required|root|target|valid|visited)\\b"; var CssHighlightRules = function() { var keywordMapper = this.createKeywordMapper({ "support.function": supportFunction, "support.constant": supportConstant, "support.type": supportType, "support.constant.color": supportConstantColor, "support.constant.fonts": supportConstantFonts }, "text", true); this.$rules = { "start" : [{ token : "comment", // multi line comment regex : "\\/\\*", push : "comment" }, { token: "paren.lparen", regex: "\\{", push: "ruleset" }, { token: "string", regex: "@.*?{", push: "media" }, 
{ token: "keyword", regex: "#[a-z0-9-_]+" }, { token: "variable", regex: "\\.[a-z0-9-_]+" }, { token: "string", regex: ":[a-z0-9-_]+" }, { token: "constant", regex: "[a-z0-9-_]+" }, { caseInsensitive: true }], "media" : [{ token : "comment", // multi line comment regex : "\\/\\*", push : "comment" }, { token: "paren.lparen", regex: "\\{", push: "ruleset" }, { token: "string", regex: "\\}", next: "pop" }, { token: "keyword", regex: "#[a-z0-9-_]+" }, { token: "variable", regex: "\\.[a-z0-9-_]+" }, { token: "string", regex: ":[a-z0-9-_]+" }, { token: "constant", regex: "[a-z0-9-_]+" }, { caseInsensitive: true }], "comment" : [{ token : "comment", regex : "\\*\\/", next : "pop" }, { defaultToken : "comment" }], "ruleset" : [ { token : "paren.rparen", regex : "\\}", next: "pop" }, { token : "comment", // multi line comment regex : "\\/\\*", push : "comment" }, { token : "string", // single line regex : '["](?:(?:\\\\.)|(?:[^"\\\\]))*?["]' }, { token : "string", // single line regex : "['](?:(?:\\\\.)|(?:[^'\\\\]))*?[']" }, { token : ["constant.numeric", "keyword"], regex : "(" + numRe + ")(ch|cm|deg|em|ex|fr|gd|grad|Hz|in|kHz|mm|ms|pc|pt|px|rad|rem|s|turn|vh|vm|vw|%)" }, { token : "constant.numeric", regex : numRe }, { token : "constant.numeric", // hex6 color regex : "#[a-f0-9]{6}" }, { token : "constant.numeric", // hex3 color regex : "#[a-f0-9]{3}" }, { token : ["punctuation", "entity.other.attribute-name.pseudo-element.css"], regex : pseudoElements }, { token : ["punctuation", "entity.other.attribute-name.pseudo-class.css"], regex : pseudoClasses }, { token : ["support.function", "string", "support.function"], regex : "(url\\()(.*)(\\))" }, { token : keywordMapper, regex : "\\-?[a-zA-Z_][a-zA-Z0-9_\\-]*" }, { caseInsensitive: true }] }; this.normalizeRules(); }; oop.inherits(CssHighlightRules, TextHighlightRules); exports.CssHighlightRules = CssHighlightRules; }); ace.define("ace/mode/xml_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var XmlHighlightRules = function(normalize) { var tagRegex = "[_:a-zA-Z\xc0-\uffff][-_:.a-zA-Z0-9\xc0-\uffff]*"; this.$rules = { start : [ {token : "string.cdata.xml", regex : "<\\!\\[CDATA\\[", next : "cdata"}, { token : ["punctuation.xml-decl.xml", "keyword.xml-decl.xml"], regex : "(<\\?)(xml)(?=[\\s])", next : "xml_decl", caseInsensitive: true }, { token : ["punctuation.instruction.xml", "keyword.instruction.xml"], regex : "(<\\?)(" + tagRegex + ")", next : "processing_instruction" }, {token : "comment.xml", regex : "<\\!--", next : "comment"}, { token : ["xml-pe.doctype.xml", "xml-pe.doctype.xml"], regex : "(<\\!)(DOCTYPE)(?=[\\s])", next : "doctype", caseInsensitive: true }, {include : "tag"}, {token : "text.end-tag-open.xml", regex: "</"}, {token : "text.tag-open.xml", regex: "<"}, {include : "reference"}, {defaultToken : "text.xml"} ], xml_decl : [{ token : "entity.other.attribute-name.decl-attribute-name.xml", regex : "(?:" + tagRegex + ":)?" 
+ tagRegex + "" }, { token : "keyword.operator.decl-attribute-equals.xml", regex : "=" }, { include: "whitespace" }, { include: "string" }, { token : "punctuation.xml-decl.xml", regex : "\\?>", next : "start" }], processing_instruction : [ {token : "punctuation.instruction.xml", regex : "\\?>", next : "start"}, {defaultToken : "instruction.xml"} ], doctype : [ {include : "whitespace"}, {include : "string"}, {token : "xml-pe.doctype.xml", regex : ">", next : "start"}, {token : "xml-pe.xml", regex : "[-_a-zA-Z0-9:]+"}, {token : "punctuation.int-subset", regex : "\\[", push : "int_subset"} ], int_subset : [{ token : "text.xml", regex : "\\s+" }, { token: "punctuation.int-subset.xml", regex: "]", next: "pop" }, { token : ["punctuation.markup-decl.xml", "keyword.markup-decl.xml"], regex : "(<\\!)(" + tagRegex + ")", push : [{ token : "text", regex : "\\s+" }, { token : "punctuation.markup-decl.xml", regex : ">", next : "pop" }, {include : "string"}] }], cdata : [ {token : "string.cdata.xml", regex : "\\]\\]>", next : "start"}, {token : "text.xml", regex : "\\s+"}, {token : "text.xml", regex : "(?:[^\\]]|\\](?!\\]>))+"} ], comment : [ {token : "comment.xml", regex : "-->", next : "start"}, {defaultToken : "comment.xml"} ], reference : [{ token : "constant.language.escape.reference.xml", regex : "(?:&#[0-9]+;)|(?:&#x[0-9a-fA-F]+;)|(?:&[a-zA-Z0-9_:\\.-]+;)" }], attr_reference : [{ token : "constant.language.escape.reference.attribute-value.xml", regex : "(?:&#[0-9]+;)|(?:&#x[0-9a-fA-F]+;)|(?:&[a-zA-Z0-9_:\\.-]+;)" }], tag : [{ token : ["meta.tag.punctuation.tag-open.xml", "meta.tag.punctuation.end-tag-open.xml", "meta.tag.tag-name.xml"], regex : "(?:(<)|(</))((?:" + tagRegex + ":)?" + tagRegex + ")", next: [ {include : "attributes"}, {token : "meta.tag.punctuation.tag-close.xml", regex : "/?>", next : "start"} ] }], tag_whitespace : [ {token : "text.tag-whitespace.xml", regex : "\\s+"} ], whitespace : [ {token : "text.whitespace.xml", regex : "\\s+"} ], string: [{ token : "string.xml", regex : "'", push : [ {token : "string.xml", regex: "'", next: "pop"}, {defaultToken : "string.xml"} ] }, { token : "string.xml", regex : '"', push : [ {token : "string.xml", regex: '"', next: "pop"}, {defaultToken : "string.xml"} ] }], attributes: [{ token : "entity.other.attribute-name.xml", regex : "(?:" + tagRegex + ":)?" + tagRegex + "" }, { token : "keyword.operator.attribute-equals.xml", regex : "=" }, { include: "tag_whitespace" }, { include: "attribute_value" }], attribute_value: [{ token : "string.attribute-value.xml", regex : "'", push : [ {token : "string.attribute-value.xml", regex: "'", next: "pop"}, {include : "attr_reference"}, {defaultToken : "string.attribute-value.xml"} ] }, { token : "string.attribute-value.xml", regex : '"', push : [ {token : "string.attribute-value.xml", regex: '"', next: "pop"}, {include : "attr_reference"}, {defaultToken : "string.attribute-value.xml"} ] }] }; if (this.constructor === XmlHighlightRules) this.normalizeRules(); }; (function() { this.embedTagRules = function(HighlightRules, prefix, tag){ this.$rules.tag.unshift({ token : ["meta.tag.punctuation.tag-open.xml", "meta.tag." 
+ tag + ".tag-name.xml"], regex : "(<)(" + tag + "(?=\\s|>|$))", next: [ {include : "attributes"}, {token : "meta.tag.punctuation.tag-close.xml", regex : "/?>", next : prefix + "start"} ] }); this.$rules[tag + "-end"] = [ {include : "attributes"}, {token : "meta.tag.punctuation.tag-close.xml", regex : "/?>", next: "start", onMatch : function(value, currentState, stack) { stack.splice(0); return this.token; }} ] this.embedRules(HighlightRules, prefix, [{ token: ["meta.tag.punctuation.end-tag-open.xml", "meta.tag." + tag + ".tag-name.xml"], regex : "(</)(" + tag + "(?=\\s|>|$))", next: tag + "-end" }, { token: "string.cdata.xml", regex : "<\\!\\[CDATA\\[" }, { token: "string.cdata.xml", regex : "\\]\\]>" }]); }; }).call(TextHighlightRules.prototype); oop.inherits(XmlHighlightRules, TextHighlightRules); exports.XmlHighlightRules = XmlHighlightRules; }); ace.define("ace/mode/html_highlight_rules",["require","exports","module","ace/lib/oop","ace/lib/lang","ace/mode/css_highlight_rules","ace/mode/javascript_highlight_rules","ace/mode/xml_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var lang = require("../lib/lang"); var CssHighlightRules = require("./css_highlight_rules").CssHighlightRules; var JavaScriptHighlightRules = require("./javascript_highlight_rules").JavaScriptHighlightRules; var XmlHighlightRules = require("./xml_highlight_rules").XmlHighlightRules; var tagMap = lang.createMap({ a : 'anchor', button : 'form', form : 'form', img : 'image', input : 'form', label : 'form', option : 'form', script : 'script', select : 'form', textarea : 'form', style : 'style', table : 'table', tbody : 'table', td : 'table', tfoot : 'table', th : 'table', tr : 'table' }); var HtmlHighlightRules = function() { XmlHighlightRules.call(this); this.addRules({ attributes: [{ include : "tag_whitespace" }, { token : "entity.other.attribute-name.xml", regex : "[-_a-zA-Z0-9:.]+" }, { token : "keyword.operator.attribute-equals.xml", regex : "=", push : [{ include: "tag_whitespace" }, { token : "string.unquoted.attribute-value.html", regex : "[^<>='\"`\\s]+", next : "pop" }, { token : "empty", regex : "", next : "pop" }] }, { include : "attribute_value" }], tag: [{ token : function(start, tag) { var group = tagMap[tag]; return ["meta.tag.punctuation." + (start == "<" ? "" : "end-") + "tag-open.xml", "meta.tag" + (group ? "." 
+ group : "") + ".tag-name.xml"]; }, regex : "(</?)([-_a-zA-Z0-9:.]+)", next: "tag_stuff" }], tag_stuff: [ {include : "attributes"}, {token : "meta.tag.punctuation.tag-close.xml", regex : "/?>", next : "start"} ] }); this.embedTagRules(CssHighlightRules, "css-", "style"); this.embedTagRules(new JavaScriptHighlightRules({jsx: false}).getRules(), "js-", "script"); if (this.constructor === HtmlHighlightRules) this.normalizeRules(); }; oop.inherits(HtmlHighlightRules, XmlHighlightRules); exports.HtmlHighlightRules = HtmlHighlightRules; }); ace.define("ace/mode/markdown_highlight_rules",["require","exports","module","ace/lib/oop","ace/lib/lang","ace/mode/text_highlight_rules","ace/mode/javascript_highlight_rules","ace/mode/xml_highlight_rules","ace/mode/html_highlight_rules","ace/mode/css_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var lang = require("../lib/lang"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var JavaScriptHighlightRules = require("./javascript_highlight_rules").JavaScriptHighlightRules; var XmlHighlightRules = require("./xml_highlight_rules").XmlHighlightRules; var HtmlHighlightRules = require("./html_highlight_rules").HtmlHighlightRules; var CssHighlightRules = require("./css_highlight_rules").CssHighlightRules; var escaped = function(ch) { return "(?:[^" + lang.escapeRegExp(ch) + "\\\\]|\\\\.)*"; } function github_embed(tag, prefix) { return { // Github style block token : "support.function", regex : "^\\s*```" + tag + "\\s*$", push : prefix + "start" }; } var MarkdownHighlightRules = function() { HtmlHighlightRules.call(this); this.$rules["start"].unshift({ token : "empty_line", regex : '^$', next: "allowBlock" }, { // h1 token: "markup.heading.1", regex: "^=+(?=\\s*$)" }, { // h2 token: "markup.heading.2", regex: "^\\-+(?=\\s*$)" }, { token : function(value) { return "markup.heading." + value.length; }, regex : /^#{1,6}(?=\s*[^ #]|\s+#.)/, next : "header" }, github_embed("(?:javascript|js)", "jscode-"), github_embed("xml", "xmlcode-"), github_embed("html", "htmlcode-"), github_embed("css", "csscode-"), { // Github style block token : "support.function", regex : "^\\s*```\\s*\\S*(?:{.*?\\})?\\s*$", next : "githubblock" }, { // block quote token : "string.blockquote", regex : "^\\s*>\\s*(?:[*+-]|\\d+\\.)?\\s+", next : "blockquote" }, { // HR * - _ token : "constant", regex : "^ {0,2}(?:(?: ?\\* ?){3,}|(?: ?\\- ?){3,}|(?: ?\\_ ?){3,})\\s*$", next: "allowBlock" }, { // list token : "markup.list", regex : "^\\s{0,3}(?:[*+-]|\\d+\\.)\\s+", next : "listblock-start" }, { include : "basic" }); this.addRules({ "basic" : [{ token : "constant.language.escape", regex : /\\[\\`*_{}\[\]()#+\-.!]/ }, { // code span ` token : "support.function", regex : "(`+)(.*?[^`])(\\1)" }, { // reference token : ["text", "constant", "text", "url", "string", "text"], regex : "^([ ]{0,3}\\[)([^\\]]+)(\\]:\\s*)([^ ]+)(\\s*(?:[\"][^\"]+[\"])?(\\s*))$" }, { // link by reference token : ["text", "string", "text", "constant", "text"], regex : "(\\[)(" + escaped("]") + ")(\\]\\s*\\[)("+ escaped("]") + ")(\\])" }, { // link by url token : ["text", "string", "text", "markup.underline", "string", "text"], regex : "(\\[)(" + // [ escaped("]") + // link text ")(\\]\\()"+ // ]( '((?:[^\\)\\s\\\\]|\\\\.|\\s(?=[^"]))*)' + // href '(\\s*"' + escaped('"') + '"\\s*)?' 
+ // "title" "(\\))" // ) }, { // strong ** __ token : "string.strong", regex : "([*]{2}|[_]{2}(?=\\S))(.*?\\S[*_]*)(\\1)" }, { // emphasis * _ token : "string.emphasis", regex : "([*]|[_](?=\\S))(.*?\\S[*_]*)(\\1)" }, { // token : ["text", "url", "text"], regex : "(<)("+ "(?:https?|ftp|dict):[^'\">\\s]+"+ "|"+ "(?:mailto:)?[-.\\w]+\\@[-a-z0-9]+(?:\\.[-a-z0-9]+)*\\.[a-z]+"+ ")(>)" }], "allowBlock": [ {token : "support.function", regex : "^ {4}.+", next : "allowBlock"}, {token : "empty_line", regex : '^$', next: "allowBlock"}, {token : "empty", regex : "", next : "start"} ], "header" : [{ regex: "$", next : "start" }, { include: "basic" }, { defaultToken : "heading" } ], "listblock-start" : [{ token : "support.variable", regex : /(?:\[[ x]\])?/, next : "listblock" }], "listblock" : [ { // Lists only escape on completely blank lines. token : "empty_line", regex : "^$", next : "start" }, { // list token : "markup.list", regex : "^\\s{0,3}(?:[*+-]|\\d+\\.)\\s+", next : "listblock-start" }, { include : "basic", noEscape: true }, { // Github style block token : "support.function", regex : "^\\s*```\\s*[a-zA-Z]*(?:{.*?\\})?\\s*$", next : "githubblock" }, { defaultToken : "list" //do not use markup.list to allow stling leading `*` differntly } ], "blockquote" : [ { // Blockquotes only escape on blank lines. token : "empty_line", regex : "^\\s*$", next : "start" }, { // block quote token : "string.blockquote", regex : "^\\s*>\\s*(?:[*+-]|\\d+\\.)?\\s+", next : "blockquote" }, { include : "basic", noEscape: true }, { defaultToken : "string.blockquote" } ], "githubblock" : [ { token : "support.function", regex : "^\\s*```", next : "start" }, { token : "support.function", regex : ".+" } ] }); this.embedRules(JavaScriptHighlightRules, "jscode-", [{ token : "support.function", regex : "^\\s*```", next : "pop" }]); this.embedRules(HtmlHighlightRules, "htmlcode-", [{ token : "support.function", regex : "^\\s*```", next : "pop" }]); this.embedRules(CssHighlightRules, "csscode-", [{ token : "support.function", regex : "^\\s*```", next : "pop" }]); this.embedRules(XmlHighlightRules, "xmlcode-", [{ token : "support.function", regex : "^\\s*```", next : "pop" }]); this.normalizeRules(); }; oop.inherits(MarkdownHighlightRules, TextHighlightRules); exports.MarkdownHighlightRules = MarkdownHighlightRules; }); ace.define("ace/mode/mask_highlight_rules",["require","exports","module","ace/lib/oop","ace/lib/lang","ace/mode/text_highlight_rules","ace/mode/javascript_highlight_rules","ace/mode/css_highlight_rules","ace/mode/markdown_highlight_rules","ace/mode/html_highlight_rules"], function(require, exports, module) { "use strict"; exports.MaskHighlightRules = MaskHighlightRules; var oop = require("../lib/oop"); var lang = require("../lib/lang"); var TextRules = require("./text_highlight_rules").TextHighlightRules; var JSRules = require("./javascript_highlight_rules").JavaScriptHighlightRules; var CssRules = require("./css_highlight_rules").CssHighlightRules; var MDRules = require("./markdown_highlight_rules").MarkdownHighlightRules; var HTMLRules = require("./html_highlight_rules").HtmlHighlightRules; var token_TAG = "keyword.support.constant.language", token_COMPO = "support.function.markup.bold", token_KEYWORD = "keyword", token_LANG = "constant.language", token_UTIL = "keyword.control.markup.italic", token_ATTR = "support.variable.class", token_PUNKT = "keyword.operator", token_ITALIC = "markup.italic", token_BOLD = "markup.bold", token_LPARE = "paren.lparen", token_RPARE = "paren.rparen"; var const_FUNCTIONS, 
const_KEYWORDS, const_CONST, const_TAGS; (function(){ const_FUNCTIONS = lang.arrayToMap( ("log").split("|") ); const_CONST = lang.arrayToMap( (":dualbind|:bind|:import|slot|event|style|html|markdown|md").split("|") ); const_KEYWORDS = lang.arrayToMap( ("debugger|define|var|if|each|for|of|else|switch|case|with|visible|+if|+each|+for|+switch|+with|+visible|include|import").split("|") ); const_TAGS = lang.arrayToMap( ("a|abbr|acronym|address|applet|area|article|aside|audio|b|base|basefont|bdo|" + "big|blockquote|body|br|button|canvas|caption|center|cite|code|col|colgroup|" + "command|datalist|dd|del|details|dfn|dir|div|dl|dt|em|embed|fieldset|" + "figcaption|figure|font|footer|form|frame|frameset|h1|h2|h3|h4|h5|h6|head|" + "header|hgroup|hr|html|i|iframe|img|input|ins|keygen|kbd|label|legend|li|" + "link|map|mark|menu|meta|meter|nav|noframes|noscript|object|ol|optgroup|" + "option|output|p|param|pre|progress|q|rp|rt|ruby|s|samp|script|section|select|" + "small|source|span|strike|strong|style|sub|summary|sup|table|tbody|td|" + "textarea|tfoot|th|thead|time|title|tr|tt|u|ul|var|video|wbr|xmp").split("|") ); }()); function MaskHighlightRules () { this.$rules = { "start" : [ Token("comment", "\\/\\/.*$"), Token("comment", "\\/\\*", [ Token("comment", ".*?\\*\\/", "start"), Token("comment", ".+") ]), Blocks.string("'''"), Blocks.string('"""'), Blocks.string('"'), Blocks.string("'"), Blocks.syntax(/(markdown|md)\b/, "md-multiline", "multiline"), Blocks.syntax(/html\b/, "html-multiline", "multiline"), Blocks.syntax(/(slot|event)\b/, "js-block", "block"), Blocks.syntax(/style\b/, "css-block", "block"), Blocks.syntax(/var\b/, "js-statement", "attr"), Blocks.tag(), Token(token_LPARE, "[[({>]"), Token(token_RPARE, "[\\])};]", "start"), { caseInsensitive: true } ] }; var rules = this; addJavaScript("interpolation", /\]/, token_RPARE + "." + token_ITALIC); addJavaScript("statement", /\)|}|;/); addJavaScript("block", /\}/); addCss(); addMarkdown(); addHtml(); function addJavaScript(name, escape, closeType) { var prfx = "js-" + name + "-", rootTokens = name === "block" ? ["start"] : ["start", "no_regex"]; add( JSRules , prfx , escape , rootTokens , closeType ); } function addCss() { add(CssRules, "css-block-", /\}/); } function addMarkdown() { add(MDRules, "md-multiline-", /("""|''')/, []); } function addHtml() { add(HTMLRules, "html-multiline-", /("""|''')/); } function add(Rules, strPrfx, rgxEnd, rootTokens, closeType) { var next = "pop"; var tokens = rootTokens || [ "start" ]; if (tokens.length === 0) { tokens = null; } if (/block|multiline/.test(strPrfx)) { next = strPrfx + "end"; rules.$rules[next] = [ Token("empty", "", "start") ]; } rules.embedRules( Rules , strPrfx , [ Token(closeType || token_RPARE, rgxEnd, next) ] , tokens , tokens == null ? true : false ); } this.normalizeRules(); } oop.inherits(MaskHighlightRules, TextRules); var Blocks = { string: function(str, next){ var token = Token( "string.start" , str , [ Token(token_LPARE + "." + token_ITALIC, /~\[/, Blocks.interpolation()), Token("string.end", str, "pop"), { defaultToken: "string" } ] , next ); if (str.length === 1){ var escaped = Token("string.escape", "\\\\" + str); token.push.unshift(escaped); } return token; }, interpolation: function(){ return [ Token(token_UTIL, /\s*\w*\s*:/), "js-interpolation-start" ]; }, tagHead: function (rgx) { return Token(token_ATTR, rgx, [ Token(token_ATTR, /[\w\-_]+/), Token(token_LPARE + "." 
+ token_ITALIC, /~\[/, Blocks.interpolation()), Blocks.goUp() ]); }, tag: function () { return { token: 'tag', onMatch : function(value) { if (void 0 !== const_KEYWORDS[value]) return token_KEYWORD; if (void 0 !== const_CONST[value]) return token_LANG; if (void 0 !== const_FUNCTIONS[value]) return "support.function"; if (void 0 !== const_TAGS[value.toLowerCase()]) return token_TAG; return token_COMPO; }, regex : /([@\w\-_:+]+)|((^|\s)(?=\s*(\.|#)))/, push: [ Blocks.tagHead(/\./) , Blocks.tagHead(/#/) , Blocks.expression(), Blocks.attribute(), Token(token_LPARE, /[;>{]/, "pop") ] }; }, syntax: function(rgx, next, type){ return { token: token_LANG, regex : rgx, push: ({ "attr": [ next + "-start", Token(token_PUNKT, /;/, "start") ], "multiline": [ Blocks.tagHead(/\./) , Blocks.tagHead(/#/) , Blocks.attribute(), Blocks.expression(), Token(token_LPARE, /[>\{]/), Token(token_PUNKT, /;/, "start"), Token(token_LPARE, /'''|"""/, [ next + "-start" ]) ], "block": [ Blocks.tagHead(/\./) , Blocks.tagHead(/#/) , Blocks.attribute(), Blocks.expression(), Token(token_LPARE, /\{/, [ next + "-start" ]) ] })[type] }; }, attribute: function(){ return Token(function(value){ return /^x\-/.test(value) ? token_ATTR + "." + token_BOLD : token_ATTR; }, /[\w_-]+/, [ Token(token_PUNKT, /\s*=\s*/, [ Blocks.string('"'), Blocks.string("'"), Blocks.word(), Blocks.goUp() ]), Blocks.goUp() ]); }, expression: function(){ return Token(token_LPARE, /\(/, [ "js-statement-start" ]); }, word: function(){ return Token("string", /[\w-_]+/); }, goUp: function(){ return Token("text", "", "pop"); }, goStart: function(){ return Token("text", "", "start"); } }; function Token(token, rgx, mix) { var push, next, onMatch; if (arguments.length === 4) { push = mix; next = arguments[3]; } else if (typeof mix === "string") { next = mix; } else { push = mix; } if (typeof token === "function") { onMatch = token; token = "empty"; } return { token: token, regex: rgx, push: push, next: next, onMatch: onMatch }; } }); ace.define("ace/mode/matching_brace_outdent",["require","exports","module","ace/range"], function(require, exports, module) { "use strict"; var Range = require("../range").Range; var MatchingBraceOutdent = function() {}; (function() { this.checkOutdent = function(line, input) { if (! 
/^\s+$/.test(line)) return false; return /^\s*\}/.test(input); }; this.autoOutdent = function(doc, row) { var line = doc.getLine(row); var match = line.match(/^(\s*\})/); if (!match) return 0; var column = match[1].length; var openBracePos = doc.findMatchingBracket({row: row, column: column}); if (!openBracePos || openBracePos.row == row) return 0; var indent = this.$getIndent(doc.getLine(openBracePos.row)); doc.replace(new Range(row, 0, row, column-1), indent); }; this.$getIndent = function(line) { return line.match(/^\s*/)[0]; }; }).call(MatchingBraceOutdent.prototype); exports.MatchingBraceOutdent = MatchingBraceOutdent; }); ace.define("ace/mode/behaviour/css",["require","exports","module","ace/lib/oop","ace/mode/behaviour","ace/mode/behaviour/cstyle","ace/token_iterator"], function(require, exports, module) { "use strict"; var oop = require("../../lib/oop"); var Behaviour = require("../behaviour").Behaviour; var CstyleBehaviour = require("./cstyle").CstyleBehaviour; var TokenIterator = require("../../token_iterator").TokenIterator; var CssBehaviour = function () { this.inherit(CstyleBehaviour); this.add("colon", "insertion", function (state, action, editor, session, text) { if (text === ':') { var cursor = editor.getCursorPosition(); var iterator = new TokenIterator(session, cursor.row, cursor.column); var token = iterator.getCurrentToken(); if (token && token.value.match(/\s+/)) { token = iterator.stepBackward(); } if (token && token.type === 'support.type') { var line = session.doc.getLine(cursor.row); var rightChar = line.substring(cursor.column, cursor.column + 1); if (rightChar === ':') { return { text: '', selection: [1, 1] } } if (!line.substring(cursor.column).match(/^\s*;/)) { return { text: ':;', selection: [1, 1] } } } } }); this.add("colon", "deletion", function (state, action, editor, session, range) { var selected = session.doc.getTextRange(range); if (!range.isMultiLine() && selected === ':') { var cursor = editor.getCursorPosition(); var iterator = new TokenIterator(session, cursor.row, cursor.column); var token = iterator.getCurrentToken(); if (token && token.value.match(/\s+/)) { token = iterator.stepBackward(); } if (token && token.type === 'support.type') { var line = session.doc.getLine(range.start.row); var rightChar = line.substring(range.end.column, range.end.column + 1); if (rightChar === ';') { range.end.column ++; return range; } } } }); this.add("semicolon", "insertion", function (state, action, editor, session, text) { if (text === ';') { var cursor = editor.getCursorPosition(); var line = session.doc.getLine(cursor.row); var rightChar = line.substring(cursor.column, cursor.column + 1); if (rightChar === ';') { return { text: '', selection: [1, 1] } } } }); } oop.inherits(CssBehaviour, CstyleBehaviour); exports.CssBehaviour = CssBehaviour; }); ace.define("ace/mode/folding/cstyle",["require","exports","module","ace/lib/oop","ace/range","ace/mode/folding/fold_mode"], function(require, exports, module) { "use strict"; var oop = require("../../lib/oop"); var Range = require("../../range").Range; var BaseFoldMode = require("./fold_mode").FoldMode; var FoldMode = exports.FoldMode = function(commentRegex) { if (commentRegex) { this.foldingStartMarker = new RegExp( this.foldingStartMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.start) ); this.foldingStopMarker = new RegExp( this.foldingStopMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.end) ); } }; oop.inherits(FoldMode, BaseFoldMode); (function() { this.foldingStartMarker = 
/(\{|\[)[^\}\]]*$|^\s*(\/\*)/; this.foldingStopMarker = /^[^\[\{]*(\}|\])|^[\s\*]*(\*\/)/; this.singleLineBlockCommentRe= /^\s*(\/\*).*\*\/\s*$/; this.tripleStarBlockCommentRe = /^\s*(\/\*\*\*).*\*\/\s*$/; this.startRegionRe = /^\s*(\/\*|\/\/)#?region\b/; this._getFoldWidgetBase = this.getFoldWidget; this.getFoldWidget = function(session, foldStyle, row) { var line = session.getLine(row); if (this.singleLineBlockCommentRe.test(line)) { if (!this.startRegionRe.test(line) && !this.tripleStarBlockCommentRe.test(line)) return ""; } var fw = this._getFoldWidgetBase(session, foldStyle, row); if (!fw && this.startRegionRe.test(line)) return "start"; // lineCommentRegionStart return fw; }; this.getFoldWidgetRange = function(session, foldStyle, row, forceMultiline) { var line = session.getLine(row); if (this.startRegionRe.test(line)) return this.getCommentRegionBlock(session, line, row); var match = line.match(this.foldingStartMarker); if (match) { var i = match.index; if (match[1]) return this.openingBracketBlock(session, match[1], row, i); var range = session.getCommentFoldRange(row, i + match[0].length, 1); if (range && !range.isMultiLine()) { if (forceMultiline) { range = this.getSectionRange(session, row); } else if (foldStyle != "all") range = null; } return range; } if (foldStyle === "markbegin") return; var match = line.match(this.foldingStopMarker); if (match) { var i = match.index + match[0].length; if (match[1]) return this.closingBracketBlock(session, match[1], row, i); return session.getCommentFoldRange(row, i, -1); } }; this.getSectionRange = function(session, row) { var line = session.getLine(row); var startIndent = line.search(/\S/); var startRow = row; var startColumn = line.length; row = row + 1; var endRow = row; var maxRow = session.getLength(); while (++row < maxRow) { line = session.getLine(row); var indent = line.search(/\S/); if (indent === -1) continue; if (startIndent > indent) break; var subRange = this.getFoldWidgetRange(session, "all", row); if (subRange) { if (subRange.start.row <= startRow) { break; } else if (subRange.isMultiLine()) { row = subRange.end.row; } else if (startIndent == indent) { break; } } endRow = row; } return new Range(startRow, startColumn, endRow, session.getLine(endRow).length); }; this.getCommentRegionBlock = function(session, line, row) { var startColumn = line.search(/\s*$/); var maxRow = session.getLength(); var startRow = row; var re = /^\s*(?:\/\*|\/\/|--)#?(end)?region\b/; var depth = 1; while (++row < maxRow) { line = session.getLine(row); var m = re.exec(line); if (!m) continue; if (m[1]) depth--; else depth++; if (!depth) break; } var endRow = row; if (endRow > startRow) { return new Range(startRow, startColumn, endRow, line.length); } }; }).call(FoldMode.prototype); }); ace.define("ace/mode/mask",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/mask_highlight_rules","ace/mode/matching_brace_outdent","ace/mode/behaviour/css","ace/mode/folding/cstyle"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextMode = require("./text").Mode; var MaskHighlightRules = require("./mask_highlight_rules").MaskHighlightRules; var MatchingBraceOutdent = require("./matching_brace_outdent").MatchingBraceOutdent; var CssBehaviour = require("./behaviour/css").CssBehaviour; var CStyleFoldMode = require("./folding/cstyle").FoldMode; var Mode = function() { this.HighlightRules = MaskHighlightRules; this.$outdent = new MatchingBraceOutdent(); this.$behaviour = new CssBehaviour(); this.foldingRules = 
new CStyleFoldMode(); }; oop.inherits(Mode, TextMode); (function() { this.lineCommentStart = "//"; this.blockComment = {start: "/*", end: "*/"}; this.getNextLineIndent = function(state, line, tab) { var indent = this.$getIndent(line); var tokens = this.getTokenizer().getLineTokens(line, state).tokens; if (tokens.length && tokens[tokens.length-1].type == "comment") { return indent; } var match = line.match(/^.*\{\s*$/); if (match) { indent += tab; } return indent; }; this.checkOutdent = function(state, line, input) { return this.$outdent.checkOutdent(line, input); }; this.autoOutdent = function(state, doc, row) { this.$outdent.autoOutdent(doc, row); }; this.$id = "ace/mode/mask"; }).call(Mode.prototype); exports.Mode = Mode; });
/SMAK-3.0.tar.gz/SMAK-3.0/envilite/bipfile.py
from __future__ import absolute_import, division, print_function, unicode_literals import array import logging import numpy as np import os import sys import spectral as spy from .spyfile import SpyFile, MemmapFile from spectral.utilities.python23 import typecode, tobytes, frombytes byte_typecode = typecode('b') class BipFile(SpyFile, MemmapFile): ''' A class to interface image files stored with bands interleaved by pixel. ''' def __init__(self, params, metadata=None): self.interleave = spy.BIP if metadata is None: metadata = {} SpyFile.__init__(self, params, metadata) self._memmap = self._open_memmap('r') def _open_memmap(self, mode): logger = logging.getLogger('spectral') if (os.path.getsize(self.filename) < sys.maxsize): try: (R, C, B) = self.shape return np.memmap(self.filename, dtype=self.dtype, mode=mode, offset=self.offset, shape=self.shape) except: logger.debug('Unable to create memmap interface.') return None else: return None def read_band(self, band, use_memmap=True): '''Reads a single band from the image. Arguments: `band` (int): Index of band to read. `use_memmap` (bool, default True): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Returns: :class:`numpy.ndarray` An `MxN` array of values for the specified band. ''' if self._memmap is not None and use_memmap is True: data = np.array(self._memmap[:, :, band]) if self.scale_factor != 1: data = data / float(self.scale_factor) return data vals = array.array(byte_typecode) delta = self.sample_size * (self.nbands - 1) nVals = self.nrows * self.ncols sample_size = self.sample_size f = self.fid f.seek(self.offset + self.sample_size * band, 0) # Pixel format is BIP for i in range(nVals - 1): vals.fromfile(f, sample_size) f.seek(delta, 1) vals.fromfile(f, sample_size) arr = np.frombuffer(tobytes(vals), dtype=self.dtype) arr = arr.reshape(self.nrows, self.ncols) if self.scale_factor != 1: return arr / float(self.scale_factor) return arr def read_bands(self, bands, use_memmap=True): '''Reads multiple bands from the image. Arguments: `bands` (list of ints): Indices of bands to read. `use_memmap` (bool, default True): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Returns: :class:`numpy.ndarray` An `MxNxL` array of values for the specified bands. `M` and `N` are the number of rows & columns in the image and `L` equals len(`bands`). ''' if self._memmap is not None and use_memmap is True: data = np.array(self._memmap[:, :, bands]) if self.scale_factor != 1: data = data / float(self.scale_factor) return data vals = array.array(byte_typecode) offset = self.offset delta = self.sample_size * self.nbands nVals = self.nrows * self.ncols sample_size = self.sample_size # Increments between bands delta_b = list(bands[:]) for i in range(len(delta_b)): delta_b[i] *= self.sample_size f = self.fid # Pixel format is BIP for i in range(nVals): pixelOffset = offset + i * delta for j in range(len(bands)): f.seek(pixelOffset + delta_b[j], 0) # Next band vals.fromfile(f, sample_size) arr = np.frombuffer(tobytes(vals), dtype=self.dtype) arr = arr.reshape(self.nrows, self.ncols, len(bands)) if self.scale_factor != 1: return arr / float(self.scale_factor) return arr def read_pixel(self, row, col, use_memmap=True): '''Reads the pixel at position (row,col) from the file. 
Arguments: `row`, `col` (int): Indices of the row & column for the pixel `use_memmap` (bool, default True): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Returns: :class:`numpy.ndarray` A length-`B` array, where `B` is the number of image bands. ''' if self._memmap is not None and use_memmap is True: data = np.array(self._memmap[row, col, :]) if self.scale_factor != 1: data = data / float(self.scale_factor) return data vals = array.array(byte_typecode) f = self.fid f.seek(self.offset + self.sample_size * self.nbands * (row * self.ncols + col), 0) # Pixel format is BIP so read entire pixel. vals.fromfile(f, self.nbands * self.sample_size) pixel = np.frombuffer(tobytes(vals), dtype=self.dtype) if self.scale_factor != 1: return pixel / float(self.scale_factor) return pixel def read_subregion(self, row_bounds, col_bounds, bands=None, use_memmap=True): ''' Reads a contiguous rectangular sub-region from the image. Arguments: `row_bounds` (2-tuple of ints): (a, b) -> Rows a through b-1 will be read. `col_bounds` (2-tuple of ints): (a, b) -> Columnss a through b-1 will be read. `bands` (list of ints): Optional list of bands to read. If not specified, all bands are read. `use_memmap` (bool, default True): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Returns: :class:`numpy.ndarray` An `MxNxL` array. ''' if self._memmap is not None and use_memmap is True: if bands is None: data = np.array(self._memmap[row_bounds[0]: row_bounds[1], col_bounds[0]: col_bounds[1], :]) else: data = np.array(self._memmap[row_bounds[0]: row_bounds[1], col_bounds[0]: col_bounds[1], bands]) if self.scale_factor != 1: data = data / float(self.scale_factor) return data offset = self.offset nbands = self.nbands nSubRows = row_bounds[1] - row_bounds[0] # Rows in sub-image nSubCols = col_bounds[1] - col_bounds[0] # Cols in sub-image d_row = self.sample_size * self.ncols * self.nbands colStartPos = col_bounds[0] * self.sample_size * self.nbands vals = array.array(byte_typecode) nVals = self.nrows * self.ncols sample_size = self.sample_size # Increments between bands if bands is not None: allBands = 0 nSubBands = len(bands) delta_b = bands[:] for i in range(len(delta_b)): delta_b[i] *= self.sample_size else: allBands = 1 nSubBands = self.nbands f = self.fid # Pixel format is BIP for i in range(row_bounds[0], row_bounds[1]): f.seek(offset + i * d_row + colStartPos, 0) rowPos = f.tell() if allBands: # This is the simple one vals.fromfile(f, nSubCols * nbands * sample_size) else: # Need to pull out specific bands for each column. for j in range(nSubCols): f.seek(rowPos + j * self.sample_size * self.nbands, 0) pixelPos = f.tell() for k in range(len(bands)): f.seek(pixelPos + delta_b[k], 0) # Next band vals.fromfile(f, sample_size) arr = np.frombuffer(tobytes(vals), dtype=self.dtype) arr = arr.reshape(nSubRows, nSubCols, nSubBands) if self.scale_factor != 1: return arr / float(self.scale_factor) return arr def read_subimage(self, rows, cols, bands=None, use_memmap=False): ''' Reads arbitrary rows, columns, and bands from the image. Arguments: `rows` (list of ints): Indices of rows to read. `cols` (list of ints): Indices of columns to read. `bands` (list of ints): Optional list of bands to read. If not specified, all bands are read. 
`use_memmap` (bool, default False): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Returns: :class:`numpy.ndarray` An `MxNxL` array, where `M` = len(`rows`), `N` = len(`cols`), and `L` = len(bands) (or # of image bands if `bands` == None). ''' if self._memmap is not None and use_memmap is True: if bands is None: data = np.array(self._memmap.take(rows, 0).take(cols, 1)) else: data = np.array( self._memmap.take(rows, 0).take(cols, 1).take(bands, 2)) if self.scale_factor != 1: data = data / float(self.scale_factor) return data offset = self.offset nbands = self.nbands nSubRows = len(rows) # Rows in sub-image nSubCols = len(cols) # Cols in sub-image d_band = self.sample_size d_col = d_band * self.nbands d_row = d_col * self.ncols vals = array.array(byte_typecode) nVals = self.nrows * self.ncols sample_size = self.sample_size # Increments between bands if bands is not None: allBands = 0 nSubBands = len(bands) else: allBands = 1 bands = list(range(self.nbands)) nSubBands = self.nbands f = self.fid # Pixel format is BIP for i in rows: for j in cols: if allBands: f.seek(offset + i * d_row + j * d_col, 0) vals.fromfile(f, nSubBands * sample_size) else: for k in bands: f.seek(offset + i * d_row + j * d_col + k * d_band, 0) vals.fromfile(f, sample_size) arr = np.frombuffer(tobytes(vals), dtype=self.dtype) arr = arr.reshape(nSubRows, nSubCols, nSubBands) if self.scale_factor != 1: return arr / float(self.scale_factor) return arr def read_datum(self, i, j, k, use_memmap=True): '''Reads the band `k` value for pixel at row `i` and column `j`. Arguments: `i`, `j`, `k` (integer): Row, column and band index, respectively. `use_memmap` (bool, default True): Specifies whether the file's memmap interface should be used to read the data. Setting this arg to True only has an effect if a memmap is being used (i.e., if `img.using_memmap` is True). Using this function is not an efficient way to iterate over bands or pixels. For such cases, use readBands or readPixel instead. ''' if self._memmap is not None and use_memmap is True: datum = self._memmap[i, j, k] if self.scale_factor != 1: datum /= float(self.scale_factor) return datum vals = array.array(byte_typecode) f = self.fid f.seek(self.offset + self.sample_size * (self.nbands * (i * self.ncols + j) + k), 0) # Pixel format is BIP so read entire pixel. vals.fromfile(f, self.sample_size) arr = np.frombuffer(tobytes(vals), dtype=self.dtype) return arr.tolist()[0] / float(self.scale_factor)
/jupyterlab_remote_contents-0.1.1.tar.gz/jupyterlab_remote_contents-0.1.1/node_modules/rxjs/internal/scheduler/AsyncAction.js
"use strict"; var __extends = (this && this.__extends) || (function () { var extendStatics = function (d, b) { extendStatics = Object.setPrototypeOf || ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || function (d, b) { for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; }; return extendStatics(d, b); } return function (d, b) { extendStatics(d, b); function __() { this.constructor = d; } d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); }; })(); Object.defineProperty(exports, "__esModule", { value: true }); var Action_1 = require("./Action"); var AsyncAction = (function (_super) { __extends(AsyncAction, _super); function AsyncAction(scheduler, work) { var _this = _super.call(this, scheduler, work) || this; _this.scheduler = scheduler; _this.work = work; _this.pending = false; return _this; } AsyncAction.prototype.schedule = function (state, delay) { if (delay === void 0) { delay = 0; } if (this.closed) { return this; } this.state = state; var id = this.id; var scheduler = this.scheduler; if (id != null) { this.id = this.recycleAsyncId(scheduler, id, delay); } this.pending = true; this.delay = delay; this.id = this.id || this.requestAsyncId(scheduler, this.id, delay); return this; }; AsyncAction.prototype.requestAsyncId = function (scheduler, id, delay) { if (delay === void 0) { delay = 0; } return setInterval(scheduler.flush.bind(scheduler, this), delay); }; AsyncAction.prototype.recycleAsyncId = function (scheduler, id, delay) { if (delay === void 0) { delay = 0; } if (delay !== null && this.delay === delay && this.pending === false) { return id; } clearInterval(id); return undefined; }; AsyncAction.prototype.execute = function (state, delay) { if (this.closed) { return new Error('executing a cancelled action'); } this.pending = false; var error = this._execute(state, delay); if (error) { return error; } else if (this.pending === false && this.id != null) { this.id = this.recycleAsyncId(this.scheduler, this.id, null); } }; AsyncAction.prototype._execute = function (state, delay) { var errored = false; var errorValue = undefined; try { this.work(state); } catch (e) { errored = true; errorValue = !!e && e || new Error(e); } if (errored) { this.unsubscribe(); return errorValue; } }; AsyncAction.prototype._unsubscribe = function () { var id = this.id; var scheduler = this.scheduler; var actions = scheduler.actions; var index = actions.indexOf(this); this.work = null; this.state = null; this.pending = false; this.scheduler = null; if (index !== -1) { actions.splice(index, 1); } if (id != null) { this.id = this.recycleAsyncId(scheduler, id, null); } this.delay = null; }; return AsyncAction; }(Action_1.Action)); exports.AsyncAction = AsyncAction; //# sourceMappingURL=AsyncAction.js.map
/astro-prospector-1.2.0.tar.gz/astro-prospector-1.2.0/demo/tutorial.rst
.. _tutorial:

Tutorial
========

Here is a guide to running |Codename| fits from the command line using parameter files, and working with the output. This is a generalization of the techniques demonstrated in the quickstart, with more detailed descriptions of how each of the ingredients works. We assume you have installed |Codename| and all its dependencies as laid out in the docs.

The next thing you need to do is make a temporary work directory, ``<workdir>``

.. code-block:: shell

    cd <workdir>
    cp <codedir>/demo/demo_* .

We now have a *parameter file* or two, and some data. Take a look at the ``demo_photometry.dat`` file in an editor, you'll see it is a simple ascii file, with a few rows and several columns. Each row is a different galaxy, each column is a different piece of information about that galaxy. This is just an example. In practice |Codename| can work with a wide variety of data types.

The parameter file
------------------

Open up ``demo_params.py`` in an editor, preferably one with syntax highlighting. You'll see that it's a python file. It includes some imports, a number of methods that build the ingredients for the fitting, and then an executable portion.

**Executable Script**

The executable portion of the parameter file that comes after the ``if __name__ == "__main__"`` line is run when the parameter file is called. Here the possible command line arguments and their default values are defined, including any custom arguments that you might add. In this example we have added several command line arguments that control how the data is read and how the model is built. The supplied command line arguments are then parsed and placed in a dictionary. This dictionary is passed to all the ingredient building methods (described below), which return the data dictionary and necessary model objects.

The data dictionary and model objects are passed to a function that runs the prospector fit (:py:func:`prospect.fitting.fit_model`). Finally, the fit results are written to an output file.

**Building the fit ingredients: build_model**

Several methods must be defined in the parameter file to build the ingredients for the fit. The purpose of these functions and their required output are described here. You will want to modify some of these for your specific model and data. Note that each of these functions will be passed a dictionary of command line arguments. These command line arguments, including any you add to the command line parser in the executable portion of the script, can therefore be used to control the behaviour of the ingredient building functions. For example, a custom command line argument can be used to control the type of model that is fit, or how or from where the data is loaded.

First, the :py:func:`build_model` function is where the model that we will fit will be constructed. The specific model that you choose to construct depends on your data and your scientific question. We have to specify a dictionary or list of model parameter specifications (see :doc:`models`). Each specification is a dictionary that describes a single parameter. We can build the model by adjusting predefined sets of model parameter specifications, stored in the :py:class:`models.templates.TemplateLibrary` dictionary-like object. In this example we choose the ``"parametric_sfh"`` set, which has the parameters necessary for a basic delay-tau SFH fit with simple attenuation by a dust screen. This parameter set can be inspected in any of the following ways

.. code-block:: python

    from prospect.models.templates import TemplateLibrary, describe

    # Show basic description of all pre-defined parameter sets
    TemplateLibrary.show_contents()

    # Method 1: print the whole dictionary of dictionaries
    model_params = TemplateLibrary["parametric_sfh"]
    print(model_params)

    # Method 2: show a prettier summary of the free and fixed parameters
    print(describe(model_params))

You'll see that this model has 5 free parameters. Any parameter with ``"isfree": True`` in its specification will be varied during the fit. We have set priors on these parameters, visible as e.g. ``model_params["mass"]["prior"]``. You may wish to change the default priors for your particular science case, using the prior objects in the :py:mod:`models.priors` module. An example of adjusting the priors for several parameters is given in the :py:func:`build_model` method in ``demo_params.py``. Any free parameter *must* have an associated prior.

Other parameters have their value set to the value of the ``"init"`` key, but do not vary during the fit. They can be made to vary by setting ``"isfree": True`` and specifying a prior. Parameters not listed here will be set to their default values. Typically this means default values in the :py:class:`fsps.StellarPopulation` object; see `python-fsps <http://dan.iel.fm/python-fsps/current/>`_ for details.

Once you get a set of parameters from the :py:class:`TemplateLibrary` you can modify or add parameter specifications. Since ``model_params`` is a dictionary (of dictionaries), you can update it with other parameter set dictionaries from the :py:class:`TemplateLibrary`. Finally, the :py:func:`build_model` function takes the ``model_params`` dictionary or list that you build and uses it to instantiate a :py:class:`SedModel` object.

.. code-block:: python

    from prospect.models import SedModel
    model_params = TemplateLibrary["parametric_sfh"]
    # Turn on nebular emission and add associated parameters
    model_params.update(TemplateLibrary["nebular"])
    model_params["gas_logu"]["isfree"] = True
    model = SedModel(model_params)
    print(model)

If you wanted to change the specification of the model using custom command line arguments, you could do it in :py:func:`build_model` by allowing this function to take keyword arguments with the same name as the custom command line argument. This can be useful for example to set the initial value of the redshift ``"zred"`` on an object-by-object basis. Such an example is shown in ``demo_params.py``, which also uses command line arguments to control whether nebular and/or dust emission parameters are added to the model.

**Building the fit ingredients: build_obs**

The next thing to look at is the :py:func:`build_obs` function. This is where you take the data from whatever format you have and put it into the dictionary format required by |Codename| for a single object. This means you will have to modify this function heavily for your own use. But it also means you can use your existing data formats.

Right now, the :py:func:`build_obs` function just reads ascii data from a file, picks out a row (corresponding to the photometry of a single galaxy), and then makes a dictionary using data in that row. You'll note that both the datafile name and the object number are keyword arguments to this function. That means they can be set at execution time on the command line, by also including those variables in the ``run_params`` dictionary. We'll see an example later.
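To make this concrete, here is a minimal sketch of what such a :py:func:`build_obs` function might look like. It is illustrative only: the catalog column names, the filter names, and the assumed flux units (maggies with 10% uncertainties) are placeholders, not the actual contents of ``demo_photometry.dat`` or ``demo_params.py``.

.. code-block:: python

    import numpy as np
    from sedpy.observate import load_filters
    from prospect.utils.obsutils import fix_obs

    def build_obs(objid=0, phottable="demo_photometry.dat", **kwargs):
        """Build a dictionary of observational data for a single object.

        A sketch only: the column and filter names below are assumptions.
        """
        # Read the ascii table; `names=True` picks up the (possibly commented) header row
        catalog = np.genfromtxt(phottable, names=True)
        row = catalog[int(objid)]

        # Hypothetical filter set and matching catalog columns, fluxes in maggies
        filternames = ["sdss_u0", "sdss_g0", "sdss_r0", "sdss_i0", "sdss_z0"]
        maggies = np.array([row[band] for band in "ugriz"])
        maggies_unc = 0.1 * maggies  # assume 10% uncertainties

        obs = {}
        obs["filters"] = load_filters(filternames)
        obs["maggies"] = maggies
        obs["maggies_unc"] = maggies_unc
        obs["phot_mask"] = np.isfinite(maggies)
        # No spectrum for this object
        obs["wavelength"] = None
        obs["spectrum"] = None

        # Fill in any remaining required keys with sensible defaults
        return fix_obs(obs)

The assumed names above (columns, filters, the 10% uncertainties) would of course be replaced by whatever matches your own catalog; extra keyword arguments like ``objid`` can then be exposed as command line options in the executable portion of the script.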
When you write your own :py:func:`build_obs` function, you can add all sorts of keyword arguments that control its output (for example, an object name or ID number that can be used to choose or find a single object in your data file). You can also import helper functions and modules. These can be either things like astropy, h5py, and sqlite or your own project specific modules and functions. As long as the output dictionary is in the right format (see dataformat.rst), the body of this function can do anything.

**Building the fit ingredients: the rest**

Ok, now we go to the :py:func:`build_sps` function. This one is pretty straightforward, it simply instantiates our :py:class:`sources.CSPSpecBasis` object. For nonparametric fits one would use the :py:class:`sources.FastStepBasis` object. These objects hold all the spectral libraries and produce an SED given a set of parameters. After that is :py:func:`build_noise`, which is for complexifying the noise model -- ignore that for now.

Running a fit
----------------------

There are two kinds of fitting packages that can be used with |Codename|. The first is ``emcee`` which implements ensemble MCMC sampling, and the second is ``dynesty``, which implements dynamic nested sampling. It is also possible to perform optimization. If ``emcee`` is used, the result of the optimization will be used to initialize the ensemble of walkers. The choice of which fitting algorithms to use is based on command line flags (``--optimize``, ``--emcee``, and ``--dynesty``). If no flags are set the model and data objects will be generated and stored in the output file, but no fitting will take place.

To run the fit on object number 0 using ``emcee`` after an initial optimization, we would do the following at the command line

.. code-block:: shell

    python demo_params.py --objid=0 --emcee --optimize \
        --outfile=demo_obj0_emcee

If we wanted to change something about the MCMC parameters, or fit a different object, we could also do that at the command line

.. code-block:: shell

    python demo_params.py --objid=1 --emcee --optimize \
        --outfile=demo_obj1_emcee --nwalkers=32 --niter=1024

And if we want to use nested sampling with ``dynesty`` we would do the following

.. code-block:: shell

    python demo_params.py --objid=0 --dynesty \
        --outfile=demo_obj0_dynesty

Finally, it is sometimes useful to run the script from the interpreter to do some checks. This is best done with the IPython enhanced interactive python.

.. code-block:: shell

    ipython
    In [1]: %run demo_params.py --objid=0 --debug=True

You can then inspect the ``obsdat`` dictionary, the ``model`` object, and the ``run_params`` dictionary to make sure everything is working fine.

To see the full list of available command-line options, you can run the following

.. code-block:: shell

    python demo_params.py --help

Working with the output
--------------------------------

After the fit is completed we should have a file with a name like ``demo_obj0_<fitter>_<timestamp>_mcmc.h5``. This is an HDF5 file containing sampling results and various configuration data, as well as the observational data that was fit. By setting ``run_params["output_pickles"]=True`` you can also output versions of this information in the less portable pickle format. We will read the HDF5 with python and make some plots using utilities in |Codename|.

To read the data back in from the output files that we've generated, use methods in ``prospect.io.read_results``.

.. code-block:: python

    import prospect.io.read_results as reader
    res, obs, model = reader.results_from("demo_obj_<fitter>_<timestamp>_mcmc.h5")

The ``res`` object is a dictionary containing various useful results. You can look at ``res.keys()`` to see a list of what it contains. The ``obs`` object is just the ``obs`` dictionary that was used in the fitting. The ``model`` object is the model object that was used in the fitting.

**Diagnostic plots**

There are also some methods in this module for basic diagnostic plots. The ``subcorner`` method requires that you have the `corner <http://corner.readthedocs.io/en/latest/>`_ package installed. It's possible now to examine the traces (i.e. the evolution of parameter value with MC iteration) and the posterior PDFs for the parameters.

.. code-block:: python

    # Trace plots
    tfig = reader.traceplot(res)
    # Corner figure of posterior PDFs
    cfig = reader.subcorner(res)

**Working with samples**

If you want to get the *maximum a posteriori* sample, or percentiles of the posterior pdf, that can be done as follows (note that for ``dynesty`` the weights of each posterior sample must be taken into account when calculating quantiles):

.. code-block:: python

    # Maximum posterior probability sample
    imax = np.argmax(res['lnprobability'])
    csz = res["chain"].shape
    if res["chain"].ndim > 2:
        # emcee
        i, j = np.unravel_index(imax, res['lnprobability'].shape)
        theta_max = res['chain'][i, j, :].copy()
        flatchain = res["chain"].reshape(csz[0] * csz[1], csz[2])
    else:
        # dynesty
        theta_max = res['chain'][imax, :].copy()
        flatchain = res["chain"]

    # 16th, 50th, and 84th percentiles of the posterior
    from prospect.plotting.corner import quantile
    weights = res.get("weights", None)
    post_pcts = quantile(flatchain.T, q=[0.16, 0.50, 0.84], weights=weights)

**Stored "best-fit" model**

Further, the prediction of the data for the MAP posterior sample may be stored for you.

.. code-block:: python

    # Plot the stored maximum ln-probability sample
    import matplotlib.pyplot as pl
    best = res["bestfit"]
    a = model.params["zred"] + 1
    pl.plot(a * best["restframe_wavelengths"], best['spectrum'], label="MAP spectrum")
    if obs['filters'] is not None:
        pwave = [f.wave_effective for f in obs["filters"]]
        pl.plot(pwave, best['photometry'], label="MAP photometry")
    pl.title(best["parameter"])

This stored best-fit information is only available if the ``sps`` object was passed to the :py:func:`write_hdf5` function after the fit is run. If it isn't available, you can regenerate the model predictions for the highest probability sample using the approach below.

**Regenerating Model predictions**

If necessary, one can regenerate models at any position in the posterior chain. This requires that we have the sps object used in the fitting to generate models, which we can regenerate using the :py:func:`read_results.get_sps` method.

.. code-block:: python

    # We need the correct sps object to generate models
    sps = reader.get_sps(res)

Now we will choose a specific parameter value from the chain and plot what the observations and the model look like, as well as the uncertainty normalized residual. For ``emcee`` results we will use the last iteration of the first walker, while for ``dynesty`` results we will just use the last sample in the chain.

.. code-block:: python

    # Choose the walker and iteration number by hand.
    walker, iteration = 0, -1
    if res["chain"].ndim > 2:
        # if you used emcee for the inference
        theta = res['chain'][walker, iteration, :]
    else:
        # if you used dynesty
        theta = res['chain'][iteration, :]

    # Or get a fair sample from the posterior
    from prospect.plotting.utils import sample_posterior
    theta = sample_posterior(res["chain"], weights=res.get("weights", None), nsample=1)[0, :]

    # Get the modeled spectra and photometry.
    # These have the same shape as the obs['spectrum'] and obs['maggies'] arrays.
    spec, phot, mfrac = model.predict(theta, obs=res['obs'], sps=sps)
    # mfrac is the ratio of the surviving stellar mass to the formed mass (the "mass" parameter).

    # Plot the model SED
    import matplotlib.pyplot as pl
    wave = [f.wave_effective for f in res['obs']['filters']]
    sedfig, sedax = pl.subplots()
    sedax.plot(wave, res['obs']['maggies'], '-o', label='Observations')
    sedax.plot(wave, phot, '-o', label='Model at {},{}'.format(walker, iteration))
    sedax.set_ylabel("Maggies")
    sedax.set_xlabel("wavelength")
    sedax.set_xscale('log')

    # Plot residuals for this walker and iteration
    chifig, chiax = pl.subplots()
    chi = (res['obs']['maggies'] - phot) / res['obs']['maggies_unc']
    chiax.plot(wave, chi, 'o')
    chiax.set_ylabel("Chi")
    chiax.set_xlabel("wavelength")
    chiax.set_xscale('log')

.. |Codename| replace:: Prospector
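Because :py:meth:`model.predict` also returns ``mfrac``, the regenerated predictions can be used to build posteriors for derived quantities such as the surviving stellar mass. The sketch below is a starting point rather than part of the demo script: it assumes a model whose free parameters include ``"mass"`` (as in the ``"parametric_sfh"`` template) and that the model object exposes its usual ``theta_index`` mapping from parameter name to position in ``theta``; note it costs one model evaluation per posterior sample.

.. code-block:: python

    import numpy as np
    from prospect.plotting.corner import quantile
    from prospect.plotting.utils import sample_posterior

    # Draw a modest number of fair posterior samples (weights already applied)
    nsample = 100
    thetas = sample_posterior(res["chain"], weights=res.get("weights", None),
                              nsample=nsample)

    # Convert formed mass to surviving stellar mass, sample by sample
    mass_slice = model.theta_index["mass"]  # assumed mapping, see note above
    surviving_mass = np.zeros(nsample)
    for i, theta in enumerate(thetas):
        _, _, mfrac = model.predict(theta, obs=res["obs"], sps=sps)
        surviving_mass[i] = np.atleast_1d(theta[mass_slice])[0] * mfrac

    # 16th, 50th, and 84th percentiles of the surviving stellar mass
    mass_pcts = quantile(np.atleast_2d(surviving_mass), q=[0.16, 0.50, 0.84])
    print(np.log10(mass_pcts))

Because the samples are drawn fairly by ``sample_posterior``, no additional weights are needed when taking the quantiles.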
PypiClean
/google-ads-21.3.0.tar.gz/google-ads-21.3.0/google/ads/googleads/v13/services/types/keyword_plan_campaign_keyword_service.py
from __future__ import annotations from typing import MutableSequence import proto # type: ignore from google.ads.googleads.v13.resources.types import ( keyword_plan_campaign_keyword, ) from google.protobuf import field_mask_pb2 # type: ignore from google.rpc import status_pb2 # type: ignore __protobuf__ = proto.module( package="google.ads.googleads.v13.services", marshal="google.ads.googleads.v13", manifest={ "MutateKeywordPlanCampaignKeywordsRequest", "KeywordPlanCampaignKeywordOperation", "MutateKeywordPlanCampaignKeywordsResponse", "MutateKeywordPlanCampaignKeywordResult", }, ) class MutateKeywordPlanCampaignKeywordsRequest(proto.Message): r"""Request message for [KeywordPlanCampaignKeywordService.MutateKeywordPlanCampaignKeywords][google.ads.googleads.v13.services.KeywordPlanCampaignKeywordService.MutateKeywordPlanCampaignKeywords]. Attributes: customer_id (str): Required. The ID of the customer whose campaign keywords are being modified. operations (MutableSequence[google.ads.googleads.v13.services.types.KeywordPlanCampaignKeywordOperation]): Required. The list of operations to perform on individual Keyword Plan campaign keywords. partial_failure (bool): If true, successful operations will be carried out and invalid operations will return errors. If false, all operations will be carried out in one transaction if and only if they are all valid. Default is false. validate_only (bool): If true, the request is validated but not executed. Only errors are returned, not results. """ customer_id: str = proto.Field( proto.STRING, number=1, ) operations: MutableSequence[ "KeywordPlanCampaignKeywordOperation" ] = proto.RepeatedField( proto.MESSAGE, number=2, message="KeywordPlanCampaignKeywordOperation", ) partial_failure: bool = proto.Field( proto.BOOL, number=3, ) validate_only: bool = proto.Field( proto.BOOL, number=4, ) class KeywordPlanCampaignKeywordOperation(proto.Message): r"""A single operation (create, update, remove) on a Keyword Plan campaign keyword. This message has `oneof`_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members. .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields Attributes: update_mask (google.protobuf.field_mask_pb2.FieldMask): The FieldMask that determines which resource fields are modified in an update. create (google.ads.googleads.v13.resources.types.KeywordPlanCampaignKeyword): Create operation: No resource name is expected for the new Keyword Plan campaign keyword. This field is a member of `oneof`_ ``operation``. update (google.ads.googleads.v13.resources.types.KeywordPlanCampaignKeyword): Update operation: The Keyword Plan campaign keyword expected to have a valid resource name. This field is a member of `oneof`_ ``operation``. remove (str): Remove operation: A resource name for the removed Keyword Plan campaign keywords expected in this format: ``customers/{customer_id}/keywordPlanCampaignKeywords/{kp_campaign_keyword_id}`` This field is a member of `oneof`_ ``operation``. 
""" update_mask: field_mask_pb2.FieldMask = proto.Field( proto.MESSAGE, number=4, message=field_mask_pb2.FieldMask, ) create: keyword_plan_campaign_keyword.KeywordPlanCampaignKeyword = proto.Field( proto.MESSAGE, number=1, oneof="operation", message=keyword_plan_campaign_keyword.KeywordPlanCampaignKeyword, ) update: keyword_plan_campaign_keyword.KeywordPlanCampaignKeyword = proto.Field( proto.MESSAGE, number=2, oneof="operation", message=keyword_plan_campaign_keyword.KeywordPlanCampaignKeyword, ) remove: str = proto.Field( proto.STRING, number=3, oneof="operation", ) class MutateKeywordPlanCampaignKeywordsResponse(proto.Message): r"""Response message for a Keyword Plan campaign keyword mutate. Attributes: partial_failure_error (google.rpc.status_pb2.Status): Errors that pertain to operation failures in the partial failure mode. Returned only when partial_failure = true and all errors occur inside the operations. If any errors occur outside the operations (for example, auth errors), we return an RPC level error. results (MutableSequence[google.ads.googleads.v13.services.types.MutateKeywordPlanCampaignKeywordResult]): All results for the mutate. """ partial_failure_error: status_pb2.Status = proto.Field( proto.MESSAGE, number=3, message=status_pb2.Status, ) results: MutableSequence[ "MutateKeywordPlanCampaignKeywordResult" ] = proto.RepeatedField( proto.MESSAGE, number=2, message="MutateKeywordPlanCampaignKeywordResult", ) class MutateKeywordPlanCampaignKeywordResult(proto.Message): r"""The result for the Keyword Plan campaign keyword mutate. Attributes: resource_name (str): Returned for successful operations. """ resource_name: str = proto.Field( proto.STRING, number=1, ) __all__ = tuple(sorted(__protobuf__.manifest))
PypiClean
/edmunds-framework-0.5.1.tar.gz/edmunds-framework-0.5.1/docs/http/session.md
# Session

To activate sessions, enable the feature by adding instances to your settings:

```python
from edmunds.session.drivers.sessioncookie import SessionCookie

APP = {
    'session': {
        'enabled': True,
        'instances': [
            {
                'name': 'sessioncookie',
                'driver': SessionCookie
            },
        ],
    },
}
```

The instances will all be used for sessions, so you can have multiple at once.

The available drivers are:

- **SessionCookie**: Sessions using cookies (see [docs](http://flask.pocoo.org/docs/0.11/quickstart/#sessions))


## Usage

The controller will have the first driver loaded for usage:

```python
from edmunds.http.controller import Controller

class MyController(Controller):

    def login(self):
        prev_username = self.session['username']
        prev_username = self.session.pop('username', None)

        del self.session['username']

        self.session['username'] = self._input['username']
```


## Usage outside controller

When in a request context, but not inside a controller, you can use the application to fetch the driver instance:

```python
session = app.session()
session = app.session('sessioncookie')

session['key'] = 'value'
print(session['key'])
del session['key']
```
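
## Example: logging out (illustrative)

A small, hypothetical `logout` action that reuses only the session operations shown above; the method itself is an assumption for illustration, not part of Edmunds' documented API:

```python
from edmunds.http.controller import Controller

class MyController(Controller):

    def logout(self):
        # pop with a default avoids a KeyError when nobody is logged in
        username = self.session.pop('username', None)
        return 'Logged out %s' % username
```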
PypiClean
/ElfAnalyzer-0.0.2.tar.gz/ElfAnalyzer-0.0.2/ElfAnalyzer.py
################### # This module parses and analyzes ELF file for Forensic and # investigations. # Copyright (C) 2023 ElfAnalyzer # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see <https://www.gnu.org/licenses/>. ################### """ This module parses and analyzes ELF file for Forensic and investigations. """ __version__ = "0.0.2" __author__ = "Maurice Lambert" __author_email__ = "[email protected]" __maintainer__ = "Maurice Lambert" __maintainer_email__ = "[email protected]" __description__ = """ This module parses and analyzes ELF file for Forensic and investigations. """ __url__ = "https://github.com/mauricelambert/ElfAnalyzer" # __all__ = [] __license__ = "GPL-3.0 License" __copyright__ = """ ElfAnalyzer Copyright (C) 2023 Maurice Lambert This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions. """ copyright = __copyright__ license = __license__ print(copyright) from ctypes import ( Structure, c_bool, c_wchar, c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, c_long, c_ulong, c_longlong, c_size_t, c_ssize_t, c_float, c_double, c_longdouble, c_char_p, c_wchar_p, c_void_p, c_uint16, c_int32, c_ulonglong, c_int8, c_uint8, c_int16, c_uint32, c_int64, c_uint64, c_char, _SimpleCData, sizeof as _sizeof, ) from typing import TypeVar, Union, Any, Iterable, List, Tuple from sys import argv, executable, exit, stderr from urllib.request import urlopen from dataclasses import dataclass from _io import _BufferedIOBase from functools import partial from string import printable from os.path import getsize from inspect import isclass from _ctypes import Array from io import BytesIO from enum import Enum Section = TypeVar("Section") try: from EntropyAnalysis import charts_chunks_file_entropy, Section from matplotlib import pyplot except ImportError: entropy_charts_import = False else: entropy_charts_import = True _CData = tuple(x for x in c_char.mro() if x.__name__ == "_CData")[0] printable = printable[:-5].encode() _issubclass = issubclass def issubclass(test: Any, *args): """ This function checks if the tested elements is a subclass of comparator. """ if isclass(test): return _issubclass(test, *args) return False @dataclass class Field: """ This class implements """ value: Any information: str usage: str = None description: str = None class FileString(str): """ This class implements strings with positions (_start_position_ and _end_position_ attributes). """ pass class FileBytes(bytes): """ This class implements bytes with positions (_start_position_ and _end_position_ attributes). """ pass class Data: """ This class helps you to print a title for a "CLI section". 
""" verbose: bool = False no_color: bool = False def __init__( self, name: str, start_position: int, end_position: int, data: bytes, information: str, format: bool = True, ): self.name = name self.start_position = start_position self.end_position = end_position self.data = data self.information = information self.format = format def vprint(self) -> None: """ This method prints verbose data. """ if self.verbose: self.print() def print(self) -> None: """ This method prints the data. """ if self.no_color: print(self) return None print( "\x1b[38;2;183;121;227m" + self.name.ljust(25) + "\x1b[38;2;255;240;175m" + f"{self.start_position:0>8x}-{self.end_position:0>8x}".ljust(20) + "\x1b[38;2;255;208;11m" + ( ( self.data.hex().ljust(40) + "\x1b[38;2;212;171;242m" + "".join( chr(x) if x in printable else "." for x in self.data ).ljust(20) ) if len(self.data) <= 20 else "\x1b[38;2;212;171;242m" + "".join( chr(x) if x in printable else "." for x in self.data ).ljust(40) ) + "\x1b[38;2;201;247;87m" + ( self.information.replace("_", " ").title() if self.format else self.information ) + "\x1b[39m" ) def __str__(self): return ( self.name.ljust(25) + f"{self.start_position:0>8x}-{self.end_position:0>8x}".ljust(20) + ( ( self.data.hex().ljust(40) + "".join( chr(x) if x in printable else "." for x in self.data ).ljust(20) ) if len(self.data) <= 20 else "".join( chr(x) if x in printable else "." for x in self.data ).ljust(20) ) + (self.information if self.format else self.information) ) class Title: """ This class helps you to print a title for a "CLI section". """ def __init__(self, value: str): self.value = value def print(self) -> None: """ This method prints the title. """ if Data.no_color: print("\n" + str(self) + "\n") return None print( "\n\x1b[48;2;50;50;50m\x1b[38;2;175;241;11m" + str(self) + "\x1b[49m\x1b[39m\n" ) def __str__(self): return f"{' ' + self.value + ' ':*^139}" class _DynamicType(int): """ This class is an integer type with usage and description attributes. """ def __new__(cls, value: int, usage: str, description: str): self = int.__new__(cls, value) self.usage = usage self.description = description return self class _DynamicFlags(int): """ This class is an integer type with description attribut. """ def __new__(cls, value: int, description: str): self = int.__new__(cls, value) self.description = description return self class DataToCClass: """ This class implements methods to get ctypes from data. """ order: str = "little" def data_to_bytes( type: type, data: Union[bytes, int, str] ) -> _SimpleCData: """ This method converts bytes, int or str to ctypes (c_char, c_char_p). """ if isinstance(data, int): data = data.to_bytes() elif isinstance(data, str): data = data.encode("latin-1") return type(data[::-1] if DataToCClass.order == "little" else data) def data_to_int(type: type, data: Union[bytes, int, None]) -> _SimpleCData: """ This method converts bytes, int or None to ctypes (c_bool, c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, c_long, c_ulong, c_longlong, c_ulonglong, c_size_t, c_ssize_t, c_void_p, c_int8, c_int16, c_int32, c_int64, c_uint8, c_uint16, c_uint32, c_uint64). """ if isinstance(data, bytes): data = int.from_bytes( data[::-1] if DataToCClass.order == "little" else data ) return type(data) def data_to_str( type: type, data: Union[bytes, str], encoding: str = "utf-8" ) -> _SimpleCData: """ This method converts bytes or str to ctypes (c_wchar, c_wchar_p). 
""" if isinstance(data, bytes): data = data.decode(encoding) return type(data) def data_to_float(type: type, data: Union[bytes, float]) -> _SimpleCData: """ This method converts bytes or float to ctypes (c_float, c_double, c_longdouble). """ if isinstance(data, bytes): data = float.fromhex( (data[::-1] if DataToCClass.order == "little" else data).hex() ) return type(data) data_to_ctypes = { c_bool: partial(DataToCClass.data_to_int, c_bool), c_char: partial(DataToCClass.data_to_bytes, c_char), c_wchar: partial(DataToCClass.data_to_str, c_wchar), c_byte: partial(DataToCClass.data_to_int, c_byte), c_int8: partial(DataToCClass.data_to_int, c_int8), c_ubyte: partial(DataToCClass.data_to_int, c_ubyte), c_uint8: partial(DataToCClass.data_to_int, c_uint8), c_short: partial(DataToCClass.data_to_int, c_short), c_int16: partial(DataToCClass.data_to_int, c_int16), c_ushort: partial(DataToCClass.data_to_int, c_ushort), c_uint16: partial(DataToCClass.data_to_int, c_uint16), c_int: partial(DataToCClass.data_to_int, c_int), c_int32: partial(DataToCClass.data_to_int, c_int32), c_uint: partial(DataToCClass.data_to_int, c_uint), c_uint32: partial(DataToCClass.data_to_int, c_uint32), c_long: partial(DataToCClass.data_to_int, c_long), c_ulong: partial(DataToCClass.data_to_int, c_ulong), c_longlong: partial(DataToCClass.data_to_int, c_longlong), c_int64: partial(DataToCClass.data_to_int, c_int64), c_ulonglong: partial(DataToCClass.data_to_int, c_ulonglong), c_uint64: partial(DataToCClass.data_to_int, c_uint64), c_size_t: partial(DataToCClass.data_to_int, c_size_t), c_ssize_t: partial(DataToCClass.data_to_int, c_ssize_t), c_float: partial(DataToCClass.data_to_float, c_float), c_double: partial(DataToCClass.data_to_float, c_double), c_longdouble: partial(DataToCClass.data_to_float, c_longdouble), c_char_p: partial(DataToCClass.data_to_bytes, c_char_p), c_wchar_p: partial(DataToCClass.data_to_str, c_wchar), c_void_p: partial(DataToCClass.data_to_int, c_void_p), } class BaseStructure: """ This class implements the Structure base (methods). """ def __init__(self, data: Union[bytes, _BufferedIOBase]) -> None: self._source = b"" if isinstance(data, bytes): data = BytesIO(data) for attribute_name, attribute_value in self.__annotations__.items(): start_position = data.tell() if issubclass(attribute_value, Array): cClass = self.array_to_cclass(attribute_value) cClass_size = sizeof(cClass) used_data = data.read(sizeof(attribute_value)) self._source += used_data value = attribute_value( *( data_to_ctypes[cClass]( used_data[x * cClass_size : (x + 1) * cClass_size] ) for x in range(attribute_value._length_) ) ) setattr(self, attribute_name, value) elif issubclass(attribute_value, BaseStructure): used_data = data.read(sizeof(attribute_value)) self._source += used_data value = attribute_value(used_data) setattr(self, attribute_name, value) else: cClass = self.class_to_cclass(attribute_value) used_data = data.read(sizeof(cClass)) value = data_to_ctypes[cClass](used_data) self._source += used_data setattr(self, attribute_name, value) value._data_ = used_data value._start_position_ = start_position value._end_position_ = data.tell() @classmethod def array_to_cclass(cls, array: Array) -> type: """ This method returns the inherited ctype. """ return cls.class_to_cclass(array._type_) @staticmethod def class_to_cclass(cls: type) -> type: """ This method returns the inherited ctype. 
""" precedent_class = None for element in cls.mro(): if element is _SimpleCData: return precedent_class precedent_class = element @classmethod def __sizeof__(cls) -> int: """ This method returns the octet size to build the instance. """ counter = 0 for value in cls.__annotations__.values(): counter += sizeof(value) return counter def __repr__(self): return self.__class__.__name__ + "(" + repr(self._source) + ")" def __str__(self): return ( self.__class__.__name__ + "(" + ", ".join( f"{attr}=" + ( ( getattr(self, attr).__class__.__name__ + f"({getattr(self, attr).value})" ) if isinstance(getattr(self, attr), Array) else getattr(self, attr) ) for attr in self.__annotations__ ) + ")" ) Structure = TypeVar("Structure") def sizeof(object: Union[_CData, type]) -> int: """ This function returns the size of this object. """ if isinstance(object, _CData) or issubclass(object, _CData): return _sizeof(object) return object.__sizeof__() def structure(cls: type) -> type: """ This decorator helps to build C Structures. """ def wrap(cls: type) -> type: """ This function builds the C Structure class. """ return type( cls.__name__, (cls, BaseStructure), {"__annotations__": cls.__annotations__}, ) return wrap(cls) class Elf32_Addr(c_uint32): pass class Elf32_Half(c_uint16): pass class Elf32_Section(c_uint16): pass class Elf32_Versym(c_uint16): pass class Elf32_Off(c_uint32): pass class Elf32_Sword(c_int32): pass class Elf32_Word(c_uint32): pass class Elf32_Sxword(c_int64): pass class Elf32_Xword(c_uint64): pass class Elf64_Addr(c_uint64): pass class Elf64_Half(c_uint16): pass class Elf64_Section(c_uint16): pass class Elf64_Versym(c_uint16): pass class Elf64_Off(c_uint64): pass class Elf64_Sword(c_int32): pass class Elf64_Word(c_uint32): pass class Elf64_Sxword(c_int64): pass class Elf64_Xword(c_uint64): pass class ELfIdentClass(Enum): INVALID = 0 OBJECT_32_BITS = 1 OBJECT_64_BITS = 2 class ELfIdentData(Enum): INVALID = 0 LITTLE_ENDIAN = 1 BIG_ENDIAN = 2 class ELfIdentVersion(Enum): INVALID = 0 CURRENT = 1 class ELfIdentOS(Enum): SYSV = NONE = 0 HPUX = 1 NETBSD = 2 LINUX = 3 SOLARIS = 6 AIX = 7 IRIX = 8 FREEBSD = 9 TRU64 = 10 MODESTO = 11 OPENBSD = 12 OPENVMS = 13 NSK = 14 AROS = 15 ARM = 97 MSP = 255 class ElfType(Enum): NO_FILE_TYPE = 0 RELOCATABLE = 1 EXECUTABLE = 2 SHARED_OBJECT = 3 CORE = 4 OS_SPECIFIC_LOOS = 0xFE00 OS_SPECIFIC_HIOS = 0xFEFF PROCESSOR_SPECIFIC_LOPROC = 0xFF00 PROCESSOR_SPECIFIC_HIPROC = 0xFFFF class ElfMachine(Enum): NO_MACHINE = 0 ATAT_WE_32100 = 1 SPARC = 2 INTEL_80386 = 3 MOTOROLA_68000 = 4 MOTOROLA_88000 = 5 INTEL_80860 = 7 MIPS_I = 8 IBM_SYSTEM370 = 9 MIPS_RS3000 = 10 PA_RISC = 15 FUJITSU_VPP500 = 17 SPARC32PLUS = 18 INTEL_80960 = 19 POWERPC = 20 POWERPC64 = 21 IBM_SYSTEM390 = 22 NEC_V800 = 36 FUJITSU_FR20 = 37 TRW_RH32 = 38 MOTOROLA_RCE = 39 ARM = 40 DIGITAL_ALPHA = 41 HITACHI_SH = 42 SPARC_V9 = 43 SIEMENS_TRICORE = 44 ARC = 45 HITACHI_H8_300 = 46 HITACHI_H8_300H = 47 HITACHI_H8S = 48 HITACHI_H8_500 = 49 INTEL_IA_64 = 50 STANFORD_MIPS_X = 51 MOTOROLA_COLDFIRE = 52 MOTOROLA_68HC12 = 53 FUJITSU_MMA = 54 SIEMENS_PCP = 55 SONY_NCPU_RISC = 56 DENSO_NDR1 = 57 MOTOROLA_STARCORE = 58 TOYOTA_ME16 = 59 ST100 = 60 TINYJ = 61 AMD_X86_64 = 62 SONY_PDSP = 63 PDP10 = 64 PDP11 = 65 SIEMENS_FX66 = 66 ST9PLUS = 67 ST7 = 68 MOTOROLA_68HC16 = 69 MOTOROLA_68HC11 = 70 MOTOROLA_68HC08 = 71 MOTOROLA_68HC05 = 72 SILICON_SVX = 73 ST19 = 74 DIGITAL_VAX = 75 AXIS_CRIS = 76 INFINEON_JAVELIN = 77 LSI_DSP64_FIREPATH = 78 LSI_DSP16_ZSP = 79 DONALD_KNUTH_MMIX = 80 HARVARD_HUANY = 81 SITERA_PRISM = 82 ATMEL_AVR = 
83 FUJITSU_FR30 = 84 MITSUBISHI_D10V = 85 MITSUBISHI_D30V = 86 NEC_V850 = 87 MITSUBISHI_M32R = 88 MATSUSHITA_MN10300 = 89 MATSUSHITA_MN10200 = 90 PICOJAVA = 91 OPENRISC = 92 ARC_A5 = 93 TENSILICA_XTENSA = 94 ALPHAMOSAIC_VIDEOCORE = 95 TMM_GPP = 96 NS32K = 97 TPC = 98 TREBIA_SNP1K = 99 ST200 = 100 ElfVersion = ELfIdentVersion class SpecialSectionIndexes(Enum): SHN_UNDEF = 0 SHN_LOPROC = SHN_LORESERVE = 0xFF00 SHN_HIPROC = 0xFF1F SHN_LOOS = 0xFF20 SHN_HIOS = 0xFF3F SHN_ABS = 0xFFF1 SHN_COMMON = 0xFFF2 SHN_HIRESERVE = SHN_XINDEX = 0xFFFF class ProgramHeaderType(Enum): PT_NULL = 0 PT_LOAD = 1 PT_DYNAMIC = 2 PT_INTERP = 3 PT_NOTE = 4 PT_SHLIB = 5 PT_PHDR = 6 PT_TLS = 7 PT_NUM = 8 PT_LOOS = 0x60000000 PT_GNU_EH_FRAME = 0x6474E550 PT_GNU_STACK = 0x6474E551 PT_GNU_RELRO = 0x6474E552 PT_HIOS = 0x6FFFFFFF PT_LOPROC = 0x70000000 PT_HIPROC = 0x7FFFFFFF class ProgramHeaderFlags(Enum): PF_EXECUTE = 1 PF_WRITE = 2 PF_READ = 3 PF_MASKOS = 0x0FF00000 PF_MASKPROC = 0xF0000000 class SectionHeaderType(Enum): SHT_NULL = 0 SHT_PROGBITS = 1 SHT_SYMTAB = 2 SHT_STRTAB = 3 SHT_RELA = 4 SHT_HASH = 5 SHT_DYNAMIC = 6 SHT_NOTE = 7 SHT_NOBITS = 8 SHT_REL = 9 SHT_SHLIB = 10 SHT_DYNSYM = 11 SHT_INIT_ARRAY = 14 SHT_FINI_ARRAY = 15 SHT_PREINIT_ARRAY = 16 SHT_GROUP = 17 SHT_SYMTAB_SHNDX = 18 SHT_NUM = 19 SHT_FILTER = 0x7FFFFFF SHT_LOOS = 0x60000000 SHT_HIOS = 0x6FFFFFFF SHT_VERSYM2 = 0x6FFFFFF0 SHT_GNU_ATTRIBUTES = 0x6FFFFFF5 SHT_GNU_HASH = 0x6FFFFFF6 SHT_GNU_LIBLIST = 0x6FFFFFF7 SHT_CHECKSUM = 0x6FFFFFF8 SHT_VERDEF = 0x6FFFFFFD SHT_VERNEED = 0x6FFFFFFE SHT_VERSYM1 = 0x6FFFFFFF SHT_LOPROC = 0x70000000 SHT_AUXILIARY = 0x7FFFFFFD SHT_HIPROC = 0x7FFFFFFF SHT_LOUSER = 0x80000000 SHT_HIUSER = 0xFFFFFFFF class SectionAttributeFlags(Enum): SHF_WRITE = 0x1 SHF_ALLOC = 0x2 SHF_EXECINSTR = 0x4 SHF_MERGE = 0x10 SHF_STRINGS = 0x20 SHF_INFO_LINK = 0x40 SHF_LINK_ORDER = 0x80 SHF_OS_NONCONFORMING = 0x100 SHF_GROUP = 0x200 SHF_TLS = 0x400 SHF_MASKOS = 0x0FF00000 SHF_MASKPROC = 0xF0000000 class SectionGroupFlags(Enum): GRP_COMDAT = 0x1 GRP_MASKOS = 0x0FF00000 GRP_MASKPROC = 0xF0000000 class SymbolBinding(Enum): STB_LOCAL = 0 STB_GLOBAL = 1 STB_WEAK = 2 STB_LOOS = 10 STB_HIOS = 12 STB_LOPROC = 13 STB_HIPROC = 15 class SymbolType(Enum): STT_NOTYPE = 0 STT_OBJECT = 1 STT_FUNC = 2 STT_SECTION = 3 STT_FILE = 4 STT_COMMON = 5 STT_TLS = 6 STT_RELC = 8 STT_SRELC = 9 STT_LOOS = 10 STT_HIOS = 12 STT_LOPROC = 13 STT_HIPROC = 15 class SymbolVisibility(Enum): STV_DEFAULT = 0 STV_INTERNAL = 1 STV_HIDDEN = 2 STV_PROTECTED = 3 class DynamicType(Enum): DT_NULL = _DynamicType(0, "ignored", "End of dynamic array") DT_NEEDED = _DynamicType(1, "value", "Needed library name offset") DT_PLTRELSZ = _DynamicType(2, "value", "Relocation entries size") DT_PLTGOT = _DynamicType(3, "pointer", "Address procedure linkage table") DT_HASH = _DynamicType(4, "pointer", "Address symbol hash table") DT_STRTAB = _DynamicType(5, "pointer", "Address string table (.dynstr)") DT_SYMTAB = _DynamicType(6, "pointer", "Address symbol table (.dynsym)") DT_RELA = _DynamicType(7, "pointer", "Address relocation table") DT_RELASZ = _DynamicType(8, "value", "Relocation table size") DT_RELAENT = _DynamicType(9, "value", "Relocation entry size") DT_STRSZ = _DynamicType(10, "value", "String table size") DT_SYMENT = _DynamicType(11, "value", "Symbol table entry size") DT_INIT = _DynamicType(12, "pointer", "Initialization function address") DT_FINI = _DynamicType(13, "pointer", "Termination function address") DT_SONAME = _DynamicType(14, "value", "Shared object name") DT_RPATH = _DynamicType(15, 
"value", "Library search path string") DT_SYMBOLIC = _DynamicType(16, "ignored", "Alters dynamic linker's symbol") DT_REL = _DynamicType(17, "pointer", "Address relocation table") DT_RELSZ = _DynamicType(18, "value", "Relocation table size") DT_RELENT = _DynamicType(19, "value", "Relocation entry size") DT_PLTREL = _DynamicType(20, "value", "Relocation entry type") DT_DEBUG = _DynamicType(21, "pointer", "Used for debugging") DT_TEXTREL = _DynamicType( 22, "ignored", "No relocation on non-writable segment" ) DT_JMPREL = _DynamicType(23, "pointer", "Procedure linkage table") DT_BIND_NOW = _DynamicType(24, "ignored", "Relocations before execution") DT_INIT_ARRAY = _DynamicType( 25, "pointer", "Initialization functions pointers" ) DT_FINI_ARRAY = _DynamicType( 26, "pointer", "Termination functions pointers" ) DT_INIT_ARRAYSZ = _DynamicType( 27, "value", "Initialization functions number" ) DT_FINI_ARRAYSZ = _DynamicType(28, "value", "Termination functions number") DT_RUNPATH = _DynamicType(29, "value", "Library search path") DT_FLAGS = _DynamicType(30, "value", "Flag values specific") DT_ENCODING = _DynamicType( 32, "unspecified", "Values interpretation rules" ) DT_PREINIT_ARRAY = _DynamicType( 32, "pointer", "Pre-initialization functions" ) DT_PREINIT_ARRAYSZ = _DynamicType(33, "value", "Pre-init functions size") DT_LOOS = _DynamicType( 0x6000000D, "unspecified", "System-specific semantics" ) DT_HIOS = _DynamicType( 0x6FFFF000, "unspecified", "System-specific semantics" ) DT_LOPROC = _DynamicType( 0x70000000, "unspecified", "Processor-specific semantics" ) DT_HIPROC = _DynamicType( 0x7FFFFFFF, "unspecified", "Processor-specific semantics" ) class DynamicFlags(Enum): DF_ORIGIN = _DynamicFlags(0x1, "Load libraries using filepath") DF_SYMBOLIC = _DynamicFlags(0x2, "Link start with object itself") DF_TEXTREL = _DynamicFlags(0x4, "No relocation on non-writable segment") DF_BIND_NOW = _DynamicFlags(0x8, "Relocations before execution") DF_STATIC_TLS = _DynamicFlags(0x10, "This object can't be link") @structure class ElfIdent: ei_mag: c_char * 4 ei_class: c_ubyte ei_data: c_ubyte ei_version: c_ubyte ei_osabi: c_ubyte ei_abiversion: c_ubyte ei_pad: c_char ei_nident: c_char * 6 @structure class ElfHeader32: e_ident: ElfIdent e_type: Elf32_Half e_machine: Elf32_Half e_version: Elf32_Word e_entry: Elf32_Addr e_phoff: Elf32_Off e_shoff: Elf32_Off e_flags: Elf32_Word e_ehsize: Elf32_Half e_phentsize: Elf32_Half e_phnum: Elf32_Half e_shentsize: Elf32_Half e_shnum: Elf32_Half e_shstrndx: Elf32_Half @structure class ElfHeader64: e_ident: ElfIdent e_type: Elf64_Half e_machine: Elf64_Half e_version: Elf64_Word e_entry: Elf64_Addr e_phoff: Elf64_Off e_shoff: Elf64_Off e_flags: Elf64_Word e_ehsize: Elf64_Half e_phentsize: Elf64_Half e_phnum: Elf64_Half e_shentsize: Elf64_Half e_shnum: Elf64_Half e_shstrndx: Elf64_Half @structure class ProgramHeader32: p_type: Elf32_Word p_offset: Elf32_Off p_vaddr: Elf32_Addr p_paddr: Elf32_Addr p_filesz: Elf32_Word p_memsz: Elf32_Word p_flags: Elf32_Word p_align: Elf32_Word @structure class ProgramHeader64: p_type: Elf64_Word p_flags: Elf64_Word p_offset: Elf64_Off p_vaddr: Elf64_Addr p_paddr: Elf64_Addr p_filesz: Elf64_Xword p_memsz: Elf64_Xword p_align: Elf64_Xword @structure class SectionHeader32: sh_name: Elf32_Word sh_type: Elf32_Word sh_flags: Elf32_Word sh_addr: Elf32_Addr sh_offset: Elf32_Off sh_size: Elf32_Word sh_link: Elf32_Word sh_info: Elf32_Word sh_addralign: Elf32_Word sh_entsize: Elf32_Word @structure class SectionHeader64: sh_name: Elf64_Word sh_type: Elf64_Word 
sh_flags: Elf64_Xword sh_addr: Elf64_Addr sh_offset: Elf64_Off sh_size: Elf64_Xword sh_link: Elf64_Word sh_info: Elf64_Word sh_addralign: Elf64_Xword sh_entsize: Elf64_Xword @structure class SymbolTableEntry32: st_name: Elf32_Word st_value: Elf32_Addr st_size: Elf32_Word st_info: c_byte st_other: c_byte st_shndx: Elf32_Half @structure class SymbolTableEntry64: st_name: Elf64_Word st_info: c_byte st_other: c_byte st_shndx: Elf64_Half st_value: Elf64_Addr st_size: Elf32_Xword @structure class RelocationEntries32: r_offset: Elf32_Addr r_info: Elf32_Word @structure class RelocationEntriesAddend32: r_offset: Elf32_Addr r_info: Elf32_Word r_addend: Elf32_Sword @structure class RelocationEntries64: r_offset: Elf64_Addr r_info: Elf64_Xword @structure class RelocationEntriesAddend64: r_offset: Elf64_Addr r_info: Elf64_Xword r_addend: Elf32_Sxword @structure class Note32: name_size: Elf32_Word descriptor_size: Elf32_Word type: Elf32_Word @structure class Note64: name_size: Elf64_Word descriptor_size: Elf64_Word type: Elf64_Word @structure class Dynamic32: dynamic_tag: Elf32_Sword dynamic_value: Elf32_Word @structure class Dynamic64: dynamic_tag: Elf64_Sxword dynamic_value: Elf64_Xword sections_description = { ".bss": "Uninitialized data", ".comment": "Version control information", ".data": "Initialized data", ".data1": "Initialized data", ".debug": "Symbolic debugging information", ".dynamic": "Dynamic linking information", ".dynstr": "Dynamic linking strings", ".dynsym": "Dynamic linking symbol table", ".fini": "Process termination code", ".fini_array": "Termination function pointers", ".got": "Global offset table", ".hash": "Symbol hash table", ".init": "Process initialization code", ".init_array": "Initialization function pointers", ".interp": "Program interpreter", ".line": "Line number for debugging", ".note": "Specific vendor information", ".plt": "Procedure linkage table", ".preinit_array": "Pre-initialization functions", ".rel": "Relocation information,", ".rodata": "Read-only data", ".rodata1": "Read-only data", ".shstrtab": "Section names", ".strtab": "Strings (symbol table)", ".symtab": "Symbol table", ".symtab_shndx": "Special symbol table", ".tbss": "Uninitialized thread-local data", ".tdata": "Initialized thread-local data", ".text": "Executable instruction", } def enum_from_value(value: _CData, enum_class: Enum) -> Field: """ This function returns a Field with Enum name and value. """ for constant in enum_class: if constant.value == value.value: return Field( value, constant.name, getattr(constant.value, "usage", None), getattr(constant.value, "description", None), ) return Field(value, "UNDEFINED") def enum_from_flags(value: _CData, enum_class: Enum) -> Iterable[Field]: """ This function yields Fields with Enum name and value. """ for constant in enum_class: if constant.value & value.value: yield Field( value, constant.name, getattr(constant.value, "usage", None), getattr(constant.value, "description", None), ) def parse_from_structure(file: _BufferedIOBase, structure: type) -> Structure: """ This function reads file and parse readed data to Structure and returns it. """ return structure(file.read(sizeof(structure))) def read_until(file: _BufferedIOBase, end_data: bytes) -> bytes: """ This function reads file until data end doesn't match the end_data params. 
""" old_position = file.tell() data = file.read(1) position = file.tell() while not data.endswith(end_data) and old_position < position: old_position = position data += file.read(1) position = file.tell() return data def read_string(file: _BufferedIOBase) -> c_char_p: """ This function reads file a NULL terminating string from file position. """ return c_char_p(read_until(file, b"\0")) def get_padding_length(data_size: int, padding_to: int) -> int: """ This function returns the padding length for this field. """ padding_length = data_size % padding_to return padding_to - padding_length if padding_length else 0 def start_printable() -> None: """ This function starts printing. """ print( "\x1b[38;2;183;121;227m" + "Data name".ljust(25) + "\x1b[38;2;255;240;175m" + "Position".ljust(20) + "\x1b[38;2;255;208;11m" + "Data hexadecimal".ljust(40) + "\x1b[38;2;212;171;242m" + "Data".ljust(20) + "\x1b[38;2;201;247;87m" + "Information" + "\x1b[39m\n" ) def main() -> int: """ This function runs the script from the command line. """ url = False verbose = False no_color = False if "-u" in argv: argv.remove("-u") url = True if "-v" in argv: argv.remove("-v") verbose = True if "-c" in argv: argv.remove("-c") no_color = True if len(argv) != 2: print( f'USAGES: "{executable}" "{argv[0]}" [-c(no ' "color)] [-v(verbose)] [-u(url)] ElfFile", file=stderr, ) return 1 file = ( BytesIO(data := urlopen(argv[1]).read()) if url else open(argv[1], "rb") ) filesize = len(data) if url else getsize(argv[1]) Data.verbose = verbose Data.no_color = no_color ( elfindent, elf_headers, programs_headers, elf_sections, symbols_tables, comments, note_sections, notes, dynamics, sections, ) = parse_elffile(file) cli( elfindent, elf_headers, programs_headers, elf_sections, symbols_tables, comments, notes, dynamics, sections, ) if entropy_charts_import: file.seek(0) charts_chunks_file_entropy( file, part_size=round(filesize / 100), sections=sections, ) file.close() return 0 def cli( elf_ident: ElfIdent, elf_header: Union[ElfHeader32, ElfHeader64], elf_tables: List[Union[ProgramHeader32, ProgramHeader64]], elf_sections: List[Union[SectionHeader32, SectionHeader64]], symbols: List[Tuple[str, Union[SymbolTableEntry32, SymbolTableEntry64]]], comments: List[bytes], note_sections: List[Union[SectionHeader32, SectionHeader64]], dynamicStructures: List[Union[Dynamic32, Dynamic64]], sections: List[Section], ) -> None: """ This function prints results in CLI. 
""" Title("ELF identification").print() Data( "Magic bytes", elf_ident.ei_mag.value._start_position_, elf_ident.ei_mag.value._end_position_, elf_ident.ei_mag.value.value, elf_ident.ei_mag.information, ).print() Data( "ELF class", elf_ident.ei_class.value._start_position_, elf_ident.ei_class.value._end_position_, elf_ident.ei_class.value._data_, f"{elf_ident.ei_class.information} ({elf_ident.ei_class.value.value})", ).print() Data( "ELF data", elf_ident.ei_data.value._start_position_, elf_ident.ei_data.value._end_position_, elf_ident.ei_data.value._data_, f"{elf_ident.ei_data.information} ({elf_ident.ei_data.value.value})", ).print() Data( "ELF version", elf_ident.ei_version.value._start_position_, elf_ident.ei_version.value._end_position_, elf_ident.ei_version.value._data_, elf_ident.ei_version.information + f" ({elf_ident.ei_version.value.value})", ).print() Data( "ELF operating system", elf_ident.ei_osabi.value._start_position_, elf_ident.ei_osabi.value._end_position_, elf_ident.ei_osabi.value._data_, elf_ident.ei_osabi.information + f" ({elf_ident.ei_osabi.value.value})", False, ).print() Data( "ELF defined OS", elf_ident.ei_abiversion.value._start_position_, elf_ident.ei_abiversion.value._end_position_, elf_ident.ei_abiversion.value._data_, elf_ident.ei_abiversion.information + f" ({elf_ident.ei_abiversion.value.value})", ).print() Data( "ELF start padding", elf_ident.ei_pad.value._start_position_, elf_ident.ei_pad.value._end_position_, elf_ident.ei_pad.value._data_, elf_ident.ei_pad.information, ).vprint() Data( "ELF padding", elf_ident.ei_nident.value._start_position_, elf_ident.ei_nident.value._end_position_, elf_ident.ei_nident.value._data_, elf_ident.ei_nident.information, ).vprint() Title("ELF headers").print() Data( "ELF type", elf_header.e_type.value._start_position_, elf_header.e_type.value._end_position_, elf_header.e_type.value._data_, f"{elf_header.e_type.information} ({elf_header.e_type.value.value})", ).print() Data( "ELF machine", elf_header.e_machine.value._start_position_, elf_header.e_machine.value._end_position_, elf_header.e_machine.value._data_, elf_header.e_machine.information + f" ({elf_header.e_machine.value.value})", False, ).print() Data( "ELF version", elf_header.e_version.value._start_position_, elf_header.e_version.value._end_position_, elf_header.e_version.value._data_, elf_header.e_version.information + f" ({elf_header.e_version.value.value})", ).print() Data( "ELF entry point", elf_header.e_entry.value._start_position_, elf_header.e_entry.value._end_position_, elf_header.e_entry.value._data_, f"{elf_header.e_entry.information} ({elf_header.e_entry.value.value})", ).print() Data( "ELF header table offset", elf_header.e_phoff.value._start_position_, elf_header.e_phoff.value._end_position_, elf_header.e_phoff.value._data_, f"{elf_header.e_phoff.information} ({elf_header.e_phoff.value.value})", ).vprint() Data( "ELF section table offset", elf_header.e_shoff.value._start_position_, elf_header.e_shoff.value._end_position_, elf_header.e_shoff.value._data_, f"{elf_header.e_shoff.information} ({elf_header.e_shoff.value.value})", ).vprint() Data( "ELF processor specific", elf_header.e_flags.value._start_position_, elf_header.e_flags.value._end_position_, elf_header.e_flags.value._data_, f"{elf_header.e_flags.information} ({elf_header.e_flags.value.value})", ).print() Data( "ELF header's size", elf_header.e_ehsize.value._start_position_, elf_header.e_ehsize.value._end_position_, elf_header.e_ehsize.value._data_, elf_header.e_ehsize.information + f" 
({elf_header.e_ehsize.value.value})", False, ).print() Data( "ELF entry header size", elf_header.e_phentsize.value._start_position_, elf_header.e_phentsize.value._end_position_, elf_header.e_phentsize.value._data_, elf_header.e_phentsize.information + f" ({elf_header.e_phentsize.value.value})", ).print() Data( "ELF header entry length", elf_header.e_phnum.value._start_position_, elf_header.e_phnum.value._end_position_, elf_header.e_phnum.value._data_, f"{elf_header.e_phnum.information} ({elf_header.e_phnum.value.value})", ).print() Data( "ELF entry section size", elf_header.e_shentsize.value._start_position_, elf_header.e_shentsize.value._end_position_, elf_header.e_shentsize.value._data_, elf_header.e_shentsize.information + f" ({elf_header.e_shentsize.value.value})", ).print() Data( "ELF section entry length", elf_header.e_shnum.value._start_position_, elf_header.e_shnum.value._end_position_, elf_header.e_shnum.value._data_, f"{elf_header.e_shnum.information} ({elf_header.e_shnum.value.value})", ).print() Data( "Section header table", elf_header.e_shstrndx.value._start_position_, elf_header.e_shstrndx.value._end_position_, elf_header.e_shstrndx.value._data_, elf_header.e_shstrndx.information + f" ({elf_header.e_shstrndx.value.value})", ).print() Title("ELF header table").print() for elf_table in elf_tables: Data( "Program header type", elf_table.p_type.value._start_position_, elf_table.p_type.value._end_position_, elf_table.p_type.value._data_, f"{elf_table.p_type.information} ({elf_table.p_type.value.value})", False, ).print() for flags in elf_table.flags: Data( "Program header flags", flags.value._start_position_, flags.value._end_position_, flags.value._data_, f"{flags.information} ({flags.value.value})", False, ).print() Data( "Program header address", elf_table.p_offset.value._start_position_, elf_table.p_offset.value._end_position_, elf_table.p_offset.value._data_, elf_table.p_offset.information + f" ({elf_table.p_offset.value.value})", ).print() Data( "Virtual address memory", elf_table.p_vaddr.value._start_position_, elf_table.p_vaddr.value._end_position_, elf_table.p_vaddr.value._data_, elf_table.p_vaddr.information + f" ({elf_table.p_vaddr.value.value})", ).vprint() Data( "Physical address", elf_table.p_paddr.value._start_position_, elf_table.p_paddr.value._end_position_, elf_table.p_paddr.value._data_, elf_table.p_paddr.information + f" ({elf_table.p_paddr.value.value})", ).vprint() Data( "Segment length file", elf_table.p_filesz.value._start_position_, elf_table.p_filesz.value._end_position_, elf_table.p_filesz.value._data_, elf_table.p_filesz.information + f" ({elf_table.p_filesz.value.value})", ).print() Data( "Segment length memory", elf_table.p_memsz.value._start_position_, elf_table.p_memsz.value._end_position_, elf_table.p_memsz.value._data_, elf_table.p_memsz.information + f" ({elf_table.p_memsz.value.value})", ).print() Data( "Segment alignment", elf_table.p_align.value._start_position_, elf_table.p_align.value._end_position_, elf_table.p_align.value._data_, elf_table.p_align.information + f" ({elf_table.p_align.value.value})", ).print() Title("ELF section table").print() for elf_section in elf_sections: Data( "Name: " + elf_section.name, elf_section.name._start_position_, elf_section.name._end_position_, elf_section.name._data_, sections_description.get( "." + elf_section.name.split(".")[1] if "." 
in elf_section.name else "", "Undefined section role.", ), False, ).print() Data( "Section name position", elf_section.sh_name.value._start_position_, elf_section.sh_name.value._end_position_, elf_section.sh_name.value._data_, elf_section.sh_name.information + f" ({elf_section.sh_name.value.value})", ).vprint() Data( "Section type", elf_section.sh_type.value._start_position_, elf_section.sh_type.value._end_position_, elf_section.sh_type.value._data_, elf_section.sh_type.information + f" ({elf_section.sh_type.value.value})", False, ).print() for flag in elf_section.flags: Data( "Section flags", flag.value._start_position_, flag.value._end_position_, flag.value._data_, f"{flag.information} ({flag.value.value})", False, ).print() Data( "Section memory address", elf_section.sh_addr.value._start_position_, elf_section.sh_addr.value._end_position_, elf_section.sh_addr.value._data_, elf_section.sh_addr.information + f" ({elf_section.sh_addr.value.value})", ).vprint() Data( "Section offset", elf_section.sh_offset.value._start_position_, elf_section.sh_offset.value._end_position_, elf_section.sh_offset.value._data_, elf_section.sh_offset.information + f" ({elf_section.sh_offset.value.value})", ).print() Data( "Section size", elf_section.sh_size.value._start_position_, elf_section.sh_size.value._end_position_, elf_section.sh_size.value._data_, elf_section.sh_size.information + f" ({elf_section.sh_size.value.value})", ).print() Data( "Section link", elf_section.sh_link.value._start_position_, elf_section.sh_link.value._end_position_, elf_section.sh_link.value._data_, elf_section.sh_link.information + f" ({elf_section.sh_link.value.value})", ).print() Data( "Section info", elf_section.sh_info.value._start_position_, elf_section.sh_info.value._end_position_, elf_section.sh_info.value._data_, elf_section.sh_info.information + f" ({elf_section.sh_info.value.value})", ).print() Data( "Section alignment", elf_section.sh_addralign.value._start_position_, elf_section.sh_addralign.value._end_position_, elf_section.sh_addralign.value._data_, elf_section.sh_addralign.information + f" ({elf_section.sh_addralign.value.value})", ).vprint() Data( "Symbol table entry size", elf_section.sh_entsize.value._start_position_, elf_section.sh_entsize.value._end_position_, elf_section.sh_entsize.value._data_, elf_section.sh_entsize.information + f" ({elf_section.sh_entsize.value.value})", ).print() precedent_name = "" for name, symbol in symbols: if name != precedent_name: Title("Symbol tables " + name).print() precedent_name = name Data( "Symbol value", symbol.st_value.value._start_position_, symbol.st_value.value._end_position_, symbol.st_value.value._data_, f"{symbol.st_value.information} ({symbol.st_value.value.value})", False, ).print() Data( "Associated sizes", symbol.st_size.value._start_position_, symbol.st_size.value._end_position_, symbol.st_size.value._data_, f"{symbol.st_size.information} ({symbol.st_size.value.value})", False, ).print() Data( "Section header index", symbol.st_shndx.value._start_position_, symbol.st_shndx.value._end_position_, symbol.st_shndx.value._data_, symbol.st_shndx.information + (" exported" if symbol.st_shndx.value.value else " imported") + " (" + ( elf_sections[symbol.st_shndx.value.value].name if len(elf_sections) > symbol.st_shndx.value.value else str(symbol.st_shndx.value.value) ) + ")", False, ).print() Data( "Symbol binding", symbol.st_info._start_position_, symbol.st_info._end_position_, symbol.st_info._data_, f"{symbol.st_bind.information} ({symbol.st_bind.value.value})", False, 
).print() Data( "Symbol type", symbol.st_info._start_position_, symbol.st_info._end_position_, symbol.st_info._data_, f"{symbol.st_type.information} ({symbol.st_type.value.value})", False, ).print() Data( "Symbol visibility", symbol.st_other._start_position_, symbol.st_other._end_position_, symbol.st_other._data_, symbol.st_visibility.information + f" ({symbol.st_visibility.value.value})", False, ).print() Data( "Symbol name", symbol.name._start_position_, symbol.name._end_position_, symbol.name._data_, "Name: " + symbol.name, False, ).print() first = True for data in comments: if first: Title("Comment section").print() first = False Data( "Version control info", data._start_position_, data._end_position_, data, data.string, False, ).print() first = True for note in note_sections: if first: Title("Note sections").print() first = False Data( "Note name size", note.name_size._start_position_, note.name_size._end_position_, note.name_size._data_, f"Note name size ({note.name_size.value})", False, ).print() Data( "Descriptor size", note.descriptor_size._start_position_, note.descriptor_size._end_position_, note.descriptor_size._data_, f"Note descriptor size ({note.descriptor_size.value})", False, ).print() Data( "Note type", note.type._start_position_, note.type._end_position_, note.type._data_, f"Note type ({note.type.value})", False, ).print() Data( "Note name", note.name._start_position_, note.name._end_position_, note.name, note.name.string, False, ).print() Data( "Note descriptor", note.descriptor._start_position_, note.descriptor._end_position_, note.descriptor, "", False, ).print() first = True for dynamic in dynamicStructures: if first: Title("Dynamic section").print() first = False Data( f"Tag {dynamic.dynamic_tag.information}", dynamic.dynamic_tag._start_position_, dynamic.dynamic_tag._end_position_, dynamic.dynamic_tag.value._data_, str(dynamic.dynamic_tag.description), False, ).print() if dynamic.dynamic_tag.value.value != DynamicType.DT_FLAGS.value: Data( ( "Address" if dynamic.dynamic_tag.usage == "pointer" else "Value" ), dynamic.dynamic_value._start_position_, dynamic.dynamic_value._end_position_, dynamic.dynamic_value._data_, str(dynamic.dynamic_value.value), False, ).print() else: for flag in dynamic.dynamic_value.flags: Data( "Flags " + str(flag.value.value), flag._start_position_, flag._end_position_, flag.value._data_, flag.description, False, ).print() def parse_elffile( file: _BufferedIOBase, ) -> Tuple[ ElfIdent, Union[ElfHeader32, ElfHeader64], List[Union[ProgramHeader32, ProgramHeader64]], List[Union[SectionHeader32, SectionHeader64]], List[Tuple[str, Union[SymbolTableEntry32, SymbolTableEntry64]]], List[bytes], List[Union[SectionHeader32, SectionHeader64]], List[Union[Note32, Note64]], List[Union[Dynamic32, Dynamic64]], List[Section], ]: """ This function parses ELF file. 
""" elfindent, elf_classe = parse_elfidentification(file) elf_headers = parse_elfheaders(file, elf_classe) programs_headers = [*parse_programheaders(file, elf_headers, elf_classe)] ( elf_sections, strtab_section, symtab_section, dynstr_section, dynsym_section, comment_section, dynamic_section, note_sections, sections, ) = parse_elfsections(file, elf_headers, elf_classe) symbols_tables = [ *parse_elfsymbolstable( file, dynsym_section, dynstr_section, symtab_section, strtab_section, elf_classe, ) ] comments = [*parse_elfcomment(file, comment_section)] notes = [*parse_elfnote(file, note_sections, elf_classe)] dynamics = [*parse_elfdynamic(file, dynamic_section, elf_classe)] return ( elfindent, elf_headers, programs_headers, elf_sections, symbols_tables, comments, note_sections, notes, dynamics, sections, ) def parse_elfidentification(file: _BufferedIOBase) -> Tuple[ElfIdent, str]: """ This function parses ELF identification headers. """ elf_ident = parse_from_structure(file, ElfIdent) elf_ident.ei_mag = Field( elf_ident.ei_mag, "ELF magic bytes" if elf_ident.ei_mag.value == b"\x7fELF" else "Invalid magic bytes", ) elf_ident.ei_class = enum_from_value(elf_ident.ei_class, ELfIdentClass) elf_classe = "64" if elf_ident.ei_class.value.value == 2 else "32" elf_ident.ei_data = enum_from_value(elf_ident.ei_data, ELfIdentData) DataToCClass.order = ( "little" if elf_ident.ei_data.value.value == 1 else "big" ) elf_ident.ei_version = enum_from_value( elf_ident.ei_version, ELfIdentVersion ) elf_ident.ei_osabi = enum_from_value(elf_ident.ei_osabi, ELfIdentOS) elf_ident.ei_abiversion = Field( elf_ident.ei_abiversion, "OS specified" if elf_ident.ei_abiversion else "OS unspecified", ) elf_ident.ei_pad = Field(elf_ident.ei_pad, "Start padding") elf_ident.ei_nident = Field(elf_ident.ei_nident, "Padding") return elf_ident, elf_classe def parse_elfheaders( file: _BufferedIOBase, elf_classe: str ) -> Union[ElfHeader32, ElfHeader64]: """ This function parses ELF headers. """ file.seek(0) elf_header = parse_from_structure( file, globals()["ElfHeader" + elf_classe] ) elf_header.e_type = enum_from_value(elf_header.e_type, ElfType) elf_header.e_machine = enum_from_value(elf_header.e_machine, ElfMachine) elf_header.e_version = enum_from_value(elf_header.e_version, ElfVersion) elf_header.e_entry = Field( elf_header.e_entry, "Entry point" if elf_header.e_entry else "No entry point", ) elf_header.e_phoff = Field( elf_header.e_phoff, "Program header table offset" if elf_header.e_phoff else "No program header table", ) elf_header.e_shoff = Field( elf_header.e_shoff, "Section table offset" if elf_header.e_shoff else "No header table", ) elf_header.e_flags = Field(elf_header.e_flags, "Processor specific flags") elf_header.e_ehsize = Field(elf_header.e_ehsize, "ELF header's size") elf_header.e_phentsize = Field( elf_header.e_phentsize, "Entry header table size" ) elf_header.e_phnum = Field(elf_header.e_phnum, "Header table entry number") elf_header.e_shentsize = Field( elf_header.e_shentsize, "Entry section header's size" ) elf_header.e_shnum = Field( elf_header.e_shnum, "Section header entry number" ) elf_header.e_shstrndx = Field( elf_header.e_shstrndx, "Section header table address" ) return elf_header def parse_programheaders( file: _BufferedIOBase, elf_header: Union[ElfHeader32, ElfHeader64], elf_classe: str, ) -> Iterable[Union[ProgramHeader32, ProgramHeader64]]: """ This function parses program headers. 
""" file.seek(elf_header.e_phoff.value.value) for _ in range(elf_header.e_phnum.value.value): elf_table = parse_from_structure( file, globals()["ProgramHeader" + elf_classe] ) elf_table.p_type = enum_from_value(elf_table.p_type, ProgramHeaderType) elf_table.flags = [ *enum_from_flags(elf_table.p_flags, ProgramHeaderFlags) ] elf_table.p_offset = Field( elf_table.p_offset, "Program header file position" ) elf_table.p_vaddr = Field( elf_table.p_vaddr, "Program header virtual position" ) elf_table.p_paddr = Field( elf_table.p_paddr, "Program header physical position" ) elf_table.p_filesz = Field( elf_table.p_filesz, "Segment size in bytes in file image" ) elf_table.p_memsz = Field( elf_table.p_memsz, "Segment size in bytes in memory image" ) elf_table.p_align = Field( elf_table.p_align, "No segment alignment" if elf_table.p_align.value in (0, 1) else "Segment alignment", ) yield elf_table def parse_elfsections( file: _BufferedIOBase, elf_header: Union[ElfHeader32, ElfHeader64], elf_classe: str, ) -> Tuple[ List[Union[SectionHeader32, SectionHeader64]], Union[SectionHeader32, SectionHeader64, None], Union[SectionHeader32, SectionHeader64, None], Union[SectionHeader32, SectionHeader64, None], Union[SectionHeader32, SectionHeader64, None], Union[SectionHeader32, SectionHeader64, None], Union[SectionHeader32, SectionHeader64, None], List[Union[SectionHeader32, SectionHeader64]], List[Section], ]: """ This function parses ELK sections. """ file.seek(elf_header.e_shoff.value.value) elf_sections = [ parse_from_structure(file, globals()["SectionHeader" + elf_classe]) for _ in range(elf_header.e_shnum.value.value) ] sections = [] headers_names_table_address = elf_sections[ elf_header.e_shstrndx.value.value ].sh_offset.value strtab_section = None symtab_section = None dynstr_section = None dynsym_section = None comment_section = None note_sections = [] dynamic_section = None for elf_section in elf_sections: position = file.tell() file.seek(headers_names_table_address + elf_section.sh_name.value) name = read_string(file) elf_section.name = FileString(name.value.decode("latin-1")) elf_section.name._start_position_ = ( headers_names_table_address + elf_section.sh_name.value ) elf_section.name._end_position_ = file.tell() elf_section.name._data_ = name.value + b"\0" file.seek(position) if elf_section.name == ".strtab": strtab_section = elf_section if elf_section.name == ".symtab": symtab_section = elf_section if elf_section.name == ".dynstr": dynstr_section = elf_section if elf_section.name == ".dynsym": dynstr_section = elf_section if elf_section.name == ".comment": comment_section = elf_section if elf_section.name == ".dynamic": dynamic_section = elf_section if elf_section.name.startswith(".note"): note_sections.append(elf_section) if entropy_charts_import: sections.append( Section( elf_section.name, elf_section.sh_offset.value, elf_section.sh_size.value, ) ) elf_section.sh_name = Field( elf_section.sh_name, "Section name position" ) elf_section.sh_type = enum_from_value( elf_section.sh_type, SectionHeaderType ) elf_section.flags = [ *enum_from_flags(elf_section.sh_flags, SectionAttributeFlags) ] elf_section.sh_addr = Field( elf_section.sh_addr, "Section memory address" ) elf_section.sh_offset = Field( elf_section.sh_offset, "Section file offset" ) elf_section.sh_size = Field( elf_section.sh_size, "Section size in bytes" ) elf_section.sh_link = Field(elf_section.sh_link, "Section link") elf_section.sh_info = Field(elf_section.sh_info, "Section info") elf_section.sh_addralign = Field( 
elf_section.sh_addralign, "Section without alignment" if elf_section.sh_addralign.value in (1, 0) else "Section alignment", ) elf_section.sh_entsize = Field( elf_section.sh_entsize, "No section symbal table" if elf_section.sh_entsize.value == 0 else "Symbol table entry size", ) return ( elf_sections, strtab_section, symtab_section, dynstr_section, dynsym_section, comment_section, dynamic_section, note_sections, sections, ) def parse_elfsymbolstable( file: _BufferedIOBase, dynsym_section: Union[ElfHeader32, ElfHeader64, None], dynstr_section: Union[ElfHeader32, ElfHeader64, None], symtab_section: Union[ElfHeader32, ElfHeader64, None], strtab_section: Union[ElfHeader32, ElfHeader64, None], elf_classe: str, ) -> Iterable[Tuple[str, Union[SymbolTableEntry32, SymbolTableEntry64]]]: """ This function parses ELF symbols table. """ for symbol_section, str_section in ( (dynsym_section, dynstr_section), (symtab_section, strtab_section), ): if str_section is None or symbol_section is None: continue file.seek(str_section.sh_offset.value.value) data = BytesIO(file.read(str_section.sh_size.value.value)) symboltable_structure = globals()["SymbolTableEntry" + elf_classe] symboltable_structure_size = sizeof(symboltable_structure) file.seek(symbol_section.sh_offset.value.value) size = symbol_section.sh_size.value.value for _ in range(size // symboltable_structure_size): symbol = parse_from_structure(file, symboltable_structure) symbol.st_value = Field(symbol.st_value, "Symbol table value") symbol.st_size = Field(symbol.st_size, "Symbol table size") symbol.st_shndx = enum_from_value( symbol.st_shndx, SpecialSectionIndexes ) symbol.st_bind = enum_from_value( c_byte(symbol.st_info.value >> 4), SymbolBinding ) symbol.st_type = enum_from_value( c_byte(symbol.st_info.value & 0xF), SymbolType ) symbol.st_visibility = enum_from_value( c_byte(symbol.st_other.value & 0x3), SymbolVisibility ) start_position = ( data.seek(symbol.st_name.value) + str_section.sh_offset.value.value ) symbol.st_name = read_string(data) symbol.name = FileString(symbol.st_name.value.decode("latin-1")) symbol.name._start_position_ = start_position symbol.name._end_position_ = ( symbol.name._start_position_ + len(symbol.name) + 1 ) symbol.name._data_ = symbol.st_name.value + b"\0" yield symbol_section.name, symbol def parse_elfcomment( file: _BufferedIOBase, comment_section: Union[SectionHeader32, SectionHeader64], ) -> Iterable[bytes]: """ This function parses ELF comment section. """ if comment_section: position = file.seek(comment_section.sh_offset.value.value) for data in file.read(comment_section.sh_size.value.value).split( b"\0" ): if data: data = FileBytes(data + b"\0") data._start_position_ = position data._end_position_ = position + len(data) + 1 data.string = data.decode("latin-1") yield data position += len(data) + 1 else: position += 1 def parse_elfnote( file: _BufferedIOBase, note_sections: List[Union[SectionHeader32, SectionHeader64]], elf_classe: str, ) -> Iterable[Union[Note32, Note64]]: """ This function parses ELF note sections. 
""" for note in note_sections: file.seek(note.sh_offset.value.value) note = parse_from_structure(file, globals()["Note" + elf_classe]) position = file.tell() note.name = FileBytes( file.read( note.name_size.value + get_padding_length(note.name_size.value, 4) ) ) note.name.string = note.name.decode("latin-1") note.name._start_position_ = position note.name._end_position_ = file.tell() position = file.tell() note.descriptor = FileBytes( file.read( note.descriptor_size.value + get_padding_length(note.name_size.value, 4) ) ) note.descriptor._start_position_ = position note.descriptor._end_position_ = file.tell() yield note def parse_elfdynamic( file: _BufferedIOBase, dynamic_section: Union[SectionHeader32, SectionHeader64, None], elf_classe: str, ) -> Iterable[Union[Dynamic32, Dynamic64]]: """ This function parses ELF dynamic section. """ if dynamic_section is None: return None file.seek(dynamic_section.sh_offset.value.value) d_tag = 1 while d_tag: position = file.tell() dynamic = parse_from_structure(file, globals()["Dynamic" + elf_classe]) dynamic.dynamic_tag = enum_from_value(dynamic.dynamic_tag, DynamicType) dynamic.dynamic_tag._start_position_ = position dynamic.dynamic_tag._end_position_ = position + sizeof( dynamic.dynamic_tag.value ) if dynamic.dynamic_tag.value.value != DynamicType.DT_FLAGS.value: dynamic.dynamic_value._start_position_ = position + sizeof( dynamic.dynamic_tag.value ) dynamic.dynamic_value._end_position_ = file.tell() else: dynamic.dynamic_value.flags = [] for flag in enum_from_flags(dynamic.dynamic_value, DynamicFlags): flag._start_position_ = position + sizeof( dynamic.dynamic_tag.value ) flag._end_position_ = file.tell() dynamic.dynamic_value.flags.append(flag) d_tag = dynamic.dynamic_tag.value.value yield dynamic if __name__ == "__main__": exit(main())
/deepcolor-0.0.5.tar.gz/deepcolor-0.0.5/README.md
# deepCOLOR: DEEP Generative model for single-cell COLOcalization Representation

DeepCOLOR is intended to analyze colocalization relationships between single-cell transcriptomes, integrating them with a spatial transcriptome.

## Installation

You can install deepCOLOR with pip from your shell.

```shell
pip install deepcolor imgaug==0.2.5
```

## Usage

You need to prepare [`AnnData` objects](https://anndata.readthedocs.io/en/latest/) which include the raw count matrices of gene expression for the single-cell and spatial transcriptomes, respectively. You can see the usage in the [IPython Notebook](tutorial/deepcolor_tutorial.ipynb).
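For quick reference, a minimal sketch of preparing those two inputs is shown below. It only uses `scanpy`/`anndata`, not the deepCOLOR API itself, and the file names are hypothetical placeholders.

```python
# Minimal sketch (not the deepCOLOR API): load the two AnnData objects with raw
# counts that deepCOLOR expects. File names are hypothetical placeholders.
import scanpy as sc

sc_adata = sc.read_h5ad("single_cell_raw_counts.h5ad")  # single-cell transcriptome
sp_adata = sc.read_h5ad("spatial_raw_counts.h5ad")      # spatial transcriptome

# Both objects should keep the raw (unnormalized) count matrix in .X.
print(sc_adata.shape, sp_adata.shape)
```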
/bpy36-1.0.0-py3-none-any.whl/bpy2/2.79/scripts/modules/rna_prop_ui.py
# <pep8 compliant> import bpy def rna_idprop_ui_get(item, create=True): try: return item['_RNA_UI'] except: if create: item['_RNA_UI'] = {} return item['_RNA_UI'] else: return None def rna_idprop_ui_del(item): try: del item['_RNA_UI'] except KeyError: pass def rna_idprop_ui_prop_update(item, prop): prop_rna = item.path_resolve("[\"%s\"]" % prop.replace("\"", "\\\""), False) if isinstance(prop_rna, bpy.types.bpy_prop): prop_rna.update() def rna_idprop_ui_prop_get(item, prop, create=True): rna_ui = rna_idprop_ui_get(item, create) if rna_ui is None: return None try: return rna_ui[prop] except: rna_ui[prop] = {} return rna_ui[prop] def rna_idprop_ui_prop_clear(item, prop, remove=True): rna_ui = rna_idprop_ui_get(item, False) if rna_ui is None: return try: del rna_ui[prop] except KeyError: pass if remove and len(item.keys()) == 1: rna_idprop_ui_del(item) def rna_idprop_context_value(context, context_member, property_type): space = context.space_data if space is None or isinstance(space, bpy.types.SpaceProperties): pin_id = space.pin_id else: pin_id = None if pin_id and isinstance(pin_id, property_type): rna_item = pin_id context_member = "space_data.pin_id" else: rna_item = eval("context." + context_member) return rna_item, context_member def rna_idprop_has_properties(rna_item): keys = rna_item.keys() nbr_props = len(keys) return (nbr_props > 1) or (nbr_props and '_RNA_UI' not in keys) def draw(layout, context, context_member, property_type, use_edit=True): def assign_props(prop, val, key): prop.data_path = context_member prop.property = key try: prop.value = str(val) except: pass rna_item, context_member = rna_idprop_context_value(context, context_member, property_type) # poll should really get this... if not rna_item: return from bpy.utils import escape_identifier if rna_item.id_data.library is not None: use_edit = False assert(isinstance(rna_item, property_type)) items = rna_item.items() items.sort() if use_edit: row = layout.row() props = row.operator("wm.properties_add", text="Add") props.data_path = context_member del row rna_properties = {prop.identifier for prop in rna_item.bl_rna.properties if prop.is_runtime} if items else None for key, val in items: if key == '_RNA_UI': continue row = layout.row() to_dict = getattr(val, "to_dict", None) to_list = getattr(val, "to_list", None) # val_orig = val # UNUSED if to_dict: val = to_dict() val_draw = str(val) elif to_list: val = to_list() val_draw = str(val) else: val_draw = val box = row.box() if use_edit: split = box.split(percentage=0.75) row = split.row() else: row = box.row() row.label(text=key, translate=False) # explicit exception for arrays is_rna = (key in rna_properties) if to_dict or to_list: row.label(text=val_draw, translate=False) else: if is_rna: row.prop(rna_item, key, text="") else: row.prop(rna_item, '["%s"]' % escape_identifier(key), text="") if use_edit: row = split.row(align=True) if not is_rna: props = row.operator("wm.properties_edit", text="Edit") assign_props(props, val_draw, key) props = row.operator("wm.properties_remove", text="", icon='ZOOMOUT') assign_props(props, val_draw, key) else: row.label(text="API Defined") class PropertyPanel: """ The subclass should have its own poll function and the variable '_context_path' MUST be set. 
""" bl_label = "Custom Properties" bl_options = {'DEFAULT_CLOSED'} @classmethod def poll(cls, context): rna_item, context_member = rna_idprop_context_value(context, cls._context_path, cls._property_type) return bool(rna_item) """ def draw_header(self, context): rna_item, context_member = rna_idprop_context_value(context, self._context_path, self._property_type) tot = len(rna_item.keys()) if tot: self.layout().label("%d:" % tot) """ def draw(self, context): draw(self.layout, context, self._context_path, self._property_type)
/apache-superset-3.0.0rc3.tar.gz/apache-superset-3.0.0rc3/superset/static/assets/a8fed202e88865a3c129.chunk.js
"use strict";(globalThis.webpackChunksuperset=globalThis.webpackChunksuperset||[]).push([[3749],{83749:(e,t,l)=>{l.r(t),l.d(t,{default:()=>o});var r,n,C=l(67294);function i(){return i=Object.assign?Object.assign.bind():function(e){for(var t=1;t<arguments.length;t++){var l=arguments[t];for(var r in l)Object.prototype.hasOwnProperty.call(l,r)&&(e[r]=l[r])}return e},i.apply(this,arguments)}function a(e,t){let{title:l,titleId:a,...o}=e;return C.createElement("svg",i({width:24,height:24,viewBox:"0 0 24 24",fill:"none",xmlns:"http://www.w3.org/2000/svg",ref:t,"aria-labelledby":a},o),l?C.createElement("title",{id:a},l):null,r||(r=C.createElement("path",{fillRule:"evenodd",clipRule:"evenodd",d:"M12 2.5C6.47715 2.5 2 6.97715 2 12.5C2 18.0228 6.47715 22.5 12 22.5C17.5228 22.5 22 18.0228 22 12.5C22 9.84784 20.9464 7.3043 19.0711 5.42893C17.1957 3.55357 14.6522 2.5 12 2.5ZM13 20.43V19.5C13 18.9477 12.5523 18.5 12 18.5C11.4477 18.5 11 18.9477 11 19.5V20.43C7.37981 19.9709 4.52909 17.1202 4.07 13.5H5C5.55228 13.5 6 13.0523 6 12.5C6 11.9477 5.55228 11.5 5 11.5H4.07C4.52909 7.87981 7.37981 5.02909 11 4.57V5.5C11 6.05228 11.4477 6.5 12 6.5C12.5523 6.5 13 6.05228 13 5.5V4.57C16.6202 5.02909 19.4709 7.87981 19.93 11.5H19C18.4477 11.5 18 11.9477 18 12.5C18 13.0523 18.4477 13.5 19 13.5H19.93C19.4709 17.1202 16.6202 19.9709 13 20.43Z",fill:"currentColor"})),n||(n=C.createElement("path",{fillRule:"evenodd",clipRule:"evenodd",d:"M10.1399 10.1701L15.1399 8.05005C15.5147 7.89198 15.9479 7.97671 16.2356 8.26436C16.5232 8.552 16.608 8.98523 16.4499 9.36005L14.3299 14.3601C14.2289 14.5931 14.043 14.7791 13.8099 14.8801L8.80989 17.0001C8.68491 17.0594 8.54826 17.0902 8.40989 17.0901C8.14612 17.0863 7.89452 16.9785 7.70989 16.7901C7.42189 16.5006 7.33877 16.0652 7.49989 15.6901L9.61989 10.6901C9.72088 10.457 9.9068 10.271 10.1399 10.1701ZM10.3699 14.1501L12.6499 13.1501L13.6499 10.8701L11.3699 11.8701L10.3699 14.1501Z",fill:"currentColor"})))}const o=C.forwardRef(a)}}]); //# sourceMappingURL=a8fed202e88865a3c129.chunk.js.map
/types_aiobotocore_medialive-2.3.4.post1-py3-none-any.whl/types_aiobotocore_medialive/paginator.py
import sys from typing import Generic, Iterator, TypeVar from aiobotocore.paginate import AioPaginator from botocore.paginate import PageIterator from .type_defs import ( DescribeScheduleResponseTypeDef, ListChannelsResponseTypeDef, ListInputDevicesResponseTypeDef, ListInputDeviceTransfersResponseTypeDef, ListInputSecurityGroupsResponseTypeDef, ListInputsResponseTypeDef, ListMultiplexesResponseTypeDef, ListMultiplexProgramsResponseTypeDef, ListOfferingsResponseTypeDef, ListReservationsResponseTypeDef, PaginatorConfigTypeDef, ) if sys.version_info >= (3, 8): from typing import AsyncIterator else: from typing_extensions import AsyncIterator __all__ = ( "DescribeSchedulePaginator", "ListChannelsPaginator", "ListInputDeviceTransfersPaginator", "ListInputDevicesPaginator", "ListInputSecurityGroupsPaginator", "ListInputsPaginator", "ListMultiplexProgramsPaginator", "ListMultiplexesPaginator", "ListOfferingsPaginator", "ListReservationsPaginator", ) _ItemTypeDef = TypeVar("_ItemTypeDef") class _PageIterator(Generic[_ItemTypeDef], PageIterator): def __iter__(self) -> Iterator[_ItemTypeDef]: """ Proxy method to specify iterator item type. """ class DescribeSchedulePaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.DescribeSchedule) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#describeschedulepaginator) """ def paginate( self, *, ChannelId: str, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[DescribeScheduleResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.DescribeSchedule.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#describeschedulepaginator) """ class ListChannelsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListChannels) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listchannelspaginator) """ def paginate( self, *, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListChannelsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListChannels.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listchannelspaginator) """ class ListInputDeviceTransfersPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputDeviceTransfers) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputdevicetransferspaginator) """ def paginate( self, *, TransferType: str, PaginationConfig: PaginatorConfigTypeDef = ... 
) -> AsyncIterator[ListInputDeviceTransfersResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputDeviceTransfers.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputdevicetransferspaginator) """ class ListInputDevicesPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputDevices) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputdevicespaginator) """ def paginate( self, *, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListInputDevicesResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputDevices.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputdevicespaginator) """ class ListInputSecurityGroupsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputSecurityGroups) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputsecuritygroupspaginator) """ def paginate( self, *, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListInputSecurityGroupsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputSecurityGroups.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputsecuritygroupspaginator) """ class ListInputsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputs) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputspaginator) """ def paginate( self, *, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListInputsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListInputs.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listinputspaginator) """ class ListMultiplexProgramsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListMultiplexPrograms) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listmultiplexprogramspaginator) """ def paginate( self, *, MultiplexId: str, PaginationConfig: PaginatorConfigTypeDef = ... 
) -> AsyncIterator[ListMultiplexProgramsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListMultiplexPrograms.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listmultiplexprogramspaginator) """ class ListMultiplexesPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListMultiplexes) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listmultiplexespaginator) """ def paginate( self, *, PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListMultiplexesResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListMultiplexes.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listmultiplexespaginator) """ class ListOfferingsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListOfferings) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listofferingspaginator) """ def paginate( self, *, ChannelClass: str = ..., ChannelConfiguration: str = ..., Codec: str = ..., Duration: str = ..., MaximumBitrate: str = ..., MaximumFramerate: str = ..., Resolution: str = ..., ResourceType: str = ..., SpecialFeature: str = ..., VideoQuality: str = ..., PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListOfferingsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListOfferings.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listofferingspaginator) """ class ListReservationsPaginator(AioPaginator): """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListReservations) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listreservationspaginator) """ def paginate( self, *, ChannelClass: str = ..., Codec: str = ..., MaximumBitrate: str = ..., MaximumFramerate: str = ..., Resolution: str = ..., ResourceType: str = ..., SpecialFeature: str = ..., VideoQuality: str = ..., PaginationConfig: PaginatorConfigTypeDef = ... ) -> AsyncIterator[ListReservationsResponseTypeDef]: """ [Show boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/medialive.html#MediaLive.Paginator.ListReservations.paginate) [Show types-aiobotocore documentation](https://youtype.github.io/types_aiobotocore_docs/types_aiobotocore_medialive/paginators/#listreservationspaginator) """
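# Hedged usage sketch (illustrative, not part of this stub module): the typed
# paginators above are obtained from a regular aiobotocore MediaLive client.
# Region, credentials and the printed response field are assumptions.
import asyncio
from aiobotocore.session import get_session

async def print_channel_ids() -> None:
    session = get_session()
    async with session.create_client("medialive", region_name="us-east-1") as client:
        paginator = client.get_paginator("list_channels")  # typed as ListChannelsPaginator
        async for page in paginator.paginate():
            for channel in page.get("Channels", []):
                print(channel.get("Id"))

# asyncio.run(print_channel_ids())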
/aesara_nightly-2.8.11.post1.tar.gz/aesara_nightly-2.8.11.post1/aesara/tensor/fft.py
import numpy as np from aesara.gradient import DisconnectedType from aesara.graph.basic import Apply from aesara.graph.op import Op from aesara.tensor.basic import as_tensor_variable from aesara.tensor.math import sqrt from aesara.tensor.subtensor import set_subtensor from aesara.tensor.type import TensorType, integer_dtypes class RFFTOp(Op): __props__ = () def output_type(self, inp): # add extra dim for real/imag return TensorType(inp.dtype, shape=(None,) * (inp.type.ndim + 1)) def make_node(self, a, s=None): a = as_tensor_variable(a) if a.ndim < 2: raise TypeError( "%s: input must have dimension > 2, with first dimension batches" % self.__class__.__name__ ) if s is None: s = a.shape[1:] s = as_tensor_variable(s) else: s = as_tensor_variable(s) if s.dtype not in integer_dtypes: raise TypeError( "%s: length of the transformed axis must be" " of type integer" % self.__class__.__name__ ) return Apply(self, [a, s], [self.output_type(a)()]) def perform(self, node, inputs, output_storage): a = inputs[0] s = inputs[1] A = np.fft.rfftn(a, s=tuple(s)) # Format output with two extra dimensions for real and imaginary # parts. out = np.zeros(A.shape + (2,), dtype=a.dtype) out[..., 0], out[..., 1] = np.real(A), np.imag(A) output_storage[0][0] = out def grad(self, inputs, output_grads): (gout,) = output_grads s = inputs[1] # Divide the last dimension of the output gradients by 2, they are # double-counted by the real-IFFT due to symmetry, except the first # and last elements (for even transforms) which are unique. idx = ( [slice(None)] * (gout.ndim - 2) + [slice(1, (s[-1] // 2) + (s[-1] % 2))] + [slice(None)] ) gout = set_subtensor(gout[idx], gout[idx] * 0.5) return [irfft_op(gout, s), DisconnectedType()()] def connection_pattern(self, node): # Specify that shape input parameter has no connection to graph and gradients. return [[True], [False]] rfft_op = RFFTOp() class IRFFTOp(Op): __props__ = () def output_type(self, inp): # remove extra dim for real/imag return TensorType(inp.dtype, shape=(None,) * (inp.type.ndim - 1)) def make_node(self, a, s=None): a = as_tensor_variable(a) if a.ndim < 3: raise TypeError( f"{self.__class__.__name__}: input must have dimension >= 3, with " + "first dimension batches and last real/imag parts" ) if s is None: s = a.shape[1:-1] s = set_subtensor(s[-1], (s[-1] - 1) * 2) s = as_tensor_variable(s) else: s = as_tensor_variable(s) if s.dtype not in integer_dtypes: raise TypeError( "%s: length of the transformed axis must be" " of type integer" % self.__class__.__name__ ) return Apply(self, [a, s], [self.output_type(a)()]) def perform(self, node, inputs, output_storage): a = inputs[0] s = inputs[1] # Reconstruct complex array from two float dimensions inp = a[..., 0] + 1j * a[..., 1] out = np.fft.irfftn(inp, s=tuple(s)) # Remove numpy's default normalization # Cast to input type (numpy outputs float64 by default) output_storage[0][0] = (out * s.prod()).astype(a.dtype) def grad(self, inputs, output_grads): (gout,) = output_grads s = inputs[1] gf = rfft_op(gout, s) # Multiply the last dimension of the gradient by 2, they represent # both positive and negative frequencies, except the first # and last elements (for even transforms) which are unique. idx = ( [slice(None)] * (gf.ndim - 2) + [slice(1, (s[-1] // 2) + (s[-1] % 2))] + [slice(None)] ) gf = set_subtensor(gf[idx], gf[idx] * 2) return [gf, DisconnectedType()()] def connection_pattern(self, node): # Specify that shape input parameter has no connection to graph and gradients. 
return [[True], [False]] irfft_op = IRFFTOp() def rfft(inp, norm=None): r""" Performs the fast Fourier transform of a real-valued input. The input must be a real-valued variable of dimensions (m, ..., n). It performs FFTs of size (..., n) on m batches. The output is a tensor of dimensions (m, ..., n//2+1, 2). The second to last dimension of the output contains the n//2+1 non-trivial elements of the real-valued FFTs. The real and imaginary parts are stored as a pair of float arrays. Parameters ---------- inp Array of floats of size (m, ..., n), containing m inputs of size (..., n). norm : {None, 'ortho', 'no_norm'} Normalization of transform. Following numpy, default *None* normalizes only the inverse transform by n, 'ortho' yields the unitary transform (:math:`1/\sqrt n` forward and inverse). In addition, 'no_norm' leaves the transform unnormalized. """ s = inp.shape[1:] cond_norm = _unitary(norm) scaling = 1 if cond_norm == "ortho": scaling = sqrt(s.prod().astype(inp.dtype)) return rfft_op(inp, s) / scaling def irfft(inp, norm=None, is_odd=False): r""" Performs the inverse fast Fourier Transform with real-valued output. The input is a variable of dimensions (m, ..., n//2+1, 2) representing the non-trivial elements of m real-valued Fourier transforms of initial size (..., n). The real and imaginary parts are stored as a pair of float arrays. The output is a real-valued variable of dimensions (m, ..., n) giving the m inverse FFTs. Parameters ---------- inp Array of size (m, ..., n//2+1, 2), containing m inputs with n//2+1 non-trivial elements on the last dimension and real and imaginary parts stored as separate real arrays. norm : {None, 'ortho', 'no_norm'} Normalization of transform. Following numpy, default *None* normalizes only the inverse transform by n, 'ortho' yields the unitary transform (:math:`1/\sqrt n` forward and inverse). In addition, 'no_norm' leaves the transform unnormalized. is_odd : {True, False} Set to True to get a real inverse transform output with an odd last dimension of length (N-1)*2 + 1 for an input last dimension of length N. """ if is_odd not in (True, False): raise ValueError(f"Invalid value {is_odd} for id_odd, must be True or False") s = inp.shape[1:-1] if is_odd: s = set_subtensor(s[-1], (s[-1] - 1) * 2 + 1) else: s = set_subtensor(s[-1], (s[-1] - 1) * 2) cond_norm = _unitary(norm) scaling = 1 # Numpy's default normalization is 1/N on the inverse transform. if cond_norm is None: scaling = s.prod().astype(inp.dtype) elif cond_norm == "ortho": scaling = sqrt(s.prod().astype(inp.dtype)) return irfft_op(inp, s) / scaling def _unitary(norm): if norm not in (None, "ortho", "no_norm"): raise ValueError( f"Invalid value {norm} for norm, must be None, 'ortho' or 'no norm'" ) return norm
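# Hedged usage sketch (illustrative, not part of this module): round-trip a
# batch of real signals through the rfft/irfft helpers defined above.
# Shapes and the "ortho" normalization are illustrative choices.
import numpy as np
import aesara
import aesara.tensor as at
from aesara.tensor.fft import rfft, irfft

x = at.matrix("x")                       # (batch, n) real-valued input
spectrum = rfft(x, norm="ortho")         # (batch, n // 2 + 1, 2) real/imag pairs
reconstructed = irfft(spectrum, norm="ortho")

f = aesara.function([x], [spectrum, reconstructed])
signal = np.random.randn(4, 32).astype(aesara.config.floatX)
spec_val, rec_val = f(signal)
print(spec_val.shape, rec_val.shape)     # (4, 17, 2) (4, 32)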
/hydro-tune-0.1.0.tar.gz/hydro-tune-0.1.0/hydro/optim/adam.py
from typing import List, Optional, Union import math import torch from torch import Tensor from torch.optim.optimizer import ( Optimizer, _use_grad_for_differentiable, _get_value, _stack_if_compiling, _dispatch_sqrt, _default_to_fused_or_foreach, _capturable_doc, _differentiable_doc, _foreach_doc, _fused_doc, _maximize_doc, ) from torch.utils._foreach_utils import _group_tensors_by_device_and_dtype from .utils import Coefficient, is_coefficient, make_coefficient, reduce_array_if_possible_for __all__ = ["Adam", "adam"] class Adam(Optimizer): r"""Pytorch 2.0 Implements Adam algorithm. """ def __init__( self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, amsgrad=False, *, scaling_num: Union[int, float] = -1, foreach: Optional[bool] = None, maximize: bool = False, capturable: bool = False, differentiable: bool = False, fused: Optional[bool] = None, ): lr, eps, beta1, beta2, weight_decay = reduce_array_if_possible_for(lr, eps, betas[0], betas[1], weight_decay) betas = (beta1, beta2) lr = make_coefficient("learning rate", lr, lb=0.0, ub=float("inf")) eps = make_coefficient("epsilon value", eps, lb=0.0, ub=float("inf")) betas = make_coefficient("beta parameter at index", betas, lb=0.0, ub=1.0, is_tuple=True) weight_decay = make_coefficient("weight_decay value", weight_decay, lb=0.0, ub=float("inf")) defaults = dict( lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad, maximize=maximize, foreach=foreach, capturable=capturable, scaling_num=scaling_num, differentiable=differentiable, fused=fused, ) super().__init__(params, defaults) if fused: if differentiable: raise RuntimeError("`fused` does not support `differentiable`") self._step_supports_amp_scaling = True # TODO(crcrpar): [low prec params & their higher prec copy] # Suppor AMP with FP16/BF16 model params which would need # higher prec copy of params to do update math in higher prec to # alleviate the loss of information. 
if not all(p.is_cuda and torch.is_floating_point(p) for pg in self.param_groups for p in pg["params"]): raise RuntimeError("`fused=True` requires all the params to be CUDA, floating point Tensor") if foreach: raise RuntimeError("`fused` and `foreach` cannot be `True` together.") def __setstate__(self, state): super().__setstate__(state) for group in self.param_groups: group.setdefault("amsgrad", False) group.setdefault("maximize", False) group.setdefault("foreach", None) group.setdefault("capturable", False) group.setdefault("differentiable", False) group.setdefault("fused", None) state_values = list(self.state.values()) step_is_tensor = (len(state_values) != 0) and torch.is_tensor(state_values[0]["step"]) if not step_is_tensor: for s in state_values: s["step"] = torch.tensor(float(s["step"])) def _init_group(self, group, params_with_grad, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps): for p in group["params"]: if p.grad is not None: params_with_grad.append(p) if p.grad.is_sparse: raise RuntimeError("Adam does not support sparse gradients, please consider SparseAdam instead") grads.append(p.grad) state = self.state[p] # Lazy state initialization if len(state) == 0: state["step"] = ( torch.zeros((1,), dtype=torch.float, device=p.device) if group["capturable"] or group["fused"] else torch.tensor(0.0) ) # Exponential moving average of gradient values state["exp_avg"] = torch.zeros_like(p, memory_format=torch.preserve_format) # Exponential moving average of squared gradient values state["exp_avg_sq"] = torch.zeros_like(p, memory_format=torch.preserve_format) if group["amsgrad"]: # Maintains max of all exp. moving avg. of sq. grad. values state["max_exp_avg_sq"] = torch.zeros_like(p, memory_format=torch.preserve_format) exp_avgs.append(state["exp_avg"]) exp_avg_sqs.append(state["exp_avg_sq"]) if group["amsgrad"]: max_exp_avg_sqs.append(state["max_exp_avg_sq"]) if group["differentiable"] and state["step"].requires_grad: raise RuntimeError("`requires_grad` is not supported for `step` in differentiable mode") state_steps.append(state["step"]) # @torch.no_grad() @_use_grad_for_differentiable def step(self, closure=None): """Performs a single optimization step. Args: closure (Callable, optional): A closure that reevaluates the model and returns the loss. """ self._cuda_graph_capture_health_check() loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: params_with_grad = [] grads = [] exp_avgs = [] exp_avg_sqs = [] max_exp_avg_sqs = [] state_steps = [] beta1, beta2 = group["betas"] self._init_group(group, params_with_grad, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps) adam( params_with_grad, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, amsgrad=group["amsgrad"], beta1=beta1, beta2=beta2, lr=group["lr"], weight_decay=group["weight_decay"], eps=group["eps"], maximize=group["maximize"], foreach=group["foreach"], capturable=group["capturable"], scaling_num=group["scaling_num"], differentiable=group["differentiable"], fused=group["fused"], grad_scale=getattr(self, "grad_scale", None), found_inf=getattr(self, "found_inf", None), ) return loss Adam.__doc__ = r"""Implements Adam algorithm. .. 
math:: \begin{aligned} &\rule{110mm}{0.4pt} \\ &\textbf{input} : \gamma \text{ (lr)}, \beta_1, \beta_2 \text{ (betas)},\theta_0 \text{ (params)},f(\theta) \text{ (objective)} \\ &\hspace{13mm} \lambda \text{ (weight decay)}, \: \textit{amsgrad}, \:\textit{maximize} \\ &\textbf{initialize} : m_0 \leftarrow 0 \text{ ( first moment)}, v_0\leftarrow 0 \text{ (second moment)},\: \widehat{v_0}^{max}\leftarrow 0\\[-1.ex] &\rule{110mm}{0.4pt} \\ &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\ &\hspace{5mm}\textbf{if} \: \textit{maximize}: \\ &\hspace{10mm}g_t \leftarrow -\nabla_{\theta} f_t (\theta_{t-1}) \\ &\hspace{5mm}\textbf{else} \\ &\hspace{10mm}g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1}) \\ &\hspace{5mm}\textbf{if} \: \lambda \neq 0 \\ &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\ &\hspace{5mm}m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\ &\hspace{5mm}v_t \leftarrow \beta_2 v_{t-1} + (1-\beta_2) g^2_t \\ &\hspace{5mm}\widehat{m_t} \leftarrow m_t/\big(1-\beta_1^t \big) \\ &\hspace{5mm}\widehat{v_t} \leftarrow v_t/\big(1-\beta_2^t \big) \\ &\hspace{5mm}\textbf{if} \: amsgrad \\ &\hspace{10mm}\widehat{v_t}^{max} \leftarrow \mathrm{max}(\widehat{v_t}^{max}, \widehat{v_t}) \\ &\hspace{10mm}\theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t}/ \big(\sqrt{\widehat{v_t}^{max}} + \epsilon \big) \\ &\hspace{5mm}\textbf{else} \\ &\hspace{10mm}\theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t}/ \big(\sqrt{\widehat{v_t}} + \epsilon \big) \\ &\rule{110mm}{0.4pt} \\[-1.ex] &\bf{return} \: \theta_t \\[-1.ex] &\rule{110mm}{0.4pt} \\[-1.ex] \end{aligned} For further details regarding the algorithm we refer to `Adam: A Method for Stochastic Optimization`_. """ + r""" Args: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999)) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) amsgrad (bool, optional): whether to use the AMSGrad variant of this algorithm from the paper `On the Convergence of Adam and Beyond`_ (default: False) {foreach} {maximize} {capturable} {differentiable} {fused} .. _Adam\: A Method for Stochastic Optimization: https://arxiv.org/abs/1412.6980 .. _On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ """.format( foreach=_foreach_doc, maximize=_maximize_doc, capturable=_capturable_doc, differentiable=_differentiable_doc, fused=_fused_doc, ) def adam( params: List[Tensor], grads: List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], max_exp_avg_sqs: List[Tensor], state_steps: List[Tensor], # kwonly args with defaults are not supported by functions compiled with torchscript issue #70627 # setting this as kwarg for now as functional API is compiled by torch/distributed/optim foreach: Optional[bool] = None, capturable: bool = False, differentiable: bool = False, fused: Optional[bool] = None, grad_scale: Optional[Tensor] = None, found_inf: Optional[Tensor] = None, *, amsgrad: bool, beta1: Union[float, Coefficient], beta2: Union[float, Coefficient], lr: Union[float, Coefficient], weight_decay: Union[float, Coefficient], eps: Union[float, Coefficient], maximize: bool, scaling_num: Union[int, float], ): r"""Functional API that performs Adam algorithm computation. 
See :class:`~torch.optim.Adam` for details. """ # Respect when the user inputs False/True for foreach or fused. We only want to change # the default when neither have been user-specified. Note that we default to foreach # and pass False to use_fused. This is not a mistake--we want to give the fused impl # bake-in time before making it the default, even if it is typically faster. if fused is None and foreach is None: # _, foreach = _default_to_fused_or_foreach(params, differentiable, use_fused=False) foreach = False # not implemented in Hydro if fused is None: fused = False if foreach is None: foreach = False if not all(isinstance(t, torch.Tensor) for t in state_steps): raise RuntimeError("API has changed, `state_steps` argument must contain a list of singleton tensors") if foreach and torch.jit.is_scripting(): raise RuntimeError("torch.jit.script not supported with foreach optimizers") if fused and not torch.jit.is_scripting(): func = _fused_adam raise NotImplementedError("Currently, `fused_adam` is not implemented in Hydro") elif foreach and not torch.jit.is_scripting(): func = _multi_tensor_adam raise NotImplementedError("Currently, `_multi_tensor_adam` is not implemented in Hydro") else: func = _single_tensor_adam func( params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, amsgrad=amsgrad, beta1=beta1, beta2=beta2, lr=lr, weight_decay=weight_decay, eps=eps, maximize=maximize, capturable=capturable, scaling_num=scaling_num, differentiable=differentiable, grad_scale=grad_scale, found_inf=found_inf, ) def _single_tensor_adam( params: List[Tensor], grads: List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], max_exp_avg_sqs: List[Tensor], state_steps: List[Tensor], grad_scale: Optional[Tensor], found_inf: Optional[Tensor], *, amsgrad: bool, beta1: Union[float, Coefficient], beta2: Union[float, Coefficient], lr: Union[float, Coefficient], weight_decay: Union[float, Coefficient], eps: Union[float, Coefficient], maximize: bool, capturable: bool, differentiable: bool, scaling_num: Union[int, float], ): for i, param in enumerate(params): grad = grads[i] if not maximize else -grads[i] exp_avg = exp_avgs[i] exp_avg_sq = exp_avg_sqs[i] step_t = state_steps[i] if capturable: assert param.is_cuda and step_t.is_cuda, "If capturable=True, params and state_steps must be CUDA tensors." raise NotImplementedError("Capturable Adam not implemented for Hydro") else: assert not step_t.is_cuda, "If capturable=False, state_steps should not be CUDA tensors." 
# update step step_t += 1 if is_coefficient(weight_decay) or weight_decay != 0: if is_coefficient(weight_decay): grad = grad + weight_decay[param] * param else: grad = grad.add(param, alpha=weight_decay) if torch.is_complex(param): grad = torch.view_as_real(grad) exp_avg = torch.view_as_real(exp_avg) exp_avg_sq = torch.view_as_real(exp_avg_sq) param = torch.view_as_real(param) # Decay the first and second moment running average coefficient if is_coefficient(beta1): exp_avg.mul_(beta1[param]).add_((1 - beta1[param]) * grad) else: exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) if is_coefficient(beta2): exp_avg_sq.mul_(beta2[param]).add_((1 - beta2[param]) * grad * grad.conj()) else: exp_avg_sq.mul_(beta2).addcmul_(grad, grad.conj(), value=1 - beta2) if capturable: step = step_t # 1 - beta1 ** step can't be captured in a CUDA graph, even if step is a CUDA tensor # (incurs "RuntimeError: CUDA error: operation not permitted when stream is capturing") bias_correction1 = 1 - torch.pow(beta1, step) bias_correction2 = 1 - torch.pow(beta2, step) step_size = lr / bias_correction1 step_size_neg = step_size.neg() bias_correction2_sqrt = bias_correction2.sqrt() if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now if differentiable: max_exp_avg_sqs_i = max_exp_avg_sqs[i].clone() else: max_exp_avg_sqs_i = max_exp_avg_sqs[i] max_exp_avg_sqs[i].copy_(torch.maximum(max_exp_avg_sqs_i, exp_avg_sq)) # Uses the max. for normalizing running avg. of gradient # Folds in (admittedly ugly) 1-elem step_size math here to avoid extra param-set-sized read+write # (can't fold it into addcdiv_ below because addcdiv_ requires value is a Number, not a Tensor) denom = (max_exp_avg_sqs[i].sqrt() / (bias_correction2_sqrt * step_size_neg)).add_(eps / step_size_neg) else: denom = (exp_avg_sq.sqrt() / (bias_correction2_sqrt * step_size_neg)).add_(eps / step_size_neg) param.addcdiv_(exp_avg, denom) else: step = step_t.item() if is_coefficient(beta1): bias_correction1 = 1 - beta1[param] ** step else: bias_correction1 = 1 - beta1**step if is_coefficient(beta2): bias_correction2_sqrt = (1 - beta2[param] ** step).sqrt() else: bias_correction2_sqrt = math.sqrt(1 - beta2**step) if scaling_num > 0 and is_coefficient(lr): if param.infshape.ninf() == 2: step_size = scaling_num * lr[param] / bias_correction1 else: step_size = lr[param] / bias_correction1 elif is_coefficient(lr): step_size = lr[param] / bias_correction1 else: step_size = lr / bias_correction1 if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now torch.maximum(max_exp_avg_sqs[i], exp_avg_sq, out=max_exp_avg_sqs[i]) # Use the max. for normalizing running avg. 
of gradient if is_coefficient(eps): denom = (max_exp_avg_sqs[i].sqrt() / bias_correction2_sqrt).add_(eps[param]) else: denom = (max_exp_avg_sqs[i].sqrt() / bias_correction2_sqrt).add_(eps) else: if is_coefficient(eps): denom = (exp_avg_sq.sqrt() / bias_correction2_sqrt).add_(eps[param]) else: denom = (exp_avg_sq.sqrt() / bias_correction2_sqrt).add_(eps) if torch.is_tensor(step_size): param.add_(-step_size * (exp_avg / denom)) else: param.addcdiv_(exp_avg, denom, value=-step_size) def _multi_tensor_adam( params: List[Tensor], grads: List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], max_exp_avg_sqs: List[Tensor], state_steps: List[Tensor], grad_scale: Optional[Tensor], found_inf: Optional[Tensor], *, amsgrad: bool, beta1: float, beta2: float, lr: float, weight_decay: float, eps: float, maximize: bool, capturable: bool, differentiable: bool, scaling_num: Union[int, float], ): if len(params) == 0: return if capturable: assert all( p.is_cuda and step.is_cuda for p, step in zip(params, state_steps) ), "If capturable=True, params and state_steps must be CUDA tensors." assert grad_scale is None and found_inf is None assert not differentiable, "_foreach ops don't support autograd" grouped_tensors = _group_tensors_by_device_and_dtype([params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps]) for ( device_params, device_grads, device_exp_avgs, device_exp_avg_sqs, device_max_exp_avg_sqs, device_state_steps, ) in grouped_tensors.values(): if maximize: device_grads = torch._foreach_neg(tuple(device_grads)) # type: ignore[assignment] # Handle complex parameters device_grads = [torch.view_as_real(x) if torch.is_complex(x) else x for x in device_grads] device_exp_avgs = [torch.view_as_real(x) if torch.is_complex(x) else x for x in device_exp_avgs] device_exp_avg_sqs = [torch.view_as_real(x) if torch.is_complex(x) else x for x in device_exp_avg_sqs] params_ = [torch.view_as_real(x) if torch.is_complex(x) else x for x in device_params] # update steps torch._foreach_add_(device_state_steps, 1) if weight_decay != 0: device_grads = torch._foreach_add(device_grads, device_params, alpha=weight_decay) # Decay the first and second moment running average coefficient torch._foreach_mul_(device_exp_avgs, beta1) torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1) torch._foreach_mul_(device_exp_avg_sqs, beta2) torch._foreach_addcmul_(device_exp_avg_sqs, device_grads, device_grads, 1 - beta2) if capturable: # TODO: use foreach_pow if/when foreach_pow is added bias_correction1 = [torch.pow(beta1, step) for step in device_state_steps] bias_correction2 = [torch.pow(beta2, step) for step in device_state_steps] # foreach_sub doesn't allow a scalar as the first arg torch._foreach_sub_(bias_correction1, 1) torch._foreach_sub_(bias_correction2, 1) torch._foreach_neg_(bias_correction1) torch._foreach_neg_(bias_correction2) # foreach_div doesn't allow a scalar as the first arg step_size = torch._foreach_div(bias_correction1, lr) torch._foreach_reciprocal_(step_size) torch._foreach_neg_(step_size) bias_correction2_sqrt = torch._foreach_sqrt(bias_correction2) if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now torch._foreach_maximum_(device_max_exp_avg_sqs, device_exp_avg_sqs) # type: ignore[assignment] # Use the max. for normalizing running avg. 
of gradient max_exp_avg_sq_sqrt = torch._foreach_sqrt(device_max_exp_avg_sqs) # Folds in (admittedly ugly) 1-elem step_size math here to avoid extra param-set-sized read+write # (can't fold it into addcdiv_ below because addcdiv_ requires value is a Number, not a Tensor) torch._foreach_div_(max_exp_avg_sq_sqrt, torch._foreach_mul(bias_correction2_sqrt, step_size)) eps_over_step_size = torch._foreach_div(step_size, eps) torch._foreach_reciprocal_(eps_over_step_size) denom = torch._foreach_add(max_exp_avg_sq_sqrt, eps_over_step_size) else: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) torch._foreach_div_(exp_avg_sq_sqrt, torch._foreach_mul(bias_correction2_sqrt, step_size)) eps_over_step_size = torch._foreach_div(step_size, eps) torch._foreach_reciprocal_(eps_over_step_size) denom = torch._foreach_add(exp_avg_sq_sqrt, eps_over_step_size) torch._foreach_addcdiv_(params_, device_exp_avgs, denom) else: bias_correction1 = [1 - beta1 ** _get_value(step) for step in device_state_steps] bias_correction2 = [1 - beta2 ** _get_value(step) for step in device_state_steps] step_size = _stack_if_compiling([(lr / bc) * -1 for bc in bias_correction1]) bias_correction2_sqrt = [_dispatch_sqrt(bc) for bc in bias_correction2] if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now torch._foreach_maximum_(device_max_exp_avg_sqs, device_exp_avg_sqs) # Use the max. for normalizing running avg. of gradient max_exp_avg_sq_sqrt = torch._foreach_sqrt(device_max_exp_avg_sqs) torch._foreach_div_(max_exp_avg_sq_sqrt, bias_correction2_sqrt) denom = torch._foreach_add(max_exp_avg_sq_sqrt, eps) else: exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) torch._foreach_div_(exp_avg_sq_sqrt, bias_correction2_sqrt) denom = torch._foreach_add(exp_avg_sq_sqrt, eps) torch._foreach_addcdiv_(params_, device_exp_avgs, denom, step_size) def _fused_adam( params: List[Tensor], grads: List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], max_exp_avg_sqs: List[Tensor], state_steps: List[Tensor], grad_scale: Optional[Tensor], found_inf: Optional[Tensor], *, amsgrad: bool, beta1: float, beta2: float, lr: float, weight_decay: float, eps: float, maximize: bool, capturable: bool, # Needed for consistency. 
differentiable: bool, scaling_num: Union[int, float], ) -> None: grouped_tensors = _group_tensors_by_device_and_dtype([params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps]) grad_scale_dict = {grad_scale.device: grad_scale} if grad_scale is not None else None found_inf_dict = {found_inf.device: found_inf} if found_inf is not None else None grouped_tensors = _group_tensors_by_device_and_dtype([params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps]) for device, dtype in grouped_tensors: ( device_params, device_grads, device_exp_avgs, device_exp_avg_sqs, device_max_exp_avg_sqs, device_state_steps, ) = grouped_tensors[(device, dtype)] if grad_scale is not None and found_inf is not None: if device not in grad_scale_dict: grad_scale_dict[device] = grad_scale.to(device, non_blocking=True) if found_inf not in found_inf_dict: found_inf_dict[device] = found_inf.to(device, non_blocking=True) device_grad_scale = grad_scale_dict[device] device_found_inf = found_inf_dict[device] else: device_grad_scale = None device_found_inf = None torch._foreach_add_(device_state_steps, 1) torch._fused_adam_( device_params, device_grads, device_exp_avgs, device_exp_avg_sqs, device_max_exp_avg_sqs, device_state_steps, amsgrad=amsgrad, lr=lr, beta1=beta1, beta2=beta2, weight_decay=weight_decay, eps=eps, maximize=maximize, grad_scale=device_grad_scale, found_inf=device_found_inf, ) if device_found_inf is not None: torch._foreach_sub_(device_state_steps, [device_found_inf] * len(device_state_steps))
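# Hedged usage sketch (illustrative, not part of this module): the Adam class
# above mirrors torch.optim.Adam's interface, plus the Hydro-specific
# `scaling_num` argument, which is left at its default here. The toy model and
# hyperparameter values are assumptions.
# (Standalone use would import it, e.g. `from hydro.optim.adam import Adam`.)
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.0)

x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()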
/pulumi_google_native-0.31.2a1689827148.tar.gz/pulumi_google_native-0.31.2a1689827148/pulumi_google_native/managedidentities/v1alpha1/domain_backup_iam_binding.py
import copy import warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from ... import _utilities from ... import iam as _iam __all__ = ['DomainBackupIamBindingArgs', 'DomainBackupIamBinding'] @pulumi.input_type class DomainBackupIamBindingArgs: def __init__(__self__, *, members: pulumi.Input[Sequence[pulumi.Input[str]]], name: pulumi.Input[str], role: pulumi.Input[str], condition: Optional[pulumi.Input['_iam.v1.ConditionArgs']] = None): """ The set of arguments for constructing a DomainBackupIamBinding resource. :param pulumi.Input[Sequence[pulumi.Input[str]]] members: Identities that will be granted the privilege in role. Each entry can have one of the following values: * user:{emailid}: An email address that represents a specific Google account. For example, [email protected] or [email protected]. * serviceAccount:{emailid}: An email address that represents a service account. For example, [email protected]. * group:{emailid}: An email address that represents a Google group. For example, [email protected]. * domain:{domain}: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com. :param pulumi.Input[str] name: The name of the resource to manage IAM policies for. :param pulumi.Input[str] role: The role that should be applied. Only one `IamBinding` can be used per role. :param pulumi.Input['_iam.v1.ConditionArgs'] condition: An IAM Condition for a given binding. """ pulumi.set(__self__, "members", members) pulumi.set(__self__, "name", name) pulumi.set(__self__, "role", role) if condition is not None: pulumi.set(__self__, "condition", condition) @property @pulumi.getter def members(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]: """ Identities that will be granted the privilege in role. Each entry can have one of the following values: * user:{emailid}: An email address that represents a specific Google account. For example, [email protected] or [email protected]. * serviceAccount:{emailid}: An email address that represents a service account. For example, [email protected]. * group:{emailid}: An email address that represents a Google group. For example, [email protected]. * domain:{domain}: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com. """ return pulumi.get(self, "members") @members.setter def members(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]): pulumi.set(self, "members", value) @property @pulumi.getter def name(self) -> pulumi.Input[str]: """ The name of the resource to manage IAM policies for. """ return pulumi.get(self, "name") @name.setter def name(self, value: pulumi.Input[str]): pulumi.set(self, "name", value) @property @pulumi.getter def role(self) -> pulumi.Input[str]: """ The role that should be applied. Only one `IamBinding` can be used per role. """ return pulumi.get(self, "role") @role.setter def role(self, value: pulumi.Input[str]): pulumi.set(self, "role", value) @property @pulumi.getter def condition(self) -> Optional[pulumi.Input['_iam.v1.ConditionArgs']]: """ An IAM Condition for a given binding. 
""" return pulumi.get(self, "condition") @condition.setter def condition(self, value: Optional[pulumi.Input['_iam.v1.ConditionArgs']]): pulumi.set(self, "condition", value) class DomainBackupIamBinding(pulumi.CustomResource): @overload def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, condition: Optional[pulumi.Input[pulumi.InputType['_iam.v1.ConditionArgs']]] = None, members: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None, name: Optional[pulumi.Input[str]] = None, role: Optional[pulumi.Input[str]] = None, __props__=None): """ Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors. :param str resource_name: The name of the resource. :param pulumi.ResourceOptions opts: Options for the resource. :param pulumi.Input[pulumi.InputType['_iam.v1.ConditionArgs']] condition: An IAM Condition for a given binding. :param pulumi.Input[Sequence[pulumi.Input[str]]] members: Identities that will be granted the privilege in role. Each entry can have one of the following values: * user:{emailid}: An email address that represents a specific Google account. For example, [email protected] or [email protected]. * serviceAccount:{emailid}: An email address that represents a service account. For example, [email protected]. * group:{emailid}: An email address that represents a Google group. For example, [email protected]. * domain:{domain}: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com. :param pulumi.Input[str] name: The name of the resource to manage IAM policies for. :param pulumi.Input[str] role: The role that should be applied. Only one `IamBinding` can be used per role. """ ... @overload def __init__(__self__, resource_name: str, args: DomainBackupIamBindingArgs, opts: Optional[pulumi.ResourceOptions] = None): """ Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors. :param str resource_name: The name of the resource. :param DomainBackupIamBindingArgs args: The arguments to use to populate this resource's properties. :param pulumi.ResourceOptions opts: Options for the resource. """ ... 
def __init__(__self__, resource_name: str, *args, **kwargs): resource_args, opts = _utilities.get_resource_args_opts(DomainBackupIamBindingArgs, pulumi.ResourceOptions, *args, **kwargs) if resource_args is not None: __self__._internal_init(resource_name, opts, **resource_args.__dict__) else: __self__._internal_init(resource_name, *args, **kwargs) def _internal_init(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, condition: Optional[pulumi.Input[pulumi.InputType['_iam.v1.ConditionArgs']]] = None, members: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None, name: Optional[pulumi.Input[str]] = None, role: Optional[pulumi.Input[str]] = None, __props__=None): opts = pulumi.ResourceOptions.merge(_utilities.get_resource_opts_defaults(), opts) if not isinstance(opts, pulumi.ResourceOptions): raise TypeError('Expected resource options to be a ResourceOptions instance') if opts.id is None: if __props__ is not None: raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource') __props__ = DomainBackupIamBindingArgs.__new__(DomainBackupIamBindingArgs) __props__.__dict__["condition"] = condition if members is None and not opts.urn: raise TypeError("Missing required property 'members'") __props__.__dict__["members"] = members if name is None and not opts.urn: raise TypeError("Missing required property 'name'") __props__.__dict__["name"] = name if role is None and not opts.urn: raise TypeError("Missing required property 'role'") __props__.__dict__["role"] = role __props__.__dict__["etag"] = None __props__.__dict__["project"] = None super(DomainBackupIamBinding, __self__).__init__( 'google-native:managedidentities/v1alpha1:DomainBackupIamBinding', resource_name, __props__, opts) @staticmethod def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions] = None) -> 'DomainBackupIamBinding': """ Get an existing DomainBackupIamBinding resource's state with the given name, id, and optional extra properties used to qualify the lookup. :param str resource_name: The unique name of the resulting resource. :param pulumi.Input[str] id: The unique provider ID of the resource to lookup. :param pulumi.ResourceOptions opts: Options for the resource. """ opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id)) __props__ = DomainBackupIamBindingArgs.__new__(DomainBackupIamBindingArgs) __props__.__dict__["condition"] = None __props__.__dict__["etag"] = None __props__.__dict__["members"] = None __props__.__dict__["name"] = None __props__.__dict__["project"] = None __props__.__dict__["role"] = None return DomainBackupIamBinding(resource_name, opts=opts, __props__=__props__) @property @pulumi.getter def condition(self) -> pulumi.Output[Optional['_iam.v1.outputs.Condition']]: """ An IAM Condition for a given binding. See https://cloud.google.com/iam/docs/conditions-overview for additional details. """ return pulumi.get(self, "condition") @property @pulumi.getter def etag(self) -> pulumi.Output[str]: """ The etag of the resource's IAM policy. """ return pulumi.get(self, "etag") @property @pulumi.getter def members(self) -> pulumi.Output[Sequence[str]]: """ Specifies the principals requesting access for a Google Cloud resource. `members` can have the following values: * `allUsers`: A special identifier that represents anyone who is on the internet; with or without a Google account. 
* `allAuthenticatedUsers`: A special identifier that represents anyone who is authenticated with a Google account or a service account. Does not include identities that come from external identity providers (IdPs) through identity federation. * `user:{emailid}`: An email address that represents a specific Google account. For example, `[email protected]` . * `serviceAccount:{emailid}`: An email address that represents a Google service account. For example, `[email protected]`. * `serviceAccount:{projectid}.svc.id.goog[{namespace}/{kubernetes-sa}]`: An identifier for a [Kubernetes service account](https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts). For example, `my-project.svc.id.goog[my-namespace/my-kubernetes-sa]`. * `group:{emailid}`: An email address that represents a Google group. For example, `[email protected]`. * `domain:{domain}`: The G Suite domain (primary) that represents all the users of that domain. For example, `google.com` or `example.com`. * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique identifier) representing a user that has been recently deleted. For example, `[email protected]?uid=123456789012345678901`. If the user is recovered, this value reverts to `user:{emailid}` and the recovered user retains the role in the binding. * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus unique identifier) representing a service account that has been recently deleted. For example, `[email protected]?uid=123456789012345678901`. If the service account is undeleted, this value reverts to `serviceAccount:{emailid}` and the undeleted service account retains the role in the binding. * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique identifier) representing a Google group that has been recently deleted. For example, `[email protected]?uid=123456789012345678901`. If the group is recovered, this value reverts to `group:{emailid}` and the recovered group retains the role in the binding. """ return pulumi.get(self, "members") @property @pulumi.getter def name(self) -> pulumi.Output[str]: """ The name of the resource to manage IAM policies for. """ return pulumi.get(self, "name") @property @pulumi.getter def project(self) -> pulumi.Output[str]: """ The project in which the resource belongs. If it is not provided, a default will be supplied. """ return pulumi.get(self, "project") @property @pulumi.getter def role(self) -> pulumi.Output[str]: """ Role that is assigned to the list of `members`, or principals. For example, `roles/viewer`, `roles/editor`, or `roles/owner`. """ return pulumi.get(self, "role")
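# Hedged usage sketch (illustrative, not part of this module): declaring the
# IAM binding above inside a Pulumi program. The backup resource path, role and
# member values are placeholders, and the import path is assumed from this
# module's package layout.
import pulumi
from pulumi_google_native.managedidentities import v1alpha1 as managedidentities

binding = managedidentities.DomainBackupIamBinding(
    "example-backup-binding",
    name="projects/my-project/locations/global/domains/my-domain.example.com/backups/my-backup",
    role="roles/viewer",
    members=["user:[email protected]"],
)

pulumi.export("etag", binding.etag)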
/sphinx_typlog_theme-0.8.0.tar.gz/sphinx_typlog_theme-0.8.0/README.rst
Sphinx Typlog Theme =================== A sphinx theme sponsored by Typlog_, created by `Hsiaoming Yang`_. .. image:: https://badgen.net/badge/donate/lepture/ff69b4 :target: https://lepture.com/donate :alt: Donate lepture .. image:: https://badgen.net/badge//patreon/f96854?icon=patreon :target: https://patreon.com/lepture :alt: Become a patreon .. image:: https://badgen.net/pypi/v/sphinx-typlog-theme :target: https://pypi.python.org/pypi/sphinx-typlog-theme/ :alt: Latest Version .. image:: https://img.shields.io/pypi/wheel/sphinx-typlog-theme.svg :target: https://pypi.python.org/pypi/sphinx-typlog-theme/ :alt: Wheel Status .. _Typlog: https://typlog.com/ .. _`Hsiaoming Yang`: https://lepture.com/ Examples -------- Here are some examples which are using this theme: - https://sphinx-typlog-theme.readthedocs.io/ - https://docs.authlib.org/ - https://webargs.readthedocs.io/
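Usage
-----

A minimal ``conf.py`` sketch is shown below. It assumes the package name
doubles as the theme name, which is the usual convention for standalone
Sphinx themes; the explicit ``html_theme_path`` fallback is shown commented
out because the ``get_path()`` helper is an assumption, not verified here.

.. code-block:: python

    # Hedged conf.py sketch; theme name assumed to match the package name.
    html_theme = "sphinx_typlog_theme"

    # Fallback if the theme is not auto-discovered via a Sphinx entry point:
    # import sphinx_typlog_theme
    # html_theme_path = [sphinx_typlog_theme.get_path()]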
/python-keyczar-2-0.724.tar.gz/python-keyczar-2-0.724/src/keyczar/keyinfo.py
from keyczar import errors class _NameId(object): def __init__(self, name, key_id): self.name = name self.id = key_id def __str__(self): return self.name class KeyType(_NameId): """ Encodes different key types and their properties: - AES - HMAC-SHA1 - DSA Private - DSA Public - RSA Private - RSA Public """ sizes = property(lambda self: self.__sizes, doc="""List of valid key sizes for this key type.""") # clients can't modify sizes def __init__(self, name, key_id, sizes, output_size): _NameId.__init__(self, name, key_id) self.__sizes = sizes self.output_size = output_size self.default_size = self.__sizes[0] def IsValidSize(self, size): return size in self.__sizes AES = KeyType("AES", 0, [128, 192, 256], 0) HMAC_SHA1 = KeyType("HMAC_SHA1", 1, [256], 20) DSA_PRIV = KeyType("DSA_PRIV", 2, [1024], 48) DSA_PUB = KeyType("DSA_PUB", 3, [1024], 48) RSA_PRIV = KeyType("RSA_PRIV", 4, [2048, 4096, 1024, 768, 512], 256) RSA_PUB = KeyType("RSA_PUB", 4, [2048, 4096, 1024, 768, 512], 256) types = {"AES": AES, "HMAC_SHA1": HMAC_SHA1, "DSA_PRIV": DSA_PRIV, "DSA_PUB": DSA_PUB, "RSA_PRIV": RSA_PRIV, "RSA_PUB": RSA_PUB} def GetType(name): try: return types[name] except KeyError: raise errors.KeyczarError("Invalid Key Type") class KeyStatus(_NameId): """ Encodes the different possible statuses of a key: - Primary: can be used to encrypt and sign new data - Active: can be used to decrypt or verify data signed previously - Inactive: can do the same functions as an active key, but about to be revoked """ PRIMARY = KeyStatus("PRIMARY", 0) ACTIVE = KeyStatus("ACTIVE", 1) INACTIVE = KeyStatus("INACTIVE", 2) statuses = {"PRIMARY": PRIMARY, "ACTIVE": ACTIVE, "INACTIVE": INACTIVE} def GetStatus(value): try: return statuses[value] except KeyError: raise errors.KeyczarError("Invalid Key Status") class KeyPurpose(_NameId): """ Encodes the different possible purposes for which a key can be used: - Decrypt and Encrypt - Encrypt (only) - Sign and Verify - Verify (only) """ DECRYPT_AND_ENCRYPT = KeyPurpose("DECRYPT_AND_ENCRYPT", 0) ENCRYPT = KeyPurpose("ENCRYPT", 1) SIGN_AND_VERIFY = KeyPurpose("SIGN_AND_VERIFY", 2) VERIFY = KeyPurpose("VERIFY", 3) purposes = {"DECRYPT_AND_ENCRYPT": DECRYPT_AND_ENCRYPT, "ENCRYPT": ENCRYPT, "SIGN_AND_VERIFY": SIGN_AND_VERIFY, "VERIFY": VERIFY} def GetPurpose(name): try: return purposes[name] except KeyError: raise errors.KeyczarError("Invalid Key Purpose") class CipherMode(_NameId): """ Encodes the different possible modes for a cipher: - Cipher Block Chaining (CBC) - Counter (CTR) - Electronic Code Book (ECB) - Cipher Block Chaining without IV (DET-CBC) """ def __init__(self, name, key_id, use_iv, OutputSizeFn): _NameId.__init__(self, name, key_id) self.use_iv = use_iv self.GetOutputSize = OutputSizeFn CBC = CipherMode("CBC", 0, True, lambda b, i: (i / b + 2) * b) CTR = CipherMode("CTR", 1, True, lambda b, i: i + b / 2) ECB = CipherMode("ECB", 2, False, lambda b, i: b) DET_CBC = CipherMode("DET_CBC", 3, False, lambda b, i: (i / b + 1) * b) modes = {"CBC": CBC, "CTR": CTR, "ECB": ECB, "DET_CBC": DET_CBC} def GetMode(name): try: return modes[name] except KeyError: raise errors.KeyczarError("Invalid Cipher Mode")
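# Illustrative usage sketch for the lookup helpers defined above (assumes this
# module is importable as keyczar.keyinfo). Each Get* helper maps a serialized
# name back to its singleton instance and raises KeyczarError for unknown names.
#
#   from keyczar import keyinfo
#
#   aes = keyinfo.GetType("AES")                  # the AES KeyType
#   aes.IsValidSize(256)                          # True
#   keyinfo.GetStatus("PRIMARY")                  # the PRIMARY KeyStatus
#   keyinfo.GetPurpose("SIGN_AND_VERIFY")         # the SIGN_AND_VERIFY purpose
#   keyinfo.GetMode("CBC").GetOutputSize(16, 32)  # output size for a 16-byte
#                                                 # block and a 32-byte input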
PypiClean
/ixnetwork_restpy-1.1.10.tar.gz/ixnetwork_restpy-1.1.10/ixnetwork_restpy/testplatform/sessions/ixnetwork/globals/topology/grpcclient/grpcclient_6743ae6e031e52a1629f0930a672ebc9.py
import sys from ixnetwork_restpy.base import Base from ixnetwork_restpy.files import Files if sys.version_info >= (3, 5): from typing import List, Any, Union class GRPCClient(Base): """gRPC Port Specific Data The GRPCClient class encapsulates a required gRPCClient resource which will be retrieved from the server every time the property is accessed. """ __slots__ = () _SDM_NAME = "gRPCClient" _SDM_ATT_MAP = { "Count": "count", "DescriptiveName": "descriptiveName", "Name": "name", "RowNames": "rowNames", } _SDM_ENUM_MAP = {} def __init__(self, parent, list_op=False): super(GRPCClient, self).__init__(parent, list_op) @property def Count(self): # type: () -> int """ Returns ------- - number: Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group. """ return self._get_attribute(self._SDM_ATT_MAP["Count"]) @property def DescriptiveName(self): # type: () -> str """ Returns ------- - str: Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context. """ return self._get_attribute(self._SDM_ATT_MAP["DescriptiveName"]) @property def Name(self): # type: () -> str """ Returns ------- - str: Name of NGPF element, guaranteed to be unique in Scenario """ return self._get_attribute(self._SDM_ATT_MAP["Name"]) @Name.setter def Name(self, value): # type: (str) -> None self._set_attribute(self._SDM_ATT_MAP["Name"], value) @property def RowNames(self): # type: () -> List[str] """ Returns ------- - list(str): Name of rows """ return self._get_attribute(self._SDM_ATT_MAP["RowNames"]) def update(self, Name=None): # type: (str) -> GRPCClient """Updates gRPCClient resource on the server. Args ---- - Name (str): Name of NGPF element, guaranteed to be unique in Scenario Raises ------ - ServerError: The server has encountered an uncategorized error condition """ return self._update(self._map_locals(self._SDM_ATT_MAP, locals())) def find(self, Count=None, DescriptiveName=None, Name=None, RowNames=None): # type: (int, str, str, List[str]) -> GRPCClient """Finds and retrieves gRPCClient resources from the server. All named parameters are evaluated on the server using regex. The named parameters can be used to selectively retrieve gRPCClient resources from the server. To retrieve an exact match ensure the parameter value starts with ^ and ends with $ By default the find method takes no parameters and will retrieve all gRPCClient resources from the server. Args ---- - Count (number): Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group. - DescriptiveName (str): Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context. - Name (str): Name of NGPF element, guaranteed to be unique in Scenario - RowNames (list(str)): Name of rows Returns ------- - self: This instance with matching gRPCClient resources retrieved from the server available through an iterator or index Raises ------ - ServerError: The server has encountered an uncategorized error condition """ return self._select(self._map_locals(self._SDM_ATT_MAP, locals())) def read(self, href): """Retrieves a single instance of gRPCClient data from the server. 
Args ---- - href (str): An href to the instance to be retrieved Returns ------- - self: This instance with the gRPCClient resources from the server available through an iterator or index Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ return self._read(href)
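# Illustrative usage sketch. The parent object used to reach this node
# ("topology_globals") is hypothetical; in practice it comes from an established
# ixnetwork_restpy session, and only the methods defined above (find, read,
# update) and the listed properties are assumed.
#
#   grpc_clients = topology_globals.GRPCClient.find(Name='^my-client$')
#   for client in grpc_clients:
#       print(client.DescriptiveName, client.Count, client.RowNames)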
PypiClean
/heqmsPkg-0.0.15-py3-none-any.whl/heqms_pkg/rel_db/save_result.py
import pandas as pd
from gensim.models.doc2vec import Doc2Vec
import heqms_pkg.config as cf
import heqms_pkg.preprocess.preprocessing as pre
import heqms_pkg.rel_db.from_db as fdb


class save_result:
    def __init__(self):
        # Load the subject and job lists
        self.part = cf.UnivConfig().partNm
        self.dbconfig = cf.DBConfig_Dev()

        # self.deptNm = cf.UnivConfig().deptNm
        # self.subject = pre.Preprocessing().ma_name('subj', self.deptNm)
        # self.job = pre.Preprocessing().ma_name('job', self.deptNm)

        # self.subject_path = 'C:/Users/dirty/python/Jupyter/dldoc2vec/subject_name.txt'
        # self.subject = pd.read_csv(self.subject_path, names=['subject_name'])
        #
        # self.job_path = 'C:/Users/dirty/python/Jupyter/dldoc2vec/job_name.txt'
        # self.job = pd.read_csv(self.job_path, names=['job_name'])
        #
        # # Preprocess the subject table
        # self.subject = self.subject.drop_duplicates(['subject_name'])
        # self.subject = self.subject.astype(str)
        # self.subject = self.subject.drop(self.subject.index[0])
        # self.subject = self.subject.reset_index(drop=True)
        # self.subject['subject_name'] = self.subject['subject_name'].str.lstrip()
        # self.subject['subject_name'] = self.subject['subject_name'].str.rstrip()
        #
        # # Preprocess the job table
        # self.job = self.job.drop_duplicates(['job_name'])
        # self.job = self.job.astype(str)
        # self.job = self.job.drop(self.job.index[0])
        # self.job = self.job.reset_index(drop=True)
        # self.job['job_name'] = self.job['job_name'].str.lstrip()
        # self.job['job_name'] = self.job['job_name'].str.rstrip()

        # self.model = Doc2Vec.load('C:/Users/dirty/python/Jupyter/dldoc2vec/model/d2v.model')

    def result(self, part1, part2, deptNm):
        # Reorder the parts (duty, job, subject) by priority
        pr = cf.PriorConfig()
        name, mdf_part1, mdf_part2 = pr.priority(part1, part2)

        # Load data (tokenized descriptions and codes)
        doc_list = pre.Preprocessing().from_df(mdf_part1, mdf_part2, deptNm)['token_description']
        nm_list = pre.Preprocessing().from_df(mdf_part1, mdf_part2, deptNm)['cd']

        # Department name -> department number
        deptNum = cf.UnivConfig.deptNo[deptNm]
        model_path = cf.linuxPath.model_path
        fileName = str(deptNum) + '_' + name + '_d2v.model'
        model = Doc2Vec.load(model_path + "/" + fileName)
        model.random.seed(42)  # fix the random seed

        # Load the job name list
        part2_nm = pre.Preprocessing().ma_code(mdf_part2, deptNm)['code']  # load the codes

        final = pd.DataFrame(columns=[str(mdf_part1), str(mdf_part2), 'similarity'])  # create the final dataframe layout

        for i in range(len(doc_list)):
            inferred_vector = model.infer_vector(doc_list[i])
            try:
                return_docs = model.docvecs.most_similar(positive=[inferred_vector], topn=1000)
            except TypeError as e:
                print(e)
                continue

            # Convert the result list into a table
            result_docs = pd.DataFrame(return_docs, columns=[str(mdf_part2), 'similarity'])
            result_docs[str(mdf_part1)] = nm_list[i]

            # for j in range(len(return_docs)):
            #     result_docs.loc[j, str(mdf_part1)] = part1_nm[i]
            #     result_docs.loc[j, str(mdf_part2)] = return_docs[j][0]
            #     result_docs.loc[j, 'similarity'] = return_docs[j][1]

            # Filter the table so that only 'subjects' remain, excluding 'jobs'
            final_docs = pd.merge(result_docs, part2_nm, left_on=str(mdf_part2), right_on='code', how='inner')
            final_docs = final_docs.drop(['code'], axis=1)
            final = final.append(final_docs[:30])

        # Fetch the code-to-name mapping for each part
        cdnm = fdb.MysqlController().select_cdName
        cdnm_part1 = cdnm(mdf_part1, deptNm)
        cdnm_part2 = cdnm(mdf_part2, deptNm)

        # Join the part names that match each code
        final = pd.merge(final, cdnm_part1, left_on=str(mdf_part1), right_on=str(mdf_part1) + '_cd', how='inner')
        final = final.drop([str(mdf_part1) + '_cd'], axis=1)
        final = pd.merge(final, cdnm_part2, left_on=str(mdf_part2), right_on=str(mdf_part2) + '_cd', how='inner')
        final = final.drop([str(mdf_part2) + '_cd'], axis=1)

        # Convert to the existing table format
        hanbat_job_subject = pd.DataFrame(
            columns=[str(mdf_part1).upper() + '_NM', str(mdf_part1).upper() + '_CD', str(mdf_part2).upper() + '_NM',
                     str(mdf_part2).upper() + '_CD', 'RELEVANCE', 'DEPARTMENT_CD'])

        hanbat_job_subject[str(mdf_part1).upper() + "_CD"] = final[str(mdf_part1)]
        hanbat_job_subject[str(mdf_part2).upper() + "_CD"] = final[str(mdf_part2)]
        hanbat_job_subject['RELEVANCE'] = final['similarity']
        hanbat_job_subject['DEPARTMENT_CD'] = cf.UnivConfig.deptNo[deptNm]
        hanbat_job_subject[str(mdf_part1).upper() + '_NM'] = final[str(mdf_part1) + '_name']
        hanbat_job_subject[str(mdf_part2).upper() + '_NM'] = final[str(mdf_part2) + '_name']

        hanbat_job_subject.reset_index(drop=True, inplace=True)
        hanbat_job_subject.sort_values(by=[str(mdf_part1).upper() + '_NM', 'RELEVANCE'], axis=0,
                                       ascending=[True, False], inplace=True)
        hanbat_job_subject.reset_index(drop=True, inplace=True)

        # Save as a CSV file
        output_path = cf.linuxPath.output_path
        finalName = str(deptNum) + '_DL_HEQM_' + name.upper() + '_RESULT.csv'
        path = output_path + "/" + finalName
        hanbat_job_subject.to_csv(path, mode='w')
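# Illustrative usage sketch (argument values are hypothetical; the valid part
# and department names are defined in heqms_pkg.config, which is not shown here):
#
#   sr = save_result()
#   sr.result('job', 'subj', 'some department name')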
PypiClean
/cg_api-1.0.3.tar.gz/cg_api-1.0.3/cg_api/config/api_config.py
class ApiConfig: class Url: PING = "/ping" SIMPLE_PRICE = "/simple/price" SIMPLE_TOKEN_PRICE = "/simple/token_price/{asset_platform_id}" SIMPLE_SUPPORTED_VS_CURRENCIES = "/simple/supported_vs_currencies" COIN_LIST = "/coins/list" COIN_MARKETS = "/coins/markets" COIN = "/coins/{coin_id}" COIN_TICKERS = "/coins/{coin_id}/tickers" COIN_HISTORY = "/coins/{coin_id}/history" COIN_MARKETCHART = "/coins/{coin_id}/market_chart" COIN_MARKETCHART_RANGE = "/coins/{coin_id}/market_chart/range" COIN_STATUS_UPDATES = "/coins/{coin_id}/status_updates" COIN_OHLC = "/coins/{coin_id}/ohlc" COIN_CONTRACT = "/coins/{asset_platform_id}/contract/{contract_address}" COIN_CONTRACT_MARKET_CHART = "/coins/{asset_platform_id}/contract/{contract_address}/market_chart/" COIN_CONTRACT_MARKET_CHART_RANGE = "/coins/{asset_platform_id}/contract/{contract_address}/market_chart/range" ASSET_PLATFORMS = "/asset_platforms" COIN_CATEGORY_LIST = "/coins/categories/list" COIN_CATEGORIES = "/coins/categories" EXCHANGES = "/exchanges" EXCHANGE_LIST = "/exchanges/list" EXCHANGE = "/exchanges/{exchange_id}" EXCHANGE_TICKERS = "/exchanges/{exchange_id}/tickers" EXCHANGE_STATUSUPDATES = "/exchanges/{exchange_id}/status_updates" EXCHANGE_VOLUMECHART = "/exchanges/{exchange_id}/volume_chart" FINANCE_PLATFORMS = "/finance_platforms" FINANCE_PRODUCTS = "/finance_products" INDEXES = "/indexes" INDEX_MARKET = "/indexes/{market_id}/{index_id}" INDEX_LIST = "/indexes/list" DERIVATIVES = "/derivatives" DERIVATIVE_EXCHANGES = "/derivatives/exchanges" DERIVATIVE_EXCHANGE = "/derivatives/exchanges/{exchange_id}" DERIVATIVE_EXCHANGE_LIST = "/derivatives/exchanges/list" STATUS_UPDATE = "/status_updates" EXCHANGE_RATES = "/exchange_rates" SEARCH = "/search" SEARCH_TRENDING = "/search/trending" GLOBAL = "/global" GLOBAL_DEFI = "/global/decentralized_finance_defi" COMPANIES_PUBLIC_TREASURY = "/companies/public_treasury/{coin_id}" class Default: SCHEME = "https" HOST = "api.coingecko.com" BASE_PATH = "/api/v3"
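# Illustrative usage: each Url constant is a format template that calling code
# fills in before issuing a request (the coin id below is just an example value).
#
#   endpoint = ApiConfig.Url.COIN_MARKETCHART.format(coin_id="bitcoin")
#   # -> "/coins/bitcoin/market_chart"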
PypiClean
/Ivolution-1.0.tar.gz/Ivolution-1.0/ivolution/gui/IvolutionWindow.py
import os import webbrowser import logging from gi.repository import Gtk, GLib from AboutDialog import AboutDialog from .. import get_data # import os # parentdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # os.sys.path.insert(0,parentdir) # import parent folder from .. import Facemovie_lib from .. import FaceParams from .. import FacemovieThread import time class IvolutionWindow(FacemovieThread.Observer, FacemovieThread.Observable): def __init__(self, name): FacemovieThread.Observer.__init__(self, name) FacemovieThread.Observable.__init__(self) self.my_logger = None self.console_logger = None self.builder = Gtk.Builder() self.builder.add_from_file(get_data('ui/IvolutionWindow.glade')) #self.builder.add_from_file("ivolution/data/ui/IvolutionWindow.glade") #self.builder.connect_signals({ "on_ivolutionwindow_destroy" : Gtk.main_quit }) self.window = self.builder.get_object("ivolution_window") self.window.show() self.builder.connect_signals(self) ## Defines parameters needed to run the FaceMovie self.root_fo = "" self.in_fo = "" # Input folder, where images are located self.out_fo = "" # Input folder, where the video will be saved self.mode = "crop" # type of video to be created self.sort = "name" # how image files will be chronologically sorted self.speed = 1 # Speed of the movie self.param = "frontal_face" # type of face profile to be searched for self.in_fo = "" # Input folder, where images are located self.process_running = False self.facemovie = None self.AboutDialog = None # class self.setup() self.setup_logger() def setup(self): """ Sets up all the default paramters and retrieve the element of the GUI we want to follow """ self.AboutDialog = AboutDialog # FIXME : Still not working self.startbutton = self.builder.get_object("startbutton") self.filechooserinput = self.builder.get_object("filechooserinput") self.filechooseroutput = self.builder.get_object("filechooseroutput") self.typecombobox = self.builder.get_object("typecombobox") self.typecombobox.set_active(0) self.speedcombobox = self.builder.get_object("speedcombobox") self.speedcombobox.set_active(0) self.cropradiobutton = self.builder.get_object("cropradiobutton") self.namesortradiobutton = self.builder.get_object("namesortradiobutton") self.progressbar = self.builder.get_object("progressbar") self.statuslabel = self.builder.get_object("statuslabel") # Signal handling related stuff def on_cropradiobutton_toggled(self,widget): """ We need to take care only of this one as both are grouped """ if widget.get_active(): # means crop is activated self.mode = "crop" else: self.mode = "conservative" def on_namesortradiobutton_toggled(self,widget): """ We need to take care only of this one as both are grouped """ if widget.get_active(): # means name is activated self.sort = "name" else: self.sort = "exif" def on_startbutton_pressed(self, widget): """ Sets all parameters and start processing """ self.my_logger.debug("start pressed") if not self.process_running: # start only if not already running self.set_parameters() self.print_parameters() # Instantiating the facemovie self.facemovie = FacemovieThread.FacemovieThread(self.face_params) self.facemovie.subscribe(self) # I want new information ! 
Subscribes to facemovie reports self.subscribe(self.facemovie) # Trying to subscribe facemovie to our messages self.facemovie.start() self.process_running = True else: self.console_logger.error("Cannot start, process already running !") self.my_logger.error("Cannot start, process already running !") def on_stopbutton_pressed(self, widget): """ Asks the Facemovie thread to terminate """ self.my_logger.debug("Stop pressed") self.console_logger.debug("Stop pressed") self.notify(["STOP", 0.0]) # Asking the Facemovie to stop self.process_running = False def on_destroy(self, widget, data=None): """Called when the IvolutionWindow is closed.""" # Clean up code for saving application state should be added here. Gtk.main_quit() print "Gtk Exited" def on_menu_about_activate(self, widget, data=None): """ Displays the about box for Ivolution # FIXME : Can start several about Dialogs at the same time """ if self.AboutDialog is not None: about = self.AboutDialog() def on_menu_help_activate(self, widget, data=None): """ Opens a browser and points to online help. """ url = "http://jlengrand.github.com/FaceMovie/" webbrowser.open(url,new=2) # in new tab if possible #print "Should open help" #Methods processing data def set_parameters(self): """ Sets all needed parameters for create the movie. """ self.in_fo = self.filechooserinput.get_current_folder() + "/" # TODO : Find correct fix self.out_fo = self.filechooseroutput.get_current_folder() + "/" # TODO : Find correct fix self.param = self.typecombobox.get_active_text() self.speed = self.speedcombobox.get_active() # We need and integer between 0 and 2 # Instantiating the face_params object that will be needed by the facemovie par_fo = os.path.join(self.root_fo, get_data("haarcascades")) self.face_params = FaceParams.FaceParams(par_fo, self.in_fo, self.out_fo, self.param, self.sort, self.mode, self.speed) def print_parameters(self): print "#########" print "Settings:" print "input folder : %s" %( self.in_fo) print "output folder : %s" %( self.out_fo) print "Face Type : %s" %( self.param) print "Speed chosen : %s" %( self.speed) print "Mode chosen : %s" %( self.mode) print "Sort method : %s" %( self.sort) print "#########" def setup_logger(self): """ Configures our logger to save error messages Start logging in file here """ personal_dir = "~/.ivolution" log_root = 'fm.log' log_file = os.path.join(os.path.expanduser(personal_dir),log_root) # create logger for 'facemovie' self.my_logger = logging.getLogger('FileLog') self.my_logger.setLevel(logging.DEBUG) # create file handler which logs even debug messages #fh = logging.StreamHandler() fh = logging.FileHandler(log_file) fh.setLevel(logging.DEBUG) # create console handler with a higher log level self.console_logger = logging.getLogger('ConsoleLog') self.console_logger.setLevel(logging.DEBUG) # not needed ch = logging.StreamHandler() #ch.setLevel(logging.DEBUG) # not needed # add the handlers to the logger self.my_logger.addHandler(fh) self.my_logger.info("######") # Separating different sessions formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') # create formatter and add it to the handlers fh.setFormatter(formatter) #ch.setFormatter(formatter) self.console_logger.addHandler(ch) def update(self, message): """ Trigerred by FacemovieThread. Uses the Observer pattern to inform the user about the progress of the current job. 
""" self.console_logger.debug(message[0]) self.my_logger.debug(message[0]) self.console_logger.debug(float(message[1])) # Uses GLib to run Thread safe operations on GUI GLib.idle_add(self.progressbar.set_fraction, float(message[1])) GLib.idle_add(self.statuslabel.set_text, message[0]) if float(message[1]) >= 1.0: # 100% of process self.my_logger.debug("Reached end of facemovie process") self.console_logger.debug("Reached end of facemovie process") self.process_running = False if __name__ == "__main__": app = IvolutionWindow() Gtk.main()
PypiClean
/openmm/file.py
import platform from typing import Iterable, Union import numpy as np import openmm try: import netCDF4 as nc FOUND_NETCDF = True except: from scipy.io import netcdf_file as nc FOUND_NETCDF = False from .. import VERSION, ArrayLike class NetCDFFile(): """ Interface for writing AMBER NetCDF trajectory and restart files. Parameters ---------- file : `str` Filename of NetCDF file to write to. If `file` does not have the :code:`.nc` extension, it will automatically be appended. mode : `str` NetCDF file access mode. restart : `bool`, default: `False` Specifies whether the NetCDF file is a trajectory or restart file. **kwargs Keyword arguments to be passed to :code:`netCDF4.Dataset` or :code:`scipy.io.netcdf_file()`. """ def __init__(self, file: str, mode: str, restart: bool = False, **kwargs): if FOUND_NETCDF: self._nc = nc.Dataset(file, mode=mode, format="NETCDF3_64BIT_OFFSET", **kwargs) else: self._nc = nc(file, mode=mode, version=2, **kwargs) self._frame = 0 if mode == "w" else self._nc["time"].shape[0] self._restart = restart def _initialize( self, N: int, cell: bool, velocities: bool, forces: bool, remd: str = None, temp0: float = None, remd_dimtype: ArrayLike = None, remd_indices: ArrayLike = None, remd_repidx: int = -1, remd_crdidx: int = -1, remd_values: ArrayLike = None) -> None: """ Initialize the NetCDF file according to AMBER NetCDF Trajectory/Restart Convention Version 1.0, Revision C (https://ambermd.org/netcdf/nctraj.xhtml). Parameters ---------- N : `int` Number of particles. cell : `bool` Specifies whether simulation box length and angle information is available. velocities : `bool` Specifies whether particle velocities should be written. forces : `bool` Specifies whether forces exerted on particles should be written. remd : `str`, :code:`{"temp", "multi"}`, optional Specifies whether information about a replica exchange molecular dynamics (REMD) simulation is written. .. container:: **Valid values**: * :code:`"temp"` for regular REMD. * :code:`"multi"` for multi-dimensional REMD. temp0 : `float`, optional Temperature that the thermostat is set to maintain for a REMD restart file only. **Reference unit**: :math:`\mathrm{K}`. remd_dimtype : array-like, optional Array specifying the exchange type(s) for the REMD dimension(s). Required for a multi-dimensional REMD restart file. remd_indices : array-like, optional Array specifying the position in all dimensions that each frame is in. Required for a multi-dimensional REMD restart file. remd_repidx : `int`, optional Overall index of the frame in replica space. remd_crdidx : `int`, optional Overall index of the frame in coordinate space. remd_values : array-like, optional Replica value the specified replica dimension has for that given frame. Required for a multi-dimensional REMD restart file. 
""" self._nc.Conventions = "AMBER" if self._restart: self._nc.Conventions += "RESTART" self._nc.ConventionVersion = "1.0" self._nc.program = "MDHelper" self._nc.programVersion = VERSION self._nc.title = (f"OpenMM {openmm.Platform.getOpenMMVersion()} / " f"{platform.node()}") if remd == "multi": self._nc.createDimension("remd_dimension", len(remd_dimtype)) self._nc.createDimension("spatial", 3) self._nc.createDimension("atom", N) if self._restart: self._nc.createDimension("frame", 1) self._nc.createVariable("coordinates", "d", ("atom", "spatial")) else: self._nc.createDimension("frame", None) self._nc.createVariable("coordinates", "f", ("frame", "atom", "spatial")) self._nc.variables["coordinates"].units = "angstrom" self._nc.createVariable("time", "d", ("frame",)) self._nc.variables["time"].units = "picosecond" if cell: self._nc.createDimension("cell_spatial", 3) self._nc.createDimension("cell_angular", 3) self._nc.createDimension("label", 5) self._nc.createVariable("spatial", "c", ("spatial",)) self._nc.variables["spatial"][:] = list("xyz") self._nc.createVariable("cell_spatial", "c", ("cell_spatial",)) self._nc.variables["cell_spatial"][:] = list("abc") self._nc.createVariable("cell_angular", "c", ("cell_angular", "label")) self._nc.variables["cell_angular"][:] = [list("alpha"), list("beta "), list("gamma")] if self._restart: self._nc.createVariable("cell_lengths", "d", ("cell_spatial",)) self._nc.createVariable("cell_angles", "d", ("cell_angular",)) else: self._nc.createVariable("cell_lengths", "f", ("frame", "cell_spatial")) self._nc.createVariable("cell_angles", "f", ("frame", "cell_angular")) self._nc.variables["cell_lengths"].units = "angstrom" self._nc.variables["cell_angles"].units = "degree" if velocities: if self._restart: self._nc.createVariable("velocities", "d", ("atom", "spatial")) else: self._nc.createVariable("velocities", "f", ("frame", "atom", "spatial")) self._nc.variables["velocities"].units = "angstrom/picosecond" self._nc.variables["velocities"].scale_factor = 20.455 if forces: if self._restart: self._nc.createVariable("forces", "d", ("atom", "spatial")) else: self._nc.createVariable("forces", "f", ("frame", "atom", "spatial")) self._nc.variables["forces"].units = "kilocalorie/mole/angstrom" if remd is not None: if remd == "temp": self._nc.createVariable("temp0", "d", ("frame",)) if self._restart: if temp0 is None: emsg = ("Temperature must be provided for a REMD " "restart file.") raise ValueError(emsg) self._nc.variables["temp0"][0] = temp0 self._nc.variables["temp0"].units = "kelvin" elif remd == "multi": self._nc.createVariable("remd_dimtype", "i", ("remd_dimension",)) self._nc.createVariable("remd_repidx", "i", ("frame",)) self._nc.createVariable("remd_crdidx", "i", ("frame",)) if self._restart: if remd_dimtype is None: emsg = ("Dimension types must be provided for a " "multi-dimensional REMD restart file.") raise ValueError(emsg) self._nc.variables["remd_dimtype"] = remd_dimtype self._nc.createVariable("remd_indices", "i", ("remd_dimension",)) if remd_indices is None: emsg = ("Dimension indices must be provided for a " "multi-dimensional REMD restart file.") raise ValueError(emsg) self._nc.variables["remd_indices"] = remd_indices self._nc.variables["remd_repidx"][0] = remd_repidx self._nc.variables["remd_crdidx"][0] = remd_crdidx self._nc.createVariable("remd_values", "d", ("remd_dimension",)) if remd_values is None: emsg = ("Replica values must be provided for a " "multi-dimensional REMD restart file.") raise ValueError(emsg) 
self._nc.variables["remd_values"][:] = remd_values else: self._nc.createVariable("remd_indices", "i", ("frame", "remd_dimension")) self._nc.createVariable("remd_values", "d", ("frame", "remd_dimension")) def write_model( self, time: Union[float, np.ndarray], coordinates: np.ndarray, velocities: np.ndarray = None, forces: np.ndarray = None, cell_lengths: np.ndarray = None, cell_angles: np.ndarray = None, *, restart: bool = False) -> None: """ Write the simulation state(s) to the NetCDF file. Parameters ---------- time : `float` or `numpy.ndarray` Time(s). The dimensionality determines whether a single or multiple frames are written. **Reference unit**: :math:`\mathrm{ps}`. coordinates : `numpy.ndarray` Particle coordinates of :math:`N` particles over :math:`N_t` frames. The dimensionality depends on whether a single or multiple frames are to be written and must be compatible with that for `time`. **Shape**: :math:`(N,\,3)` or :math:`(N_t,\,N,\,3)`. **Reference unit**: :math:`\mathrm{Å}`. velocities : `numpy.ndarray`, optional Particle velocities of :math:`N` particles over :math:`N_t` frames. The dimensionality depends on whether a single or multiple frames are to be written and must be compatible with that for `time`. **Shape**: :math:`(N,\,3)` or :math:`(N_t,\,N,\,3)`. **Reference unit**: :math:`\mathrm{Å/ps}`. forces : `numpy.ndarray`, optional Forces exerted on :math:`N` particles over :math:`N_t` frames. The dimensionality depends on whether a single or multiple frames are to be written and must be compatible with that for `time`. **Shape**: :math:`(N,\,3)` or :math:`(N_t,\,N,\,3)`. **Reference unit**: :math:`\mathrm{Å/ps}`. cell_lengths : `numpy.ndarray`, optional Simulation box dimensions. **Shape**: :math:`(3,)`. **Reference unit**: :math:`\mathrm{Å}`. cell_angles : `numpy.ndarray`, optional Angles that define the shape of the simulation box. **Shape**: :math:`(3,)`. **Reference unit**: :math:`^\circ`. restart : `bool`, keyword-only, default: `False` Prevents the frame index from being incremented if writing a NetCDF restart file. """ n_frames = len(time) if isinstance(time, Iterable) else 1 frames = slice(self._frame, self._frame + n_frames) self._nc.variables["time"][frames] = time self._nc.variables["coordinates"][frames] = coordinates if velocities is not None: self._nc.variables["velocities"][frames] = velocities if forces is not None: self._nc.variables["forces"][frames] = forces if cell_lengths is not None: self._nc.variables["cell_lengths"][frames] = cell_lengths if cell_angles is not None: self._nc.variables["cell_angles"][frames] = cell_angles self._nc.sync() if not restart: self._frame += n_frames
PypiClean
/aws_ir-0.3.0.tar.gz/aws_ir-0.3.0/README.rst
AWS IR
======

Python installable command line utility for mitigation of instance and key compromises.

Documentation
-------------

Read the full documentation on `read the docs <https://aws_ir.readthedocs.io/en/latest/>`__.

Quickstart
----------

For more information see the `user guide <https://aws_ir.readthedocs.io/en/latest/user_guide.html>`__.

Installation
************

``pip install aws_ir``

Or see `installing <https://aws_ir.readthedocs.io/en/latest/installing.html>`__.

AWS Credentials
***************

Ensure AWS credentials are configured under the user running aws_ir as documented `by amazon <https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html>`__.

Check back soon for an IAM policy featuring the minimum set of required permissions.

Optional Arguments
******************

.. code-block:: bash

    usage: aws_ir [-h] [--version] [--verbose] [--profile PROFILE]
                  [--case-number CASE_NUMBER]
                  [--examiner-cidr-range EXAMINER_CIDR_RANGE]
                  [--bucket-name BUCKET_NAME] [--dry-run]
                  {instance-compromise,key-compromise} ...

    Incident Response command line for Amazon Web Services. This command line
    interface is designed to process host and key based incursions without delay
    or error.

    positional arguments:
      {instance-compromise,key-compromise}
        instance-compromise
        key-compromise

    optional arguments:
      -h, --help            show this help message and exit
      --version             show program's version number and exit
      --verbose             log debug messages
      --profile PROFILE     A named boto profile to use instead of the default
                            profile.
      --case-number CASE_NUMBER
                            The case number to use., usually of the form
                            "cr-16-053018-2d2d"
      --examiner-cidr-range EXAMINER_CIDR_RANGE
                            The IP/CIDR for the examiner and/or the tool. This
                            will be added as the only allowed range in the
                            isolated security group.
      --bucket-name BUCKET_NAME
                            Optional. The id of the s3 bucket to use. This must
                            already exist
      --dry-run             Dry run. Pass dry run parameter to perform API calls
                            but will not modify any resources.

Key Compromise
**************

The ``aws_ir`` subcommand ``key-compromise`` disables access keys in the case of a key compromise. Its single argument is the access key id; the compromised key is disabled via the AWS API.

.. code-block:: bash

    usage: aws_ir key-compromise [-h] --access-key-id ACCESS_KEY_ID
                                 [--plugins PLUGINS]

    optional arguments:
      -h, --help            show this help message and exit
      --access-key-id ACCESS_KEY_ID
      --plugins PLUGINS     Run some or all of the plugins in a custom order.
                            Provided as a comma separated listSupported plugins:
                            disableaccess_key,revokests_key

Below is the output of running the ``key-compromise`` subcommand.

.. code-block:: bash

    $ aws_ir key-compromise --access-key-id AKIAINLHPIG64YJXPK5A
    2017-07-20T21:04:01 - aws_ir.cli - INFO - Initialization successful proceeding to incident plan.
    2017-07-20T21:04:01 - aws_ir.plans.key - INFO - Attempting key disable.
    2017-07-20T21:04:03 - aws_ir.plans.key - INFO - STS Tokens revoked issued prior to NOW.
    2017-07-20T21:04:03 - aws_ir.plans.key - INFO - Disable complete. Uploading results.
    Processing complete for cr-17-072104-7d5f
    Artifacts stored in s3://cloud-response-9cabd252416b4e5a893395c533f340b7

Instance Compromise
*******************

The ``aws_ir`` subcommand ``instance-compromise`` preserves forensic artifacts from a compromised instance after isolating the instance. Once all artifacts are collected and tagged, the compromised instance is powered off.
The ``instance-compromise`` subcommand takes three arguments, the ``instance-ip`` of the compromised instance, a ``user`` with ssh access to the target instance, and the ``ssh-key`` used for authentication. Currently ``user`` must be capable of passwordless sudo for memory capture to complete. If ``user`` does not have passwordless sudo capabilities, all artifacts save for the memory capture will be gathered.

.. code-block:: bash

    $ aws_ir instance-compromise -h
    usage: aws_ir instance-compromise [-h] [--target TARGET] [--targets TARGETS]
                                      [--user USER] [--ssh-key SSH_KEY]
                                      [--plugins PLUGINS]

    optional arguments:
      -h, --help         show this help message and exit
      --target TARGET    instance-id|instance-ip
      --targets TARGETS  File of resources to process instance-id or ip-address.
      --user USER        this is the privileged ssh user for acquiring memory from
                         the instance. Required for memory only.
      --ssh-key SSH_KEY  provide the path to the ssh private key for the user.
                         Required for memory only.
      --plugins PLUGINS  Run some or all of the plugins in a custom order.
                         Provided as a comma separated list of supported plugins:
                         examineracl_host,gather_host,isolate_host,snapsh
                         otdisks_host,stop_host,tag_host,get_memory

AWS IR saves all forensic artifacts except for disk snapshots in an s3 bucket created for each case. Disk snapshots are tagged with the same case number as the rest of the artifacts.

Below is the output of running the ``instance-compromise`` subcommand.

.. code-block:: bash

    $ aws_ir --examiner-cidr-range '4.4.4.4/32' instance-compromise --target 52.40.162.126 --user ec2-user --ssh-key ~/Downloads/testing-041.pem
    2017-07-20T21:10:50 - aws_ir.cli - INFO - Initialization successful proceeding to incident plan.
    2017-07-20T21:10:50 - aws_ir.libs.case - INFO - Initial connection to AmazonWebServices made.
    2017-07-20T21:11:03 - aws_ir.libs.case - INFO - Inventory AWS Regions Complete 14 found.
    2017-07-20T21:11:03 - aws_ir.libs.case - INFO - Inventory Availability Zones Complete 37 found.
    2017-07-20T21:11:03 - aws_ir.libs.case - INFO - Beginning inventory of resources world wide. This might take a minute...
    2017-07-20T21:11:03 - aws_ir.libs.inventory - INFO - Searching ap-south-1 for instance.
    2017-07-20T21:11:05 - aws_ir.libs.inventory - INFO - Searching eu-west-2 for instance.
    2017-07-20T21:11:05 - aws_ir.libs.inventory - INFO - Searching eu-west-1 for instance.
    2017-07-20T21:11:06 - aws_ir.libs.inventory - INFO - Searching ap-northeast-2 for instance.
    2017-07-20T21:11:07 - aws_ir.libs.inventory - INFO - Searching ap-northeast-1 for instance.
    2017-07-20T21:11:08 - aws_ir.libs.inventory - INFO - Searching sa-east-1 for instance.
    2017-07-20T21:11:09 - aws_ir.libs.inventory - INFO - Searching ca-central-1 for instance.
    2017-07-20T21:11:09 - aws_ir.libs.inventory - INFO - Searching ap-southeast-1 for instance.
    2017-07-20T21:11:10 - aws_ir.libs.inventory - INFO - Searching ap-southeast-2 for instance.
    2017-07-20T21:11:11 - aws_ir.libs.inventory - INFO - Searching eu-central-1 for instance.
    2017-07-20T21:11:12 - aws_ir.libs.inventory - INFO - Searching us-east-1 for instance.
    2017-07-20T21:11:13 - aws_ir.libs.inventory - INFO - Searching us-east-2 for instance.
    2017-07-20T21:11:13 - aws_ir.libs.inventory - INFO - Searching us-west-1 for instance.
    2017-07-20T21:11:13 - aws_ir.libs.inventory - INFO - Searching us-west-2 for instance.
    2017-07-20T21:11:14 - aws_ir.libs.case - INFO - Inventory complete. Proceeding to resource identification.
2017-07-20T21:11:14 - aws_ir.plans.host - INFO - Proceeding with incident plan steps included are ['gather_host', 'isolate_host', 'tag_host', 'snapshotdisks_host', 'examineracl_host', 'get_memory', 'stop_host'] 2017-07-20T21:11:14 - aws_ir.plans.host - INFO - Executing step gather_host. 2017-07-20T21:11:15 - aws_ir.plans.host - INFO - Executing step isolate_host. 2017-07-20T21:11:16 - aws_ir.plans.host - INFO - Executing step tag_host. 2017-07-20T21:11:17 - aws_ir.plans.host - INFO - Executing step snapshotdisks_host. True 2017-07-20T21:11:17 - aws_ir.plans.host - INFO - Executing step examineracl_host. 2017-07-20T21:11:19 - aws_ir.plans.host - INFO - Executing step get_memory. 2017-07-20T21:11:19 - aws_ir.plans.host - INFO - attempting memory run 2017-07-20T21:11:19 - aws_ir.plans.host - INFO - Attempting run margarita shotgun for ec2-user on 52.40.162.126 with /Users/akrug/Downloads/testing-041.pem 2017-07-20T21:11:21 - margaritashotgun.repository - INFO - downloading https://threatresponse-lime-modules.s3.amazonaws.com/modules/lime-4.9.32-15.41.amzn1.x86_64.ko as lime-2017-07-21T04:11:21-4.9.32-15.41.amzn1.x86_64.ko 2017-07-20T21:11:25 - margaritashotgun.memory - INFO - 52.40.162.126: dumping memory to s3://cloud-response-a0f2d7e68ef44c36a79ccfe4dcef205a/52.40.162.126-2017-07-21T04:11:19-mem.lime 2017-07-20T21:15:43 - margaritashotgun.memory - INFO - 52.40.162.126: capture 10% complete 2017-07-20T21:19:37 - margaritashotgun.memory - INFO - 52.40.162.126: capture 20% complete 2017-07-20T21:23:41 - margaritashotgun.memory - INFO - 52.40.162.126: capture 30% complete 2017-07-20T21:28:17 - margaritashotgun.memory - INFO - 52.40.162.126: capture 40% complete 2017-07-20T21:32:42 - margaritashotgun.memory - INFO - 52.40.162.126: capture 50% complete 2017-07-20T21:37:18 - margaritashotgun.memory - INFO - 52.40.162.126: capture 60% complete 2017-07-20T21:39:18 - margaritashotgun.memory - INFO - 52.40.162.126: capture 70% complete 2017-07-20T22:00:13 - margaritashotgun.memory - INFO - 52.40.162.126: capture 80% complete 2017-07-20T22:04:19 - margaritashotgun.memory - INFO - 52.40.162.126: capture 90% complete 2017-07-20T22:17:32 - margaritashotgun.memory - INFO - 52.40.162.126: capture 100% complete 2017-07-20T21:41:52 - aws_ir.plans.host - INFO - memory capture completed for: ['52.40.162.126'], failed for: [] 2017-07-20T21:41:52 - aws_ir.plans.host - INFO - Executing step stop_host. Processing complete for cr-17-072104-7d5f Artifacts stored in s3://cloud-response-a0f2d7e68ef44c36a79ccfe4dcef205a Instance Compromise -- Isolation Achieved ******************* See below that I've connected to the compromised workstation from my examiner IP address. SSH is all that is permitted due to the NACL and Security Group additions. .. code-block:: bash [root@ip-172-31-9-119 ec2-user]# yum install iotop Loaded plugins: priorities, update-motd, upgrade-helper Resolving Dependencies --> Running transaction check ---> Package iotop.noarch 0:0.3.2-7.6.amzn1 will be installed --> Finished Dependency Resolution Dependencies Resolved iotop-0.3.2-7.6.amzn1.noarch.r FAILED http://packages.us-west-1.amazonaws.com/2017.03/main/201703c0ffee/x86_64/Packages/iotop-0.3.2-7.6.amzn1.noarch.rpm?instance_id=i-0d4216a9fda54fcb6&region=us-west-2: [Errno 12] Timeout on http://packages.us-west-1.amazonaws.com/2017.03/main/201703c0ffee/x86_64/Packages/iotop-0.3.2-7.6.amzn1.noarch.rpm?instance_id=i-0d4216a9fda54fcb6&region=us-west-2: (28, 'Connection timed out after 10000 milliseconds') Trying other mirror. 
^C Exiting on user cancel [root@ip-172-31-9-119 ec2-user]# ping 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. ^C --- 4.2.2.2 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3076ms [root@ip-172-31-9-119 ec2-user]# User Guide ********** Read more about each subcommand in our `user guide <https://aws_ir.readthedocs.io/en/latest/user_guide.html>`__.
PypiClean
/graphql-example-0.4.4.tar.gz/graphql-example-0.4.4/vendor/pyflakes/checker.py
import __future__ import doctest import os import sys PY2 = sys.version_info < (3, 0) PY32 = sys.version_info < (3, 3) # Python 2.5 to 3.2 PY33 = sys.version_info < (3, 4) # Python 2.5 to 3.3 PY34 = sys.version_info < (3, 5) # Python 2.5 to 3.4 try: sys.pypy_version_info PYPY = True except AttributeError: PYPY = False builtin_vars = dir(__import__('__builtin__' if PY2 else 'builtins')) try: import ast except ImportError: # Python 2.5 import _ast as ast if 'decorator_list' not in ast.ClassDef._fields: # Patch the missing attribute 'decorator_list' ast.ClassDef.decorator_list = () ast.FunctionDef.decorator_list = property(lambda s: s.decorators) from pyflakes import messages if PY2: def getNodeType(node_class): # workaround str.upper() which is locale-dependent return str(unicode(node_class.__name__).upper()) else: def getNodeType(node_class): return node_class.__name__.upper() # Python >= 3.3 uses ast.Try instead of (ast.TryExcept + ast.TryFinally) if PY32: def getAlternatives(n): if isinstance(n, (ast.If, ast.TryFinally)): return [n.body] if isinstance(n, ast.TryExcept): return [n.body + n.orelse] + [[hdl] for hdl in n.handlers] else: def getAlternatives(n): if isinstance(n, ast.If): return [n.body] if isinstance(n, ast.Try): return [n.body + n.orelse] + [[hdl] for hdl in n.handlers] if PY34: LOOP_TYPES = (ast.While, ast.For) else: LOOP_TYPES = (ast.While, ast.For, ast.AsyncFor) class _FieldsOrder(dict): """Fix order of AST node fields.""" def _get_fields(self, node_class): # handle iter before target, and generators before element fields = node_class._fields if 'iter' in fields: key_first = 'iter'.find elif 'generators' in fields: key_first = 'generators'.find else: key_first = 'value'.find return tuple(sorted(fields, key=key_first, reverse=True)) def __missing__(self, node_class): self[node_class] = fields = self._get_fields(node_class) return fields def counter(items): """ Simplest required implementation of collections.Counter. Required as 2.6 does not have Counter in collections. """ results = {} for item in items: results[item] = results.get(item, 0) + 1 return results def iter_child_nodes(node, omit=None, _fields_order=_FieldsOrder()): """ Yield all direct child nodes of *node*, that is, all fields that are nodes and all items of fields that are lists of nodes. """ for name in _fields_order[node.__class__]: if name == omit: continue field = getattr(node, name, None) if isinstance(field, ast.AST): yield field elif isinstance(field, list): for item in field: yield item def convert_to_value(item): if isinstance(item, ast.Str): return item.s elif hasattr(ast, 'Bytes') and isinstance(item, ast.Bytes): return item.s elif isinstance(item, ast.Tuple): return tuple(convert_to_value(i) for i in item.elts) elif isinstance(item, ast.Num): return item.n elif isinstance(item, ast.Name): result = VariableKey(item=item) constants_lookup = { 'True': True, 'False': False, 'None': None, } return constants_lookup.get( result.name, result, ) elif (not PY33) and isinstance(item, ast.NameConstant): # None, True, False are nameconstants in python3, but names in 2 return item.value else: return UnhandledKeyType() class Binding(object): """ Represents the binding of a value to a name. The checker uses this to keep track of which names have been bound and which names have not. See L{Assignment} for a special type of binding that is checked with stricter rules. @ivar used: pair of (L{Scope}, node) indicating the scope and the node that this binding was last used. 
""" def __init__(self, name, source): self.name = name self.source = source self.used = False def __str__(self): return self.name def __repr__(self): return '<%s object %r from line %r at 0x%x>' % (self.__class__.__name__, self.name, self.source.lineno, id(self)) def redefines(self, other): return isinstance(other, Definition) and self.name == other.name class Definition(Binding): """ A binding that defines a function or a class. """ class UnhandledKeyType(object): """ A dictionary key of a type that we cannot or do not check for duplicates. """ class VariableKey(object): """ A dictionary key which is a variable. @ivar item: The variable AST object. """ def __init__(self, item): self.name = item.id def __eq__(self, compare): return ( compare.__class__ == self.__class__ and compare.name == self.name ) def __hash__(self): return hash(self.name) class Importation(Definition): """ A binding created by an import statement. @ivar fullName: The complete name given to the import statement, possibly including multiple dotted components. @type fullName: C{str} """ def __init__(self, name, source, full_name=None): self.fullName = full_name or name self.redefined = [] super(Importation, self).__init__(name, source) def redefines(self, other): if isinstance(other, SubmoduleImportation): # See note in SubmoduleImportation about RedefinedWhileUnused return self.fullName == other.fullName return isinstance(other, Definition) and self.name == other.name def _has_alias(self): """Return whether importation needs an as clause.""" return not self.fullName.split('.')[-1] == self.name @property def source_statement(self): """Generate a source statement equivalent to the import.""" if self._has_alias(): return 'import %s as %s' % (self.fullName, self.name) else: return 'import %s' % self.fullName def __str__(self): """Return import full name with alias.""" if self._has_alias(): return self.fullName + ' as ' + self.name else: return self.fullName class SubmoduleImportation(Importation): """ A binding created by a submodule import statement. A submodule import is a special case where the root module is implicitly imported, without an 'as' clause, and the submodule is also imported. Python does not restrict which attributes of the root module may be used. This class is only used when the submodule import is without an 'as' clause. pyflakes handles this case by registering the root module name in the scope, allowing any attribute of the root module to be accessed. RedefinedWhileUnused is suppressed in `redefines` unless the submodule name is also the same, to avoid false positives. """ def __init__(self, name, source): # A dot should only appear in the name when it is a submodule import assert '.' in name and (not source or isinstance(source, ast.Import)) package_name = name.split('.')[0] super(SubmoduleImportation, self).__init__(package_name, source) self.fullName = name def redefines(self, other): if isinstance(other, Importation): return self.fullName == other.fullName return super(SubmoduleImportation, self).redefines(other) def __str__(self): return self.fullName @property def source_statement(self): return 'import ' + self.fullName class ImportationFrom(Importation): def __init__(self, name, source, module, real_name=None): self.module = module self.real_name = real_name or name if module.endswith('.'): full_name = module + self.real_name else: full_name = module + '.' 
+ self.real_name super(ImportationFrom, self).__init__(name, source, full_name) def __str__(self): """Return import full name with alias.""" if self.real_name != self.name: return self.fullName + ' as ' + self.name else: return self.fullName @property def source_statement(self): if self.real_name != self.name: return 'from %s import %s as %s' % (self.module, self.real_name, self.name) else: return 'from %s import %s' % (self.module, self.name) class StarImportation(Importation): """A binding created by a 'from x import *' statement.""" def __init__(self, name, source): super(StarImportation, self).__init__('*', source) # Each star importation needs a unique name, and # may not be the module name otherwise it will be deemed imported self.name = name + '.*' self.fullName = name @property def source_statement(self): return 'from ' + self.fullName + ' import *' def __str__(self): # When the module ends with a ., avoid the ambiguous '..*' if self.fullName.endswith('.'): return self.source_statement else: return self.name class FutureImportation(ImportationFrom): """ A binding created by a from `__future__` import statement. `__future__` imports are implicitly used. """ def __init__(self, name, source, scope): super(FutureImportation, self).__init__(name, source, '__future__') self.used = (scope, source) class Argument(Binding): """ Represents binding a name as an argument. """ class Assignment(Binding): """ Represents binding a name with an explicit assignment. The checker will raise warnings for any Assignment that isn't used. Also, the checker does not consider assignments in tuple/list unpacking to be Assignments, rather it treats them as simple Bindings. """ class FunctionDefinition(Definition): pass class ClassDefinition(Definition): pass class ExportBinding(Binding): """ A binding created by an C{__all__} assignment. If the names in the list can be determined statically, they will be treated as names for export and additional checking applied to them. The only C{__all__} assignment that can be recognized is one which takes the value of a literal list containing literal strings. For example:: __all__ = ["foo", "bar"] Names which are imported and not otherwise used but appear in the value of C{__all__} will not have an unused import warning reported for them. """ def __init__(self, name, source, scope): if '__all__' in scope and isinstance(source, ast.AugAssign): self.names = list(scope['__all__'].names) else: self.names = [] if isinstance(source.value, (ast.List, ast.Tuple)): for node in source.value.elts: if isinstance(node, ast.Str): self.names.append(node.s) super(ExportBinding, self).__init__(name, source) class Scope(dict): importStarred = False # set to True when import * is found def __repr__(self): scope_cls = self.__class__.__name__ return '<%s at 0x%x %s>' % (scope_cls, id(self), dict.__repr__(self)) class ClassScope(Scope): pass class FunctionScope(Scope): """ I represent a name scope for a function. @ivar globals: Names declared 'global' in this function. """ usesLocals = False alwaysUsed = set(['__tracebackhide__', '__traceback_info__', '__traceback_supplement__']) def __init__(self): super(FunctionScope, self).__init__() # Simplify: manage the special locals as globals self.globals = self.alwaysUsed.copy() self.returnValue = None # First non-empty return self.isGenerator = False # Detect a generator def unusedAssignments(self): """ Return a generator for the assignments which have not been used. 
""" for name, binding in self.items(): if (not binding.used and name not in self.globals and not self.usesLocals and isinstance(binding, Assignment)): yield name, binding class GeneratorScope(Scope): pass class ModuleScope(Scope): """Scope for a module.""" _futures_allowed = True class DoctestScope(ModuleScope): """Scope for a doctest.""" # Globally defined names which are not attributes of the builtins module, or # are only present on some platforms. _MAGIC_GLOBALS = ['__file__', '__builtins__', 'WindowsError'] def getNodeName(node): # Returns node.id, or node.name, or None if hasattr(node, 'id'): # One of the many nodes with an id return node.id if hasattr(node, 'name'): # an ExceptHandler node return node.name class Checker(object): """ I check the cleanliness and sanity of Python code. @ivar _deferredFunctions: Tracking list used by L{deferFunction}. Elements of the list are two-tuples. The first element is the callable passed to L{deferFunction}. The second element is a copy of the scope stack at the time L{deferFunction} was called. @ivar _deferredAssignments: Similar to C{_deferredFunctions}, but for callables which are deferred assignment checks. """ nodeDepth = 0 offset = None traceTree = False builtIns = set(builtin_vars).union(_MAGIC_GLOBALS) _customBuiltIns = os.environ.get('PYFLAKES_BUILTINS') if _customBuiltIns: builtIns.update(_customBuiltIns.split(',')) del _customBuiltIns def __init__(self, tree, filename='(none)', builtins=None, withDoctest='PYFLAKES_DOCTEST' in os.environ): self._nodeHandlers = {} self._deferredFunctions = [] self._deferredAssignments = [] self.deadScopes = [] self.messages = [] self.filename = filename if builtins: self.builtIns = self.builtIns.union(builtins) self.withDoctest = withDoctest self.scopeStack = [ModuleScope()] self.exceptHandlers = [()] self.root = tree self.handleChildren(tree) self.runDeferred(self._deferredFunctions) # Set _deferredFunctions to None so that deferFunction will fail # noisily if called after we've run through the deferred functions. self._deferredFunctions = None self.runDeferred(self._deferredAssignments) # Set _deferredAssignments to None so that deferAssignment will fail # noisily if called after we've run through the deferred assignments. self._deferredAssignments = None del self.scopeStack[1:] self.popScope() self.checkDeadScopes() def deferFunction(self, callable): """ Schedule a function handler to be called just before completion. This is used for handling function bodies, which must be deferred because code later in the file might modify the global scope. When `callable` is called, the scope at the time this is called will be restored, however it will contain any new bindings added to it. """ self._deferredFunctions.append((callable, self.scopeStack[:], self.offset)) def deferAssignment(self, callable): """ Schedule an assignment handler to be called just after deferred function handlers. """ self._deferredAssignments.append((callable, self.scopeStack[:], self.offset)) def runDeferred(self, deferred): """ Run the callables in C{deferred} using their associated scope stack. 
""" for handler, scope, offset in deferred: self.scopeStack = scope self.offset = offset handler() def _in_doctest(self): return (len(self.scopeStack) >= 2 and isinstance(self.scopeStack[1], DoctestScope)) @property def futuresAllowed(self): if not all(isinstance(scope, ModuleScope) for scope in self.scopeStack): return False return self.scope._futures_allowed @futuresAllowed.setter def futuresAllowed(self, value): assert value is False if isinstance(self.scope, ModuleScope): self.scope._futures_allowed = False @property def scope(self): return self.scopeStack[-1] def popScope(self): self.deadScopes.append(self.scopeStack.pop()) def checkDeadScopes(self): """ Look at scopes which have been fully examined and report names in them which were imported but unused. """ for scope in self.deadScopes: # imports in classes are public members if isinstance(scope, ClassScope): continue all_binding = scope.get('__all__') if all_binding and not isinstance(all_binding, ExportBinding): all_binding = None if all_binding: all_names = set(all_binding.names) undefined = all_names.difference(scope) else: all_names = undefined = [] if undefined: if not scope.importStarred and \ os.path.basename(self.filename) != '__init__.py': # Look for possible mistakes in the export list for name in undefined: self.report(messages.UndefinedExport, scope['__all__'].source, name) # mark all import '*' as used by the undefined in __all__ if scope.importStarred: for binding in scope.values(): if isinstance(binding, StarImportation): binding.used = all_binding # Look for imported names that aren't used. for value in scope.values(): if isinstance(value, Importation): used = value.used or value.name in all_names if not used: messg = messages.UnusedImport self.report(messg, value.source, str(value)) for node in value.redefined: if isinstance(self.getParent(node), ast.For): messg = messages.ImportShadowedByLoopVar elif used: continue else: messg = messages.RedefinedWhileUnused self.report(messg, node, value.name, value.source) def pushScope(self, scopeClass=FunctionScope): self.scopeStack.append(scopeClass()) def report(self, messageClass, *args, **kwargs): self.messages.append(messageClass(self.filename, *args, **kwargs)) def getParent(self, node): # Lookup the first parent which is not Tuple, List or Starred while True: node = node.parent if not hasattr(node, 'elts') and not hasattr(node, 'ctx'): return node def getCommonAncestor(self, lnode, rnode, stop): if stop in (lnode, rnode) or not (hasattr(lnode, 'parent') and hasattr(rnode, 'parent')): return None if lnode is rnode: return lnode if (lnode.depth > rnode.depth): return self.getCommonAncestor(lnode.parent, rnode, stop) if (lnode.depth < rnode.depth): return self.getCommonAncestor(lnode, rnode.parent, stop) return self.getCommonAncestor(lnode.parent, rnode.parent, stop) def descendantOf(self, node, ancestors, stop): for a in ancestors: if self.getCommonAncestor(node, a, stop): return True return False def differentForks(self, lnode, rnode): """True, if lnode and rnode are located on different forks of IF/TRY""" ancestor = self.getCommonAncestor(lnode, rnode, self.root) parts = getAlternatives(ancestor) if parts: for items in parts: if self.descendantOf(lnode, items, ancestor) ^ \ self.descendantOf(rnode, items, ancestor): return True return False def addBinding(self, node, value): """ Called when a binding is altered. 
- `node` is the statement responsible for the change - `value` is the new value, a Binding instance """ # assert value.source in (node, node.parent): for scope in self.scopeStack[::-1]: if value.name in scope: break existing = scope.get(value.name) if existing and not self.differentForks(node, existing.source): parent_stmt = self.getParent(value.source) if isinstance(existing, Importation) and isinstance(parent_stmt, ast.For): self.report(messages.ImportShadowedByLoopVar, node, value.name, existing.source) elif scope is self.scope: if (isinstance(parent_stmt, ast.comprehension) and not isinstance(self.getParent(existing.source), (ast.For, ast.comprehension))): self.report(messages.RedefinedInListComp, node, value.name, existing.source) elif not existing.used and value.redefines(existing): self.report(messages.RedefinedWhileUnused, node, value.name, existing.source) elif isinstance(existing, Importation) and value.redefines(existing): existing.redefined.append(node) if value.name in self.scope: # then assume the rebound name is used as a global or within a loop value.used = self.scope[value.name].used self.scope[value.name] = value def getNodeHandler(self, node_class): try: return self._nodeHandlers[node_class] except KeyError: nodeType = getNodeType(node_class) self._nodeHandlers[node_class] = handler = getattr(self, nodeType) return handler def handleNodeLoad(self, node): name = getNodeName(node) if not name: return in_generators = None importStarred = None # try enclosing function scopes and global scope for scope in self.scopeStack[-1::-1]: # only generators used in a class scope can access the names # of the class. this is skipped during the first iteration if in_generators is False and isinstance(scope, ClassScope): continue try: scope[name].used = (self.scope, node) except KeyError: pass else: return importStarred = importStarred or scope.importStarred if in_generators is not False: in_generators = isinstance(scope, GeneratorScope) # look in the built-ins if name in self.builtIns: return if importStarred: from_list = [] for scope in self.scopeStack[-1::-1]: for binding in scope.values(): if isinstance(binding, StarImportation): # mark '*' imports as used for each scope binding.used = (self.scope, node) from_list.append(binding.fullName) # report * usage, with a list of possible sources from_list = ', '.join(sorted(from_list)) self.report(messages.ImportStarUsage, node, name, from_list) return if name == '__path__' and os.path.basename(self.filename) == '__init__.py': # the special name __path__ is valid only in packages return # protected with a NameError handler? 
if 'NameError' not in self.exceptHandlers[-1]: self.report(messages.UndefinedName, node, name) def handleNodeStore(self, node): name = getNodeName(node) if not name: return # if the name hasn't already been defined in the current scope if isinstance(self.scope, FunctionScope) and name not in self.scope: # for each function or module scope above us for scope in self.scopeStack[:-1]: if not isinstance(scope, (FunctionScope, ModuleScope)): continue # if the name was defined in that scope, and the name has # been accessed already in the current scope, and hasn't # been declared global used = name in scope and scope[name].used if used and used[0] is self.scope and name not in self.scope.globals: # then it's probably a mistake self.report(messages.UndefinedLocal, scope[name].used[1], name, scope[name].source) break parent_stmt = self.getParent(node) if isinstance(parent_stmt, (ast.For, ast.comprehension)) or ( parent_stmt != node.parent and not self.isLiteralTupleUnpacking(parent_stmt)): binding = Binding(name, node) elif name == '__all__' and isinstance(self.scope, ModuleScope): binding = ExportBinding(name, node.parent, self.scope) else: binding = Assignment(name, node) self.addBinding(node, binding) def handleNodeDelete(self, node): def on_conditional_branch(): """ Return `True` if node is part of a conditional body. """ current = getattr(node, 'parent', None) while current: if isinstance(current, (ast.If, ast.While, ast.IfExp)): return True current = getattr(current, 'parent', None) return False name = getNodeName(node) if not name: return if on_conditional_branch(): # We cannot predict if this conditional branch is going to # be executed. return if isinstance(self.scope, FunctionScope) and name in self.scope.globals: self.scope.globals.remove(name) else: try: del self.scope[name] except KeyError: self.report(messages.UndefinedName, node, name) def handleChildren(self, tree, omit=None): for node in iter_child_nodes(tree, omit=omit): self.handleNode(node, tree) def isLiteralTupleUnpacking(self, node): if isinstance(node, ast.Assign): for child in node.targets + [node.value]: if not hasattr(child, 'elts'): return False return True def isDocstring(self, node): """ Determine if the given node is a docstring, as long as it is at the correct place in the node tree. """ return isinstance(node, ast.Str) or (isinstance(node, ast.Expr) and isinstance(node.value, ast.Str)) def getDocstring(self, node): if isinstance(node, ast.Expr): node = node.value if not isinstance(node, ast.Str): return (None, None) if PYPY: doctest_lineno = node.lineno - 1 else: # Computed incorrectly if the docstring has backslash doctest_lineno = node.lineno - node.s.count('\n') - 1 return (node.s, doctest_lineno) def handleNode(self, node, parent): if node is None: return if self.offset and getattr(node, 'lineno', None) is not None: node.lineno += self.offset[0] node.col_offset += self.offset[1] if self.traceTree: print(' ' * self.nodeDepth + node.__class__.__name__) if self.futuresAllowed and not (isinstance(node, ast.ImportFrom) or self.isDocstring(node)): self.futuresAllowed = False self.nodeDepth += 1 node.depth = self.nodeDepth node.parent = parent try: handler = self.getNodeHandler(node.__class__) handler(node) finally: self.nodeDepth -= 1 if self.traceTree: print(' ' * self.nodeDepth + 'end ' + node.__class__.__name__) _getDoctestExamples = doctest.DocTestParser().get_examples def handleDoctests(self, node): try: if hasattr(node, 'docstring'): docstring = node.docstring # This is just a reasonable guess. 
In Python 3.7, docstrings no # longer have line numbers associated with them. This will be # incorrect if there are empty lines between the beginning # of the function and the docstring. node_lineno = node.lineno if hasattr(node, 'args'): node_lineno = max([node_lineno] + [arg.lineno for arg in node.args.args]) else: (docstring, node_lineno) = self.getDocstring(node.body[0]) examples = docstring and self._getDoctestExamples(docstring) except (ValueError, IndexError): # e.g. line 6 of the docstring for <string> has inconsistent # leading whitespace: ... return if not examples: return # Place doctest in module scope saved_stack = self.scopeStack self.scopeStack = [self.scopeStack[0]] node_offset = self.offset or (0, 0) self.pushScope(DoctestScope) underscore_in_builtins = '_' in self.builtIns if not underscore_in_builtins: self.builtIns.add('_') for example in examples: try: tree = compile(example.source, "<doctest>", "exec", ast.PyCF_ONLY_AST) except SyntaxError: e = sys.exc_info()[1] if PYPY: e.offset += 1 position = (node_lineno + example.lineno + e.lineno, example.indent + 4 + (e.offset or 0)) self.report(messages.DoctestSyntaxError, node, position) else: self.offset = (node_offset[0] + node_lineno + example.lineno, node_offset[1] + example.indent + 4) self.handleChildren(tree) self.offset = node_offset if not underscore_in_builtins: self.builtIns.remove('_') self.popScope() self.scopeStack = saved_stack def ignore(self, node): pass # "stmt" type nodes DELETE = PRINT = FOR = ASYNCFOR = WHILE = IF = WITH = WITHITEM = \ ASYNCWITH = ASYNCWITHITEM = RAISE = TRYFINALLY = EXEC = \ EXPR = ASSIGN = handleChildren PASS = ignore # "expr" type nodes BOOLOP = BINOP = UNARYOP = IFEXP = SET = \ COMPARE = CALL = REPR = ATTRIBUTE = SUBSCRIPT = \ STARRED = NAMECONSTANT = handleChildren NUM = STR = BYTES = ELLIPSIS = ignore # "slice" type nodes SLICE = EXTSLICE = INDEX = handleChildren # expression contexts are node instances too, though being constants LOAD = STORE = DEL = AUGLOAD = AUGSTORE = PARAM = ignore # same for operators AND = OR = ADD = SUB = MULT = DIV = MOD = POW = LSHIFT = RSHIFT = \ BITOR = BITXOR = BITAND = FLOORDIV = INVERT = NOT = UADD = USUB = \ EQ = NOTEQ = LT = LTE = GT = GTE = IS = ISNOT = IN = NOTIN = \ MATMULT = ignore # additional node types COMPREHENSION = KEYWORD = FORMATTEDVALUE = JOINEDSTR = handleChildren def DICT(self, node): # Complain if there are duplicate keys with different values # If they have the same value it's not going to cause potentially # unexpected behaviour so we'll not complain. keys = [ convert_to_value(key) for key in node.keys ] key_counts = counter(keys) duplicate_keys = [ key for key, count in key_counts.items() if count > 1 ] for key in duplicate_keys: key_indices = [i for i, i_key in enumerate(keys) if i_key == key] values = counter( convert_to_value(node.values[index]) for index in key_indices ) if any(count == 1 for value, count in values.items()): for key_index in key_indices: key_node = node.keys[key_index] if isinstance(key, VariableKey): self.report(messages.MultiValueRepeatedKeyVariable, key_node, key.name) else: self.report( messages.MultiValueRepeatedKeyLiteral, key_node, key, ) self.handleChildren(node) def ASSERT(self, node): if isinstance(node.test, ast.Tuple) and node.test.elts != []: self.report(messages.AssertTuple, node) self.handleChildren(node) def GLOBAL(self, node): """ Keep track of globals declarations. 
""" global_scope_index = 1 if self._in_doctest() else 0 global_scope = self.scopeStack[global_scope_index] # Ignore 'global' statement in global scope. if self.scope is not global_scope: # One 'global' statement can bind multiple (comma-delimited) names. for node_name in node.names: node_value = Assignment(node_name, node) # Remove UndefinedName messages already reported for this name. # TODO: if the global is not used in this scope, it does not # become a globally defined name. See test_unused_global. self.messages = [ m for m in self.messages if not isinstance(m, messages.UndefinedName) or m.message_args[0] != node_name] # Bind name to global scope if it doesn't exist already. global_scope.setdefault(node_name, node_value) # Bind name to non-global scopes, but as already "used". node_value.used = (global_scope, node) for scope in self.scopeStack[global_scope_index + 1:]: scope[node_name] = node_value NONLOCAL = GLOBAL def GENERATOREXP(self, node): self.pushScope(GeneratorScope) self.handleChildren(node) self.popScope() LISTCOMP = handleChildren if PY2 else GENERATOREXP DICTCOMP = SETCOMP = GENERATOREXP def NAME(self, node): """ Handle occurrence of Name (which can be a load/store/delete access.) """ # Locate the name in locals / function / globals scopes. if isinstance(node.ctx, (ast.Load, ast.AugLoad)): self.handleNodeLoad(node) if (node.id == 'locals' and isinstance(self.scope, FunctionScope) and isinstance(node.parent, ast.Call)): # we are doing locals() call in current scope self.scope.usesLocals = True elif isinstance(node.ctx, (ast.Store, ast.AugStore)): self.handleNodeStore(node) elif isinstance(node.ctx, ast.Del): self.handleNodeDelete(node) else: # must be a Param context -- this only happens for names in function # arguments, but these aren't dispatched through here raise RuntimeError("Got impossible expression context: %r" % (node.ctx,)) def CONTINUE(self, node): # Walk the tree up until we see a loop (OK), a function or class # definition (not OK), for 'continue', a finally block (not OK), or # the top module scope (not OK) n = node while hasattr(n, 'parent'): n, n_child = n.parent, n if isinstance(n, LOOP_TYPES): # Doesn't apply unless it's in the loop itself if n_child not in n.orelse: return if isinstance(n, (ast.FunctionDef, ast.ClassDef)): break # Handle Try/TryFinally difference in Python < and >= 3.3 if hasattr(n, 'finalbody') and isinstance(node, ast.Continue): if n_child in n.finalbody: self.report(messages.ContinueInFinally, node) return if isinstance(node, ast.Continue): self.report(messages.ContinueOutsideLoop, node) else: # ast.Break self.report(messages.BreakOutsideLoop, node) BREAK = CONTINUE def RETURN(self, node): if isinstance(self.scope, (ClassScope, ModuleScope)): self.report(messages.ReturnOutsideFunction, node) return if ( node.value and hasattr(self.scope, 'returnValue') and not self.scope.returnValue ): self.scope.returnValue = node.value self.handleNode(node.value, node) def YIELD(self, node): if isinstance(self.scope, (ClassScope, ModuleScope)): self.report(messages.YieldOutsideFunction, node) return self.scope.isGenerator = True self.handleNode(node.value, node) AWAIT = YIELDFROM = YIELD def FUNCTIONDEF(self, node): for deco in node.decorator_list: self.handleNode(deco, node) self.LAMBDA(node) self.addBinding(node, FunctionDefinition(node.name, node)) # doctest does not process doctest within a doctest, # or in nested functions. 
if (self.withDoctest and not self._in_doctest() and not isinstance(self.scope, FunctionScope)): self.deferFunction(lambda: self.handleDoctests(node)) ASYNCFUNCTIONDEF = FUNCTIONDEF def LAMBDA(self, node): args = [] annotations = [] if PY2: def addArgs(arglist): for arg in arglist: if isinstance(arg, ast.Tuple): addArgs(arg.elts) else: args.append(arg.id) addArgs(node.args.args) defaults = node.args.defaults else: for arg in node.args.args + node.args.kwonlyargs: args.append(arg.arg) annotations.append(arg.annotation) defaults = node.args.defaults + node.args.kw_defaults # Only for Python3 FunctionDefs is_py3_func = hasattr(node, 'returns') for arg_name in ('vararg', 'kwarg'): wildcard = getattr(node.args, arg_name) if not wildcard: continue args.append(wildcard if PY33 else wildcard.arg) if is_py3_func: if PY33: # Python 2.5 to 3.3 argannotation = arg_name + 'annotation' annotations.append(getattr(node.args, argannotation)) else: # Python >= 3.4 annotations.append(wildcard.annotation) if is_py3_func: annotations.append(node.returns) if len(set(args)) < len(args): for (idx, arg) in enumerate(args): if arg in args[:idx]: self.report(messages.DuplicateArgument, node, arg) for child in annotations + defaults: if child: self.handleNode(child, node) def runFunction(): self.pushScope() for name in args: self.addBinding(node, Argument(name, node)) if isinstance(node.body, list): # case for FunctionDefs for stmt in node.body: self.handleNode(stmt, node) else: # case for Lambdas self.handleNode(node.body, node) def checkUnusedAssignments(): """ Check to see if any assignments have not been used. """ for name, binding in self.scope.unusedAssignments(): self.report(messages.UnusedVariable, binding.source, name) self.deferAssignment(checkUnusedAssignments) if PY32: def checkReturnWithArgumentInsideGenerator(): """ Check to see if there is any return statement with arguments but the function is a generator. """ if self.scope.isGenerator and self.scope.returnValue: self.report(messages.ReturnWithArgsInsideGenerator, self.scope.returnValue) self.deferAssignment(checkReturnWithArgumentInsideGenerator) self.popScope() self.deferFunction(runFunction) def CLASSDEF(self, node): """ Check names used in a class definition, including its decorators, base classes, and the body of its definition. Additionally, add its name to the current scope. """ for deco in node.decorator_list: self.handleNode(deco, node) for baseNode in node.bases: self.handleNode(baseNode, node) if not PY2: for keywordNode in node.keywords: self.handleNode(keywordNode, node) self.pushScope(ClassScope) # doctest does not process doctest within a doctest # classes within classes are processed. if (self.withDoctest and not self._in_doctest() and not isinstance(self.scope, FunctionScope)): self.deferFunction(lambda: self.handleDoctests(node)) for stmt in node.body: self.handleNode(stmt, node) self.popScope() self.addBinding(node, ClassDefinition(node.name, node)) def AUGASSIGN(self, node): self.handleNodeLoad(node.target) self.handleNode(node.value, node) self.handleNode(node.target, node) def TUPLE(self, node): if not PY2 and isinstance(node.ctx, ast.Store): # Python 3 advanced tuple unpacking: a, *b, c = d. # Only one starred expression is allowed, and no more than 1<<8 # assignments are allowed before a stared expression. 
There is # also a limit of 1<<24 expressions after the starred expression, # which is impossible to test due to memory restrictions, but we # add it here anyway has_starred = False star_loc = -1 for i, n in enumerate(node.elts): if isinstance(n, ast.Starred): if has_starred: self.report(messages.TwoStarredExpressions, node) # The SyntaxError doesn't distinguish two from more # than two. break has_starred = True star_loc = i if star_loc >= 1 << 8 or len(node.elts) - star_loc - 1 >= 1 << 24: self.report(messages.TooManyExpressionsInStarredAssignment, node) self.handleChildren(node) LIST = TUPLE def IMPORT(self, node): for alias in node.names: if '.' in alias.name and not alias.asname: importation = SubmoduleImportation(alias.name, node) else: name = alias.asname or alias.name importation = Importation(name, node, alias.name) self.addBinding(node, importation) def IMPORTFROM(self, node): if node.module == '__future__': if not self.futuresAllowed: self.report(messages.LateFutureImport, node, [n.name for n in node.names]) else: self.futuresAllowed = False module = ('.' * node.level) + (node.module or '') for alias in node.names: name = alias.asname or alias.name if node.module == '__future__': importation = FutureImportation(name, node, self.scope) if alias.name not in __future__.all_feature_names: self.report(messages.FutureFeatureNotDefined, node, alias.name) elif alias.name == '*': # Only Python 2, local import * is a SyntaxWarning if not PY2 and not isinstance(self.scope, ModuleScope): self.report(messages.ImportStarNotPermitted, node, module) continue self.scope.importStarred = True self.report(messages.ImportStarUsed, node, module) importation = StarImportation(module, node) else: importation = ImportationFrom(name, node, module, alias.name) self.addBinding(node, importation) def TRY(self, node): handler_names = [] # List the exception handlers for i, handler in enumerate(node.handlers): if isinstance(handler.type, ast.Tuple): for exc_type in handler.type.elts: handler_names.append(getNodeName(exc_type)) elif handler.type: handler_names.append(getNodeName(handler.type)) if handler.type is None and i < len(node.handlers) - 1: self.report(messages.DefaultExceptNotLast, handler) # Memorize the except handlers and process the body self.exceptHandlers.append(handler_names) for child in node.body: self.handleNode(child, node) self.exceptHandlers.pop() # Process the other nodes: "except:", "else:", "finally:" self.handleChildren(node, omit='body') TRYEXCEPT = TRY def EXCEPTHANDLER(self, node): if PY2 or node.name is None: self.handleChildren(node) return # 3.x: the name of the exception, which is not a Name node, but # a simple string, creates a local that is only bound within the scope # of the except: block. for scope in self.scopeStack[::-1]: if node.name in scope: is_name_previously_defined = True break else: is_name_previously_defined = False self.handleNodeStore(node) self.handleChildren(node) if not is_name_previously_defined: # See discussion on https://github.com/PyCQA/pyflakes/pull/59 # We're removing the local name since it's being unbound # after leaving the except: block and it's always unbound # if the except: block is never entered. This will cause an # "undefined name" error raised if the checked code tries to # use the name afterwards. # # Unless it's been removed already. Then do nothing. try: del self.scope[node.name] except KeyError: pass def ANNASSIGN(self, node): if node.value: # Only bind the *targets* if the assignment has a value. 
# Otherwise it's not really ast.Store and shouldn't silence # UndefinedLocal warnings. self.handleNode(node.target, node) self.handleNode(node.annotation, node) if node.value: # If the assignment has value, handle the *value* now. self.handleNode(node.value, node)
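# A minimal driving sketch for the Checker above, following the usual pattern of
# parsing source into an AST and reading the collected messages afterwards. The
# file name and source string are placeholders for illustration only.
if __name__ == "__main__":  # pragma: no cover - illustrative only
    import ast

    example_source = "import os\nimport sys\nprint(sys.argv)\n"
    tree = ast.parse(example_source, filename="example.py")
    checker = Checker(tree, filename="example.py")
    for message in checker.messages:
        print(message)  # e.g. an UnusedImport warning for the unused 'os' import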
PypiClean
/Orange3%E2%80%93MNE-1.0.13.tar.gz/Orange3–MNE-1.0.13/Orange3MNE/EegClassification/models/SkLearnClassifier.py
from sklearn import metrics class SkLearnClassifier: def __init__(self, model): self.model = model def get_model(self): return self.model def fit(self, x_train, y_train, x_val, y_val): """ Fits the model using training data Taken from original source code: https://bitbucket.org/lvareka/cnnforgtn/src/eb1327b165c02b8cb1dce6059df163117086a357/models/sklearnclassifier.py#lines-23 :param x: x[number of examples x number of channels x samples in each epoch x 1] :param y: y[number of examples x number of categories - default 2] :param x_val: validation data to evaluate :param y_val: validation data to evaluate """ if x_train.ndim == 4: x_train = x_train.reshape((x_train.shape[-4], -1)) if len(y_train.shape) > 1: self.model.fit(x_train, y_train[:, 0]) else: self.model.fit(x_train, y_train) return self.evaluate(x_val, y_val) def evaluate(self, x, y): """ Evaluates the classifier using testing data: Taken from original source code: https://bitbucket.org/lvareka/cnnforgtn/src/eb1327b165c02b8cb1dce6059df163117086a357/models/sklearnclassifier.py#lines-35 :param x: [number of examples x length of each feature vector] :param y: [number of examples x number of categories - default 2] :return: [(loss) accuracy precision recall] """ predictions = [] real_outputs = [] for i in range(x.shape[0]): pattern = x[i, :].reshape(1, -1) prediction = self.model.predict(pattern) predictions.append(prediction[0]) if len(y.shape) > 1: real_outputs.append(y[i, 0]) else: real_outputs.append(y[i]) acc = metrics.accuracy_score(real_outputs, predictions) try: auc = metrics.roc_auc_score(real_outputs, predictions) prec = metrics.precision_score(real_outputs, predictions) recall = metrics.recall_score(real_outputs, predictions) return [acc, auc, prec, recall] except ValueError as err: print(err) return [acc]
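# Illustrative sketch of wrapping a scikit-learn estimator with the class above.
# The data shapes and random labels are assumptions made only for demonstration;
# any estimator exposing fit/predict should plug in the same way.
if __name__ == "__main__":  # pragma: no cover - illustrative only
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=0)
    x_train = rng.normal(size=(100, 8, 32, 1))   # examples x channels x samples x 1
    y_train = rng.integers(0, 2, size=(100, 2))  # targets, first column is the label
    x_val = rng.normal(size=(20, 8, 32, 1))
    y_val = rng.integers(0, 2, size=(20, 2))

    clf = SkLearnClassifier(LogisticRegression(max_iter=1000))
    scores = clf.fit(x_train, y_train, x_val, y_val)
    print(scores)  # [accuracy, auc, precision, recall], or [accuracy] if AUC fails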
PypiClean
/brownian_stock-0.0.47-py3-none-any.whl/brownian_stock/repository/statements_sql_repository.py
from logging import getLogger from typing import List, Optional, Tuple import pandas as pd import polars as pl import sqlalchemy import tqdm from ..models.statements import StatementsHistory from . import repository_path as rp logger = getLogger(__name__) class StatementsSQLRepository: def __init__(self, repository_path: rp.AbstractRepositoryPath): self.repository_path = repository_path def load(self, limit: Optional[int] = None) -> List[StatementsHistory]: statements_list = [] failed_list: List[Tuple[str, Exception]] = [] conn = self.__get_connection() brand_df = pl.read_sql("SELECT Code FROM brand;", conn) brand_list = brand_df["Code"].unique().to_list() for brand in tqdm.tqdm(brand_list): try: df = load_statements(conn, brand) if df is None: continue statements = StatementsHistory(df) statements_list.append(statements) except Exception as e: failed_list.append((brand, e)) for code, error in failed_list: logger.error(f"[*] Failed to load {code}") logger.exception(error) return statements_list def __get_connection(self) -> str: """Sqlite用のConnectionStringを生成する https://sfu-db.github.io/connector-x/databases/sqlite.html """ conn = "sqlite:///" + str(self.repository_path.sqlite_path.absolute()) return conn def log(self, msg: str) -> None: print(msg) def load_statements(conn_str: str, brand: str) -> Optional[pl.DataFrame]: """DBから決算情報を読み込む""" query = f""" SELECT * FROM statements JOIN brand ON statements.LocalCode = brand.Code WHERE statements.LocalCode = '{brand}'; """.replace( "\n", "" ) engine = sqlalchemy.create_engine(conn_str) """""" with engine.connect() as conn: pandas_df = pd.read_sql(sqlalchemy.text(query), conn) pandas_df.drop(columns="id", inplace=True) df: pl.DataFrame = pl.from_pandas(pandas_df) if len(df) == 0: return None return df
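# Illustrative sketch of calling load_statements directly. The SQLite path and
# brand code are placeholders, and the query assumes the `statements` and `brand`
# tables already exist in that database.
if __name__ == "__main__":  # pragma: no cover - illustrative only
    conn_str = "sqlite:///" + "/path/to/repository.sqlite"  # placeholder path
    df = load_statements(conn_str, "7203")  # placeholder brand code
    if df is None:
        print("No statements stored for this brand.")
    else:
        print(df.head())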
PypiClean
/robotframework-lsp-1.11.0.tar.gz/robotframework-lsp-1.11.0/robotframework_ls/vendored/robocorp_ls_core/libs/robotidy_lib/robotidy/transformers/ReplaceRunKeywordIf.py
from robot.api.parsing import ElseHeader, ElseIfHeader, End, If, IfHeader, KeywordCall, Token from robotidy.disablers import skip_if_disabled, skip_section_if_disabled from robotidy.transformers import Transformer from robotidy.utils import after_last_dot, is_var, normalize_name def insert_separators(indent, tokens, separator): yield Token(Token.SEPARATOR, indent) for token in tokens[:-1]: yield token yield Token(Token.SEPARATOR, separator) yield tokens[-1] yield Token(Token.EOL) class ReplaceRunKeywordIf(Transformer): """ Replace ``Run Keyword If`` keyword calls with IF expressions. Following code: ```robotframework *** Keywords *** Keyword Run Keyword If ${condition} ... Keyword ${arg} ... ELSE IF ${condition2} Keyword2 ... ELSE Keyword3 ``` Will be transformed to: ```robotframework *** Keywords *** Keyword IF ${condition} Keyword ${arg} ELSE IF ${condition2} Keyword2 ELSE Keyword3 END ``` Any return value will be applied to every ``ELSE``/``ELSE IF`` branch: ```robotframework *** Keywords *** Keyword ${var} Run Keyword If ${condition} Keyword ELSE Keyword2 ``` Output: ```robotframework *** Keywords *** Keyword IF ${condition} ${var} Keyword ELSE ${var} Keyword2 END ``` Run Keywords inside Run Keyword If will be split into separate keywords: ```robotframework *** Keywords *** Keyword Run Keyword If ${condition} Run Keywords Keyword ${arg} AND Keyword2 ``` Output: ```robotframework *** Keywords *** Keyword IF ${condition} Keyword ${arg} Keyword2 END ``` """ @skip_section_if_disabled def visit_Section(self, node): # noqa return self.generic_visit(node) @skip_if_disabled def visit_KeywordCall(self, node): # noqa if not node.keyword: return node if after_last_dot(normalize_name(node.keyword)) == "runkeywordif": return self.create_branched(node) return node def create_branched(self, node): separator = node.tokens[0] assign = node.get_tokens(Token.ASSIGN) raw_args = node.get_tokens(Token.ARGUMENT) if len(raw_args) < 2: return node end = End([separator, Token(Token.END), Token(Token.EOL)]) prev_if = None for branch in reversed(list(self.split_args_on_delimiters(raw_args, ("ELSE", "ELSE IF"), assign=assign))): if branch[0].value == "ELSE": if len(branch) < 2: return node args = branch[1:] if self.check_for_useless_set_variable(args, assign): continue header = ElseHeader([separator, Token(Token.ELSE), Token(Token.EOL)]) elif branch[0].value == "ELSE IF": if len(branch) < 3: return node header = ElseIfHeader( [ separator, Token(Token.ELSE_IF), Token(Token.SEPARATOR, self.formatting_config.separator), branch[1], Token(Token.EOL), ] ) args = branch[2:] else: if len(branch) < 2: return node header = IfHeader( [ separator, Token(Token.IF), Token(Token.SEPARATOR, self.formatting_config.separator), branch[0], Token(Token.EOL), ] ) args = branch[1:] keywords = self.create_keywords(args, assign, separator.value + self.formatting_config.indent) if_block = If(header=header, body=keywords, orelse=prev_if) prev_if = if_block prev_if.end = end return prev_if def create_keywords(self, arg_tokens, assign, indent): keyword_name = normalize_name(arg_tokens[0].value) if keyword_name == "runkeywords": return [ self.args_to_keyword(keyword[1:], assign, indent) for keyword in self.split_args_on_delimiters(arg_tokens, ("AND",)) ] elif is_var(keyword_name): keyword_token = Token(Token.KEYWORD_NAME, "Run Keyword") arg_tokens = [keyword_token] + arg_tokens return [self.args_to_keyword(arg_tokens, assign, indent)] def args_to_keyword(self, arg_tokens, assign, indent): separated_tokens = list( insert_separators( indent, 
[*assign, Token(Token.KEYWORD, arg_tokens[0].value), *arg_tokens[1:]], self.formatting_config.separator, ) ) return KeywordCall.from_tokens(separated_tokens) @staticmethod def split_args_on_delimiters(args, delimiters, assign=None): split_points = [index for index, arg in enumerate(args) if arg.value in delimiters] prev_index = 0 for split_point in split_points: yield args[prev_index:split_point] prev_index = split_point yield args[prev_index : len(args)] if assign and "ELSE" in delimiters and not any(arg.value == "ELSE" for arg in args): values = [Token(Token.ARGUMENT, "${None}")] * len(assign) yield [Token(Token.ELSE), Token(Token.ARGUMENT, "Set Variable"), *values] @staticmethod def check_for_useless_set_variable(tokens, assign): if not assign or normalize_name(tokens[0].value) != "setvariable" or len(tokens[1:]) != len(assign): return False for var, var_assign in zip(tokens[1:], assign): if normalize_name(var.value) != normalize_name(var_assign.value): return False return True
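# In practice this transformer is applied through the robotidy command line, for
# example limiting a run to just this transformer on a suite directory (the path
# is a placeholder):
#
#     robotidy --transform ReplaceRunKeywordIf tests/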
PypiClean
/recital-client-0.2.1.tar.gz/recital-client-0.2.1/recital/api/named_entities/post_entities.py
from typing import Any, Dict, List, Optional, Union import httpx from ...client import AuthenticatedClient from ...models.entity_in import EntityIn from ...models.entity_in_db import EntityInDB from ...models.http_validation_error import HTTPValidationError from ...types import Response def _get_kwargs( *, client: AuthenticatedClient, version_id: int, json_body: List[EntityIn], ) -> Dict[str, Any]: url = "{}/api/v1/files/versions/{version_id}/entities/".format(client.base_url, version_id=version_id) headers: Dict[str, Any] = client.get_headers() cookies: Dict[str, Any] = client.get_cookies() json_json_body = [] for json_body_item_data in json_body: json_body_item = json_body_item_data.to_dict() json_json_body.append(json_body_item) return { "url": url, "headers": headers, "cookies": cookies, "timeout": client.get_timeout(), "json": json_json_body, } def _parse_response( *, response: httpx.Response ) -> Optional[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: if response.status_code == 201: response_201 = [] _response_201 = response.json() for response_201_item_data in _response_201: response_201_item = EntityInDB.from_dict(response_201_item_data) response_201.append(response_201_item) return response_201 if response.status_code == 401: response_401 = None return response_401 if response.status_code == 404: response_404 = None return response_404 if response.status_code == 403: response_403 = None return response_403 if response.status_code == 422: response_422 = HTTPValidationError.from_dict(response.json()) return response_422 return None def _build_response( *, response: httpx.Response ) -> Response[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: return Response( status_code=response.status_code, content=response.content, headers=response.headers, parsed=_parse_response(response=response), ) def sync_detailed( *, client: AuthenticatedClient, version_id: int, json_body: List[EntityIn], ) -> Response[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: kwargs = _get_kwargs( client=client, version_id=version_id, json_body=json_body, ) response = httpx.post( **kwargs, ) return _build_response(response=response) def sync( *, client: AuthenticatedClient, version_id: int, json_body: List[EntityIn], ) -> Optional[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: """ Add named entities to a file version. Only services can access this route. """ return sync_detailed( client=client, version_id=version_id, json_body=json_body, ).parsed async def asyncio_detailed( *, client: AuthenticatedClient, version_id: int, json_body: List[EntityIn], ) -> Response[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: kwargs = _get_kwargs( client=client, version_id=version_id, json_body=json_body, ) async with httpx.AsyncClient() as _client: response = await _client.post(**kwargs) return _build_response(response=response) async def asyncio( *, client: AuthenticatedClient, version_id: int, json_body: List[EntityIn], ) -> Optional[Union[List[EntityInDB], None, None, None, HTTPValidationError]]: """ Add named entities to a file version. Only services can access this route. """ return ( await asyncio_detailed( client=client, version_id=version_id, json_body=json_body, ) ).parsed
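# Illustrative sketch of calling the synchronous helper above. The base URL,
# token, version id and the entity payload fields are placeholders/assumptions
# made only for demonstration.
if __name__ == "__main__":  # pragma: no cover - illustrative only
    client = AuthenticatedClient(base_url="https://api.example.com", token="API_TOKEN")
    entities = [EntityIn.from_dict({"text": "ACME Corp", "label": "ORG"})]  # assumed fields
    created = sync(client=client, version_id=123, json_body=entities)
    print(created)  # list of EntityInDB on success, None/HTTPValidationError otherwise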
PypiClean
/dork-compose-1.13.0.0.0.1.tar.gz/dork-compose-1.13.0.0.0.1/dork_compose/plugin.py
import os import contextlib from compose.cli.command import get_client from compose.cli.docker_client import docker_client from compose.config import config from compose.config.environment import Environment from dork_compose.injections import dork_config_load from compose.const import API_VERSIONS from compose.project import Project from helpers import notdefault, tru import pkg_resources import filelock import logging log = logging.getLogger(__name__) @contextlib.contextmanager def load(plugins, command): instances = [] environment = { 'DORK_PROJECT': 'default', 'DORK_INSTANCE': 'default', 'DORK_SOURCE': os.path.abspath(os.curdir), 'DORK_DATA_DIR': '~/.dork', } environment.update(os.environ) for plugin in plugins.split(':'): local = {} if '=' in plugin: (plugin, f) = plugin.split('=') else: f = "%s/%s.py" % (pkg_resources.resource_filename('dork_compose', 'plugins'), plugin) f = os.path.expanduser(f) try: execfile(os.path.expanduser(f), local) instances.append(local['Plugin'](environment.copy(), plugin, command)) environment.update(instances[-1].environment()) log.debug('Loaded plugin %s.' % plugin) except Exception as ex: log.warning('Could not load plugin %s: %s' % (plugin, ex)) pass # If there is no explicit project name in the environment, set # [project]--[instance] if 'COMPOSE_PROJECT_NAME' not in environment: parts = filter(tru, [ notdefault(environment['DORK_PROJECT']), notdefault(environment['DORK_INSTANCE']) ]) if parts: environment.update({ 'COMPOSE_PROJECT_NAME': '--'.join(parts) }) os.environ.update(environment) try: yield instances finally: for instance in instances: instance.cleanup() class Plugin(object): """ Interface definition for plugins that can interact with the docker-compose process. """ def __init__(self, env, name, command): self.name = name self.env = env.copy() self.log = logging.getLogger(__name__) def initialize(self): return True def cleanup(self): pass def environment(self): return {} @property def basedir(self): return self.env['DORK_SOURCE'] @property def datadir(self): return os.path.expanduser(self.env['DORK_DATA_DIR']) @property def lockdir(self): return '%s/locks' % self.datadir @property def project(self): return self.env['DORK_PROJECT'] @property def instance(self): return self.env['DORK_INSTANCE'] def info(self, project): return {} def alter_config_schema(self, schema): pass def preprocess_config(self, config): """ Alter the docker-compose configuration object. The object is passed by reference. :type config: compose.config.config.Config """ pass def building(self, service, no_cache, pull, force_rm): pass def after_build(self, service, no_cache, pull, force_rm): pass def initializing(self, project, service_names=None): pass def creating_container(self, service): pass def starting_container(self, container): pass def initialized(self, project, containers=None): pass def removing(self, project, include_volumes=False): pass def removed(self, project, include_volumes=False): pass def snapshot_save(self, snapshots=(), volumes=()): """ Save the current volumes under the names provided. :type snapshots: list[str] """ pass def snapshot_load(self, snapshots=(), volumes=()): """ Try to load the snapshots provided. If multiple snapshots are requested the last valid one should be used. Returns the id of the snapshot loaded. :type snapshots: list[str] :rtype: str """ pass def snapshot_rm(self, snapshots=()): """ Remove the list of snapshots. Invalid snapshots are silently ignored. Return a list of snapshot id's that actually have been removed. 
:type snapshots: list[str] :rtype: list[str] """ return [] def snapshot_ls(self): """ List available snapshots. I the snapshots parameter is provided, reduce check for existence of these and return the narrowed list. :rtype: list[str] """ return [] def snapshot_autosave(self): """ Choose an automatic name for the next snapshot. :rtype: str """ return None def snapshot_autoload(self, snapshots=()): """ Choose the most appropriate snapshot to be loaded from the list provided. If no snapshot applies, return [None]. :type snapshots: list[str] :rtype: str """ return None def snapshot_autoclean(self, snapshots=()): """ Decide which snapshots in the list can be cleaned up safely, without loosing information. :type snapshots: list[str] :rtype: list[str] """ return [] @property def auxiliary_project(self): return None @property def auxiliary_project_name(self): return 'dork_aux_%s' % self.name def attach_auxiliary_project(self, network): if not self.auxiliary_project: return aux = self.get_auxiliary_project() if not os.path.exists(self.lockdir): os.makedirs(self.lockdir) lock = filelock.FileLock("%s/%s" % (self.lockdir, self.auxiliary_project_name)) with lock.acquire(60): aux.up(detached=True, remove_orphans=True) client = docker_client(self.environment()) containers = client.containers(filters={ 'label': [ 'org.iamdork.auxiliary.network', 'com.docker.compose.project=%s' % self.auxiliary_project_name ], }) for container in containers: if network not in container['NetworkSettings']['Networks']: client.connect_container_to_network(container, network) def detach_auxiliary_project(self, network): if not self.auxiliary_project: return aux = self.get_auxiliary_project() if not os.path.exists(self.lockdir): os.makedirs(self.lockdir) lock = filelock.FileLock("%s/%s" % (self.lockdir, self.auxiliary_project_name)) with lock.acquire(60): client = docker_client(self.environment()) containers = client.containers(filters={ 'label': [ 'org.iamdork.auxiliary.network', 'com.docker.compose.project=%s' % self.auxiliary_project_name ], }) for container in containers: if network in container['NetworkSettings']['Networks']: if (len(container['NetworkSettings']['Networks']) - 1) == len(aux.networks.networks): aux.down(remove_image_type=None, include_volumes=False, remove_orphans=True) break else: client.disconnect_container_from_network(container, network) def get_auxiliary_project(self): config_details = config.find(self.auxiliary_project, [], Environment(self.environment())) project_name = self.auxiliary_project_name config_data = dork_config_load([], config_details) client = get_client(self.environment(), version=API_VERSIONS[config_data.version]) return Project.from_config(project_name, config_data, client)
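# A minimal external plugin, as load() above expects it, lives in its own file
# referenced through a `name=/path/to/my_plugin.py` entry in the plugins string
# and must expose a top-level class called Plugin. A sketch (the returned
# environment variable is a placeholder):
#
#     from dork_compose.plugin import Plugin as BasePlugin
#
#     class Plugin(BasePlugin):
#         def environment(self):
#             return {'MY_PLUGIN_FLAG': '1'}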
PypiClean
/jupyterhub_url_sharing-0.1.0.tar.gz/jupyterhub_url_sharing-0.1.0/node_modules/webpack/lib/container/ContainerPlugin.js
"use strict"; const createSchemaValidation = require("../util/create-schema-validation"); const ContainerEntryDependency = require("./ContainerEntryDependency"); const ContainerEntryModuleFactory = require("./ContainerEntryModuleFactory"); const ContainerExposedDependency = require("./ContainerExposedDependency"); const { parseOptions } = require("./options"); /** @typedef {import("../../declarations/plugins/container/ContainerPlugin").ContainerPluginOptions} ContainerPluginOptions */ /** @typedef {import("../Compiler")} Compiler */ const validate = createSchemaValidation( require("../../schemas/plugins/container/ContainerPlugin.check.js"), () => require("../../schemas/plugins/container/ContainerPlugin.json"), { name: "Container Plugin", baseDataPath: "options" } ); const PLUGIN_NAME = "ContainerPlugin"; class ContainerPlugin { /** * @param {ContainerPluginOptions} options options */ constructor(options) { validate(options); this._options = { name: options.name, shareScope: options.shareScope || "default", library: options.library || { type: "var", name: options.name }, runtime: options.runtime, filename: options.filename || undefined, exposes: parseOptions( options.exposes, item => ({ import: Array.isArray(item) ? item : [item], name: undefined }), item => ({ import: Array.isArray(item.import) ? item.import : [item.import], name: item.name || undefined }) ) }; } /** * Apply the plugin * @param {Compiler} compiler the compiler instance * @returns {void} */ apply(compiler) { const { name, exposes, shareScope, filename, library, runtime } = this._options; if (!compiler.options.output.enabledLibraryTypes.includes(library.type)) { compiler.options.output.enabledLibraryTypes.push(library.type); } compiler.hooks.make.tapAsync(PLUGIN_NAME, (compilation, callback) => { const dep = new ContainerEntryDependency(name, exposes, shareScope); dep.loc = { name }; compilation.addEntry( compilation.options.context, dep, { name, filename, runtime, library }, error => { if (error) return callback(error); callback(); } ); }); compiler.hooks.thisCompilation.tap( PLUGIN_NAME, (compilation, { normalModuleFactory }) => { compilation.dependencyFactories.set( ContainerEntryDependency, new ContainerEntryModuleFactory() ); compilation.dependencyFactories.set( ContainerExposedDependency, normalModuleFactory ); } ); } } module.exports = ContainerPlugin;
PypiClean
/certora_cli_alpha_naftali_CERT_1897_parametric_instantiation_always-20230507.7.41.137068-py3-none-any.whl/certora_cli/certoraRun.py
import sys import time import logging from typing import List, Optional from pathlib import Path scripts_dir_path = Path(__file__).parent.resolve() # containing directory sys.path.insert(0, str(scripts_dir_path)) from Shared.certoraUtils import run_jar_cmd from Shared.certoraUtils import check_results_from_file, is_ci_or_git_action, run_local_spec_check from Shared.certoraUtils import remove_file, is_new_api from Shared.certoraUtils import CertoraUserInputError from Shared.certoraUtils import get_certora_internal_dir, safe_create_dir from Shared.certoraUtils import Mode, reset_certora_internal_dir from Shared.certoraUtils import print_completion_message, mode_has_spec_file from EVMVerifier.certoraCloudIO import CloudVerification, validate_version_and_branch from EVMVerifier.certoraCloudIO import validate_version_and_branch_new_api from EVMVerifier.certoraCollectRunMetadata import collect_run_metadata from Shared.certoraLogging import LoggingManager from EVMVerifier.certoraBuild import build from EVMVerifier.certoraContext import get_local_run_cmd, get_args, handle_flags_in_args from EVMVerifier import certoraContextValidator as Cv BUILD_SCRIPT_PATH = Path("EVMVerifier/certoraBuild.py") # logger for issues regarding the general run flow. # Also serves as the default logger for errors originating from unexpected places. run_logger = logging.getLogger("run") def run_certora(args: List[str], is_library: bool = False) -> Optional[Path]: """ The main function that is responsible for the general flow of the script. The general flow is: 1. Parse program arguments 2. Run the necessary steps (type checking/ build/ cloud verification/ local verification) 3. Shut down IMPORTANT - if run_certora is not run with is_library set to true we assume the scripts always reaches the shut down code. DO NOT USE SYS.EXIT() IN THE SCRIPT FILES! If is_library is set to False The program terminates with an exit code of 0 in case of success and 1 otherwise If is_library is set to True and the prover does not run locally the link to the status url is returned, else None is returned """ # If we are not in debug mode, we do not want to print the traceback in case of exceptions. if '--debug' not in args: # We check manually, because we want no traceback in argument parsing exceptions sys.tracebacklimit = 0 # creating the default internal dir, files may be copied to user defined build directory after # parsing the input reset_certora_internal_dir() safe_create_dir(get_certora_internal_dir(), revert=False) logging_manager = LoggingManager() # adds ' around arguments with spaces pretty_args = [f"'{arg}'" if ' ' in str(arg) else str(arg) for arg in args] if is_new_api(): handle_flags_in_args(args) context, conf_dict = get_args(args) # Parse arguments logging_manager.set_log_level_and_format(is_quiet=context.short_output, debug=context.debug, debug_topics=context.debug_topics, show_debug_topics=context.show_debug_topics) if context.short_output is False: if is_ci_or_git_action(): context.short_output = True timings = {} exit_code = 0 # The exit code of the script. 0 means success, any other number is an error. return_value = None try: collect_run_metadata(wd=Path.cwd(), raw_args=sys.argv, conf_dict=conf_dict, context=context) \ .dump() # When a TAC file is provided, no build arguments will be processed if context.mode not in [Mode.TAC]: run_logger.debug(f"There is no TAC file. 
Going to script {BUILD_SCRIPT_PATH} to main_with_args()") build_start = time.perf_counter() # If we are not in CI, we also check the spec for Syntax errors. build(context, ignore_spec_syntax_check=is_library) build_end = time.perf_counter() timings["buildTime"] = round(build_end - build_start, 4) if not context.build_only and exit_code == 0: # either we skipped building (TAC MODE) or build succeeded if context.local: compare_with_expected_file = Path(context.expected_file).exists() specified_tool_output = context.tool_output is not None # If we want to compare results we have tell the jar where to store the output of the current run, # But we don't want to override the path if it was specified if compare_with_expected_file and not specified_tool_output: context.tool_output = 'tmpOutput.json' check_cmd = get_local_run_cmd(context) # In local mode, this is reserved for Certora devs, so let the script print it print(f"Verifier run command:\n {check_cmd}", flush=True) run_result = \ run_jar_cmd(check_cmd, compare_with_expected_file, logger_topic="verification", print_output=True) if run_result != 0: exit_code = 1 else: print_completion_message("Finished running verifier:") print(f"\t{check_cmd}") if compare_with_expected_file: print("Comparing tool output to the expected output:") result = check_results_from_file(context.tool_output, context.expected_file) if not result: exit_code = 1 if not specified_tool_output: # Remove actual before starting the current test remove_file(context.tool_output) else: # Remote run # In cloud mode, we first run a local type checker """ Before running the local type checker, we see if the current package version is compatible with the latest. We check it before running the local type checker, because local type checking errors could be simply a result of syntax introduced in the newest version. The line below Will raise an exception if the local version is incompatible. """ if is_new_api(): validate_version_and_branch_new_api(context) else: validate_version_and_branch(context.cloud if context.cloud else context.staging, context.commit_sha1) # Syntax checking and typechecking if mode_has_spec_file(context.mode): attr = context.disable_local_typechecking if is_new_api() else context.disableLocalTypeChecking if attr: run_logger.warning( "Local checks of CVL specification files disabled. 
It is recommended to enable " "the checks.") else: typechecking_start = time.perf_counter() spec_check_failed = run_local_spec_check(with_typechecking=True) if spec_check_failed: raise CertoraUserInputError("CVL specification syntax and type check failed") else: typechecking_end = time.perf_counter() timings['typecheckingTime'] = round(typechecking_end - typechecking_start, 4) if not context.typecheck_only and exit_code == 0: # Local typechecking either succeeded or skipped context.key = Cv.validate_certora_key() cloud_verifier = CloudVerification(context, timings) # Wrap strings with space with ' so it can be copied and pasted to shell pretty_args = [f"'{arg}'" if ' ' in arg else arg for arg in args] cl_args = ' '.join(pretty_args) logging_manager.remove_debug_logger() result = cloud_verifier.cli_verify_and_report(cl_args, context.send_only) if cloud_verifier.statusUrl: return_value = Path(cloud_verifier.statusUrl) if not result: exit_code = 1 except Exception as e: err_msg = "Encountered an error running Certora Prover" if isinstance(e, CertoraUserInputError): err_msg = f"{err_msg}:\n{e}" else: err_msg += ", please contact Certora" if not logging_manager.is_debugging: err_msg += "; consider running the script again with --debug to find out why it failed" run_logger.debug("Failure traceback: ", exc_info=e) run_logger.fatal(err_msg) exit_code = 1 except KeyboardInterrupt: print('\nInterrupted by user', flush=True) # We go down a line because last characters in terminal were ^C sys.exit(1) # We exit ALWAYS, even if we are running from a library # If the exit_code is 0, we do not call sys.exit() -> calling sys.exit() also exits any script that wraps this one if not is_library and exit_code != 0: sys.exit(exit_code) return return_value def entry_point() -> None: """ This function is the entry point of the certora_cli customer-facing package, as well as this script. It is important this function gets no arguments! """ run_certora(sys.argv[1:], is_library=False) if __name__ == '__main__': entry_point()
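# Programmatic use, per the run_certora docstring above: pass the CLI argument
# list and set is_library=True so failures do not call sys.exit(); on a remote
# run the returned value is the status URL path, otherwise None. The argument
# list below is a placeholder.
#
#     status_url = run_certora(["myProject.conf"], is_library=True)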
PypiClean
/django-adminlte-full-0.2.0.tar.gz/django-adminlte-full-0.2.0/adminlte_full/static/adminlte_full/plugins/summernote/lang/summernote-uk-UA.min.js
!function(e,t){if("object"==typeof exports&&"object"==typeof module)module.exports=t();else if("function"==typeof define&&define.amd)define([],t);else{var r=t();for(var o in r)("object"==typeof exports?exports:e)[o]=r[o]}}(window,(function(){return function(e){var t={};function r(o){if(t[o])return t[o].exports;var n=t[o]={i:o,l:!1,exports:{}};return e[o].call(n.exports,n,n.exports,r),n.l=!0,n.exports}return r.m=e,r.c=t,r.d=function(e,t,o){r.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:o})},r.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.t=function(e,t){if(1&t&&(e=r(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var o=Object.create(null);if(r.r(o),Object.defineProperty(o,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var n in e)r.d(o,n,function(t){return e[t]}.bind(null,n));return o},r.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return r.d(t,"a",t),t},r.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},r.p="",r(r.s=46)}({46:function(e,t){var r;(r=jQuery).extend(r.summernote.lang,{"uk-UA":{font:{bold:"Напівжирний",italic:"Курсив",underline:"Підкреслений",clear:"Прибрати стилі шрифту",height:"Висота лінії",name:"Шрифт",strikethrough:"Закреслений",subscript:"Нижній індекс",superscript:"Верхній індекс",size:"Розмір шрифту"},image:{image:"Картинка",insert:"Вставити картинку",resizeFull:"Відновити розмір",resizeHalf:"Зменшити до 50%",resizeQuarter:"Зменшити до 25%",floatLeft:"Розташувати ліворуч",floatRight:"Розташувати праворуч",floatNone:"Початкове розташування",shapeRounded:"Форма: Заокруглена",shapeCircle:"Форма: Коло",shapeThumbnail:"Форма: Мініатюра",shapeNone:"Форма: Немає",dragImageHere:"Перетягніть сюди картинку",dropImage:"Перетягніть картинку",selectFromFiles:"Вибрати з файлів",maximumFileSize:"Maximum file size",maximumFileSizeError:"Maximum file size exceeded.",url:"URL картинки",remove:"Видалити картинку",original:"Original"},video:{video:"Відео",videoLink:"Посилання на відео",insert:"Вставити відео",url:"URL відео",providers:"(YouTube, Vimeo, Vine, Instagram, DailyMotion чи Youku)"},link:{link:"Посилання",insert:"Вставити посилання",unlink:"Прибрати посилання",edit:"Редагувати",textToDisplay:"Текст, що відображається",url:"URL для переходу",openInNewWindow:"Відкривати у новому вікні"},table:{table:"Таблиця",addRowAbove:"Add row above",addRowBelow:"Add row below",addColLeft:"Add column left",addColRight:"Add column right",delRow:"Delete row",delCol:"Delete column",delTable:"Delete table"},hr:{insert:"Вставити горизонтальну лінію"},style:{style:"Стиль",p:"Нормальний",blockquote:"Цитата",pre:"Код",h1:"Заголовок 1",h2:"Заголовок 2",h3:"Заголовок 3",h4:"Заголовок 4",h5:"Заголовок 5",h6:"Заголовок 6"},lists:{unordered:"Маркований список",ordered:"Нумерований список"},options:{help:"Допомога",fullscreen:"На весь екран",codeview:"Початковий код"},paragraph:{paragraph:"Параграф",outdent:"Зменшити відступ",indent:"Збільшити відступ",left:"Вирівняти по лівому краю",center:"Вирівняти по центру",right:"Вирівняти по правому краю",justify:"Розтягнути по ширині"},color:{recent:"Останній колір",more:"Ще кольори",background:"Колір фону",foreground:"Колір шрифту",transparent:"Прозорий",setTransparent:"Зробити прозорим",reset:"Відновити",resetToDefault:"Відновити початкові"},shortcut:{shortcuts:"Комбінації клавіш",close:"Закрити",textFormatting:"Форматування 
тексту",action:"Дія",paragraphFormatting:"Форматування параграфу",documentStyle:"Стиль документу",extraKeys:"Extra keys"},help:{insertParagraph:"Insert Paragraph",undo:"Undoes the last command",redo:"Redoes the last command",tab:"Tab",untab:"Untab",bold:"Set a bold style",italic:"Set a italic style",underline:"Set a underline style",strikethrough:"Set a strikethrough style",removeFormat:"Clean a style",justifyLeft:"Set left align",justifyCenter:"Set center align",justifyRight:"Set right align",justifyFull:"Set full align",insertUnorderedList:"Toggle unordered list",insertOrderedList:"Toggle ordered list",outdent:"Outdent on current paragraph",indent:"Indent on current paragraph",formatPara:"Change current block's format as a paragraph(P tag)",formatH1:"Change current block's format as H1",formatH2:"Change current block's format as H2",formatH3:"Change current block's format as H3",formatH4:"Change current block's format as H4",formatH5:"Change current block's format as H5",formatH6:"Change current block's format as H6",insertHorizontalRule:"Insert horizontal rule","linkDialog.show":"Show Link Dialog"},history:{undo:"Відмінити",redo:"Повторити"},specialChar:{specialChar:"SPECIAL CHARACTERS",select:"Select Special characters"}}})}})}));
PypiClean
/cleanrl_test-1.1.2.tar.gz/cleanrl_test-1.1.2/cleanrl/ddpg_continuous_action_jax.py
import argparse import os import random import time from distutils.util import strtobool from typing import Sequence import flax import flax.linen as nn import gym import jax import jax.numpy as jnp import numpy as np import optax import pybullet_envs # noqa from flax.training.train_state import TrainState from stable_baselines3.common.buffers import ReplayBuffer from torch.utils.tensorboard import SummaryWriter def parse_args(): # fmt: off parser = argparse.ArgumentParser() parser.add_argument("--exp-name", type=str, default=os.path.basename(__file__).rstrip(".py"), help="the name of this experiment") parser.add_argument("--seed", type=int, default=1, help="seed of the experiment") parser.add_argument("--track", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True, help="if toggled, this experiment will be tracked with Weights and Biases") parser.add_argument("--wandb-project-name", type=str, default="cleanRL", help="the wandb's project name") parser.add_argument("--wandb-entity", type=str, default=None, help="the entity (team) of wandb's project") parser.add_argument("--capture-video", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True, help="whether to capture videos of the agent performances (check out `videos` folder)") # Algorithm specific arguments parser.add_argument("--env-id", type=str, default="HalfCheetah-v2", help="the id of the environment") parser.add_argument("--total-timesteps", type=int, default=1000000, help="total timesteps of the experiments") parser.add_argument("--learning-rate", type=float, default=3e-4, help="the learning rate of the optimizer") parser.add_argument("--buffer-size", type=int, default=int(1e6), help="the replay memory buffer size") parser.add_argument("--gamma", type=float, default=0.99, help="the discount factor gamma") parser.add_argument("--tau", type=float, default=0.005, help="target smoothing coefficient (default: 0.005)") parser.add_argument("--batch-size", type=int, default=256, help="the batch size of sample from the reply memory") parser.add_argument("--exploration-noise", type=float, default=0.1, help="the scale of exploration noise") parser.add_argument("--learning-starts", type=int, default=25e3, help="timestep to start learning") parser.add_argument("--policy-frequency", type=int, default=2, help="the frequency of training policy (delayed)") parser.add_argument("--noise-clip", type=float, default=0.5, help="noise clip parameter of the Target Policy Smoothing Regularization") args = parser.parse_args() # fmt: on return args def make_env(env_id, seed, idx, capture_video, run_name): def thunk(): env = gym.make(env_id) env = gym.wrappers.RecordEpisodeStatistics(env) if capture_video: if idx == 0: env = gym.wrappers.RecordVideo(env, f"videos/{run_name}") env.seed(seed) env.action_space.seed(seed) env.observation_space.seed(seed) return env return thunk # ALGO LOGIC: initialize agent here: class QNetwork(nn.Module): @nn.compact def __call__(self, x: jnp.ndarray, a: jnp.ndarray): x = jnp.concatenate([x, a], -1) x = nn.Dense(256)(x) x = nn.relu(x) x = nn.Dense(256)(x) x = nn.relu(x) x = nn.Dense(1)(x) return x class Actor(nn.Module): action_dim: Sequence[int] action_scale: Sequence[int] action_bias: Sequence[int] @nn.compact def __call__(self, x): x = nn.Dense(256)(x) x = nn.relu(x) x = nn.Dense(256)(x) x = nn.relu(x) x = nn.Dense(self.action_dim)(x) x = nn.tanh(x) x = x * self.action_scale + self.action_bias return x class TrainState(TrainState): target_params: flax.core.FrozenDict if __name__ == 
"__main__": args = parse_args() run_name = f"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}" if args.track: import wandb wandb.init( project=args.wandb_project_name, entity=args.wandb_entity, sync_tensorboard=True, config=vars(args), name=run_name, monitor_gym=True, save_code=True, ) writer = SummaryWriter(f"runs/{run_name}") writer.add_text( "hyperparameters", "|param|value|\n|-|-|\n%s" % ("\n".join([f"|{key}|{value}|" for key, value in vars(args).items()])), ) # TRY NOT TO MODIFY: seeding random.seed(args.seed) np.random.seed(args.seed) key = jax.random.PRNGKey(args.seed) key, actor_key, qf1_key = jax.random.split(key, 3) # env setup envs = gym.vector.SyncVectorEnv([make_env(args.env_id, args.seed, 0, args.capture_video, run_name)]) assert isinstance(envs.single_action_space, gym.spaces.Box), "only continuous action space is supported" max_action = float(envs.single_action_space.high[0]) envs.single_observation_space.dtype = np.float32 rb = ReplayBuffer( args.buffer_size, envs.single_observation_space, envs.single_action_space, device="cpu", handle_timeout_termination=True, ) # TRY NOT TO MODIFY: start the game obs = envs.reset() action_scale = np.array((envs.action_space.high - envs.action_space.low) / 2.0) action_bias = np.array((envs.action_space.high + envs.action_space.low) / 2.0) actor = Actor( action_dim=np.prod(envs.single_action_space.shape), action_scale=action_scale, action_bias=action_bias, ) qf1 = QNetwork() actor_state = TrainState.create( apply_fn=actor.apply, params=actor.init(actor_key, obs), target_params=actor.init(actor_key, obs), tx=optax.adam(learning_rate=args.learning_rate), ) qf1_state = TrainState.create( apply_fn=qf1.apply, params=qf1.init(qf1_key, obs, envs.action_space.sample()), target_params=qf1.init(qf1_key, obs, envs.action_space.sample()), tx=optax.adam(learning_rate=args.learning_rate), ) actor.apply = jax.jit(actor.apply) qf1.apply = jax.jit(qf1.apply) @jax.jit def update_critic( actor_state: TrainState, qf1_state: TrainState, observations: np.ndarray, actions: np.ndarray, next_observations: np.ndarray, rewards: np.ndarray, dones: np.ndarray, ): next_state_actions = (actor.apply(actor_state.target_params, next_observations)).clip(-1, 1) # TODO: proper clip qf1_next_target = qf1.apply(qf1_state.target_params, next_observations, next_state_actions).reshape(-1) next_q_value = (rewards + (1 - dones) * args.gamma * (qf1_next_target)).reshape(-1) def mse_loss(params): qf1_a_values = qf1.apply(params, observations, actions).squeeze() return ((qf1_a_values - next_q_value) ** 2).mean(), qf1_a_values.mean() (qf1_loss_value, qf1_a_values), grads = jax.value_and_grad(mse_loss, has_aux=True)(qf1_state.params) qf1_state = qf1_state.apply_gradients(grads=grads) return qf1_state, qf1_loss_value, qf1_a_values @jax.jit def update_actor( actor_state: TrainState, qf1_state: TrainState, observations: np.ndarray, ): def actor_loss(params): return -qf1.apply(qf1_state.params, observations, actor.apply(params, observations)).mean() actor_loss_value, grads = jax.value_and_grad(actor_loss)(actor_state.params) actor_state = actor_state.apply_gradients(grads=grads) actor_state = actor_state.replace( target_params=optax.incremental_update(actor_state.params, actor_state.target_params, args.tau) ) qf1_state = qf1_state.replace( target_params=optax.incremental_update(qf1_state.params, qf1_state.target_params, args.tau) ) return actor_state, qf1_state, actor_loss_value start_time = time.time() for global_step in range(args.total_timesteps): # ALGO LOGIC: put action 
logic here if global_step < args.learning_starts: actions = np.array([envs.single_action_space.sample() for _ in range(envs.num_envs)]) else: actions = actor.apply(actor_state.params, obs) actions = np.array( [ (jax.device_get(actions)[0] + np.random.normal(0, action_scale * args.exploration_noise)[0]).clip( envs.single_action_space.low, envs.single_action_space.high ) ] ) # TRY NOT TO MODIFY: execute the game and log data. next_obs, rewards, dones, infos = envs.step(actions) # TRY NOT TO MODIFY: record rewards for plotting purposes for info in infos: if "episode" in info.keys(): print(f"global_step={global_step}, episodic_return={info['episode']['r']}") writer.add_scalar("charts/episodic_return", info["episode"]["r"], global_step) writer.add_scalar("charts/episodic_length", info["episode"]["l"], global_step) break # TRY NOT TO MODIFY: save data to reply buffer; handle `terminal_observation` real_next_obs = next_obs.copy() for idx, d in enumerate(dones): if d: real_next_obs[idx] = infos[idx]["terminal_observation"] rb.add(obs, real_next_obs, actions, rewards, dones, infos) # TRY NOT TO MODIFY: CRUCIAL step easy to overlook obs = next_obs # ALGO LOGIC: training. if global_step > args.learning_starts: data = rb.sample(args.batch_size) qf1_state, qf1_loss_value, qf1_a_values = update_critic( actor_state, qf1_state, data.observations.numpy(), data.actions.numpy(), data.next_observations.numpy(), data.rewards.flatten().numpy(), data.dones.flatten().numpy(), ) if global_step % args.policy_frequency == 0: actor_state, qf1_state, actor_loss_value = update_actor( actor_state, qf1_state, data.observations.numpy(), ) if global_step % 100 == 0: writer.add_scalar("losses/qf1_loss", qf1_loss_value.item(), global_step) writer.add_scalar("losses/actor_loss", actor_loss_value.item(), global_step) writer.add_scalar("losses/qf1_values", qf1_a_values.item(), global_step) print("SPS:", int(global_step / (time.time() - start_time))) writer.add_scalar("charts/SPS", int(global_step / (time.time() - start_time)), global_step) envs.close() writer.close()
PypiClean
/control-toolbox-0.1.0.tar.gz/control-toolbox-0.1.0/control/SystemIdentification.py
import warnings

import numpy as np
import pandas as pd
from tensorflow.keras import initializers
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

warnings.filterwarnings("ignore")


class SystemIdentification():
    '''
    System Identification module
    '''

    def __init__(self, path_x, path_x_dot, path_y):
        '''
        Parameters
        ----------
        path_x : file path
            DESCRIPTION. path to CSV file containing the X matrix data.
        path_x_dot : file path
            DESCRIPTION. path to CSV file containing the X_dot matrix data.
        path_y : file path
            DESCRIPTION. path to CSV file containing the Y matrix data.

        Returns
        -------
        None.
        '''
        data_x = pd.read_csv(path_x)
        data_x_dot = pd.read_csv(path_x_dot)
        data_y = pd.read_csv(path_y)

        self._x = data_x.to_numpy()
        self._x_dot = data_x_dot.to_numpy()
        self._y = data_y.to_numpy()

    def fit(self, num_epochs=500):
        '''
        Parameters
        ----------
        num_epochs : int, optional
            DESCRIPTION. Number of epochs. The default is 500.

        Returns
        -------
        None.
        '''
        # Network mapping the state X to its derivative X_dot
        self._stateModel = Sequential()
        self._stateModel.add(Dense(self._x.shape[0], input_dim=2, activation='sigmoid',
                                   kernel_initializer=initializers.glorot_normal()))
        self._stateModel.add(Dense(self._x.shape[0]/2, activation='sigmoid'))
        self._stateModel.add(Dense(2, activation='relu'))
        self._stateModel.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
        self._stateModel.fit(self._x, self._x_dot, epochs=num_epochs, batch_size=10)

        # Network mapping the state X to the output Y
        self._outputModel = Sequential()
        self._outputModel.add(Dense(self._x.shape[0], input_dim=2, activation='sigmoid',
                                    kernel_initializer=initializers.glorot_normal()))
        self._outputModel.add(Dense(self._x.shape[0]/2, activation='sigmoid'))
        self._outputModel.add(Dense(1, activation='relu'))
        self._outputModel.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
        self._outputModel.fit(self._x, self._y, epochs=num_epochs, batch_size=10)

    def model(self):
        '''
        Returns
        -------
        model_dict : dict
            DESCRIPTION. dictionary of matrices A and C.
        '''
        hyp_state = self._stateModel.predict(self._x)
        # The output hypothesis must come from the output network, not the state network
        hyp_output = self._outputModel.predict(self._x)

        A = np.linalg.pinv(self._x) @ hyp_state
        C = hyp_output @ np.linalg.pinv(self._x).T

        model_dict = {"A": A, "C": C}
        return model_dict
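# A minimal usage sketch of the class above; the CSV paths are placeholders
# for whatever state (X), state-derivative (X_dot) and output (Y) samples
# you have on disk, not files shipped with the package.
if __name__ == "__main__":
    sysid = SystemIdentification("x.csv", "x_dot.csv", "y.csv")
    sysid.fit(num_epochs=500)    # trains the state and output networks
    matrices = sysid.model()     # {"A": ..., "C": ...}
    print(matrices["A"])
    print(matrices["C"])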
PypiClean
/novastella-0.0.1.zip/novastella-0.0.1/nova/vde.py
import pylab as pl from streamfunction import SF from elliptic import EQ from inverse import INV from eqConfig import Config from itertools import cycle import numpy as np from radial_build import RB import copy from shelf import PKL import cross_coil as cc import scipy as sp from surface import bernstein from scipy.interpolate import interp1d import scipy.optimize as op import seaborn as sns rc = {'figure.figsize':[7*12/14,7],'savefig.dpi':110, #*12/16 'savefig.jpeg_quality':100,'savefig.pad_inches':0.1, 'lines.linewidth':0.75} sns.set(context='paper',style='white',font='sans-serif',palette='Set2', font_scale=7/8,rc=rc) color = cycle(sns.color_palette('Set2')) pl.figure() pl.axis('equal') pl.axis('off') eqdsk = 'vde' #eqdsk = 'SN' sf = SF(Config(eqdsk)) sf.eq['ncoil'] = 0 if eqdsk == 'vde': eq = EQ(sf,dCoil=1,limit=[4.25,8,-4.5,2],n=8e3) #eq = EQ(sf,dCoil=1,limit=[3,9,-6,6],n=5e3) else: eq = EQ(sf,dCoil=1,limit=[5,13,-5.5,5],n=1e3) eq.set_eq_psi() levels = eq.sf.contour() psi_o = eq.psi.copy() psi_flux = np.linspace(0,1,500) eq.edgeBC() def flux_fit(b,*args): eq.fP = b[0]*1e2 eq.fF = b[1]*1e2 ''' b *= b_norm sf.Pprime = bern.spline(b[:bern.n+1]) sf.FFprime = bern.spline(b[bern.n+1:]) #sf.FFprime = bern.spline(b[:bern.n+1]) b /= b_norm ''' eq.coreBC() psi = eq.solve() err = np.sqrt(np.mean((psi-psi_o)**2)) print(err) return err ''' n=2 bern = bernstein(psi_flux,n=n) # set up bezier curves bPprime = bern.fit(sf.Pprime(psi_flux))[1] bFFprime = bern.fit(sf.FFprime(psi_flux))[1] b_norm = np.append(bPprime,bFFprime) bo = np.ones(len(b_norm)) opp = op.minimize(flux_fit,bo,options={'disp':True},method='L-BFGS-B') ''' opp = op.minimize(flux_fit,[1e-2,1e-2],options={'disp':True},method='SLSQP',tol=1e-12) print('fun:{:1.8f}'.format(opp.fun)) #eq.coreBC() eq.psi = eq.solve() eq.set_eq_psi() eq.plotb() sf.contour(levels=levels,color=next(color),Xnorm=False) eq.sf.Bquiver() fig,ax = pl.subplots(2,sharex=True) ax[0].plot(psi_flux,sf.Pprime(psi_flux)) ax[0].plot(sf.eq['pnorm'],sf.eq['pprime'],'.') ax[1].plot(psi_flux,sf.FFprime(psi_flux)) ax[1].plot(sf.eq['pnorm'],sf.eq['ffprim'],'.') fig.subplots_adjust(hspace=0.1) sns.despine()
PypiClean
/STIM_Module-1.0.0-py3-none-any.whl/STIM_Module/application.py
import timeit from time import sleep, time import tkinter as tk from tkinter import CENTER, E, N, S, TOP, W, Label, StringVar, ttk import matplotlib from matplotlib import pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import sqlite3 as sq from pandas import DataFrame from threading import Thread from PIL import ImageTk, Image from STIM_Module.dummy_matplot import ret_graph, ret_pro_graph from STIM_Module.api_funcs import * from STIM_Module.analysis import just_the_tips def matplot_init(color="grey"): COLOR = color matplotlib.rcParams['text.color'] = COLOR matplotlib.rcParams['axes.labelcolor'] = COLOR matplotlib.rcParams['xtick.color'] = COLOR matplotlib.rcParams['ytick.color'] = COLOR matplotlib.rcParams['axes.edgecolor'] = "black" matplotlib.rcParams['axes.facecolor'] = "black" def styles_init(): # Frame Default Styles frame_style = ttk.Style() frame_style.configure('My.TFrame', background="#808c9f") # Labels Default Styles l_style = ttk.Style() l_style.configure("Text.TLabel", background="#808c9f", foreground="white", anchor="center", font=("Californian FB", 9)) t_l_style = ttk.Style() t_l_style.configure("Title.TLabel", background="#808c9f", foreground="white", anchor="center", font=("Californian FB", 12, "bold")) # Buttons Default Styles b_style = ttk.Style() b_style.configure("My.TButton", background="#808c9f", font=("Californian FB", 9)) # Entry Default Styles NOT WOKRING e_style = ttk.Style() e_style.configure("My.TEntry", background="#909cAf", font=("Californian FB", 9), foreground="dark blue") def popUp(binst, master, invalid=False): binst.destroy() win = tk.Toplevel() win.config(bg="#808c9f") win.geometry("300x125") win.wm_title("Enter your Summoner Name") if invalid: l = ttk.Label(win, text="Invalid input! 
Enter Summoner Name:", style="Text.TLabel") else: l = ttk.Label(win, text="Enter Summoner Name:", style="Text.TLabel") l.place(relx=.5, rely=.3, anchor=CENTER) sum_name = StringVar() e = ttk.Entry(win, width=10, textvariable=sum_name, style="My.TEntry") e.place(relx=.5, rely=.5, anchor=CENTER) b = ttk.Button(win, text="Enter", style="My.TButton", command=lambda win=win, sum_name=sum_name, master=master: custom_destroy(win, sum_name, master)) b.place(relx=.5, rely=.7, anchor=CENTER) win.bind('<Return>', lambda event: custom_destroy(win, sum_name, master)) e.focus() win.protocol("WM_DELETE_WINDOW", lambda: custom_destroy(win, sum_name, master)) def custom_destroy(win, sum_name, master): for ele in master.winfo_children(): ele.destroy() if (check_summoner_exists(sum_name.get()) == False or sum_name.get() == ""): popUp(win, master, invalid=True) MainWindow(master, button=False) else: SecondaryWindow(master, sum_name) class AsyncGraphDraw(Thread): def __init__(self, parent, sum_name=None, game_id=None, row_num=1, is_pro=False): super().__init__() self.parent = parent self.sum_name = sum_name self.game_id = game_id self.row_num = row_num self.is_pro = is_pro def run(self): if self.is_pro: draw_graph(self.parent, "g", self.sum_name[0], self.game_id, 1, self.row_num) draw_graph(self.parent, "e", self.sum_name[0], self.game_id, 2, self.row_num) draw_graph(self.parent, "d", self.sum_name[0], self.game_id, 3, self.row_num) else: draw_graph(self.parent, "g", self.sum_name.get(), self.game_id, 1, self.row_num) draw_graph(self.parent, "e", self.sum_name.get(), self.game_id, 2, self.row_num) draw_graph(self.parent, "d", self.sum_name.get(), self.game_id, 3, self.row_num) def draw_all_graphs(parent, sum_name=None, game_id=None, row_num=1, filename=None): if filename is not None: draw_graph(parent, "g", col_num=1, row_num=row_num, filename=filename) draw_graph(parent, "e", col_num=2, row_num=row_num, filename=filename) draw_graph(parent, "d", col_num=3, row_num=row_num, filename=filename) else: draw_graph(parent, "g", sum_name.get(), game_id, 1, row_num) draw_graph(parent, "e", sum_name.get(), game_id, 2, row_num) draw_graph(parent, "d", sum_name.get(), game_id, 3, row_num) def draw_graph(parent, type="g", sum_name=None, game_id=None, col_num=0, row_num=0, filename=None): matplot_init("white") df_obj = DataFrame() figure2 = plt.Figure(figsize=(4, 4), dpi=50, facecolor='#707c8f') ax2 = figure2.add_subplot(111) ax2.patch.set_facecolor('black') line2 = FigureCanvasTkAgg(figure2, parent) if filename is None: df_obj, xVar, yVar, line_color = ret_graph(type, sum_name, game_id) else: df_obj, xVar, yVar, line_color = ret_pro_graph(type, filename) df_obj = df_obj[[xVar, yVar]].groupby(xVar).sum() df_obj.plot(kind='line', legend=True, ax=ax2, color=line_color, marker='o', fontsize=10, ylabel=yVar) ax2.set_title("Time Vs. 
%s" % yVar) widget = line2.get_tk_widget() widget.grid(column=col_num, row=row_num) def delete_user_csvs(root): dir_name = "./data" if os.path.exists(dir_name): files = os.listdir(dir_name) for file in files: if file.endswith(".csv") or file.endswith(".json"): os.remove(os.path.join(dir_name, file)) root.destroy() class MainWindow(ttk.Frame): def __init__(self, master, button=True): ttk.Frame.__init__(self, master, style="My.TFrame") self.pack() if button: self.login_button = ttk.Button(self, text="Log In", command=popUp, style="My.TButton") self.login_button['command'] = lambda inst=self, master=master: popUp(self.login_button, master) self.login_button.grid(column=1, row=3, sticky=N) title = "Welcome to The League of Legends \nStatistics Tracker and Improvement Manager \nor STIM for short!" what_is_STIM = [ "STIM is a companion app for Riot Games' multiplayer online battle arena (MOBA) game League of Legends.", "What this companion app does is it pulls real match data from your most recent match and displays it in a friendly format.", "It displays the most recent 3 games and 3 random games from a pro player, and displays important statistics such as gold and", "experience gain and the gold differential versus your opponent. Our app displays this data and", "also formulates an analysis to provide you tips and advice on how to improve your gameplay." ] how_to_use = [ "1. Login using your summoner username\n" "2. Scroll through your user games and pro games independently\n" "3. Comparison between the two displayed games will be displayed in the advice section below the graphs.\n" "4. Invalid summoner names will be rejected and you will be prompted again for a valid summoner name" ] credits = [ "Project Leader and Backend: Benjamin Covert\n" "Front End and GUI: David Hutchins\n" "Game Analysis and Distribution: Jaxton Willman" ] ttk.Label(self, text=title, style="Title.TLabel", anchor="center", justify="center").grid(column=1, row=0, sticky=N) ttk.Label(self, text="What is STIM?", style="Title.TLabel", anchor="center", justify="center").grid(column=0, row=1, sticky=N) ttk.Label(self, text=' '.join(what_is_STIM), style="Text.TLabel", anchor="center", justify="left", wraplength=300).grid(column=0, row=2, sticky=N) ttk.Label(self, text="How to Use it?", style="Title.TLabel", anchor="center", justify="center").grid(column=1, row=1, sticky=N) ttk.Label(self, text=' '.join(how_to_use), style="Text.TLabel", anchor="center", justify="left", wraplength=300).grid(column=1, row=2, sticky=N) ttk.Label(self, text="Credits", style="Title.TLabel", anchor="center", justify="center").grid(column=2, row=1, sticky=N) ttk.Label(self, text=' '.join(credits), style="Title.TLabel", anchor="center", justify="left", wraplength=300).grid(column=2, row=2, sticky=N) # Load image ROOT_DIR = os.path.abspath(os.path.dirname(__file__)) # This is your Project Root img_path = os.path.join(ROOT_DIR, "assets/Images/lol_image.jpg") if os.path.exists(img_path): self.img = ImageTk.PhotoImage(Image.open(img_path).resize((500, 250))) ttk.Label(self, image=self.img, anchor="center", borderwidth=0, background="#808c9f").grid(column=0, row=4, columnspan=3) # response = requests.get("https://freepngimg.com/thumb/league_of_legends/27974-5-league-of-legends-logo-transparent-background.png") # self.img = ImageTk.PhotoImage(Image.open(io.BytesIO(response.content)).resize((500, 250))) # ttk.Label(self, image=self.img, anchor="center", borderwidth=0, background="#808c9f").grid(column=0, row=4, columnspan=3) class SecondaryWindow(ttk.Frame): 
# Summoner Name Verification def __init__(self, master, sum_name, pro_name=None): ttk.Frame.__init__(self, master, style="My.TFrame") # time_var = timeit.timeit(lambda: collect_data_for_rank(), number=1) pro_name = [] pro_games_thread = Thread(target=collect_data_for_rank, args=("RANKED_SOLO_5x5", "DIAMOND", "I", pro_name)) # THIS TAKES 7 SECONDS THIS IS MULTITHREADABLE IF I REMOVE RETURN pro_games_thread.start() # print(time_var) # TODO: Label is not showing up before popUp is called, not a big deal just good for flare num_games = 3 puuid, sum_level = get_summoner(sum_name.get()) recent_game_ids = get_recent_game_ids(puuid, num_games) # ttk.Label(self, text="Summoner Name: %s\nSummoner Level: %s" % (sum_name.get(), str(sum_level)), style="Title.TLabel").grid(column=0, row=0, sticky=(W, N), padx= 5) # User Game Display create_sqlite_db(sum_name.get()) csv_thread = Thread(target=add_data_to_db, args=(sum_name.get(), puuid, num_games, recent_game_ids)) csv_thread.start() self.pack() dot = 0 dots = [".", "..", "..."] while (csv_thread.is_alive() or pro_games_thread.is_alive()): l = ttk.Label(self, text="Loading%s" % dots[dot % 3], style="Title.TLabel") l.grid(column=0, row=0, sticky=W) self.update() master.update() dot += 1 sleep(.3) l.destroy() connection = sq.connect("data/game_data.db") cursor = connection.cursor() query = f'SELECT ID FROM {"GAMEDATA_" + "".join(str(pro_name[0]).split())}' cursor.execute(query) numeric_ids = [ID[0] for ID in cursor.fetchall()] pro_game_ids = ['NA1_' + str(ID) for ID in numeric_ids] # print("LOOK HERE:::::::", recent_game_ids) # print("NUMERIC IDS::::::::", numeric_ids) GameDisplayWindow(master, self, sum_name, 0, 0, recent_game_ids, pro_name, pro_game_ids=pro_game_ids) class GameDisplayWindow(ttk.Frame): def __init__(self, master, parent, sum_name, user_game_num, pro_game_num, game_ids, pro_name, pro_game_ids): number_user_games = len(game_ids) number_pro_games = len(pro_game_ids) parent.pack_forget() ttk.Frame.__init__(self, master, style="My.TFrame") self.pack() _, sum_level = get_summoner(sum_name.get()) #recent_game_id = game_ids[user_game_num] ttk.Label(self, text="Summoner Name: %s\nSummoner Level: %s" % (sum_name.get(), str(sum_level)), style="Title.TLabel").grid(column=0, row=0, sticky=(W, N), padx=5) # Drawing User Games ttk.Label(self, text="%s's Stats For \nGame %s" % (sum_name.get(), ((user_game_num % number_user_games) + 1)), style="Title.TLabel").grid(column=0, row=1, sticky=W) user_game_thread = AsyncGraphDraw(self, sum_name, game_ids[user_game_num], row_num=1) user_game_thread.start() self.switch_button = ttk.Button(self, text="Switch Accounts", style="My.TButton", command=lambda: popUp(self.switch_button, master)) self.switch_button.grid(column=0, row=1, sticky=(N, W)) ttk.Button(self, text="View Next User Game", style="My.TButton", command=lambda: GameDisplayWindow(master, self, sum_name, ((user_game_num + 1) % number_user_games), pro_game_num, game_ids, pro_name, pro_game_ids)).grid(column=0, row=1, sticky=(S, W), pady=25) ttk.Button(self, text="View Previous User Game", style="My.TButton", command=lambda: GameDisplayWindow(master, self, sum_name, ((user_game_num - 1) % number_user_games), pro_game_num, game_ids, pro_name, pro_game_ids)).grid(column=0, row=1, sticky=(S, W)) # Drawing Pro Games ttk.Label(self, text="Pro's Stats For \nGame %d" % ((pro_game_num % number_pro_games) + 1), style="Title.TLabel").grid(column=0, row=2, sticky=W) # print("IDS:", pro_game_ids) pro_game_thread = AsyncGraphDraw(self, pro_name, 
pro_game_ids[pro_game_num], row_num=2, is_pro=True) pro_game_thread.start() ttk.Button(self, text="View Next Pro Game", style="My.TButton", command=lambda: GameDisplayWindow(master, self, sum_name, user_game_num, ((pro_game_num + 1) % number_pro_games), game_ids, pro_name, pro_game_ids)).grid(column=0, row=2, sticky=(S, W), pady=25) ttk.Button(self, text="View Previous Pro Game", style="My.TButton", command=lambda: GameDisplayWindow(master, self, sum_name, user_game_num, ((pro_game_num - 1) % number_pro_games), game_ids, pro_name, pro_game_ids)).grid(column=0, row=2, sticky=(S, W)) # Display advice and analysis ttk.Label(self, text="Advice For This Comparison:", style="Title.TLabel").grid(column=0, row=4, sticky=(W, N), pady=20) tips = just_the_tips(sum_name.get(), game_ids[user_game_num], pro_name, pro_game_ids[pro_game_num]) tip_row_num = 5 for tip in tips: ttk.Label(self, text=tip, style="Text.TLabel", wraplength=800).grid(column=0, columnspan=5, row=tip_row_num, sticky=(W, N)) tip_row_num += 1 def main(): root = tk.Tk() root.geometry("1200x900") # Window size root.config(bg="#808c9f") root.title("Statistics Tracker and Improvement Manager") styles_init() main_window = MainWindow(root) root.protocol("WM_DELETE_WINDOW", lambda: delete_user_csvs(root)) root.mainloop() if __name__ == "__main__": main()
PypiClean
/superset-2.0.0-custom-test-0.0.1.tar.gz/superset-2.0.0-custom-test-0.0.1/superset/migrations/versions/2020-11-30_17-54_8ee129739cf9_security_converge_css_templates.py
# revision identifiers, used by Alembic.
revision = "8ee129739cf9"
down_revision = "e38177dbf641"

from alembic import op
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session

from superset.migrations.shared.security_converge import (
    add_pvms,
    get_reversed_new_pvms,
    get_reversed_pvm_map,
    migrate_roles,
    Pvm,
)

NEW_PVMS = {
    "CssTemplate": (
        "can_read",
        "can_write",
    )
}

PVM_MAP = {
    Pvm("CssTemplateModelView", "can_list"): (Pvm("CssTemplate", "can_read"),),
    Pvm("CssTemplateModelView", "can_show"): (Pvm("CssTemplate", "can_read"),),
    Pvm("CssTemplateModelView", "can_add"): (Pvm("CssTemplate", "can_write"),),
    Pvm("CssTemplateModelView", "can_edit"): (Pvm("CssTemplate", "can_write"),),
    Pvm("CssTemplateModelView", "can_delete"): (Pvm("CssTemplate", "can_write"),),
    Pvm("CssTemplateModelView", "muldelete"): (Pvm("CssTemplate", "can_write"),),
    Pvm("CssTemplateAsyncModelView", "can_list"): (Pvm("CssTemplate", "can_read"),),
    Pvm("CssTemplateAsyncModelView", "muldelete"): (Pvm("CssTemplate", "can_write"),),
}


def upgrade():
    bind = op.get_bind()
    session = Session(bind=bind)

    # Add the new permissions on the migration itself
    add_pvms(session, NEW_PVMS)

    migrate_roles(session, PVM_MAP)
    try:
        session.commit()
    except SQLAlchemyError as ex:
        print(f"An error occurred while upgrading permissions: {ex}")
        session.rollback()


def downgrade():
    bind = op.get_bind()
    session = Session(bind=bind)

    # Add the old permissions on the migration itself
    add_pvms(session, get_reversed_new_pvms(PVM_MAP))

    migrate_roles(session, get_reversed_pvm_map(PVM_MAP))
    try:
        session.commit()
    except SQLAlchemyError as ex:
        print(f"An error occurred while downgrading permissions: {ex}")
        session.rollback()
PypiClean
/c3s_magic_wps-1.0.0rc1-py3-none-any.whl/c3s_magic_wps/processes/wps_zmnam.py
import logging import os from pywps import FORMATS, ComplexInput, ComplexOutput, Format, LiteralInput, LiteralOutput, Process from pywps.app.Common import Metadata from pywps.response.status import WPS_STATUS from .. import runner, util from .utils import default_outputs, model_experiment_ensemble, outputs_from_plot_names, year_ranges LOGGER = logging.getLogger("PYWPS") class ZMNAM(Process): def __init__(self): self.variables = ['zg'] self.frequency = 'day' inputs = [ *model_experiment_ensemble(model='MPI-ESM-MR', experiment='historical', ensemble='r1i1p1', max_occurs=1, required_variables=self.variables, required_frequency=self.frequency), *year_ranges((1979, 2005)), ] self.pressure_levels = [5000, 25000, 50000, 100000] self.plotlist = [("{}Pa_mo_reg".format(i), [Format('image/png')]) for i in self.pressure_levels] self.plotlist.extend([("{}Pa_da_pdf".format(i), [Format('image/png')]) for i in self.pressure_levels]) self.plotlist.extend([("{}Pa_mo_ts".format(i), [Format('image/png')]) for i in self.pressure_levels]) outputs = [ *outputs_from_plot_names(self.plotlist), ComplexOutput('regr_map', 'Regr Map Data', abstract='Generated output data of ESMValTool processing.', as_reference=True, supported_formats=[FORMATS.NETCDF]), ComplexOutput('eofs', 'EOF Data', abstract='Generated output data of ESMValTool processing.', as_reference=True, supported_formats=[FORMATS.NETCDF]), ComplexOutput('pc_mo', 'PC Mo Data', abstract='Generated output data of ESMValTool processing.', as_reference=True, supported_formats=[FORMATS.NETCDF]), ComplexOutput('pc_da', 'PC Da Data', abstract='Generated output data of ESMValTool processing.', as_reference=True, supported_formats=[FORMATS.NETCDF]), ComplexOutput('archive', 'Archive', abstract='The complete output of the ESMValTool processing as an zip archive.', as_reference=True, supported_formats=[Format('application/zip')]), *default_outputs(), ] super(ZMNAM, self).__init__( self._handler, identifier="zmnam", title="Stratosphere-troposphere coupling and annular modes indices (ZMNAM)", version=runner.VERSION, abstract="Stratosphere-troposphere coupling and annular modes indices (ZMNAM)", metadata=[ Metadata('ESMValTool', 'http://www.esmvaltool.org/'), Metadata( 'Documentation', 'https://esmvaltool.readthedocs.io/en/version2_development/recipes/recipe_zmnam.html', role=util.WPS_ROLE_DOC, ), Metadata('Estimated Calculation Time', '3 minutes'), ], inputs=inputs, outputs=outputs, status_supported=True, store_supported=True) def _handler(self, request, response): response.update_status("starting ...", 0) workdir = self.workdir # build esgf search constraints constraints = dict( model=request.inputs['model'][0].data, experiment=request.inputs['experiment'][0].data, ensemble=request.inputs['ensemble'][0].data, ) # generate recipe response.update_status("generate recipe ...", 10) recipe_file, config_file = runner.generate_recipe( workdir=workdir, diag='zmnam', constraints=constraints, start_year=request.inputs['start_year'][0].data, end_year=request.inputs['end_year'][0].data, output_format='png', ) # recipe output response.outputs['recipe'].output_format = FORMATS.TEXT response.outputs['recipe'].file = recipe_file # run diag response.update_status("running diagnostic ...", 20) result = runner.run(recipe_file, config_file) response.outputs['success'].data = result['success'] # log output response.outputs['log'].output_format = FORMATS.TEXT response.outputs['log'].file = result['logfile'] # debug log output response.outputs['debug_log'].output_format = FORMATS.TEXT 
response.outputs['debug_log'].file = result['debug_logfile'] if result['success']: try: self.get_outputs(result, response) except Exception as e: response.update_status("exception occured: " + str(e), 85) else: LOGGER.exception('esmvaltool failed!') response.update_status("exception occured: " + result['exception'], 85) response.update_status("creating archive of diagnostic result ...", 90) response.outputs['archive'].output_format = Format('application/zip') response.outputs['archive'].file = runner.compress_output(os.path.join(workdir, 'output'), 'zmnam_result.zip') response.update_status("done.", 100) return response def get_outputs(self, result, response): # result plot response.update_status("collecting output ...", 80) for plot, _ in self.plotlist: key = '{}_plot'.format(plot.lower()) response.outputs[key].output_format = Format('application/png') response.outputs[key].file = runner.get_output(result['plot_dir'], path_filter=os.path.join('zmnam', 'main'), name_filter="*_{}".format(plot), output_format="png") response.outputs['regr_map'].output_format = FORMATS.NETCDF response.outputs['regr_map'].file = runner.get_output(result['work_dir'], path_filter=os.path.join('zmnam', 'main'), name_filter="*regr_map*", output_format="nc") response.outputs['eofs'].output_format = FORMATS.NETCDF response.outputs['eofs'].file = runner.get_output(result['work_dir'], path_filter=os.path.join('zmnam', 'main'), name_filter="*eofs*", output_format="nc") response.outputs['pc_mo'].output_format = FORMATS.NETCDF response.outputs['pc_mo'].file = runner.get_output(result['work_dir'], path_filter=os.path.join('zmnam', 'main'), name_filter="*pc_mo*", output_format="nc") response.outputs['pc_da'].output_format = FORMATS.NETCDF response.outputs['pc_da'].file = runner.get_output(result['work_dir'], path_filter=os.path.join('zmnam', 'main'), name_filter="*pc_da*", output_format="nc")
PypiClean
/xmind-switch2-1.0.0.tar.gz/xmind-switch2-1.0.0/xmind2testcase/cli.py
import logging
import sys

from xmind2testcase.zentao import xmind_to_zentao_csv_file
from xmind2testcase.testlink import xmind_to_testlink_xml_file
from xmind2testcase.utils import get_absolute_path, xmind_testcase_to_json_file
from webtool.application import launch

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s [%(module)s - %(funcName)s]: %(message)s',
    datefmt='%Y/%m/%d %H:%M:%S')

using_doc = """
Xmind2Testcase is a tool that parses an XMind file into testcase files. It helps you generate a
TestLink-recognized XML file or a ZenTao-recognized CSV file, which you can then import into
TestLink or ZenTao.

Usage:
 xmind2testcase [path_to_xmind_file] [-csv] [-xml] [-json]
 xmind2testcase [webtool] [port_num]

Example:
 xmind2testcase /path/to/testcase.xmind        => output testcase.csv, testcase.xml, testcase.json
 xmind2testcase /path/to/testcase.xmind -csv   => output testcase.csv
 xmind2testcase /path/to/testcase.xmind -xml   => output testcase.xml
 xmind2testcase /path/to/testcase.xmind -json  => output testcase.json
 xmind2testcase webtool                        => launch the web testcase conversion tool locally: 127.0.0.1:5001
 xmind2testcase webtool 8000                   => launch the web testcase conversion tool locally: 127.0.0.1:8000
"""


def cli_main():
    if len(sys.argv) > 1 and sys.argv[1].endswith('.xmind'):
        xmind_file = sys.argv[1]
        xmind_file = get_absolute_path(xmind_file)
        logging.info('Start to convert XMind file: %s', xmind_file)

        if len(sys.argv) == 3 and sys.argv[2] == '-json':
            testlink_json_file = xmind_testcase_to_json_file(xmind_file)
            logging.info('Convert XMind file to testcase json file successfully: %s', testlink_json_file)
        elif len(sys.argv) == 3 and sys.argv[2] == '-xml':
            testlink_xml_file = xmind_to_testlink_xml_file(xmind_file)
            logging.info('Convert XMind file to testlink xml files successfully: %s', testlink_xml_file)
        elif len(sys.argv) == 3 and sys.argv[2] == '-csv':
            zentao_csv_file = xmind_to_zentao_csv_file(xmind_file)
            logging.info('Convert XMind file to zentao csv file successfully: %s', zentao_csv_file)
        else:
            testlink_json_file = xmind_testcase_to_json_file(xmind_file)
            testlink_xml_file = xmind_to_testlink_xml_file(xmind_file)
            zentao_csv_file = xmind_to_zentao_csv_file(xmind_file)
            logging.info('Convert XMind file successfully: \n'
                         '1. testcase json file (%s)\n'
                         '2. testlink xml file (%s)\n'
                         '3. zentao csv file (%s)',
                         testlink_json_file, testlink_xml_file, zentao_csv_file)
    elif len(sys.argv) > 1 and sys.argv[1] == 'webtool':
        if len(sys.argv) == 3:
            try:
                port = int(sys.argv[2])
                launch(port=port)
            except ValueError:
                launch()
        else:
            launch()
    else:
        print(using_doc)


if __name__ == '__main__':
    cli_main()
PypiClean
/influxgraph-graphite-api-1.2.0.tar.gz/influxgraph-graphite-api-1.2.0/influxgraph_graphite_api/evaluator.py
import itertools import re import six from .render.datalib import fetchData, TimeSeries from .render.grammar import grammar def pathsFromTarget(requestContext, target): tokens = grammar.parseString(target) paths = list(pathsFromTokens(requestContext, tokens)) return paths def pathsFromTokens(requestContext, tokens, replacements=None): iters = [] if tokens.template: arglist = dict() if tokens.template.kwargs: for kwarg in tokens.template.kwargs: arg = kwarg.args[0] if arg.string: arglist[kwarg.argname] = arg.string[1:-1] if tokens.template.args: for i, arg in enumerate(tokens.template.args): if arg.string: arglist[str(i + 1)] = arg.string[1:-1] if 'template' in requestContext: arglist.update(requestContext['template']) iters.append(pathsFromTokens(requestContext, tokens.template, arglist)) elif tokens.expression: iters.append(pathsFromTokens(requestContext, tokens.expression, replacements)) elif tokens.pathExpression: expression = tokens.pathExpression if replacements: for name in replacements: val = replacements[name] expression = expression.replace('$'+name, str(val)) iters.append([expression]) elif tokens.call: if tokens.call.funcname == 'template': # if template propagates down here, it means the grammar didn't # match the invocation as tokens.template. this generally happens # if you try to pass non-numeric/string args raise ValueError("invalid template() syntax, only string/numeric " "arguments are allowed") iters.extend([pathsFromTokens(requestContext, arg, replacements) for arg in tokens.call.args]) iters.extend([pathsFromTokens(requestContext, kwarg.args[0], replacements) for kwarg in tokens.call.kwargs]) for path in itertools.chain(*iters): yield path def evaluateTarget(requestContext, target, data_store=None): tokens = grammar.parseString(target) if data_store is None: paths = list(pathsFromTokens(requestContext, tokens)) data_store = fetchData(requestContext, paths) result = evaluateTokens(requestContext, tokens, data_store) if isinstance(result, TimeSeries): return [result] # we have to return a list of TimeSeries objects return result def evaluateTokens(requestContext, tokens, data_store=None, replacements=None): if data_store is None: paths = list(pathsFromTokens(requestContext, tokens)) data_store = fetchData(requestContext, paths) if tokens.template: arglist = dict() if tokens.template.kwargs: args = [(kwarg.argname, evaluateTokens(requestContext, kwarg.args[0], data_store)) for kwarg in tokens.template.kwargs] arglist.update(dict(args)) if tokens.template.args: args = [(str(i + 1), evaluateTokens(requestContext, arg, data_store)) for i, arg in enumerate(tokens.template.args)] arglist.update(dict(args)) if 'template' in requestContext: arglist.update(requestContext['template']) return evaluateTokens(requestContext, tokens.template, data_store, arglist) elif tokens.expression: return evaluateTokens(requestContext, tokens.expression, data_store, replacements) elif tokens.pathExpression: expression = tokens.pathExpression if replacements: for name in replacements: val = replacements[name] if expression == '$'+name: if not isinstance(val, six.string_types): return val elif re.match('^-?[\d.]+$', val): return float(val) else: return val else: expression = expression.replace('$'+name, str(val)) return data_store.get_series_list(expression) elif tokens.call: if tokens.call.funcname == 'template': # if template propagates down here, it means the grammar didn't # match the invocation as tokens.template. 
this generally happens # if you try to pass non-numeric/string args raise ValueError("invalid template() syntax, only string/numeric " "arguments are allowed") func = app.functions[tokens.call.funcname] args = [evaluateTokens(requestContext, arg, data_store, replacements) for arg in tokens.call.args] requestContext['args'] = tokens.call.args kwargs = dict([(kwarg.argname, evaluateTokens(requestContext, kwarg.args[0], data_store, replacements)) for kwarg in tokens.call.kwargs]) ret = func(requestContext, *args, **kwargs) return ret elif tokens.number: if tokens.number.integer: return int(tokens.number.integer) elif tokens.number.float: return float(tokens.number.float) elif tokens.number.scientific: return float(tokens.number.scientific[0]) elif tokens.string: return tokens.string[1:-1] elif tokens.boolean: return tokens.boolean[0] == 'true' else: raise ValueError("unknown token in target evaluator") from .app import app # noqa
PypiClean
/uniohomeassistant-0.1.3.tar.gz/uniohomeassistant-0.1.3/homeassistant/components/dyson/air_quality.py
import logging

from libpurecool.dyson_pure_cool import DysonPureCool
from libpurecool.dyson_pure_state_v2 import DysonEnvironmentalSensorV2State

from homeassistant.components.air_quality import DOMAIN, AirQualityEntity

from . import DYSON_DEVICES

ATTRIBUTION = "Dyson purifier air quality sensor"

_LOGGER = logging.getLogger(__name__)

DYSON_AIQ_DEVICES = "dyson_aiq_devices"

ATTR_VOC = "volatile_organic_compounds"


def setup_platform(hass, config, add_entities, discovery_info=None):
    """Set up the Dyson Sensors."""
    if discovery_info is None:
        return

    hass.data.setdefault(DYSON_AIQ_DEVICES, [])

    # Get Dyson Devices from parent component
    device_ids = [device.unique_id for device in hass.data[DYSON_AIQ_DEVICES]]
    new_entities = []
    for device in hass.data[DYSON_DEVICES]:
        if isinstance(device, DysonPureCool) and device.serial not in device_ids:
            new_entities.append(DysonAirSensor(device))

    if not new_entities:
        return

    hass.data[DYSON_AIQ_DEVICES].extend(new_entities)
    add_entities(hass.data[DYSON_AIQ_DEVICES])


class DysonAirSensor(AirQualityEntity):
    """Representation of a generic Dyson air quality sensor."""

    def __init__(self, device):
        """Create a new generic air quality Dyson sensor."""
        self._device = device
        self._old_value = None
        self._name = device.name

    async def async_added_to_hass(self):
        """Call when entity is added to hass."""
        self._device.add_message_listener(self.on_message)

    def on_message(self, message):
        """Handle new messages which are received from the fan."""
        _LOGGER.debug(
            "%s: Message received for %s device: %s", DOMAIN, self.name, message
        )
        if (
            self._old_value is None
            or self._old_value != self._device.environmental_state
        ) and isinstance(message, DysonEnvironmentalSensorV2State):
            self._old_value = self._device.environmental_state
            self.schedule_update_ha_state()

    @property
    def should_poll(self):
        """No polling needed."""
        return False

    @property
    def name(self):
        """Return the name of the Dyson sensor."""
        return self._name

    @property
    def attribution(self):
        """Return the attribution."""
        return ATTRIBUTION

    @property
    def air_quality_index(self):
        """Return the Air Quality Index (AQI)."""
        return max(
            self.particulate_matter_2_5,
            self.particulate_matter_10,
            self.nitrogen_dioxide,
            self.volatile_organic_compounds,
        )

    @property
    def particulate_matter_2_5(self):
        """Return the particulate matter 2.5 level."""
        if self._device.environmental_state:
            return int(self._device.environmental_state.particulate_matter_25)
        return None

    @property
    def particulate_matter_10(self):
        """Return the particulate matter 10 level."""
        if self._device.environmental_state:
            return int(self._device.environmental_state.particulate_matter_10)
        return None

    @property
    def nitrogen_dioxide(self):
        """Return the NO2 (nitrogen dioxide) level."""
        if self._device.environmental_state:
            return int(self._device.environmental_state.nitrogen_dioxide)
        return None

    @property
    def volatile_organic_compounds(self):
        """Return the VOC (Volatile Organic Compounds) level."""
        if self._device.environmental_state:
            return int(self._device.environmental_state.volatile_organic_compounds)
        return None

    @property
    def unique_id(self):
        """Return the sensor's unique id."""
        return self._device.serial

    @property
    def device_state_attributes(self):
        """Return the device state attributes."""
        data = {}

        voc = self.volatile_organic_compounds
        if voc is not None:
            data[ATTR_VOC] = voc
        return data
PypiClean
/quantum_kite-0.0.3-cp38-cp38-macosx_10_9_x86_64.whl/kite/examples/dos_dccond_haldane.py
__all__ = ["main"]

import kite
import numpy as np
import pybinding as pb


def haldane(onsite=(0, 0), t=1):
    """Return lattice specification for Haldane model"""

    # parameters
    a = 0.24595   # [nm] unit cell length
    a_cc = 0.142  # [nm] carbon-carbon distance
    t2 = t / 10

    # define lattice vectors
    a1 = a * np.array([1, 0])
    a2 = a * np.array([1 / 2, 1 / 2 * np.sqrt(3)])

    # create a lattice with 2 primitive vectors
    lat = pb.Lattice(a1=a1, a2=a2)

    # add sublattices
    lat.add_sublattices(
        # name, position, and onsite potential
        ('A', [0, -a_cc / 2], onsite[0]),
        ('B', [0, a_cc / 2], onsite[1])
    )

    # Add hoppings
    lat.add_hoppings(
        # inside the main cell, between which atoms, and the value
        ([0, 0], 'A', 'B', -t),
        # between neighboring cells, between which atoms, and the value
        ([1, -1], 'A', 'B', -t),
        ([0, -1], 'A', 'B', -t),
        ([1, 0], 'A', 'A', -t2 * 1j),
        ([0, -1], 'A', 'A', -t2 * 1j),
        ([-1, 1], 'A', 'A', -t2 * 1j),
        ([1, 0], 'B', 'B', -t2 * -1j),
        ([0, -1], 'B', 'B', -t2 * -1j),
        ([-1, 1], 'B', 'B', -t2 * -1j)
    )

    return lat


def main(onsite=(0, 0), t=1):
    """Prepare the input file for KITEx"""

    # load lattice
    lattice = haldane(onsite, t)

    # add Disorder
    disorder = kite.Disorder(lattice)
    disorder.add_disorder('A', 'Uniform', +0.0, 0.4)
    disorder.add_disorder('B', 'Uniform', +0.0, 0.4)

    # number of decomposition parts [nx,ny] in each direction of matrix.
    # This divides the lattice into various sections, each of which is calculated in parallel
    nx = ny = 2

    # number of unit cells in each direction.
    lx = ly = 128

    # make config object which carries info about
    # - the number of decomposition parts [nx, ny],
    # - lengths of structure [lx, ly]
    # - boundary conditions [mode,mode, ... ] with modes:
    #   . "periodic"
    #   . "open"
    #   . "twisted" -- this option needs the extra argument angles=[phi_1,..,phi_DIM] where phi_i \in [0, 2*M_PI]
    #   . "random"

    # Boundary Mode
    mode = "periodic"

    # - specify precision of the exported hopping and onsite data, 0 - float, 1 - double, and 2 - long double.
    # - scaling, if None it's automatic, if present select spectrum_range=[e_min, e_max]
    configuration = kite.Configuration(
        divisions=[nx, ny],
        length=[lx, ly],
        boundaries=[mode, mode],
        is_complex=True,
        precision=0,
        spectrum_range=[-10, 10]
    )

    # specify calculation type
    calculation = kite.Calculation(configuration)
    calculation.dos(
        num_points=1000,
        num_moments=256,
        num_random=1,
        num_disorder=1
    )
    # require the calculation conductivity_dc
    calculation.conductivity_dc(
        num_points=1000,
        num_moments=256,
        num_random=1,
        num_disorder=1,
        direction='xy',
        temperature=0.05
    )

    # configure the *.h5 file
    output_file = "haldane-output.h5"
    kite.config_system(lattice, configuration, calculation, filename=output_file, disorder=disorder)

    # for generating the desired output from the generated HDF5-file, run
    # ../build/KITEx haldane-output.h5
    # ../tools/build/KITE-tools haldane-output.h5

    # note: to generate the conductivity data file for a desired window of Fermi energies, please use
    # ../tools/build/KITE-tools h5_file.h --CondDC -F Emin Emax NumPoints
    # Run ../tools/build/KITE-tools --help for more options

    # returning the name of the created HDF5-file
    return output_file


if __name__ == "__main__":
    main()
PypiClean
/huey-drf-api-0.2.0.tar.gz/huey-drf-api-0.2.0/README.rst
=============================
Huey DRF API
=============================

.. image:: https://badge.fury.io/py/huey-drf-api.svg
    :target: https://badge.fury.io/py/huey-drf-api

.. image:: https://travis-ci.org/eyemyth/huey-drf-api.svg?branch=master
    :target: https://travis-ci.org/eyemyth/huey-drf-api

.. image:: https://codecov.io/gh/eyemyth/huey-drf-api/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/eyemyth/huey-drf-api

A DRF API for Huey

Documentation
-------------

The full documentation is at https://huey-drf-api.readthedocs.io.
(Except it isn't, not yet.)

Quickstart
----------

Install Huey DRF API::

    pip install huey-drf-api

Add it to your `INSTALLED_APPS`:

.. code-block:: python

    INSTALLED_APPS = (
        ...
        'hueydrfapi',
        ...
    )

Add Huey DRF API's URL patterns:

.. code-block:: python

    urlpatterns = [
        ...
        path('', include('hueydrfapi.urls', namespace='hueydrfapi')),
        ...
    ]

Features
--------

* TODO

Running Tests
-------------

* TODO: write tests

Does the code actually work?

::

    source <YOURVIRTUALENV>/bin/activate
    (myenv) $ pip install tox
    (myenv) $ tox

Credits
-------

Tools used in rendering this package:

* Cookiecutter_
* `cookiecutter-djangopackage`_

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`cookiecutter-djangopackage`: https://github.com/pydanny/cookiecutter-djangopackage
PypiClean
/python-ldap-3.4.3.tar.gz/python-ldap-3.4.3/Doc/reference/ldap-filter.rst
:py:mod:`ldap.filter` LDAP filter handling
============================================

.. py:module:: ldap.filter
   :synopsis: LDAP filter handling.
.. moduleauthor:: python-ldap project (see https://www.python-ldap.org/)

.. % Author of the module code;

.. seealso::

   :rfc:`4515` - Lightweight Directory Access Protocol (LDAP): String Representation of Search Filters.


The :mod:`ldap.filter` module defines the following functions:

.. function:: escape_filter_chars(assertion_value[, escape_mode=0])

   This function escapes characters in *assertion_value* which are special in LDAP filters.
   You should use this function when building LDAP filter strings from arbitrary input.

   *escape_mode* means:
   If :const:`0` only special chars mentioned in RFC 4515 are escaped.
   If :const:`1` all NON-ASCII chars are escaped.
   If :const:`2` all chars are escaped.

   .. % -> string

.. function:: filter_format(filter_template, assertion_values)

   This function applies :func:`escape_filter_chars` to each of the strings in the list
   *assertion_values*. After that, *filter_template*, which must contain as many :const:`%s`
   placeholders as there are assertion values, is used to build the whole filter string.

   .. % -> string
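A short usage sketch; the names and values below are purely illustrative::

    from ldap.filter import escape_filter_chars, filter_format

    # Characters that are special in filters (e.g. "(", ")", "*", "\") are
    # replaced by their \XX hex escapes as defined in RFC 4515.
    escaped = escape_filter_chars('t*st)user')      # t\2ast\29user

    # filter_format() escapes each assertion value and substitutes it for
    # the corresponding %s placeholder in the template.
    flt = filter_format('(&(cn=%s)(mail=%s))', ['Alice (admin)', 'alice@example.com'])
    # (&(cn=Alice \28admin\29)(mail=alice@example.com))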
PypiClean
/django-glitter-0.2.10.tar.gz/django-glitter-0.2.10/glitter/blocks/video/validators.py
import re

from django.core.exceptions import ValidationError


YOUTUBE_URL_RE = r"""
    (?x)^
    (
        (?:https?://|//)                                      # http(s):// or protocol-independent URL
        (?:(?:(?:(?:\w+\.)?[yY][oO][uU][tT][uU][bB][eE](?:-nocookie)?\.com/|
           youtube\.googleapis\.com/)                         # the various hostnames, with wildcard subdomains
        (?:.*?\#/)?                                           # handle anchor (#/) redirect urls
        (?:                                                   # the various things that can precede the ID:
            (?:(?:v|embed|e)/(?!videoseries))                 # v/ or embed/ or e/
            |(?:                                              # or the v= param in all its forms
                (?:(?:watch|movie)(?:_popup)?(?:\.php)?/?)?   # preceding watch(_popup|.php) or nothing (like /?v=xxxx)
                (?:\?|\#!?)                                   # the params delimiter ? or # or #!
                (?:.*?&)??                                    # any other preceding param (like /?s=tuff&v=xxxx)
                v=
            )
        ))
        |(?:www\.)?cleanvideosearch\.com/media/action/yt/watch\?videoId=
        )
    )?                                                        # all until now is optional -> you can pass the naked ID
    ([0-9A-Za-z_-]{11})                                       # here is it! the YouTube video ID
    (?!.*?&list=)                                             # combined list/video URLs are handled by the playlist IE
    (?(1).+)?                                                 # if we found the ID, everything can follow
    $
"""

VIMEO_URL_RE = r"""
    (?x)
    https?://
    (?:(?:www|(?P<player>player))\.)?
    vimeo(?P<pro>pro)?\.com/
    (?!channels/[^/?#]+/?(?:$|[?#])|album/)
    (?:.*?/)?
    (?:(?:play_redirect_hls|moogaloop\.swf)\?clip_id=)?
    (?:videos?/)?
    (?P<id>[0-9]+)
    /?(?:[?&].*)?(?:[#].*)?$
"""


def validate_url(value):
    """ Validate url. """
    if not re.match(VIMEO_URL_RE, value) and not re.match(YOUTUBE_URL_RE, value):
        raise ValidationError('Invalid URL - only Youtube, Vimeo can be used.')
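# A small usage sketch of the validator above; the URLs are illustrative only.
if __name__ == "__main__":
    validate_url('https://www.youtube.com/watch?v=dQw4w9WgXcQ')   # accepted
    validate_url('https://vimeo.com/76979871')                    # accepted
    try:
        validate_url('https://example.com/video/123')
    except ValidationError:
        print('rejected: only YouTube and Vimeo URLs are accepted')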
PypiClean
/odoo_addon_vault_share-15.0.1.1.1-py3-none-any.whl/odoo/addons/vault_share/static/src/legacy/vault_share_widget.js
odoo.define("vault.share.widget", function (require) { "use strict"; const basic_fields = require("web.basic_fields"); const core = require("web.core"); const registry = require("web.field_registry"); const sh_utils = require("vault.share.utils"); const utils = require("vault.utils"); const vault = require("vault"); const vault_fields = require("vault.fields"); const QWeb = core.qweb; // Widget used to view the encrypted pin const VaultPinField = basic_fields.InputField.extend(vault_fields.VaultAbstract, { supportedFieldTypes: ["char"], events: _.extend({}, basic_fields.InputField.prototype.events, { "click .o_vault_show": "_onShowValue", "click .o_vault_clipboard": "_onCopyValue", }), template: "FieldPinVault", /** * Prepare the widget by evaluating the field attributes and setting the defaults * * @override */ init: function () { this._super.apply(this, arguments); this.pin_size = this.attrs.pin_size || sh_utils.PinSize; }, /** * Decrypt the value using the private key of the vault and slice it to * the actual pin size because there is a salt following * * @private * @param {String} data * @returns the decrypted data */ _decrypt: async function (data) { if (!data) return data; const private_key = await vault.get_private_key(); const plain = await utils.asym_decrypt(private_key, data); return plain.slice(0, this.pin_size); }, /** * Render the decrypted value or the stars * * @private */ _renderReadonly: function () { this._renderValue(this.decrypted_value || "********"); }, /** * Render the decrypted value or the stars * * @private */ _renderEdit: function () { this._renderValue(this.decrypted_value || "********"); }, /** * Render the field using the template * * @private * @param {String} value */ _renderValue: function (value) { const self = this; this.$el.html( QWeb.render(this.template, { widget: self, value: value, show: !this.decrypted, }) ); }, }); // Widget used to create shared outgoing secrets encrypted with a pin const VaultShareField = vault_fields.VaultField.extend({ events: _.extend({}, vault_fields.VaultField.prototype.events, { "click .o_vault_save": "_onSaveValue", }), template: "FieldVaultShare", /** * Prepare the widget by evaluating the field attributes and setting the defaults * * @override */ init: function () { this._super.apply(this, arguments); this.pin_size = this.attrs.pin_size || sh_utils.PinSize; this.field_salt = this.attrs.salt || "salt"; this.field_pin = this.attrs.pin || "pin"; }, /** * Encrypt the pin with a random salt to make it hard to guess him by * encrypting every possilbilty with the public key. 
Store the pin in the * proper field * * @private */ _storePin: async function () { const salt = utils.generate_iv_base64(); const crypted_pin = await utils.asym_encrypt( await vault.get_public_key(), this.pin + salt ); this._setFieldValue(this.field_pin, crypted_pin); }, /** * Returns the pin from the class, record data, or generate a new pin if * none s currently available * * @private * @returns the pin */ _getPin: async function () { if (this.pin) return this.pin; this.pin = this.recordData[this.field_pin]; if (this.pin) { // Decrypt the pin and slice him to the configured pin size const private_key = await vault.get_private_key(); const plain = await utils.asym_decrypt(private_key, this.pin); this.pin = plain.slice(0, this.pin_size); return this.pin; } // Generate a new pin and store it this.pin = sh_utils.generate_pin(this.pin_size); await this._storePin(); return this.pin; }, /** * Returns the salt from the class, record data, or generate a new salt if * none is currently available * * @private * @returns the salt */ _getSalt: function () { if (this.salt) return this.salt; this.salt = this.recordData[this.field_salt]; if (this.salt) return this.salt; // Generate a new salt and store him this.salt = utils.toBase64(utils.generate_bytes(utils.SaltLength).buffer); this._setFieldValue(this.field_salt, this.salt); return this.salt; }, /** * Decrypt the encrypted data using the pin, IV and salt * * @private * @param {String} crypted */ _decrypt: async function (crypted) { if (crypted === false) return false; if (!utils.supported()) return null; const iv = this._getIV(); const pin = await this._getPin(); const salt = utils.fromBase64(this._getSalt()); const key = await utils.derive_key(pin, salt, 4000); return await utils.sym_decrypt(key, crypted, iv); }, /** * Encrypt the data using the pin, IV and salt * * @private * @param {String} data */ _encrypt: async function (data) { if (!utils.supported()) return null; const iv = this._getIV(); const pin = await this._getPin(); const salt = utils.fromBase64(this._getSalt()); const key = await utils.derive_key(pin, salt, 4000); return await utils.sym_encrypt(key, data, iv); }, /** * Resets the content to the formated value in readonly mode. * * @override * @private */ _renderReadonly: function () { const self = this; this.$el.html( QWeb.render(this.template, { widget: self, value: this.decrypted_value || "********", show: !this.decrypted, }) ); }, /** * @override * @returns {String} the content of the input */ _getValue: function () { return this.$input.val(); }, }); // Widget used to view shared incoming secrets encrypted with public keys const VaultShareFile = vault_fields.VaultFile.extend({ store_model: "vault.file", template: "FileVaultShare", events: _.extend({}, vault_fields.VaultFile.prototype.events, { "click .o_vault_save": "_onSaveValue", }), /** * Prepare the widget by evaluating the field attributes and setting the defaults * * @override */ init: function () { this._super.apply(this, arguments); this.field_key = this.attrs.key || "key"; this.pin_size = this.attrs.pin_size || sh_utils.PinSize; this.field_salt = this.attrs.salt || "salt"; this.field_pin = this.attrs.pin || "pin"; }, /** * Encrypt the pin with a random salt to make it hard to guess him by * encrypting every possilbilty with the public key. 
Store the pin in the * proper field * * @private */ _storePin: async function () { const salt = utils.generate_iv_base64(); const crypted_pin = await utils.asym_encrypt( await vault.get_public_key(), this.pin + salt ); this._setFieldValue(this.field_pin, crypted_pin); }, /** * Returns the pin from the class, record data, or generate a new pin if * none s currently available * * @private * @returns the pin */ _getPin: async function () { if (this.pin) return this.pin; this.pin = this.recordData[this.field_pin]; if (this.pin) { // Decrypt the pin and slice him to the configured pin size const private_key = await vault.get_private_key(); const plain = await utils.asym_decrypt(private_key, this.pin); this.pin = plain.slice(0, this.pin_size); return this.pin; } // Generate a new pin and store it this.pin = sh_utils.generate_pin(this.pin_size); await this._storePin(); return this.pin; }, /** * Returns the salt from the class, record data, or generate a new salt if * none is currently available * * @private * @returns the salt */ _getSalt: function () { if (this.salt) return this.salt; this.salt = this.recordData[this.field_salt]; if (this.salt) return this.salt; // Generate a new salt and store him this.salt = utils.toBase64(utils.generate_bytes(utils.SaltLength).buffer); this._setFieldValue(this.field_salt, this.salt); return this.salt; }, _renderReadonly: function () { this.do_toggle(Boolean(this.value)); if (this.value) { this.$el.html( QWeb.render(this.template, { widget: this, filename: this.filename_value, }) ); const $el = this.$(".link"); if (this.recordData.id) $el.css("cursor", "pointer"); else $el.css("cursor", "not-allowed"); } }, /** * Decrypt the encrypted data using the pin, IV and salt * * @private * @param {String} crypted */ _decrypt: async function (crypted) { if (!utils.supported()) return null; const iv = this._getIV(); const pin = await this._getPin(); const salt = utils.fromBase64(this._getSalt()); const key = await utils.derive_key(pin, salt, 4000); return await utils.sym_decrypt(key, crypted, iv); }, /** * Encrypt the data using the pin, IV and salt * * @private * @param {String} data */ _encrypt: async function (data) { if (!utils.supported()) return null; const iv = this._getIV(); const pin = await this._getPin(); const salt = utils.fromBase64(this._getSalt()); const key = await utils.derive_key(pin, salt, 4000); return await utils.sym_encrypt(key, data, iv); }, }); registry.add("vault_pin", VaultPinField); registry.add("vault_share", VaultShareField); registry.add("vault_share_file", VaultShareFile); });
PypiClean
/nautilus_trader-1.177.0-cp310-cp310-manylinux_2_31_x86_64.whl/nautilus_trader/examples/strategies/ema_cross_twap.py
from decimal import Decimal from typing import Any from nautilus_trader.common.enums import LogColor from nautilus_trader.config import StrategyConfig from nautilus_trader.config.validation import PositiveFloat from nautilus_trader.core.correctness import PyCondition from nautilus_trader.core.data import Data from nautilus_trader.core.message import Event from nautilus_trader.indicators.average.ema import ExponentialMovingAverage from nautilus_trader.model.data import Bar from nautilus_trader.model.data import BarType from nautilus_trader.model.data import OrderBookDeltas from nautilus_trader.model.data import QuoteTick from nautilus_trader.model.data import Ticker from nautilus_trader.model.data import TradeTick from nautilus_trader.model.enums import OrderSide from nautilus_trader.model.enums import TimeInForce from nautilus_trader.model.identifiers import ExecAlgorithmId from nautilus_trader.model.identifiers import InstrumentId from nautilus_trader.model.instruments import Instrument from nautilus_trader.model.orderbook import OrderBook from nautilus_trader.model.orders import MarketOrder from nautilus_trader.trading.strategy import Strategy # *** THIS IS A TEST STRATEGY WITH NO ALPHA ADVANTAGE WHATSOEVER. *** # *** IT IS NOT INTENDED TO BE USED TO TRADE LIVE WITH REAL MONEY. *** class EMACrossTWAPConfig(StrategyConfig, frozen=True): """ Configuration for ``EMACrossTWAP`` instances. Parameters ---------- instrument_id : InstrumentId The instrument ID for the strategy. bar_type : BarType The bar type for the strategy. trade_size : str The position size per trade (interpreted as Decimal). fast_ema_period : int, default 10 The fast EMA period. slow_ema_period : int, default 20 The slow EMA period. twap_horizon_secs : PositiveFloat, default 30.0 The TWAP horizon (seconds) over which the algorithm will execute. twap_interval_secs : PositiveFloat, default 3.0 The TWAP interval (seconds) between orders. close_positions_on_stop : bool, default True If all open positions should be closed on strategy stop. order_id_tag : str The unique order ID tag for the strategy. Must be unique amongst all running strategies for a particular trader ID. oms_type : OmsType The order management system type for the strategy. This will determine how the `ExecutionEngine` handles position IDs (see docs). """ instrument_id: str bar_type: str trade_size: Decimal fast_ema_period: int = 10 slow_ema_period: int = 20 twap_horizon_secs: PositiveFloat = 30.0 twap_interval_secs: PositiveFloat = 3.0 close_positions_on_stop: bool = True class EMACrossTWAP(Strategy): """ A simple moving average cross example strategy. When the fast EMA crosses the slow EMA then enter a position at the market in that direction. Cancels all orders and closes all positions on stop. Parameters ---------- config : EMACrossConfig The configuration for the instance. Raises ------ ValueError If `config.fast_ema_period` is not less than `config.slow_ema_period`. ValueError If `config.twap_interval_secs` is not less than or equal to `config.twap_horizon_secs`. 
""" def __init__(self, config: EMACrossTWAPConfig) -> None: PyCondition.true( config.fast_ema_period < config.slow_ema_period, "{config.fast_ema_period=} must be less than {config.slow_ema_period=}", ) PyCondition.true( config.twap_interval_secs <= config.twap_horizon_secs, "{config.twap_interval_secs=} must be less than or equal to {config.twap_horizon_secs=}", ) super().__init__(config) # Configuration self.instrument_id = InstrumentId.from_str(config.instrument_id) self.bar_type = BarType.from_str(config.bar_type) self.trade_size = Decimal(config.trade_size) # Create the indicators for the strategy self.fast_ema = ExponentialMovingAverage(config.fast_ema_period) self.slow_ema = ExponentialMovingAverage(config.slow_ema_period) # Order management self.twap_exec_algorithm_id = ExecAlgorithmId("TWAP") self.twap_exec_algorithm_params: dict[str, Any] = { "horizon_secs": config.twap_horizon_secs, "interval_secs": config.twap_interval_secs, } self.close_positions_on_stop = config.close_positions_on_stop self.instrument: Instrument = None def on_start(self) -> None: """ Actions to be performed on strategy start. """ self.instrument = self.cache.instrument(self.instrument_id) if self.instrument is None: self.log.error(f"Could not find instrument for {self.instrument_id}") self.stop() return # Register the indicators for updating self.register_indicator_for_bars(self.bar_type, self.fast_ema) self.register_indicator_for_bars(self.bar_type, self.slow_ema) # Get historical data self.request_bars(self.bar_type) # Subscribe to live data self.subscribe_bars(self.bar_type) self.subscribe_quote_ticks(self.instrument_id) def on_instrument(self, instrument: Instrument) -> None: """ Actions to be performed when the strategy is running and receives an instrument. Parameters ---------- instrument : Instrument The instrument received. """ # For debugging (must add a subscription) # self.log.info(repr(instrument), LogColor.CYAN) def on_order_book_deltas(self, deltas: OrderBookDeltas) -> None: """ Actions to be performed when the strategy is running and receives order book deltas. Parameters ---------- deltas : OrderBookDeltas The order book deltas received. """ # For debugging (must add a subscription) # self.log.info(repr(deltas), LogColor.CYAN) def on_order_book(self, order_book: OrderBook) -> None: """ Actions to be performed when the strategy is running and receives an order book. Parameters ---------- order_book : OrderBook The order book received. """ # For debugging (must add a subscription) # self.log.info(repr(order_book), LogColor.CYAN) def on_ticker(self, ticker: Ticker) -> None: """ Actions to be performed when the strategy is running and receives a ticker. Parameters ---------- ticker : Ticker The ticker received. """ # For debugging (must add a subscription) # self.log.info(repr(ticker), LogColor.CYAN) def on_quote_tick(self, tick: QuoteTick) -> None: """ Actions to be performed when the strategy is running and receives a quote tick. Parameters ---------- tick : QuoteTick The tick received. """ # For debugging (must add a subscription) # self.log.info(repr(tick), LogColor.CYAN) def on_trade_tick(self, tick: TradeTick) -> None: """ Actions to be performed when the strategy is running and receives a trade tick. Parameters ---------- tick : TradeTick The tick received. """ # For debugging (must add a subscription) # self.log.info(repr(tick), LogColor.CYAN) def on_bar(self, bar: Bar) -> None: """ Actions to be performed when the strategy is running and receives a bar. 
Parameters ---------- bar : Bar The bar received. """ self.log.info(repr(bar), LogColor.CYAN) # Check if indicators ready if not self.indicators_initialized(): self.log.info( f"Waiting for indicators to warm up [{self.cache.bar_count(self.bar_type)}]...", color=LogColor.BLUE, ) return # Wait for indicators to warm up... if bar.is_single_price(): # Implies no market information for this bar return # BUY LOGIC if self.fast_ema.value >= self.slow_ema.value: if self.portfolio.is_flat(self.instrument_id): self.buy() elif self.portfolio.is_net_short(self.instrument_id): self.close_all_positions(self.instrument_id) self.buy() # SELL LOGIC elif self.fast_ema.value < self.slow_ema.value: if self.portfolio.is_flat(self.instrument_id): self.sell() elif self.portfolio.is_net_long(self.instrument_id): self.close_all_positions(self.instrument_id) self.sell() def buy(self) -> None: """ Users simple buy method (example). """ order: MarketOrder = self.order_factory.market( instrument_id=self.instrument_id, order_side=OrderSide.BUY, quantity=self.instrument.make_qty(self.trade_size), time_in_force=TimeInForce.FOK, exec_algorithm_id=self.twap_exec_algorithm_id, exec_algorithm_params=self.twap_exec_algorithm_params, ) self.submit_order(order) def sell(self) -> None: """ Users simple sell method (example). """ order: MarketOrder = self.order_factory.market( instrument_id=self.instrument_id, order_side=OrderSide.SELL, quantity=self.instrument.make_qty(self.trade_size), time_in_force=TimeInForce.FOK, exec_algorithm_id=self.twap_exec_algorithm_id, exec_algorithm_params=self.twap_exec_algorithm_params, ) self.submit_order(order) def on_data(self, data: Data) -> None: """ Actions to be performed when the strategy is running and receives generic data. Parameters ---------- data : Data The data received. """ def on_event(self, event: Event) -> None: """ Actions to be performed when the strategy is running and receives an event. Parameters ---------- event : Event The event received. """ def on_stop(self) -> None: """ Actions to be performed when the strategy is stopped. """ self.cancel_all_orders(self.instrument_id) if self.close_positions_on_stop: self.close_all_positions(self.instrument_id) # Unsubscribe from data self.unsubscribe_bars(self.bar_type) def on_reset(self) -> None: """ Actions to be performed when the strategy is reset. """ # Reset indicators here self.fast_ema.reset() self.slow_ema.reset() def on_save(self) -> dict[str, bytes]: """ Actions to be performed when the strategy is saved. Create and return a state dictionary of values to be saved. Returns ------- dict[str, bytes] The strategy state dictionary. """ return {} def on_load(self, state: dict[str, bytes]) -> None: """ Actions to be performed when the strategy is loaded. Saved state values will be contained in the give state dictionary. Parameters ---------- state : dict[str, bytes] The strategy state dictionary. """ def on_dispose(self) -> None: """ Actions to be performed when the strategy is disposed. Cleanup any resources used by the strategy here. """
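A minimal configuration sketch for the strategy defined above. The instrument and bar-type strings are placeholders rather than real venue data, and wiring the strategy into a BacktestEngine or TradingNode is omitted.

# Illustrative configuration only; identifiers below are placeholders.
config = EMACrossTWAPConfig(
    instrument_id="ETHUSDT-PERP.BINANCE",
    bar_type="ETHUSDT-PERP.BINANCE-1-MINUTE-LAST-EXTERNAL",
    trade_size=Decimal("0.05"),
    fast_ema_period=10,
    slow_ema_period=20,
    twap_horizon_secs=30.0,
    twap_interval_secs=3.0,
)
strategy = EMACrossTWAP(config=config)
# The strategy is then added to a BacktestEngine or TradingNode, which must also
# have a TWAP execution algorithm registered under ExecAlgorithmId("TWAP").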
PypiClean
/py-Pyro-1.4.16.tar.gz/py-Pyro-1.4.16/pyro/methods/chats/get_dialogs.py
import logging from typing import List from pyro import raw from pyro import types from pyro import utils from pyro.scaffold import Scaffold log = logging.getLogger(__name__) class GetDialogs(Scaffold): async def get_dialogs( self, offset_date: int = 0, limit: int = 100, pinned_only: bool = False ) -> List["types.Dialog"]: """Get a chunk of the user's dialogs. You can get up to 100 dialogs at once. For a more convenient way of getting a user's dialogs see :meth:`~pyro.Client.iter_dialogs`. Parameters: offset_date (``int``): The offset date in Unix time taken from the top message of a :obj:`~pyro.types.Dialog`. Defaults to 0. Valid for non-pinned dialogs only. limit (``str``, *optional*): Limits the number of dialogs to be retrieved. Defaults to 100. Valid for non-pinned dialogs only. pinned_only (``bool``, *optional*): Pass True if you want to get only pinned dialogs. Defaults to False. Returns: List of :obj:`~pyro.types.Dialog`: On success, a list of dialogs is returned. Example: .. code-block:: python # Get first 100 dialogs app.get_dialogs() # Get pinned dialogs app.get_dialogs(pinned_only=True) """ if pinned_only: r = await self.send( raw.functions.messages.GetPinnedDialogs(folder_id=0), sleep_threshold=60 ) else: r = await self.send( raw.functions.messages.GetDialogs( offset_date=offset_date, offset_id=0, offset_peer=raw.types.InputPeerEmpty(), limit=limit, hash=0, exclude_pinned=True ), sleep_threshold=60 ) users = {i.id: i for i in r.users} chats = {i.id: i for i in r.chats} messages = {} for message in r.messages: if isinstance(message, raw.types.MessageEmpty): continue chat_id = utils.get_peer_id(message.peer_id) messages[chat_id] = await types.Message._parse(self, message, users, chats) parsed_dialogs = [] for dialog in r.dialogs: if not isinstance(dialog, raw.types.Dialog): continue parsed_dialogs.append(types.Dialog._parse(self, dialog, messages, users, chats)) return types.List(parsed_dialogs)
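The offset_date parameter is what enables manual pagination: pass the date of the top message of the last dialog returned by the previous chunk, which is essentially what iter_dialogs automates. A rough sketch; the top_message and date attribute names on Dialog are assumptions and may differ between versions.

# Illustrative manual pagination; Dialog/Message attribute names are assumptions.
async def get_all_dialogs(app):
    dialogs = []
    offset_date = 0
    while True:
        chunk = await app.get_dialogs(offset_date=offset_date, limit=100)
        if not chunk:
            break
        dialogs.extend(chunk)
        if len(chunk) < 100:
            break
        offset_date = chunk[-1].top_message.date  # top message of the last dialog
    return dialogs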
PypiClean
/Facebook_PyBot-0.8b4.tar.gz/Facebook_PyBot-0.8b4/Facebook/exception.py
class Error(Exception): pass class ValidationError(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class InternalError(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class LimitError(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class BadParameterError(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class AccessTokenErrors(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class PermissionError(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class AccountLinkingErrors(Error): def __init__(self, description): """ @required Code Description :type description: object """ self.description = description class OAuthException(Error): def __init__(self, description): """ :param description: """ self.description = description def raise_error(response_data): if response_data["error"]["code"] == 1200: return InternalError("Temporary send message failure. Please try again later.") elif response_data["error"]["code"] == 4: return LimitError("Too many send requests to phone numbers") elif response_data["error"]["code"] == 100: if response_data["error"].get("error_subcode") == 2018109: return LimitError("Attachment size exceeds allowable limit") elif response_data["error"].get("error_subcode") == 2018001: return BadParameterError("No matching user found") else: data = response_data["error"]["message"] return BadParameterError(data) elif response_data["error"]["code"] == 613: return LimitError("Calls to this API have exceeded the rate limit") elif response_data["error"]["code"] == 190: return AccessTokenErrors("Invalid OAuth access token.") elif response_data["error"]["code"] == 10: if response_data["error"].get("error_subcode") == 2018065: return PermissionError( "This message is sent outside of allowed window. 
" "You need page_messaging_subscriptions permission to be able to do it.") elif response_data["error"].get("error_subcode") == 2018108: return PermissionError( "This Person Cannot Receive Messages: This person isn't receiving messages from you right now.") elif response_data["error"]["code"] == 200: if response_data["error"].get("error_subcode") == 2018028: return PermissionError( "Cannot message users who are not admins, " "developers or testers of the app until pages_messaging permission is reviewed and the app is live.") elif response_data["error"]["error_subcode"] == 2018027: return PermissionError( "Cannot message users who are not admins, " "developers or testers of the app " "until pages_messaging_phone_number permission is reviewed and the app is live.") elif response_data["error"]["error_subcode"] == 2018021: return PermissionError( "Requires phone matching access fee to be paid by this page" " unless the recipient user is an admin, developer, or tester of the app.") elif response_data["error"]["error_subcode"] == 1545041: return PermissionError("Message Not Sent: This person isn't available right now.") elif response_data["error"]["code"] == 10303: return AccountLinkingErrors("Invalid account_linking_token") elif response_data["error"]["code"] == 194: message = response_data["error"]["message"] return OAuthException(message) elif response_data["error"]["code"] == 803: message = response_data["error"]["message"] return OAuthException(message) else: return Exception(response_data["error"]["message"]) class ElementCountExceeded(Error): def __init__(self, description): self.description = description class QuickReplyCountExceeded(Error): def __init__(self, description): self.description = description class CharacterCountExceeded(Error): def __init__(self, description): self.description = description
PypiClean
/pyflakes3k-0.4.3.tar.gz/pyflakes3k-0.4.3/pyflakes/messages.py
class Message: message = '' message_args = () def __init__(self, filename, lineno): self.filename = filename self.lineno = lineno def __str__(self): return '%s:%s: %s' % (self.filename, self.lineno, self.message % self.message_args) class UnusedImport(Message): message = '%r imported but unused' def __init__(self, filename, lineno, name): Message.__init__(self, filename, lineno) self.message_args = (name,) class RedefinedWhileUnused(Message): message = 'redefinition of unused %r from line %r' def __init__(self, filename, lineno, name, orig_lineno): Message.__init__(self, filename, lineno) self.message_args = (name, orig_lineno) class ImportShadowedByLoopVar(Message): message = 'import %r from line %r shadowed by loop variable' def __init__(self, filename, lineno, name, orig_lineno): Message.__init__(self, filename, lineno) self.message_args = (name, orig_lineno) class ImportStarUsed(Message): message = "'from %s import *' used; unable to detect undefined names" def __init__(self, filename, lineno, modname): Message.__init__(self, filename, lineno) self.message_args = (modname,) class UndefinedName(Message): message = 'undefined name %r' def __init__(self, filename, lineno, name): Message.__init__(self, filename, lineno) self.message_args = (name,) class UndefinedExport(Message): message = 'undefined name %r in __all__' def __init__(self, filename, lineno, name): Message.__init__(self, filename, lineno) self.message_args = (name,) class UndefinedLocal(Message): message = "local variable %r (defined in enclosing scope on line %r) referenced before assignment" def __init__(self, filename, lineno, name, orig_lineno): Message.__init__(self, filename, lineno) self.message_args = (name, orig_lineno) class DuplicateArgument(Message): message = 'duplicate argument %r in function definition' def __init__(self, filename, lineno, name): Message.__init__(self, filename, lineno) self.message_args = (name,) class RedefinedFunction(Message): message = 'redefinition of function %r from line %r' def __init__(self, filename, lineno, name, orig_lineno): Message.__init__(self, filename, lineno) self.message_args = (name, orig_lineno) class LateFutureImport(Message): message = 'future import(s) %r after other statements' def __init__(self, filename, lineno, names): Message.__init__(self, filename, lineno) self.message_args = (names,) class UnusedVariable(Message): """ Indicates that a variable has been explicity assigned to but not actually used. """ message = 'local variable %r is assigned to but never used' def __init__(self, filename, lineno, names): Message.__init__(self, filename, lineno) self.message_args = (names,)
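Each message class is a plain value object: the checker instantiates one per finding and relies on __str__ for the report line. A small sketch constructing two of them directly:

# Constructing and printing messages by hand (normally done by the pyflakes checker).
msg = UnusedImport("example.py", 3, "os")
print(str(msg))   # example.py:3: 'os' imported but unused

msg = UndefinedLocal("example.py", 12, "total", 8)
print(str(msg))   # example.py:12: local variable 'total' (defined in enclosing scope on line 8) referenced before assignment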
PypiClean
/sat_mapping_cyborg_ai-0.0.37-py3-none-any.whl/sat_mapping/Lib/gsutil/gslib/vendored/boto/boto/cloudsearch/domain.py
import boto from boto.compat import json from boto.cloudsearch.optionstatus import OptionStatus from boto.cloudsearch.optionstatus import IndexFieldStatus from boto.cloudsearch.optionstatus import ServicePoliciesStatus from boto.cloudsearch.optionstatus import RankExpressionStatus from boto.cloudsearch.document import DocumentServiceConnection from boto.cloudsearch.search import SearchConnection def handle_bool(value): if value in [True, 'true', 'True', 'TRUE', 1]: return True return False class Domain(object): """ A Cloudsearch domain. :ivar name: The name of the domain. :ivar id: The internally generated unique identifier for the domain. :ivar created: A boolean which is True if the domain is created. It can take several minutes to initialize a domain when CreateDomain is called. Newly created search domains are returned with a False value for Created until domain creation is complete :ivar deleted: A boolean which is True if the search domain has been deleted. The system must clean up resources dedicated to the search domain when delete is called. Newly deleted search domains are returned from list_domains with a True value for deleted for several minutes until resource cleanup is complete. :ivar processing: True if processing is being done to activate the current domain configuration. :ivar num_searchable_docs: The number of documents that have been submittted to the domain and indexed. :ivar requires_index_document: True if index_documents needs to be called to activate the current domain configuration. :ivar search_instance_count: The number of search instances that are available to process search requests. :ivar search_instance_type: The instance type that is being used to process search requests. :ivar search_partition_count: The number of partitions across which the search index is spread. 
""" def __init__(self, layer1, data): self.layer1 = layer1 self.update_from_data(data) def update_from_data(self, data): self.created = data['created'] self.deleted = data['deleted'] self.processing = data['processing'] self.requires_index_documents = data['requires_index_documents'] self.domain_id = data['domain_id'] self.domain_name = data['domain_name'] self.num_searchable_docs = data['num_searchable_docs'] self.search_instance_count = data['search_instance_count'] self.search_instance_type = data.get('search_instance_type', None) self.search_partition_count = data['search_partition_count'] self._doc_service = data['doc_service'] self._search_service = data['search_service'] @property def doc_service_arn(self): return self._doc_service['arn'] @property def doc_service_endpoint(self): return self._doc_service['endpoint'] @property def search_service_arn(self): return self._search_service['arn'] @property def search_service_endpoint(self): return self._search_service['endpoint'] @property def created(self): return self._created @created.setter def created(self, value): self._created = handle_bool(value) @property def deleted(self): return self._deleted @deleted.setter def deleted(self, value): self._deleted = handle_bool(value) @property def processing(self): return self._processing @processing.setter def processing(self, value): self._processing = handle_bool(value) @property def requires_index_documents(self): return self._requires_index_documents @requires_index_documents.setter def requires_index_documents(self, value): self._requires_index_documents = handle_bool(value) @property def search_partition_count(self): return self._search_partition_count @search_partition_count.setter def search_partition_count(self, value): self._search_partition_count = int(value) @property def search_instance_count(self): return self._search_instance_count @search_instance_count.setter def search_instance_count(self, value): self._search_instance_count = int(value) @property def num_searchable_docs(self): return self._num_searchable_docs @num_searchable_docs.setter def num_searchable_docs(self, value): self._num_searchable_docs = int(value) @property def name(self): return self.domain_name @property def id(self): return self.domain_id def delete(self): """ Delete this domain and all index data associated with it. """ return self.layer1.delete_domain(self.name) def get_stemming(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined stemming options for the domain. """ return OptionStatus(self, None, self.layer1.describe_stemming_options, self.layer1.update_stemming_options) def get_stopwords(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined stopword options for the domain. """ return OptionStatus(self, None, self.layer1.describe_stopword_options, self.layer1.update_stopword_options) def get_synonyms(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined synonym options for the domain. """ return OptionStatus(self, None, self.layer1.describe_synonym_options, self.layer1.update_synonym_options) def get_access_policies(self): """ Return a :class:`boto.cloudsearch.option.OptionStatus` object representing the currently defined access policies for the domain. 
""" return ServicePoliciesStatus(self, None, self.layer1.describe_service_access_policies, self.layer1.update_service_access_policies) def index_documents(self): """ Tells the search domain to start indexing its documents using the latest text processing options and IndexFields. This operation must be invoked to make options whose OptionStatus has OptioState of RequiresIndexDocuments visible in search results. """ self.layer1.index_documents(self.name) def get_index_fields(self, field_names=None): """ Return a list of index fields defined for this domain. """ data = self.layer1.describe_index_fields(self.name, field_names) return [IndexFieldStatus(self, d) for d in data] def create_index_field(self, field_name, field_type, default='', facet=False, result=False, searchable=False, source_attributes=[]): """ Defines an ``IndexField``, either replacing an existing definition or creating a new one. :type field_name: string :param field_name: The name of a field in the search index. :type field_type: string :param field_type: The type of field. Valid values are uint | literal | text :type default: string or int :param default: The default value for the field. If the field is of type ``uint`` this should be an integer value. Otherwise, it's a string. :type facet: bool :param facet: A boolean to indicate whether facets are enabled for this field or not. Does not apply to fields of type ``uint``. :type results: bool :param results: A boolean to indicate whether values of this field can be returned in search results or used in ranking. Does not apply to fields of type ``uint``. :type searchable: bool :param searchable: A boolean to indicate whether search is enabled for this field or not. Applies only to fields of type ``literal``. :type source_attributes: list of dicts :param source_attributes: An optional list of dicts that provide information about attributes for this index field. A maximum of 20 source attributes can be configured for each index field. Each item in the list is a dict with the following keys: * data_copy - The value is a dict with the following keys: * default - Optional default value if the source attribute is not specified in a document. * name - The name of the document source field to add to this ``IndexField``. * data_function - Identifies the transformation to apply when copying data from a source attribute. * data_map - The value is a dict with the following keys: * cases - A dict that translates source field values to custom values. * default - An optional default value to use if the source attribute is not specified in a document. * name - the name of the document source field to add to this ``IndexField`` * data_trim_title - Trims common title words from a source document attribute when populating an ``IndexField``. This can be used to create an ``IndexField`` you can use for sorting. The value is a dict with the following fields: * default - An optional default value. * language - an IETF RFC 4646 language code. * separator - The separator that follows the text to trim. * name - The name of the document source field to add. 
:raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ data = self.layer1.define_index_field(self.name, field_name, field_type, default=default, facet=facet, result=result, searchable=searchable, source_attributes=source_attributes) return IndexFieldStatus(self, data, self.layer1.describe_index_fields) def get_rank_expressions(self, rank_names=None): """ Return a list of rank expressions defined for this domain. """ fn = self.layer1.describe_rank_expressions data = fn(self.name, rank_names) return [RankExpressionStatus(self, d, fn) for d in data] def create_rank_expression(self, name, expression): """ Create a new rank expression. :type rank_name: string :param rank_name: The name of an expression computed for ranking while processing a search request. :type rank_expression: string :param rank_expression: The expression to evaluate for ranking or thresholding while processing a search request. The RankExpression syntax is based on JavaScript expressions and supports: * Integer, floating point, hex and octal literals * Shortcut evaluation of logical operators such that an expression a || b evaluates to the value a if a is true without evaluting b at all * JavaScript order of precedence for operators * Arithmetic operators: + - * / % * Boolean operators (including the ternary operator) * Bitwise operators * Comparison operators * Common mathematic functions: abs ceil erf exp floor lgamma ln log2 log10 max min sqrt pow * Trigonometric library functions: acosh acos asinh asin atanh atan cosh cos sinh sin tanh tan * Random generation of a number between 0 and 1: rand * Current time in epoch: time * The min max functions that operate on a variable argument list Intermediate results are calculated as double precision floating point values. The final return value of a RankExpression is automatically converted from floating point to a 32-bit unsigned integer by rounding to the nearest integer, with a natural floor of 0 and a ceiling of max(uint32_t), 4294967295. Mathematical errors such as dividing by 0 will fail during evaluation and return a value of 0. The source data for a RankExpression can be the name of an IndexField of type uint, another RankExpression or the reserved name text_relevance. The text_relevance source is defined to return an integer from 0 to 1000 (inclusive) to indicate how relevant a document is to the search request, taking into account repetition of search terms in the document and proximity of search terms to each other in each matching IndexField in the document. For more information about using rank expressions to customize ranking, see the Amazon CloudSearch Developer Guide. :raises: BaseException, InternalException, LimitExceededException, InvalidTypeException, ResourceNotFoundException """ data = self.layer1.define_rank_expression(self.name, name, expression) return RankExpressionStatus(self, data, self.layer1.describe_rank_expressions) def get_document_service(self): return DocumentServiceConnection(domain=self) def get_search_service(self): return SearchConnection(domain=self) def __repr__(self): return '<Domain: %s>' % self.domain_name
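A hedged usage sketch for this class. It assumes you already hold a layer-1 CloudSearch connection and a domain-description dict with the keys update_from_data reads; only methods defined above are called.

# Hypothetical wiring: `layer1` and `domain_data` are assumed to come from the
# layer-1 CloudSearch API (a domain description parsed into this dict shape).
domain = Domain(layer1, domain_data)

# Define a searchable text field and a uint field, then trigger re-indexing so the
# new options become visible in search results.
domain.create_index_field("title", "text", result=True, searchable=True)
domain.create_index_field("year", "uint", default=0)
domain.index_documents()

# Documents are pushed and queries are issued through the per-domain connections:
doc_service = domain.get_document_service()
search_service = domain.get_search_service()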
PypiClean
/monaco-qt-0.1.7.tar.gz/monaco-qt-0.1.7/monaco/monaco-editor/node_modules/pretty-quick/dist/scms/git.js
"use strict"; Object.defineProperty(exports, "__esModule", { value: true }); exports.stageFile = exports.getUnstagedChangedFiles = exports.getChangedFiles = exports.getSinceRevision = exports.detect = exports.name = void 0; var _findUp = _interopRequireDefault(require("find-up")); var _execa = _interopRequireDefault(require("execa")); var _path = require("path"); var fs = _interopRequireWildcard(require("fs")); function _getRequireWildcardCache() { if (typeof WeakMap !== "function") return null; var cache = new WeakMap(); _getRequireWildcardCache = function () { return cache; }; return cache; } function _interopRequireWildcard(obj) { if (obj && obj.__esModule) { return obj; } if (obj === null || typeof obj !== "object" && typeof obj !== "function") { return { default: obj }; } var cache = _getRequireWildcardCache(); if (cache && cache.has(obj)) { return cache.get(obj); } var newObj = {}; var hasPropertyDescriptor = Object.defineProperty && Object.getOwnPropertyDescriptor; for (var key in obj) { if (Object.prototype.hasOwnProperty.call(obj, key)) { var desc = hasPropertyDescriptor ? Object.getOwnPropertyDescriptor(obj, key) : null; if (desc && (desc.get || desc.set)) { Object.defineProperty(newObj, key, desc); } else { newObj[key] = obj[key]; } } } newObj.default = obj; if (cache) { cache.set(obj, newObj); } return newObj; } function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; } const name = 'git'; exports.name = name; const detect = directory => { if (fs.existsSync((0, _path.join)(directory, '.git'))) { return directory; } const gitDirectory = _findUp.default.sync('.git', { cwd: directory, type: 'directory' }); if (gitDirectory) { return (0, _path.dirname)(gitDirectory); } const gitWorktreeFile = _findUp.default.sync('.git', { cwd: directory, type: 'file' }); if (gitWorktreeFile) { return (0, _path.dirname)(gitWorktreeFile); } }; exports.detect = detect; const runGit = (directory, args) => _execa.default.sync('git', args, { cwd: directory }); const getLines = execaResult => execaResult.stdout.split('\n'); const getSinceRevision = (directory, { staged, branch }) => { try { const revision = staged ? 'HEAD' : runGit(directory, ['merge-base', 'HEAD', branch || 'master']).stdout.trim(); return runGit(directory, ['rev-parse', '--short', revision]).stdout.trim(); } catch (error) { if (/HEAD/.test(error.message) || staged && /Needed a single revision/.test(error.message)) { return null; } throw error; } }; exports.getSinceRevision = getSinceRevision; const getChangedFiles = (directory, revision, staged) => { return [...getLines(runGit(directory, ['diff', '--name-only', staged ? '--cached' : null, '--diff-filter=ACMRTUB', revision].filter(Boolean))), ...(staged ? [] : getLines(runGit(directory, ['ls-files', '--others', '--exclude-standard'])))].filter(Boolean); }; exports.getChangedFiles = getChangedFiles; const getUnstagedChangedFiles = directory => { return getChangedFiles(directory, null, false); }; exports.getUnstagedChangedFiles = getUnstagedChangedFiles; const stageFile = (directory, file) => { runGit(directory, ['add', file]); }; exports.stageFile = stageFile;
PypiClean
/ytclips_merge-0.1.1-py3-none-any.whl/yt_concate/main.py
import sys
import getopt
import logging
import os

# Add the parent directory of this file's folder to PYTHONPATH
# so the script can also be run directly from the terminal.
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from yt_concate.pipeline.steps.preflight import Preflight
from yt_concate.pipeline.steps.get_video_list import GetVideoList
from yt_concate.pipeline.steps.initialize_yt import InitializeYT
from yt_concate.pipeline.steps.download_caption import DownLoadCaptions
from yt_concate.pipeline.steps.read_caption import ReadCaption
from yt_concate.pipeline.steps.search import Search
from yt_concate.pipeline.steps.download_videos import DownloadVideos
from yt_concate.pipeline.steps.edit_video import EditVideo
from yt_concate.pipeline.steps.postflight import Postflight
from yt_concate.pipeline.steps.step import StepException
from yt_concate.pipeline.pipeline import Pipeline
from yt_concate.utils import Utils


def print_usage():
    print("python main.py -c <channel_id> -s <search_word> -l <limit>")
    print("python main.py --channel_id <channel_id> --search_word <search_word> --limit <limit>")
    print("options")
    print("{:>5}{:<12} ,{}".format("-c", "--channel_id", "channel id for youtube channel"))
    print("{:>5}{:<12} ,{}".format("-s", "--search_word", "search word in the channel videos"))
    print("{:>5}{:<12} ,{}".format("-l", "--limit", "integer, quantity for concatenating videos"))
    print("{:>5}{:<12} ,{}".format("", "--cleanup", "logical, delete files after the result files complete"))
    print("{:>5}{:<12} ,{}".format("", "--level", "the level to print on the screen, default is logging.INFO"))


# channel_id = "UCKSVUHI9rbbkXhvAXK-2uxA"
# search_word = "incredible"


def main():
    inputs = {
        "channel_id": "",
        "search_cord": "",
        "limit": "",
        "cleanup": True,
        "level": logging.INFO,
    }
    short_opt = "hc:s:l:"
    long_opt = "help channel_id= search_word= limit= cleanup= level=".split()
    try:
        opts, args = getopt.getopt(sys.argv[1:], short_opt, long_opt)
    except getopt.GetoptError:
        print_usage()
        sys.exit(2)
    for opt, arg in opts:
        # getopt returns long options with their leading dashes, e.g. "--cleanup".
        if opt in ("-h", "--help"):
            print_usage()
            sys.exit(0)
        elif opt in ("-c", "--channel_id"):
            inputs["channel_id"] = arg
        elif opt in ("-s", "--search_word"):
            inputs["search_cord"] = arg  # dict key kept as defined above
        elif opt in ("-l", "--limit"):
            inputs["limit"] = arg
        elif opt == "--cleanup":
            inputs["cleanup"] = arg
        elif opt == "--level":
            inputs["level"] = arg
    if not inputs["limit"].isnumeric():
        print_usage()
        sys.exit(2)

    steps = [
        Preflight(),
        GetVideoList(),
        InitializeYT(),
        DownLoadCaptions(),
        ReadCaption(),
        Search(),
        DownloadVideos(),
        EditVideo(),
        Postflight(),
    ]

    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    file_handler = logging.FileHandler("project.log")
    formatter = logging.Formatter("%(levelname)s:%(asctime)s:%(message)s")
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)

    stream_handler = logging.StreamHandler()
    stream_handler.setLevel(inputs["level"])
    stream_handler.setFormatter(formatter)
    logger.addHandler(stream_handler)

    utils = Utils()

    p = Pipeline(steps)
    p.run(inputs, utils)


if __name__ == "__main__":
    main()
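For a quick smoke test of the argument parsing above, main() can also be driven from an interactive session by patching sys.argv; the channel id and search word below are just the placeholders already shown in the comments.

# Hypothetical interactive invocation; values are placeholders from the comments above.
import sys

sys.argv = [
    "main.py",
    "-c", "UCKSVUHI9rbbkXhvAXK-2uxA",
    "-s", "incredible",
    "-l", "5",
]
main()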
PypiClean
/PyOpenGL-toolbox-2.3.0.tar.gz/PyOpenGL-toolbox-2.3.0/PyOpenGLtoolbox/__init__.py
# noinspection PyUnresolvedReferences from PyOpenGLtoolbox.camera import CameraR, CameraXYZ # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.geometry import draw_vertex_list, draw_vertex_list_create_normal, draw_list, \ draw_vertex_list_create_normal_textured, draw_vertex_list_normal, draw_vertex_list_normal_textured, \ draw_vertex_list_textured # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.figures import VBObject, load_obj_model, load_gmsh_model, load_gmsh_model, create_circle, \ create_cone, create_cube, create_cube_solid, create_cube_textured, create_diamond, create_dodecahedron, \ create_icosahedron, create_octahedron, create_pyramid, create_pyramid_textured, create_pyramid_vbo, create_sphere, \ create_teapot, create_teapot_textured, create_tetrahedron, create_tetrahedron_vbo, create_torus # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.materials import material_black_plastic, material_black_rubber, material_brass, material_bronze, \ material_chrome, material_copper, material_cyan_plastic, material_cyan_rubber, material_emerald, material_gold, \ material_green_plastic, material_green_rubber, material_jade, material_natural_white, material_obsidian, \ material_pearl, material_red_plastic, material_red_rubber, material_ruby, material_silver, material_turquoise, \ material_white_plastic, material_white_rubber, material_yellow_plastic, material_yellow_rubber # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.mathlib import Point3, Point2, Vector3 # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.opengl import init_gl, init_light, clear_buffer, reshape_window_perspective, is_light_enabled # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.particles import Particle, PARTICLES_OPERATOR_ADD, PARTICLES_OPERATOR_AND, \ PARTICLES_OPERATOR_DIFF, PARTICLES_OPERATOR_DIV, PARTICLES_OPERATOR_MOD, PARTICLES_OPERATOR_MULT, \ PARTICLES_OPERATOR_OR, PARTICLES_OPERATOR_POW, PARTICLES_OPERATOR_XOR # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.pyopengl import init_pygame, load_image # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.shader import load_shader, Shader, ShaderProgram # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.textures import load_texture # noinspection PyUnresolvedReferences from PyOpenGLtoolbox.utils import create_axes, draw_text # Base libraries # noinspection PyUnresolvedReferences from OpenGL.GL import * # noinspection PyUnresolvedReferences from OpenGL.GLU import * # noinspection PyUnresolvedReferences from OpenGL.GLUT import * # noinspection PyUnresolvedReferences import pygame # noinspection PyUnresolvedReferences from pygame.locals import *
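Because this __init__ re-exports the toolbox symbols (and star-imports OpenGL, GLU, GLUT and pygame), downstream scripts can import everything from the package root. A minimal sketch; the exact call signatures live in the submodules listed above.

# Importing through the package root instead of the individual submodules.
from PyOpenGLtoolbox import CameraXYZ, Point3, Vector3, create_axes, init_gl, load_texture
# The OpenGL/GLU/GLUT and pygame names (glClear, gluPerspective, ...) are also
# available from this namespace because of the star imports above.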
PypiClean
/pypath_omnipath-0.15.4-py3-none-any.whl/pypath/inputs/adhesome.py
# # This file is part of the `pypath` python module # Helps to translate from the mouse data to human data # # Copyright # 2014-2022 # EMBL, EMBL-EBI, Uniklinik RWTH Aachen, Heidelberg University # # Authors: Dénes Türei ([email protected]) # Nicolàs Palacio # Sebastian Lobentanzer # Erva Ulusoy # Olga Ivanova # Ahmet Rifaioglu # # Distributed under the GPLv3 License. # See accompanying file LICENSE.txt or copy at # http://www.gnu.org/licenses/gpl-3.0.html # # Website: http://pypath.omnipathdb.org/ # import csv import collections import pypath.share.curl as curl import pypath.share.common as common import pypath.resources.urls as urls import pypath.utils.mapping as mapping def adhesome_interactions(): AdhesomeInteraction = collections.namedtuple( 'AdhesomeInteraction', ['source', 'target', 'effect', 'type', 'pmid'], ) url = urls.urls['adhesome']['interactions'] c = curl.Curl(url, large = True, silent = False) data = csv.DictReader(c.result, delimiter = ',') result = [] for rec in data: result.append( AdhesomeInteraction( source = rec['Source'], target = rec['Target'], effect = rec['Effect'], type = common.upper0(rec['Type']), pmid = rec['PMID'], ) ) return result def adhesome_annotations(): AdhesomeAnnotation = collections.namedtuple( 'AdhesomeAnnotation', ['mainclass', 'intrinsic'], ) result = collections.defaultdict(set) url = urls.urls['adhesome']['components'] c = curl.Curl(url, large = True, silent = False) data = csv.DictReader(c.result, delimiter = ',') for rec in data: uniprots = rec['Swiss-Prot ID'] for uniprot in uniprots.split(','): uniprot = uniprot.strip() if uniprot == 'null': continue for _uniprot in mapping.map_name(uniprot, 'uniprot', 'uniprot'): result[uniprot].add(AdhesomeAnnotation( mainclass = ( common.upper0(rec['Functional Category'].strip()) ), intrinsic = rec['FA'].strip() == 'Intrinsic Proteins', )) return dict(result)
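Both loaders return plain Python containers (a list of namedtuples and a dict of sets), so exploring the output needs no further pypath machinery. A short sketch; it requires network access to the Adhesome downloads configured in pypath.resources.urls.

# Example exploration of the two loaders defined above (requires network access).
interactions = adhesome_interactions()
print(len(interactions), "interactions")
first = interactions[0]
print(first.source, "->", first.target, first.effect, first.type)

annotations = adhesome_annotations()
for uniprot, annots in list(annotations.items())[:3]:
    print(uniprot, sorted({a.mainclass for a in annots}))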
PypiClean