/sagemath-standard-10.0b0.tar.gz/sagemath-standard-10.0b0/sage/functions/error.py
r""" Error functions This module provides symbolic error functions. These functions use the `mpmath library` for numerical evaluation and Maxima, Pynac for symbolics. The main objects which are exported from this module are: * :meth:`erf <Function_erf>` -- The error function * :meth:`erfc <Function_erfc>` -- The complementary error function * :meth:`erfi <Function_erfi>` -- The imaginary error function * :meth:`erfinv <Function_erfinv>` -- The inverse error function * :meth:`fresnel_sin <Function_Fresnel_sin>` -- The Fresnel integral `S(x)` * :meth:`fresnel_cos <Function_Fresnel_cos>` -- The Fresnel integral `C(x)` AUTHORS: * Original authors ``erf``/``error_fcn`` (c) 2006-2014: Karl-Dieter Crisman, Benjamin Jones, Mike Hansen, William Stein, Burcin Erocal, Jeroen Demeyer, W. D. Joyner, R. Andrew Ohana * Reorganisation in new file, addition of ``erfi``/``erfinv``/``erfc`` (c) 2016: Ralf Stephan * Fresnel integrals (c) 2017 Marcelo Forets REFERENCES: - [DLMF-Error]_ - [WP-Error]_ """ # **************************************************************************** # Copyright (C) 2016 Ralf Stephan <gtrwst9 at gmail.com> # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 2 of the License, or # (at your option) any later version. # https://www.gnu.org/licenses/ # **************************************************************************** from sage.structure.all import parent as s_parent from sage.symbolic.function import BuiltinFunction from sage.libs.mpmath import utils as mpmath_utils from sage.symbolic.expression import Expression from sage.functions.all import exp from sage.misc.functional import sqrt from sage.symbolic.constants import pi from sage.rings.rational import Rational from sage.rings.infinity import unsigned_infinity from sage.symbolic.expression import I class Function_erf(BuiltinFunction): r""" The error function. The error function is defined for real values as .. MATH:: \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt. This function is also defined for complex values, via analytic continuation. EXAMPLES: We can evaluate numerically:: sage: erf(2) erf(2) sage: erf(2).n() 0.995322265018953 sage: erf(2).n(100) 0.99532226501895273416206925637 sage: erf(ComplexField(100)(2+3j)) -20.829461427614568389103088452 + 8.6873182714701631444280787545*I Basic symbolic properties are handled by Sage and Maxima:: sage: x = var("x") sage: diff(erf(x),x) 2*e^(-x^2)/sqrt(pi) sage: integrate(erf(x),x) x*erf(x) + e^(-x^2)/sqrt(pi) ALGORITHM: Sage implements numerical evaluation of the error function via the ``erf()`` function from mpmath. Symbolics are handled by Sage and Maxima. 
REFERENCES: - :wikipedia:`Error_function` - http://mpmath.googlecode.com/svn/trunk/doc/build/functions/expintegrals.html#error-functions TESTS: Check limits:: sage: limit(erf(x),x=0) 0 sage: limit(erf(x),x=infinity) 1 Check that it's odd:: sage: erf(1.0) 0.842700792949715 sage: erf(-1.0) -0.842700792949715 Check against other implementations and against the definition:: sage: erf(3).n() 0.999977909503001 sage: maxima.erf(3).n() 0.999977909503001 sage: (1-pari(3).erfc()) 0.999977909503001 sage: RR(3).erf() 0.999977909503001 sage: (integrate(exp(-x**2),(x,0,3))*2/sqrt(pi)).n() 0.999977909503001 :trac:`9044`:: sage: N(erf(sqrt(2)),200) 0.95449973610364158559943472566693312505644755259664313203267 :trac:`11626`:: sage: n(erf(2),100) 0.99532226501895273416206925637 sage: erf(2).n(100) 0.99532226501895273416206925637 Test (indirectly) :trac:`11885`:: sage: erf(float(0.5)) 0.5204998778130465 sage: erf(complex(0.5)) (0.5204998778130465+0j) Ensure conversion from maxima elements works:: sage: merf = maxima(erf(x)).sage().operator() sage: merf.parent() == erf.parent() True Make sure we can dump and load it:: sage: loads(dumps(erf(2))) erf(2) Special-case 0 for immediate evaluation:: sage: erf(0) 0 sage: solve(erf(x)==0,x) [x == 0] Make sure that we can hold:: sage: erf(0,hold=True) erf(0) sage: simplify(erf(0,hold=True)) 0 Check that high-precision ComplexField inputs work:: sage: CC(erf(ComplexField(1000)(2+3j))) -20.8294614276146 + 8.68731827147016*I """ def __init__(self): r""" See docstring for :meth:`Function_erf`. EXAMPLES:: sage: maxima(erf(2)) erf(2) sage: erf(2)._sympy_() erf(2) """ BuiltinFunction.__init__(self, "erf", latex_name=r"\operatorname{erf}", conversions=dict(maxima='erf', sympy='erf', fricas='erf', giac='erf')) def _eval_(self, x): """ EXAMPLES: Input is not an expression but is exact:: sage: erf(0) 0 sage: erf(1) erf(1) sage: erf(oo) 1 sage: erf(SR(-oo)) -1 sage: erf(unsigned_infinity) Infinity Input is not an expression and is not exact:: sage: erf(0.0) 0.000000000000000 Input is an expression but not a trivial zero:: sage: erf(x) erf(x) Input is an expression which is trivially zero:: sage: erf(SR(0)) 0 """ if isinstance(x, Expression): if x.is_trivial_zero(): return x elif x.is_infinity(): if x.is_positive_infinity(): return 1 elif x.is_negative_infinity(): return -1 else: return unsigned_infinity elif not x: return x def _evalf_(self, x, parent=None, algorithm=None): """ EXAMPLES:: sage: erf(2).n() 0.995322265018953 sage: erf(2).n(200) 0.99532226501895273416206925636725292861089179704006007673835 sage: erf(pi - 1/2*I).n(100) 1.0000111669099367825726058952 + 1.6332655417638522934072124547e-6*I TESTS: Check that PARI/GP through the GP interface gives the same answer:: sage: gp.set_real_precision(59) # random 38 sage: print(gp.eval("1 - erfc(1)")); print(erf(1).n(200)) 0.84270079294971486934122063508260925929606699796630290845994 0.84270079294971486934122063508260925929606699796630290845994 Check that for an imaginary input, the output is also imaginary, see :trac:`13193`:: sage: erf(3.0*I) 1629.99462260157*I sage: erf(33.0*I) 1.51286977510409e471*I Check that real ball evaluation is fixed :trac:`28061`:: sage: RealBallField(128)(erf(5)) # abs tol 1e-38 [0.99999999999846254020557196514981165651 +/- 7.33e-39] """ R = parent or s_parent(x) import mpmath y = mpmath_utils.call(mpmath.erf, x, parent=R) return y def _derivative_(self, x, diff_param=None): """ Derivative of erf function. 
EXAMPLES:: sage: erf(x).diff(x) 2*e^(-x^2)/sqrt(pi) TESTS: Check if :trac:`8568` is fixed:: sage: var('c,x') (c, x) sage: derivative(erf(c*x),x) 2*c*e^(-c^2*x^2)/sqrt(pi) sage: erf(c*x).diff(x)._maxima_init_() '((%pi)^(-1/2))*(_SAGE_VAR_c)*(exp(((_SAGE_VAR_c)^(2))*((_SAGE_VAR_x)^(2))*(-1)))*(2)' """ return 2*exp(-x**2)/sqrt(pi) erf = Function_erf() class Function_erfi(BuiltinFunction): r""" The imaginary error function. The imaginary error function is defined by .. MATH:: \operatorname{erfi}(x) = -i \operatorname{erf}(ix). """ def __init__(self): r""" Initialize ``self``. EXAMPLES:: sage: maxima(erfi(2)) erfi(2) sage: erfi(2)._sympy_() erfi(2) """ BuiltinFunction.__init__(self, "erfi", latex_name=r"\operatorname{erfi}", conversions=dict(maxima='erfi', sympy='erfi', fricas='erfi')) def _eval_(self, x): """ EXAMPLES:: sage: erfi(0) 0 sage: erfi(SR(0)) 0 sage: erfi(oo) Infinity sage: erfi(SR(-oo)) Infinity """ if isinstance(x, Expression): if x.is_trivial_zero(): return x elif x.is_infinity(): return unsigned_infinity elif not x: return x def _evalf_(self, x, parent=None, algorithm=None): """ EXAMPLES:: sage: erfi(2.) 18.5648024145756 sage: erfi(2).n(100) 18.564802414575552598704291913 sage: erfi(-2*I).n(100) -0.99532226501895273416206925637*I """ R = parent or s_parent(x) import mpmath return mpmath_utils.call(mpmath.erfi, x, parent=R) def _derivative_(self, x, diff_param=None): """ Derivative of erfi function. EXAMPLES:: sage: erfi(x).diff(x) 2*e^(x^2)/sqrt(pi) """ return 2*exp(x**2)/sqrt(pi) erfi = Function_erfi() class Function_erfc(BuiltinFunction): r""" The complementary error function. The complementary error function is defined by .. MATH:: \frac{2}{\sqrt{\pi}} \int_t^\infty e^{-x^2} dx. EXAMPLES:: sage: erfc(6) erfc(6) sage: erfc(6).n() 2.15197367124989e-17 sage: erfc(RealField(100)(1/2)) 0.47950012218695346231725334611 sage: 1 - erfc(0.5) 0.520499877813047 sage: erf(0.5) 0.520499877813047 TESTS: Check that :trac:`25991` is fixed:: sage: erfc(x)._fricas_() # optional - fricas - erf(x) + 1 """ def __init__(self): r""" EXAMPLES:: sage: maxima(erfc(2)) erfc(2) sage: erfc(2)._sympy_() erfc(2) """ BuiltinFunction.__init__(self, "erfc", latex_name=r"\operatorname{erfc}", conversions=dict(maxima='erfc', sympy='erfc', fricas='(x+->1-erf(x))', giac='erfc')) def _eval_(self, x): """ EXAMPLES:: sage: erfc(0) 1 sage: erfc(SR(0)) 1 sage: erfc(oo) 0 sage: erfc(SR(-oo)) 2 """ if isinstance(x, Expression): if x.is_trivial_zero(): return 1 elif x.is_infinity(): if x.is_positive_infinity(): return 0 elif x.is_negative_infinity(): return 2 else: return unsigned_infinity elif not x: return 1 def _evalf_(self, x, parent=None, algorithm=None): """ EXAMPLES:: sage: erfc(4).n() 1.54172579002800e-8 sage: erfc(4).n(100) 1.5417257900280018852159673487e-8 sage: erfc(4*I).n(100) 1.0000000000000000000000000000 - 1.2969597307176392315279409506e6*I """ R = parent or s_parent(x) import mpmath return mpmath_utils.call(mpmath.erfc, x, parent=R) def _derivative_(self, x, diff_param=None): """ Derivative of erfc function. EXAMPLES:: sage: erfc(x).diff(x) -2*e^(-x^2)/sqrt(pi) """ return -2*exp(-x**2)/sqrt(pi) erfc = Function_erfc() class Function_erfinv(BuiltinFunction): r""" The inverse error function. The inverse error function is defined by: .. MATH:: \operatorname{erfinv}(x) = \operatorname{erf}^{-1}(x). """ def __init__(self): r""" Initialize ``self``. 
EXAMPLES:: sage: erfinv(2)._sympy_() erfinv(2) sage: maxima(erfinv(2)) inverse_erf(2) TESTS: Check that :trac:`11349` is fixed:: sage: _ = var('z,t') sage: PDF = exp(-x^2 /2)/sqrt(2*pi) sage: integralExpr = integrate(PDF,x,z,oo).subs(z==log(t)) sage: y = solve(integralExpr==z,t)[0].rhs().subs(z==1/4) sage: y e^(sqrt(2)*erfinv(1/2)) sage: y.n() 1.96303108415826 """ BuiltinFunction.__init__(self, "erfinv", latex_name=r"\operatorname{erfinv}", conversions=dict(sympy='erfinv', maxima='inverse_erf')) def _eval_(self, x): """ EXAMPLES:: sage: erfinv(0) 0 sage: erfinv(SR(0)) 0 sage: erfinv(1) Infinity """ if isinstance(x, Expression): if x.is_trivial_zero(): return x elif (x-1).is_trivial_zero(): return unsigned_infinity elif not x: return x elif x == 1: return unsigned_infinity def _evalf_(self, x, parent=None, algorithm=None): """ EXAMPLES:: sage: erfinv(0.2) 0.179143454621292 sage: erfinv(1/5).n(100) 0.17914345462129167649274901663 """ R = parent or s_parent(x) import mpmath return mpmath_utils.call(mpmath.erfinv, x, parent=R) def _derivative_(self, x, diff_param=None): """ Derivative of inverse erf function. EXAMPLES:: sage: erfinv(x).diff(x) 1/2*sqrt(pi)*e^(erfinv(x)^2) """ return sqrt(pi)*exp(erfinv(x)**2)/2 erfinv = Function_erfinv() from sage.misc.persist import register_unpickle_override register_unpickle_override('sage.functions.other', 'Function_erf', Function_erf) ############################ # Fresnel integrals # ############################ class Function_Fresnel_sin(BuiltinFunction): def __init__(self): r""" The sine Fresnel integral. It is defined by the integral .. MATH :: \operatorname{S}(x) = \int_0^x \sin\left(\frac{\pi t^2}{2}\right)\, dt for real `x`. Using power series expansions, it can be extended to the domain of complex numbers. See the :wikipedia:`Fresnel_integral`. INPUT: - ``x`` -- the argument of the function EXAMPLES:: sage: fresnel_sin(0) 0 sage: fresnel_sin(x).subs(x==0) 0 sage: x = var('x') sage: fresnel_sin(1).n(100) 0.43825914739035476607675669662 sage: fresnel_sin(x)._sympy_() fresnels(x) """ BuiltinFunction.__init__(self, "fresnel_sin", nargs=1, latex_name=r"\operatorname{S}", conversions=dict(maxima='fresnel_s', sympy='fresnels', mathematica='FresnelS', maple='FresnelS', fricas='fresnelS')) def _eval_(self, x): r""" EXAMPLES:: sage: fresnel_sin(pi) fresnel_sin(pi) sage: fresnel_sin(oo) 1/2 sage: fresnel_sin(-oo) -1/2 sage: fresnel_sin(I*oo) -1/2*I sage: fresnel_sin(-I*oo) 1/2*I """ if isinstance(x, Expression): if x.is_negative(): return -fresnel_sin(-x) if x.is_trivial_zero(): return x if x.is_infinity(): if x.is_positive_infinity(): return Rational((1,2)) elif x.imag_part().is_positive_infinity(): return -I*Rational((1,2)) elif x.imag_part().is_negative_infinity(): return I*Rational((1,2)) elif x < 0: return -fresnel_sin(-x) elif not x: return x def _evalf_(self, x, parent=None, algorithm=None): r""" EXAMPLES:: sage: fresnel_sin(pi) fresnel_sin(pi) sage: fresnel_sin(pi).n(100) 0.59824907809026766482843860921 sage: fresnel_sin(1.0+2*I) 36.7254648839914 + 15.5877511044046*I """ import mpmath from sage.libs.mpmath import utils as mpmath_utils return mpmath_utils.call(mpmath.fresnels, x, parent=parent) def _derivative_(self, x, diff_param=None): """ EXAMPLES:: sage: x = var('x') sage: fresnel_sin(x).diff(x) sin(1/2*pi*x^2) """ from sage.functions.trig import sin return sin(pi*x**2/2) fresnel_sin = Function_Fresnel_sin() class Function_Fresnel_cos(BuiltinFunction): def __init__(self): r""" The cosine Fresnel integral. It is defined by the integral .. 
MATH :: \operatorname{C}(x) = \int_0^x \cos\left(\frac{\pi t^2}{2}\right)\, dt for real `x`. Using power series expansions, it can be extended to the domain of complex numbers. See the :wikipedia:`Fresnel_integral`. INPUT: - ``x`` -- the argument of the function EXAMPLES:: sage: fresnel_cos(0) 0 sage: fresnel_cos(x).subs(x==0) 0 sage: x = var('x') sage: fresnel_cos(1).n(100) 0.77989340037682282947420641365 sage: fresnel_cos(x)._sympy_() fresnelc(x) """ BuiltinFunction.__init__(self, "fresnel_cos", nargs=1, latex_name=r"\operatorname{C}", conversions=dict(maxima='fresnel_c', sympy='fresnelc', mathematica='FresnelC', maple='FresnelC', fricas='fresnelC')) def _eval_(self, x): r""" EXAMPLES:: sage: fresnel_cos(pi) fresnel_cos(pi) sage: fresnel_cos(oo) 1/2 sage: fresnel_cos(-oo) -1/2 sage: fresnel_cos(I*oo) 1/2*I sage: fresnel_cos(-I*oo) -1/2*I """ if isinstance(x, Expression): if x.is_negative(): return -fresnel_cos(-x) if x.is_trivial_zero(): return x if x.is_infinity(): if x.is_positive_infinity(): return Rational((1,2)) elif x.imag_part().is_positive_infinity(): return I*Rational((1,2)) elif x.imag_part().is_negative_infinity(): return -I*Rational((1,2)) elif x < 0: return -fresnel_cos(-x) elif not x: return x def _evalf_(self, x, parent=None, algorithm=None): r""" EXAMPLES:: sage: fresnel_cos(pi) fresnel_cos(pi) sage: fresnel_cos(pi).n(100) 0.52369854372622864215767570284 sage: fresnel_cos(1.0+2*I) 16.0878713741255 - 36.2256879928817*I """ import mpmath from sage.libs.mpmath import utils as mpmath_utils return mpmath_utils.call(mpmath.fresnelc, x, parent=parent) def _derivative_(self, x, diff_param=None): """ EXAMPLES:: sage: x = var('x') sage: fresnel_cos(x).diff(x) cos(1/2*pi*x^2) """ from sage.functions.trig import cos return cos(pi*x**2/2) fresnel_cos = Function_Fresnel_cos()
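The doctest values above all come from mpmath, which is what each `_evalf_` method reaches through `mpmath_utils.call`. Below is a minimal sketch that reproduces a few of those values with mpmath directly, outside Sage; the precision setting is illustrative and only roughly matches `.n(100)` in the doctests.

```python
import mpmath

mpmath.mp.dps = 30  # ~30 significant digits, comparable to .n(100) above
print(mpmath.erf(2))                     # 0.99532226501895273416206925637 per the doctests
print(mpmath.erfinv(mpmath.mpf(1) / 5))  # 0.17914345462129167649274901663
print(mpmath.fresnels(1))                # 0.43825914739035476607675669662
print(mpmath.fresnelc(1))                # 0.77989340037682282947420641365
```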
/retro_data_structures-0.23.0-py3-none-any.whl/retro_data_structures/properties/corruption/archetypes/TweakGui/ScanVisor.py
import dataclasses import struct import typing from retro_data_structures.game_check import Game from retro_data_structures.properties.base_property import BaseProperty from retro_data_structures.properties.corruption.core.Color import Color from retro_data_structures.properties.corruption.core.Spline import Spline @dataclasses.dataclass() class ScanVisor(BaseProperty): unknown_0x5d750eef: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) inactive_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) inactive_external_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) non_critical_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) critical_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) burn_in_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) highlight_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0x84badf82: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) critical_highlight_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0xe8f5018b: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0xba1ae1e5: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0xb39d450e: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0x1042455b: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0xd72435ad: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) unknown_0x75cdc913: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) sweep_bar_color: Color = dataclasses.field(default_factory=lambda: Color(r=0.0, g=0.0, b=0.0, a=0.0)) burn_in_time: float = dataclasses.field(default=1.0) fade_out_time: float = dataclasses.field(default=0.30000001192092896) unknown_0xee169779: float = dataclasses.field(default=0.6000000238418579) unknown_0x58bc9d5d: float = dataclasses.field(default=0.4000000059604645) unknown_0xf4f19c8b: Spline = dataclasses.field(default_factory=Spline) unknown_0x5286973f: Spline = dataclasses.field(default_factory=Spline) unknown_0x636e8da2: Spline = dataclasses.field(default_factory=Spline) unknown_0xc5198616: Spline = dataclasses.field(default_factory=Spline) unknown_0x00beb898: Spline = dataclasses.field(default_factory=Spline) unknown_0xa6c9b32c: Spline = dataclasses.field(default_factory=Spline) @classmethod def game(cls) -> Game: return Game.CORRUPTION @classmethod def from_stream(cls, data: typing.BinaryIO, size: typing.Optional[int] = None, default_override: typing.Optional[dict] = None): property_count = struct.unpack(">H", data.read(2))[0] present_fields = default_override or {} for _ in range(property_count): property_id, property_size = struct.unpack(">LH", data.read(6)) start = data.tell() try: property_name, decoder = _property_decoder[property_id] present_fields[property_name] = decoder(data, property_size) except KeyError: raise RuntimeError(f"Unknown property: 0x{property_id:08x}") assert data.tell() - start == property_size return cls(**present_fields) def to_stream(self, data: typing.BinaryIO, default_override: typing.Optional[dict] = None): default_override = default_override 
or {} data.write(b'\x00\x1a') # 26 properties data.write(b']u\x0e\xef') # 0x5d750eef data.write(b'\x00\x10') # size self.unknown_0x5d750eef.to_stream(data) data.write(b'\x97"q\xb9') # 0x972271b9 data.write(b'\x00\x10') # size self.inactive_color.to_stream(data) data.write(b'\xa9\x08\xc7u') # 0xa908c775 data.write(b'\x00\x10') # size self.inactive_external_color.to_stream(data) data.write(b'\xee\x1f\x1d\xf6') # 0xee1f1df6 data.write(b'\x00\x10') # size self.non_critical_color.to_stream(data) data.write(b'CDZ\xe7') # 0x43445ae7 data.write(b'\x00\x10') # size self.critical_color.to_stream(data) data.write(b'\xf4\x8f\xd5Y') # 0xf48fd559 data.write(b'\x00\x10') # size self.burn_in_color.to_stream(data) data.write(b'zd\x12\xf6') # 0x7a6412f6 data.write(b'\x00\x10') # size self.highlight_color.to_stream(data) data.write(b'\x84\xba\xdf\x82') # 0x84badf82 data.write(b'\x00\x10') # size self.unknown_0x84badf82.to_stream(data) data.write(b'\xf4_}\x17') # 0xf45f7d17 data.write(b'\x00\x10') # size self.critical_highlight_color.to_stream(data) data.write(b'\xe8\xf5\x01\x8b') # 0xe8f5018b data.write(b'\x00\x10') # size self.unknown_0xe8f5018b.to_stream(data) data.write(b'\xba\x1a\xe1\xe5') # 0xba1ae1e5 data.write(b'\x00\x10') # size self.unknown_0xba1ae1e5.to_stream(data) data.write(b'\xb3\x9dE\x0e') # 0xb39d450e data.write(b'\x00\x10') # size self.unknown_0xb39d450e.to_stream(data) data.write(b'\x10BE[') # 0x1042455b data.write(b'\x00\x10') # size self.unknown_0x1042455b.to_stream(data) data.write(b'\xd7$5\xad') # 0xd72435ad data.write(b'\x00\x10') # size self.unknown_0xd72435ad.to_stream(data) data.write(b'u\xcd\xc9\x13') # 0x75cdc913 data.write(b'\x00\x10') # size self.unknown_0x75cdc913.to_stream(data) data.write(b'\x99~\xc3\x8d') # 0x997ec38d data.write(b'\x00\x10') # size self.sweep_bar_color.to_stream(data) data.write(b'\x00\xb8?\x02') # 0xb83f02 data.write(b'\x00\x04') # size data.write(struct.pack('>f', self.burn_in_time)) data.write(b'|&\x9e\xbc') # 0x7c269ebc data.write(b'\x00\x04') # size data.write(struct.pack('>f', self.fade_out_time)) data.write(b'\xee\x16\x97y') # 0xee169779 data.write(b'\x00\x04') # size data.write(struct.pack('>f', self.unknown_0xee169779)) data.write(b'X\xbc\x9d]') # 0x58bc9d5d data.write(b'\x00\x04') # size data.write(struct.pack('>f', self.unknown_0x58bc9d5d)) data.write(b'\xf4\xf1\x9c\x8b') # 0xf4f19c8b before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0xf4f19c8b.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - before - 2)) data.seek(after) data.write(b'R\x86\x97?') # 0x5286973f before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0x5286973f.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - before - 2)) data.seek(after) data.write(b'cn\x8d\xa2') # 0x636e8da2 before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0x636e8da2.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - before - 2)) data.seek(after) data.write(b'\xc5\x19\x86\x16') # 0xc5198616 before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0xc5198616.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - before - 2)) data.seek(after) data.write(b'\x00\xbe\xb8\x98') # 0xbeb898 before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0x00beb898.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - 
before - 2)) data.seek(after) data.write(b'\xa6\xc9\xb3,') # 0xa6c9b32c before = data.tell() data.write(b'\x00\x00') # size placeholder self.unknown_0xa6c9b32c.to_stream(data) after = data.tell() data.seek(before) data.write(struct.pack(">H", after - before - 2)) data.seek(after) @classmethod def from_json(cls, data: dict): return cls( unknown_0x5d750eef=Color.from_json(data['unknown_0x5d750eef']), inactive_color=Color.from_json(data['inactive_color']), inactive_external_color=Color.from_json(data['inactive_external_color']), non_critical_color=Color.from_json(data['non_critical_color']), critical_color=Color.from_json(data['critical_color']), burn_in_color=Color.from_json(data['burn_in_color']), highlight_color=Color.from_json(data['highlight_color']), unknown_0x84badf82=Color.from_json(data['unknown_0x84badf82']), critical_highlight_color=Color.from_json(data['critical_highlight_color']), unknown_0xe8f5018b=Color.from_json(data['unknown_0xe8f5018b']), unknown_0xba1ae1e5=Color.from_json(data['unknown_0xba1ae1e5']), unknown_0xb39d450e=Color.from_json(data['unknown_0xb39d450e']), unknown_0x1042455b=Color.from_json(data['unknown_0x1042455b']), unknown_0xd72435ad=Color.from_json(data['unknown_0xd72435ad']), unknown_0x75cdc913=Color.from_json(data['unknown_0x75cdc913']), sweep_bar_color=Color.from_json(data['sweep_bar_color']), burn_in_time=data['burn_in_time'], fade_out_time=data['fade_out_time'], unknown_0xee169779=data['unknown_0xee169779'], unknown_0x58bc9d5d=data['unknown_0x58bc9d5d'], unknown_0xf4f19c8b=Spline.from_json(data['unknown_0xf4f19c8b']), unknown_0x5286973f=Spline.from_json(data['unknown_0x5286973f']), unknown_0x636e8da2=Spline.from_json(data['unknown_0x636e8da2']), unknown_0xc5198616=Spline.from_json(data['unknown_0xc5198616']), unknown_0x00beb898=Spline.from_json(data['unknown_0x00beb898']), unknown_0xa6c9b32c=Spline.from_json(data['unknown_0xa6c9b32c']), ) def to_json(self) -> dict: return { 'unknown_0x5d750eef': self.unknown_0x5d750eef.to_json(), 'inactive_color': self.inactive_color.to_json(), 'inactive_external_color': self.inactive_external_color.to_json(), 'non_critical_color': self.non_critical_color.to_json(), 'critical_color': self.critical_color.to_json(), 'burn_in_color': self.burn_in_color.to_json(), 'highlight_color': self.highlight_color.to_json(), 'unknown_0x84badf82': self.unknown_0x84badf82.to_json(), 'critical_highlight_color': self.critical_highlight_color.to_json(), 'unknown_0xe8f5018b': self.unknown_0xe8f5018b.to_json(), 'unknown_0xba1ae1e5': self.unknown_0xba1ae1e5.to_json(), 'unknown_0xb39d450e': self.unknown_0xb39d450e.to_json(), 'unknown_0x1042455b': self.unknown_0x1042455b.to_json(), 'unknown_0xd72435ad': self.unknown_0xd72435ad.to_json(), 'unknown_0x75cdc913': self.unknown_0x75cdc913.to_json(), 'sweep_bar_color': self.sweep_bar_color.to_json(), 'burn_in_time': self.burn_in_time, 'fade_out_time': self.fade_out_time, 'unknown_0xee169779': self.unknown_0xee169779, 'unknown_0x58bc9d5d': self.unknown_0x58bc9d5d, 'unknown_0xf4f19c8b': self.unknown_0xf4f19c8b.to_json(), 'unknown_0x5286973f': self.unknown_0x5286973f.to_json(), 'unknown_0x636e8da2': self.unknown_0x636e8da2.to_json(), 'unknown_0xc5198616': self.unknown_0xc5198616.to_json(), 'unknown_0x00beb898': self.unknown_0x00beb898.to_json(), 'unknown_0xa6c9b32c': self.unknown_0xa6c9b32c.to_json(), } def _decode_unknown_0x5d750eef(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_inactive_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) 
def _decode_inactive_external_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_non_critical_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_critical_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_burn_in_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_highlight_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0x84badf82(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_critical_highlight_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0xe8f5018b(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0xba1ae1e5(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0xb39d450e(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0x1042455b(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0xd72435ad(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_unknown_0x75cdc913(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_sweep_bar_color(data: typing.BinaryIO, property_size: int): return Color.from_stream(data) def _decode_burn_in_time(data: typing.BinaryIO, property_size: int): return struct.unpack('>f', data.read(4))[0] def _decode_fade_out_time(data: typing.BinaryIO, property_size: int): return struct.unpack('>f', data.read(4))[0] def _decode_unknown_0xee169779(data: typing.BinaryIO, property_size: int): return struct.unpack('>f', data.read(4))[0] def _decode_unknown_0x58bc9d5d(data: typing.BinaryIO, property_size: int): return struct.unpack('>f', data.read(4))[0] def _decode_unknown_0xf4f19c8b(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) def _decode_unknown_0x5286973f(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) def _decode_unknown_0x636e8da2(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) def _decode_unknown_0xc5198616(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) def _decode_unknown_0x00beb898(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) def _decode_unknown_0xa6c9b32c(data: typing.BinaryIO, property_size: int): return Spline.from_stream(data, property_size) _property_decoder: typing.Dict[int, typing.Tuple[str, typing.Callable[[typing.BinaryIO, int], typing.Any]]] = { 0x5d750eef: ('unknown_0x5d750eef', _decode_unknown_0x5d750eef), 0x972271b9: ('inactive_color', _decode_inactive_color), 0xa908c775: ('inactive_external_color', _decode_inactive_external_color), 0xee1f1df6: ('non_critical_color', _decode_non_critical_color), 0x43445ae7: ('critical_color', _decode_critical_color), 0xf48fd559: ('burn_in_color', _decode_burn_in_color), 0x7a6412f6: ('highlight_color', _decode_highlight_color), 0x84badf82: ('unknown_0x84badf82', _decode_unknown_0x84badf82), 0xf45f7d17: ('critical_highlight_color', _decode_critical_highlight_color), 0xe8f5018b: ('unknown_0xe8f5018b', _decode_unknown_0xe8f5018b), 0xba1ae1e5: ('unknown_0xba1ae1e5', _decode_unknown_0xba1ae1e5), 0xb39d450e: ('unknown_0xb39d450e', _decode_unknown_0xb39d450e), 
0x1042455b: ('unknown_0x1042455b', _decode_unknown_0x1042455b), 0xd72435ad: ('unknown_0xd72435ad', _decode_unknown_0xd72435ad), 0x75cdc913: ('unknown_0x75cdc913', _decode_unknown_0x75cdc913), 0x997ec38d: ('sweep_bar_color', _decode_sweep_bar_color), 0xb83f02: ('burn_in_time', _decode_burn_in_time), 0x7c269ebc: ('fade_out_time', _decode_fade_out_time), 0xee169779: ('unknown_0xee169779', _decode_unknown_0xee169779), 0x58bc9d5d: ('unknown_0x58bc9d5d', _decode_unknown_0x58bc9d5d), 0xf4f19c8b: ('unknown_0xf4f19c8b', _decode_unknown_0xf4f19c8b), 0x5286973f: ('unknown_0x5286973f', _decode_unknown_0x5286973f), 0x636e8da2: ('unknown_0x636e8da2', _decode_unknown_0x636e8da2), 0xc5198616: ('unknown_0xc5198616', _decode_unknown_0xc5198616), 0xbeb898: ('unknown_0x00beb898', _decode_unknown_0x00beb898), 0xa6c9b32c: ('unknown_0xa6c9b32c', _decode_unknown_0xa6c9b32c), }
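Every property in the stream above uses the framing that `from_stream` unpacks with `struct.unpack(">LH", ...)`: a 4-byte big-endian property ID, a 2-byte size, then the payload. The following is a standalone sketch of that framing for a single float property; the helper name is invented for illustration and is not part of retro_data_structures.

```python
import io
import struct

def write_float_property(stream, property_id, value):
    # 4-byte ID and 2-byte big-endian size header, then the big-endian float payload
    stream.write(struct.pack(">LH", property_id, 4))
    stream.write(struct.pack(">f", value))

buf = io.BytesIO()
write_float_property(buf, 0x7C269EBC, 0.3)  # fade_out_time, per the decoder table above
buf.seek(0)

# Mirror of the header read performed in ScanVisor.from_stream
property_id, property_size = struct.unpack(">LH", buf.read(6))
value = struct.unpack(">f", buf.read(property_size))[0]
print(hex(property_id), property_size, round(value, 4))  # 0x7c269ebc 4 0.3
```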
/Glances-3.4.0.3.tar.gz/Glances-3.4.0.3/glances/secure.py
from glances.compat import nativestr

from subprocess import Popen, PIPE
import re


def secure_popen(cmd):
    """A more or less secure way to execute system commands.

    Multiple commands should be separated with a &&.

    :return: the result of the commands
    """
    ret = ''

    # Split by multiple commands '&&'
    for c in cmd.split('&&'):
        ret += __secure_popen(c)

    return ret


def __secure_popen(cmd):
    """A more or less secure way to execute a system command.

    Manage redirection (>) and pipes (|).
    """
    # Split by redirection '>'
    cmd_split_redirect = cmd.split('>')
    if len(cmd_split_redirect) > 2:
        return 'Glances error: Only one file redirection allowed ({})'.format(cmd)
    elif len(cmd_split_redirect) == 2:
        stdout_redirect = cmd_split_redirect[1].strip()
        cmd = cmd_split_redirect[0]
    else:
        stdout_redirect = None

    sub_cmd_stdin = None
    p_last = None
    # Split by pipe '|'
    for sub_cmd in cmd.split('|'):
        # Split by space character, but do not split spaces within quotes
        # (remove surrounding quotes, though)
        tmp_split = [_ for _ in list(filter(None, re.split(r'(\s+)|(".*?"+?)|(\'.*?\'+?)', sub_cmd))) if _ != ' ']
        sub_cmd_split = [_[1:-1] if (_[0] == _[-1] == '"') or (_[0] == _[-1] == '\'') else _ for _ in tmp_split]
        p = Popen(sub_cmd_split, shell=False, stdin=sub_cmd_stdin, stdout=PIPE, stderr=PIPE)
        if p_last is not None:
            # Allow p_last to receive a SIGPIPE if p exits.
            p_last.stdout.close()
        p_last = p
        sub_cmd_stdin = p.stdout

    p_ret = p_last.communicate()

    if nativestr(p_ret[1]) == '':
        # No error
        ret = nativestr(p_ret[0])
        if stdout_redirect is not None:
            # Write result to redirection file
            with open(stdout_redirect, "w") as stdout_redirect_file:
                stdout_redirect_file.write(ret)
    else:
        # Error
        ret = nativestr(p_ret[1])

    return ret
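A short usage sketch for `secure_popen`, assuming the module is importable as `glances.secure`; the commands and the output path are illustrative only.

```python
from glances.secure import secure_popen

# Pipes are wired together as shell=False subprocesses, so no shell is involved.
print(secure_popen('echo "hello world" | grep hello'))

# '&&' chains commands; '>' redirects the output of the last command to a file.
print(secure_popen('echo first && echo second > /tmp/out.txt'))
```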
/flexNetSim-0.26.tar.gz/flexNetSim-0.26/flexnetsim/bitrate.py
import json


class Bitrate():
    def __init__(self, bit_rate):
        self.__bit_rate = bit_rate
        self.__modulation = []
        self.__slots = []
        self.__reach = []

    @property
    def bit_rate(self):
        return self.__bit_rate

    @property
    def modulation(self):
        return self.__modulation

    @property
    def slots(self):
        return self.__slots

    @property
    def reach(self):
        return self.__reach

    def add_modulation(self, modulation: str, slots: int, reach):
        self.modulation.append(modulation)
        self.slots.append(slots)
        self.reach.append(reach)

    def get_modulation(self, pos: int):
        if pos >= len(self.modulation):
            raise ValueError(
                f"Bitrate {self.bit_rate} does not have more than {len(self.modulation)} modulations")
        return self.modulation[pos]

    def get_number_of_slots(self, pos: int):
        if pos >= len(self.slots):
            raise ValueError(
                f"Bitrate {self.bit_rate} does not have more than {len(self.slots)} modulations")
        return self.slots[pos]

    def get_reach(self, pos: int):
        if pos >= len(self.reach):
            raise ValueError(
                f"Bitrate {self.bit_rate} does not have more than {len(self.reach)} modulations")
        return self.reach[pos]


def read_bit_rate_file(filename: str):
    f = open(filename)
    bit_rate_file = json.load(f)
    f.close()

    vect = []
    for x in bit_rate_file:
        bit_rate = int(x)
        number_of_modulations = len(bit_rate_file[x])
        aux = Bitrate(bit_rate)
        for i in range(number_of_modulations):
            for j in bit_rate_file[x][i].items():
                modulation = j[0]
                reach = int(j[1]["reach"])
                slots = int(j[1]["slots"])
                if (reach < 0) and (slots < 0):
                    raise ValueError(
                        "Value entered for slots and reach is less than zero")
                if reach < 0:
                    raise ValueError(
                        "Value entered for reach is less than zero")
                if slots < 0:
                    raise ValueError(
                        "Value entered for slots is less than zero")
                aux.add_modulation(modulation, slots, reach)
        vect.append(aux)
    return vect
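A sketch of the JSON layout that `read_bit_rate_file` expects, inferred from the parsing loop above: each top-level key is a bit rate, mapped to a list of single-entry dicts, one per modulation. The import path is assumed from the wheel layout, and the file name and numbers are illustrative only.

```python
import json
from flexnetsim.bitrate import read_bit_rate_file  # import path assumed from the package layout

example = {
    "10": [
        {"BPSK": {"slots": 1, "reach": 5520}},
        {"QPSK": {"slots": 1, "reach": 2720}},
    ]
}

with open("bitrates.json", "w") as f:
    json.dump(example, f)

bitrates = read_bit_rate_file("bitrates.json")
print(bitrates[0].bit_rate)           # 10
print(bitrates[0].get_modulation(1))  # QPSK
print(bitrates[0].get_reach(1))       # 2720
```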
/Orange3-zh-3.33.1.tar.gz/Orange3-zh-3.33.1/Orange/widgets/data/owoutliers.py
from typing import Dict, Tuple from types import SimpleNamespace import numpy as np from AnyQt.QtCore import Signal, Qt from AnyQt.QtWidgets import QWidget, QVBoxLayout from orangewidget.settings import SettingProvider from Orange.base import Learner from Orange.classification import OneClassSVMLearner, EllipticEnvelopeLearner,\ LocalOutlierFactorLearner, IsolationForestLearner from Orange.data import Table from Orange.util import wrap_callback from Orange.widgets import gui from Orange.widgets.settings import Setting from Orange.widgets.utils.concurrent import TaskState, ConcurrentWidgetMixin from Orange.widgets.utils.sql import check_sql_input from Orange.widgets.utils.widgetpreview import WidgetPreview from Orange.widgets.widget import Msg, Input, Output, OWWidget class Results(SimpleNamespace): inliers = None # type: Optional[Table] outliers = None # type: Optional[Table] annotated_data = None # type: Optional[Table] def run(data: Table, learner: Learner, state: TaskState) -> Results: results = Results() if not data: return results def callback(i: float, status=""): state.set_progress_value(i * 100) if status: state.set_status(status) if state.is_interruption_requested(): raise Exception callback(0, "Initializing...") model = learner(data, wrap_callback(callback, end=0.6)) pred = model(data, wrap_callback(callback, start=0.6, end=0.99)) col = pred.get_column_view(model.outlier_var)[0] inliers_ind = np.where(col == 1)[0] outliers_ind = np.where(col == 0)[0] results.inliers = data[inliers_ind] results.outliers = data[outliers_ind] results.annotated_data = pred callback(1) return results class ParametersEditor(QWidget, gui.OWComponent): param_changed = Signal() def __init__(self, parent): QWidget.__init__(self, parent) gui.OWComponent.__init__(self, parent) self.setMinimumWidth(300) layout = QVBoxLayout() layout.setContentsMargins(0, 0, 0, 0) self.setLayout(layout) self.param_box = gui.vBox(self, spacing=0) def parameter_changed(self): self.param_changed.emit() def get_parameters(self) -> Dict: raise NotImplementedError class SVMEditor(ParametersEditor): nu = Setting(50) gamma = Setting(0.01) def __init__(self, parent): super().__init__(parent) tooltip = "An upper bound on the fraction of training errors and a " \ "lower bound of the fraction of support vectors" gui.widgetLabel(self.param_box, "Nu:", tooltip=tooltip) gui.hSlider(self.param_box, self, "nu", minValue=1, maxValue=100, ticks=10, labelFormat="%d %%", tooltip=tooltip, callback=self.parameter_changed) gui.doubleSpin(self.param_box, self, "gamma", label="Kernel coefficient:", step=1e-2, minv=0.01, maxv=10, callback=self.parameter_changed) def get_parameters(self): return {"nu": self.nu / 100, "gamma": self.gamma} class CovarianceEditor(ParametersEditor): cont = Setting(10) empirical_covariance = Setting(False) support_fraction = Setting(1) def __init__(self, parent): super().__init__(parent) gui.widgetLabel(self.param_box, "污染量:") gui.hSlider(self.param_box, self, "cont", minValue=0, maxValue=100, ticks=10, labelFormat="%d %%", callback=self.parameter_changed) ebox = gui.hBox(self.param_box) gui.checkBox(ebox, self, "empirical_covariance", "支持比例(Support fraction):", callback=self.parameter_changed) gui.doubleSpin(ebox, self, "support_fraction", step=1e-1, minv=0.1, maxv=10, callback=self.parameter_changed) def get_parameters(self): fraction = self.support_fraction if self.empirical_covariance else None return {"contamination": self.cont / 100, "support_fraction": fraction} class LocalOutlierFactorEditor(ParametersEditor): METRICS 
= ("euclidean", "manhattan", "cosine", "jaccard", "hamming", "minkowski") n_neighbors = Setting(20) cont = Setting(10) metric_index = Setting(0) def __init__(self, parent): super().__init__(parent) gui.widgetLabel(self.param_box, "污染量:") gui.hSlider(self.param_box, self, "cont", minValue=1, maxValue=50, ticks=5, labelFormat="%d %%", callback=self.parameter_changed) gui.spin(self.param_box, self, "n_neighbors", label="邻近数:", minv=1, maxv=100000, callback=self.parameter_changed) gui.comboBox(self.param_box, self, "metric_index", label="度量:", orientation=Qt.Horizontal, items=[m.capitalize() for m in self.METRICS], callback=self.parameter_changed) def get_parameters(self): return {"n_neighbors": self.n_neighbors, "contamination": self.cont / 100, "algorithm": "brute", # works faster for big datasets "metric": self.METRICS[self.metric_index]} class IsolationForestEditor(ParametersEditor): cont = Setting(10) replicable = Setting(False) def __init__(self, parent): super().__init__(parent) gui.widgetLabel(self.param_box, "污染量:") gui.hSlider(self.param_box, self, "cont", minValue=0, maxValue=100, ticks=10, labelFormat="%d %%", callback=self.parameter_changed) gui.checkBox(self.param_box, self, "replicable", "可重复训练", callback=self.parameter_changed) def get_parameters(self): return {"contamination": self.cont / 100, "random_state": 42 if self.replicable else None} class OWOutliers(OWWidget, ConcurrentWidgetMixin): name = "异常值(Outliers)" description = "检测异常值。" icon = "icons/Outliers.svg" priority = 3000 category = "非监督(Unsupervised)" keywords = ["inlier", 'yichang'] class Inputs: data = Input("数据(Data)", Table, replaces=['Data']) class Outputs: inliers = Output("正常值(Inliers)", Table, replaces=["Inliers"]) outliers = Output("异常值(Outliers)", Table, replaces=["Outliers"]) data = Output("数据(Data)", Table, replaces=["Data"]) want_main_area = False resizing_enabled = False OneClassSVM, Covariance, LOF, IsolationForest = range(4) METHODS = (OneClassSVMLearner, EllipticEnvelopeLearner, LocalOutlierFactorLearner, IsolationForestLearner) svm_editor = SettingProvider(SVMEditor) cov_editor = SettingProvider(CovarianceEditor) lof_editor = SettingProvider(LocalOutlierFactorEditor) isf_editor = SettingProvider(IsolationForestEditor) settings_version = 2 outlier_method = Setting(LOF) auto_commit = Setting(True) MAX_FEATURES = 1500 class Warning(OWWidget.Warning): disabled_cov = Msg("协方差估计的特征过多.") class Error(OWWidget.Error): singular_cov = Msg("Singular covariance matrix.") memory_error = Msg("Not enough memory") def __init__(self): OWWidget.__init__(self) ConcurrentWidgetMixin.__init__(self) self.data = None # type: Table self.n_inliers = None # type: int self.n_outliers = None # type: int self.editors = None # type: Tuple[ParametersEditor] self.current_editor = None # type: ParametersEditor self.method_combo = None # type: QComboBox self.init_gui() def init_gui(self): box = gui.vBox(self.controlArea, "方法") self.method_combo = gui.comboBox(box, self, "outlier_method", items=[m.name for m in self.METHODS], callback=self.__method_changed) self._init_editors() gui.auto_apply(self.buttonsArea, self, "auto_commit") def _init_editors(self): self.svm_editor = SVMEditor(self) self.cov_editor = CovarianceEditor(self) self.lof_editor = LocalOutlierFactorEditor(self) self.isf_editor = IsolationForestEditor(self) box = gui.vBox(self.controlArea, "参数") self.editors = (self.svm_editor, self.cov_editor, self.lof_editor, self.isf_editor) for editor in self.editors: editor.param_changed.connect(self.commit.deferred) 
box.layout().addWidget(editor) editor.hide() self.set_current_editor() def __method_changed(self): self.set_current_editor() self.commit.deferred() def set_current_editor(self): if self.current_editor: self.current_editor.hide() self.current_editor = self.editors[self.outlier_method] self.current_editor.show() @Inputs.data @check_sql_input def set_data(self, data): self.cancel() self.clear_messages() self.data = data self.enable_controls() self.commit.now() def enable_controls(self): self.method_combo.model().item(self.Covariance).setEnabled(True) if self.data and len(self.data.domain.attributes) > self.MAX_FEATURES: self.outlier_method = self.LOF self.set_current_editor() self.method_combo.model().item(self.Covariance).setEnabled(False) self.Warning.disabled_cov() @gui.deferred def commit(self): self.Error.singular_cov.clear() self.Error.memory_error.clear() self.n_inliers = self.n_outliers = None learner_class = self.METHODS[self.outlier_method] kwargs = self.current_editor.get_parameters() learner = learner_class(**kwargs) self.start(run, self.data, learner) def on_partial_result(self, _): pass def on_done(self, result: Results): inliers, outliers = result.inliers, result.outliers self.n_inliers = len(inliers) if inliers else None self.n_outliers = len(outliers) if outliers else None self.Outputs.inliers.send(inliers) self.Outputs.outliers.send(outliers) self.Outputs.data.send(result.annotated_data) def on_exception(self, ex): if isinstance(ex, ValueError): self.Error.singular_cov(ex) elif isinstance(ex, MemoryError): self.Error.memory_error() else: raise ex def onDeleteWidget(self): self.shutdown() super().onDeleteWidget() def send_report(self): if self.n_outliers is None or self.n_inliers is None: return self.report_items("Data", (("Input instances", len(self.data)), ("Inliers", self.n_inliers), ("Outliers", self.n_outliers))) params = self.current_editor.get_parameters() if self.outlier_method == self.OneClassSVM: self.report_items( "Detection", (("Detection method", "One class SVM with non-linear kernel (RBF)"), ("Regularization (nu)", params["nu"]), ("Kernel coefficient", params["gamma"]))) elif self.outlier_method == self.Covariance: self.report_items( "Detection", (("Detection method", "Covariance estimator"), ("Contamination", params["contamination"]), ("Support fraction", params["support_fraction"]))) elif self.outlier_method == self.LOF: self.report_items( "Detection", (("Detection method", "Local Outlier Factor"), ("Contamination", params["contamination"]), ("Number of neighbors", params["n_neighbors"]), ("Metric", params["metric"]))) elif self.outlier_method == self.IsolationForest: self.report_items( "Detection", (("Detection method", "Isolation Forest"), ("Contamination", params["contamination"]))) else: raise NotImplementedError @classmethod def migrate_settings(cls, settings: Dict, version: int): if version is None or version < 2: settings["svm_editor"] = {"nu": settings.get("nu", 50), "gamma": settings.get("gamma", 0.01)} ec, sf = "empirical_covariance", "support_fraction" settings["cov_editor"] = {"cont": settings.get("cont", 10), ec: settings.get(ec, False), sf: settings.get(sf, 1)} if __name__ == "__main__": # pragma: no cover WidgetPreview(OWOutliers).run(Table("iris"))
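The widget's `run` helper above is essentially the whole recipe for using these learners outside the GUI: build a learner with the editor's parameters, call it on a `Table`, and split rows on the model's `outlier_var` column. Below is a minimal sketch along those lines, assuming the learners accept a plain `Table` call the way `run` does; the parameter values mirror the defaults in `LocalOutlierFactorEditor`.

```python
import numpy as np
from Orange.data import Table
from Orange.classification import LocalOutlierFactorLearner

data = Table("iris")
learner = LocalOutlierFactorLearner(n_neighbors=20, contamination=0.1,
                                    algorithm="brute", metric="euclidean")
model = learner(data)
pred = model(data)  # annotated copy of the data

flags = pred.get_column_view(model.outlier_var)[0]  # 1 = inlier, 0 = outlier
inliers = data[np.where(flags == 1)[0]]
outliers = data[np.where(flags == 0)[0]]
print(len(inliers), len(outliers))
```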
/django_handyhelpers-0.3.9-py3-none-any.whl/handyhelpers/static/node_modules/clipboard/readme.md
# clipboard.js ![Build Status](https://github.com/zenorocha/clipboard.js/workflows/build/badge.svg) ![Killing Flash](https://img.shields.io/badge/killing-flash-brightgreen.svg?style=flat) > Modern copy to clipboard. No Flash. Just 3kb gzipped. <a href="https://clipboardjs.com/"><img width="728" src="https://cloud.githubusercontent.com/assets/398893/16165747/a0f6fc46-349a-11e6-8c9b-c5fd58d9099c.png" alt="Demo"></a> ## Why Copying text to the clipboard shouldn't be hard. It shouldn't require dozens of steps to configure or hundreds of KBs to load. But most of all, it shouldn't depend on Flash or any bloated framework. That's why clipboard.js exists. ## Install You can get it on npm. ``` npm install clipboard --save ``` Or if you're not into package management, just [download a ZIP](https://github.com/zenorocha/clipboard.js/archive/master.zip) file. ## Setup First, include the script located on the `dist` folder or load it from [a third-party CDN provider](https://github.com/zenorocha/clipboard.js/wiki/CDN-Providers). ```html <script src="dist/clipboard.min.js"></script> ``` Now, you need to instantiate it by [passing a DOM selector](https://github.com/zenorocha/clipboard.js/blob/master/demo/constructor-selector.html#L18), [HTML element](https://github.com/zenorocha/clipboard.js/blob/master/demo/constructor-node.html#L16-L17), or [list of HTML elements](https://github.com/zenorocha/clipboard.js/blob/master/demo/constructor-nodelist.html#L18-L19). ```js new ClipboardJS('.btn'); ``` Internally, we need to fetch all elements that matches with your selector and attach event listeners for each one. But guess what? If you have hundreds of matches, this operation can consume a lot of memory. For this reason we use [event delegation](https://stackoverflow.com/questions/1687296/what-is-dom-event-delegation) which replaces multiple event listeners with just a single listener. After all, [#perfmatters](https://twitter.com/hashtag/perfmatters). # Usage We're living a _declarative renaissance_, that's why we decided to take advantage of [HTML5 data attributes](https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Using_data_attributes) for better usability. ### Copy text from another element A pretty common use case is to copy content from another element. You can do that by adding a `data-clipboard-target` attribute in your trigger element. The value you include on this attribute needs to match another's element selector. <a href="https://clipboardjs.com/#example-target"><img width="473" alt="example-2" src="https://cloud.githubusercontent.com/assets/398893/9983467/a4946aaa-5fb1-11e5-9780-f09fcd7ca6c8.png"></a> ```html <!-- Target --> <input id="foo" value="https://github.com/zenorocha/clipboard.js.git" /> <!-- Trigger --> <button class="btn" data-clipboard-target="#foo"> <img src="assets/clippy.svg" alt="Copy to clipboard" /> </button> ``` ### Cut text from another element Additionally, you can define a `data-clipboard-action` attribute to specify if you want to either `copy` or `cut` content. If you omit this attribute, `copy` will be used by default. 
<a href="https://clipboardjs.com/#example-action"><img width="473" alt="example-3" src="https://cloud.githubusercontent.com/assets/398893/10000358/7df57b9c-6050-11e5-9cd1-fbc51d2fd0a7.png"></a> ```html <!-- Target --> <textarea id="bar">Mussum ipsum cacilds...</textarea> <!-- Trigger --> <button class="btn" data-clipboard-action="cut" data-clipboard-target="#bar"> Cut to clipboard </button> ``` As you may expect, the `cut` action only works on `<input>` or `<textarea>` elements. ### Copy text from attribute Truth is, you don't even need another element to copy its content from. You can just include a `data-clipboard-text` attribute in your trigger element. <a href="https://clipboardjs.com/#example-text"><img width="147" alt="example-1" src="https://cloud.githubusercontent.com/assets/398893/10000347/6e16cf8c-6050-11e5-9883-1c5681f9ec45.png"></a> ```html <!-- Trigger --> <button class="btn" data-clipboard-text="Just because you can doesn't mean you should — clipboard.js" > Copy to clipboard </button> ``` ## Events There are cases where you'd like to show some user feedback or capture what has been selected after a copy/cut operation. That's why we fire custom events such as `success` and `error` for you to listen and implement your custom logic. ```js var clipboard = new ClipboardJS('.btn'); clipboard.on('success', function (e) { console.info('Action:', e.action); console.info('Text:', e.text); console.info('Trigger:', e.trigger); e.clearSelection(); }); clipboard.on('error', function (e) { console.error('Action:', e.action); console.error('Trigger:', e.trigger); }); ``` For a live demonstration, go to this [site](https://clipboardjs.com/) and open your console. ## Tooltips Each application has different design needs, that's why clipboard.js does not include any CSS or built-in tooltip solution. The tooltips you see on the [demo site](https://clipboardjs.com/) were built using [GitHub's Primer](https://primer.style/css/components/tooltips). You may want to check that out if you're looking for a similar look and feel. ## Advanced Options If you don't want to modify your HTML, there's a pretty handy imperative API for you to use. All you need to do is declare a function, do your thing, and return a value. For instance, if you want to dynamically set a `target`, you'll need to return a Node. ```js new ClipboardJS('.btn', { target: function (trigger) { return trigger.nextElementSibling; }, }); ``` If you want to dynamically set a `text`, you'll return a String. ```js new ClipboardJS('.btn', { text: function (trigger) { return trigger.getAttribute('aria-label'); }, }); ``` For use in Bootstrap Modals or with any other library that changes the focus you'll want to set the focused element as the `container` value. ```js new ClipboardJS('.btn', { container: document.getElementById('modal'), }); ``` Also, if you are working with single page apps, you may want to manage the lifecycle of the DOM more precisely. Here's how you clean up the events and objects that we create. ```js var clipboard = new ClipboardJS('.btn'); clipboard.destroy(); ``` ## Browser Support This library relies on both [Selection](https://developer.mozilla.org/en-US/docs/Web/API/Selection) and [execCommand](https://developer.mozilla.org/en-US/docs/Web/API/Document/execCommand) APIs. The first one is [supported by all browsers](https://caniuse.com/#search=selection) while the second one is supported in the following browsers. 
| <img src="https://clipboardjs.com/assets/images/chrome.png" width="48px" height="48px" alt="Chrome logo"> | <img src="https://clipboardjs.com/assets/images/edge.png" width="48px" height="48px" alt="Edge logo"> | <img src="https://clipboardjs.com/assets/images/firefox.png" width="48px" height="48px" alt="Firefox logo"> | <img src="https://clipboardjs.com/assets/images/ie.png" width="48px" height="48px" alt="Internet Explorer logo"> | <img src="https://clipboardjs.com/assets/images/opera.png" width="48px" height="48px" alt="Opera logo"> | <img src="https://clipboardjs.com/assets/images/safari.png" width="48px" height="48px" alt="Safari logo"> | | :-------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------: | | 42+ ✔ | 12+ ✔ | 41+ ✔ | 9+ ✔ | 29+ ✔ | 10+ ✔ | The good news is that clipboard.js gracefully degrades if you need to support older browsers. All you have to do is show a tooltip saying `Copied!` when `success` event is called and `Press Ctrl+C to copy` when `error` event is called because the text is already selected. You can also check if clipboard.js is supported or not by running `ClipboardJS.isSupported()`, that way you can hide copy/cut buttons from the UI. ## Bonus A browser extension that adds a "copy to clipboard" button to every code block on _GitHub, MDN, Gist, StackOverflow, StackExchange, npm, and even Medium._ Install for [Chrome](https://chrome.google.com/webstore/detail/codecopy/fkbfebkcoelajmhanocgppanfoojcdmg) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/codecopy/). ## License [MIT License](https://zenorocha.mit-license.org/) © Zeno Rocha
/GalSim-2.4.11-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl/galsim/phase_psf.py
from heapq import heappush, heappop import numpy as np from .gsobject import GSObject from .gsparams import GSParams from .angle import radians, degrees, arcsec, Angle, AngleUnit from .image import Image, _Image from .bounds import _BoundsI from .wcs import PixelScale from .interpolatedimage import InterpolatedImage from .utilities import doc_inherit, OrderedWeakRef, rotate_xy, lazy_property, basestring from .errors import GalSimValueError, GalSimRangeError, GalSimIncompatibleValuesError from .errors import GalSimFFTSizeError, galsim_warn from .photon_array import TimeSampler class Aperture: """Class representing a telescope aperture embedded in a larger pupil plane array -- for use with the `PhaseScreenPSF` class to create PSFs via Fourier or geometric optics. The pupil plane array is completely specified by its size, sampling interval, and pattern of illuminated pixels. Pupil plane arrays can be specified either geometrically or using an image to indicate the illuminated pixels. In both cases, various options exist to control the pupil plane size and sampling interval. **Geometric pupil specification**: The first way to specify the details of the telescope aperture is through a series of keywords indicating the diameter, size of the central obscuration, and the nature of the struts holding up the secondary mirror (or prime focus cage, etc.). The struts are assumed to be rectangular obscurations extending from the outer edge of the pupil to the outer edge of the obscuration disk (or to the pupil center if ``obscuration = 0.``). You can specify how many struts there are (evenly spaced in angle), how thick they are as a fraction of the pupil diameter, and what angle they start at relative to the positive y direction. The size (in meters) and sampling interval (in meters) of the pupil plane array representing the aperture can be set directly using the the ``pupil_plane_size`` and ``pupil_plane_scale`` keywords. However, in most situations, it's probably more convenient to let GalSim set these automatically based on the pupil geometry and the nature of the (potentially time-varying) phase aberrations from which a PSF is being derived. The pupil plane array physical size is by default set to twice the pupil diameter producing a Nyquist sampled PSF image. While this would always be sufficient if using sinc interpolation over the PSF image for subsequent operations, GalSim by default uses the much faster (though approximate) quintic interpolant, which means that in some cases -- in particular, for significantly aberrated optical PSFs without atmospheric aberrations -- it may be useful to further increase the size of the pupil plane array, thereby increasing the sampling rate of the resulting PSF image. This can be done by increasing the ``oversampling`` keyword. A caveat to the above occurs when using ``geometric_shooting=True`` to draw using photon-shooting. In this case, we only need an array just large enough to avoid clipping the pupil, which we can get by setting ``oversampling=0.5``. The pupil plane array physical sampling interval (which is directly related to the resulting PSF image physical size) is set by default to the same interval as would be used to avoid significant aliasing (image folding) for an obscured `Airy` profile with matching diameter and obscuration and for the value of ``folding_threshold`` in the optionally specified gsparams argument. 
If the phase aberrations are significant, however, the PSF image size computed this way may still not be sufficiently large to avoid aliasing. To further increase the pupil plane sampling rate (and hence the PSF image size), you can increase the value of the ``pad_factor`` keyword. An additional way to set the pupil sampling interval for a particular set of phase screens (i.e., for a particular `PhaseScreenList`) is to provide the screens in the ``screen_list`` argument. Each screen in the list computes its own preferred sampling rate and the `PhaseScreenList` appropriately aggregates these. This last option also requires that a wavelength ``lam`` be specified, and is particularly helpful for creating PSFs derived from turbulent atmospheric screens. Finally, when specifying the pupil geometrically, Aperture may choose to make a small adjustment to ``pupil_plane_scale`` in order to produce an array with a good size for FFTs. If your application depends on knowing the size and scale used with the Fourier optics framework, you can obtain these from the ``aper.pupil_plane_size`` and ``aper.pupil_plane_scale`` attributes. **Pupil image specification**: The second way to specify the pupil plane configuration is by passing in an image of it. This can be useful, for example, if the struts are not evenly spaced or are not radially directed, as is assumed by the simple model for struts described above. In this case, an exception is raised if keywords related to struts are also given. On the other hand, the ``obscuration`` keyword is still used to ensure that the PSF images are not aliased, though it is ignored during the actual construction of the pupil plane illumination pattern. Note that for complicated pupil configurations, it may be desireable to increase ``pad_factor`` for more fidelity at the expense of slower running time. Finally, the ``pupil_plane_im`` that is passed in can be rotated during internal calculations by specifying a ``pupil_angle`` keyword. If you choose to pass in a pupil plane image, it must be a square array in which the image of the pupil is centered. The areas that are illuminated should have some value >0, and the other areas should have a value of precisely zero. Based on what the Aperture class determines is a good PSF sampling interval, the image of the pupil plane that is passed in might be zero-padded during internal calculations. (The pupil plane array size and scale values can be accessed via the ``aper.pupil_plane_size`` and ``aper.pupil_plane_scale`` attributes.) The pixel scale of the pupil plane can be specified in one of three ways. In descending order of priority, these are: 1. The ``pupil_plane_scale`` keyword argument (units are meters). 2. The ``pupil_plane_im.scale`` attribute (units are meters). 3. If (1) and (2) are both None, then the scale will be inferred by assuming that the illuminated pixel farthest from the image center is at a physical distance of self.diam/2. The ``pupil_plane_size`` and ``lam`` keywords are both ignored when constructing an Aperture from an image. Parameters: diam: Aperture diameter in meters. lam: Wavelength in nanometers. [default: None] circular_pupil: Adopt a circular pupil? [default: True] obscuration: Linear dimension of central obscuration as fraction of aperture linear dimension. [0., 1.). [default: 0.0] nstruts: Number of radial support struts to add to the central obscuration. [default: 0] strut_thick: Thickness of support struts as a fraction of aperture diameter. 
[default: 0.05] strut_angle: `Angle` made between the vertical and the strut starting closest to it, defined to be positive in the counter-clockwise direction; must be an `Angle` instance. [default: 0. * galsim.degrees] oversampling: Optional oversampling factor *in the image plane* for the PSF eventually constructed using this `Aperture`. Setting ``oversampling < 1`` will produce aliasing in the PSF (not good). [default: 1.0] pad_factor: Additional multiple by which to extend the PSF image to avoid folding. [default: 1.0] screen_list: An optional `PhaseScreenList` object. If present, then get a good pupil sampling interval using this object. [default: None] pupil_plane_im: The GalSim.Image, NumPy array, or name of file containing the pupil plane image, to be used instead of generating one based on the obscuration and strut parameters. [default: None] pupil_angle: If ``pupil_plane_im`` is not None, rotation angle for the pupil plane (positive in the counter-clockwise direction). Must be an `Angle` instance. [default: 0. * galsim.degrees] pupil_plane_scale: Sampling interval in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. The exception is when specifying the pupil arrangement via an image, in which case this keyword can be used to indicate the sampling of that image. See also ``pad_factor`` for adjusting the pupil sampling scale. [default: None] pupil_plane_size: Size in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. See also ``oversampling`` for adjusting the pupil size. [default: None] gsparams: An optional `GSParams` argument. [default: None] """ def __init__(self, diam, lam=None, circular_pupil=True, obscuration=0.0, nstruts=0, strut_thick=0.05, strut_angle=0.0*radians, oversampling=1.0, pad_factor=1.0, screen_list=None, pupil_plane_im=None, pupil_angle=0.0*radians, pupil_plane_scale=None, pupil_plane_size=None, gsparams=None): self._diam = diam # Always need to explicitly specify an aperture diameter. self._lam = lam self._circular_pupil = circular_pupil self._obscuration = obscuration self._nstruts = nstruts self._strut_thick = strut_thick self._strut_angle = strut_angle self._oversampling = oversampling self._pad_factor = pad_factor self._screen_list = screen_list self._pupil_plane_im = pupil_plane_im self._pupil_angle = pupil_angle self._input_pupil_plane_scale = pupil_plane_scale self._input_pupil_plane_size = pupil_plane_size self._gsparams = GSParams.check(gsparams) if diam <= 0.: raise GalSimRangeError("Invalid diam.", diam, 0.) if obscuration < 0. or obscuration >= 1.: raise GalSimRangeError("Invalid obscuration.", obscuration, 0., 1.) if not isinstance(strut_angle, Angle): raise TypeError("strut_angle must be a galsim.Angle instance.") if not isinstance(pupil_angle, Angle): raise TypeError("pupil_angle must be a galsim.Angle instance.") # You can either set geometric properties, or use a pupil image, but not both, so check for # that here. One caveat is that we allow sanity checking the sampling of a pupil_image by # comparing it to the sampling GalSim would have used for an (obscured) Airy profile. So # it's okay to specify an obscuration and a pupil_plane_im together, for example, but not # a pupil_plane_im and struts. 
is_default_geom = (circular_pupil and nstruts == 0 and strut_thick == 0.05 and strut_angle == 0.0*radians) if not is_default_geom and pupil_plane_im is not None: raise GalSimIncompatibleValuesError( "Can't specify both geometric parameters and pupil_plane_im.", circular_pupil=circular_pupil, nstruts=nstruts, strut_thick=strut_thick, strut_angle=strut_angle, pupil_plane_im=pupil_plane_im) if screen_list is not None and lam is None: raise GalSimIncompatibleValuesError( "Wavelength ``lam`` must be specified with ``screen_list``.", screen_list=screen_list, lam=lam) # For each of these, the actual value is defined during the construction of the _illuminated # array, so access that (lazy) property first. @property def pupil_plane_scale(self): """The scale_size of the pupil-plane image. """ self._illuminated return self._pupil_plane_scale @property def pupil_plane_size(self): """The size of the pupil-plane image. """ self._illuminated return self._pupil_plane_size @property def npix(self): """The number of pixels in each direction of the pupil-plane image. """ self._illuminated return self._npix @lazy_property def good_pupil_size(self): """An estimate of a good pupil-plane image size. """ # Although the user can set the pupil plane size and scale directly if desired, in most # cases it's nicer to have GalSim try to pick good values for these. # For the pupil plane size, we'll achieve Nyquist sampling in the focal plane if we sample # out to twice the diameter of the actual aperture in the pupil plane (completely # independent of wavelength, struts, obscurations, GSparams, and so on!). This corresponds # to oversampling=1.0. In fact, if we were willing to always use sinc interpolation, there # would never be any reason to go beyond this. In practice, we usually use a faster, but # less accurate, quintic interpolant, which means we can benefit from improved sampling # (oversampling > 1.0) in some cases, especially when we're *not* modeling an atmosphere # which would otherwise tend to damp contributions at large k. return 2 * self.diam * self._oversampling @lazy_property def good_pupil_scale(self): """An estimate of a good pupil-plane image scale. """ from .airy import Airy # For the pupil plane sampling interval, details like the obscuration and GSParams *are* # important as they affect the amount of aliasing encountered. (An Airy profile has an # infinite extent in real space, so it *always* aliases at some level, more so with an # obscuration than without. The GSParams settings indicate how much aliasing we're # willing to tolerate, so it's required here.) To pick a good sampling interval, we start # with the interval that would be used for an obscured Airy GSObject profile. If the # `screen_list` argument was supplied, then we also check its .stepk property, which # aggregates a good sampling interval from all of the wrapped PhaseScreens, and keep the # smaller stepk. if self._lam is None: # For Airy, pupil_plane_scale is independent of wavelength. We could build an Airy with # lam_over_diam=1.0 and then alter the `good_pupil_scale = ...` line below # appropriately, but it's easier to just arbitrarily set `lam=500` if it wasn't set.
lam = 500.0 else: lam = self._lam airy = Airy(diam=self.diam, lam=lam, obscuration=self.obscuration, gsparams=self.gsparams) stepk = airy.stepk if self._screen_list is not None: screen_list = PhaseScreenList(self._screen_list) stepk = min(stepk, screen_list._getStepK(lam=lam, diam=self.diam, obscuration=self.obscuration, gsparams=self.gsparams)) return stepk * lam * 1.e-9 * (radians / arcsec) / (2 * np.pi * self._pad_factor) @lazy_property def _illuminated(self): # Now that we have good candidate sizes and scales, we load or generate the pupil plane # array. if self._pupil_plane_im is not None: # Use image of pupil plane return self._load_pupil_plane() else: # Use geometric parameters. if self._input_pupil_plane_scale is not None: self._pupil_plane_scale = self._input_pupil_plane_scale # Check input scale and warn if it looks suspicious. if self._pupil_plane_scale > self.good_pupil_scale: ratio = self.good_pupil_scale / self._pupil_plane_scale galsim_warn("Input pupil_plane_scale may be too large for good sampling.\n" "Consider decreasing pupil_plane_scale by a factor %f, and/or " "check PhaseScreenPSF outputs for signs of folding in real " "space."%(1./ratio)) else: self._pupil_plane_scale = self.good_pupil_scale if self._input_pupil_plane_size is not None: self._pupil_plane_size = self._input_pupil_plane_size # Check input size and warn if it looks suspicious if self._pupil_plane_size < self.good_pupil_size: ratio = self.good_pupil_size / self._pupil_plane_size galsim_warn("Input pupil_plane_size may be too small for good focal-plane " "sampling.\n" "Consider increasing pupil_plane_size by a factor %f, and/or " "check PhaseScreenPSF outputs for signs of undersampling."%ratio) else: self._pupil_plane_size = self.good_pupil_size return self._generate_pupil_plane() def _generate_pupil_plane(self): """ Create an array of illuminated pixels parametrically. """ ratio = self._pupil_plane_size/self._pupil_plane_scale # Fudge a little to prevent good_fft_size() from turning 512.0001 into 768. ratio *= (1.0 - 1.0/2**14) self._npix = Image.good_fft_size(int(np.ceil(ratio))) # Check FFT size if self._npix > self.gsparams.maximum_fft_size: raise GalSimFFTSizeError("Created pupil plane array that is too large.",self._npix) # Shrink scale such that size = scale * npix exactly. self._pupil_plane_scale = self._pupil_plane_size / self._npix radius = 0.5*self.diam if self._circular_pupil: illuminated = (self.rsqr < radius**2) if self.obscuration > 0.: illuminated *= self.rsqr >= (radius*self.obscuration)**2 else: illuminated = (np.abs(self.u) < radius) & (np.abs(self.v) < radius) if self.obscuration > 0.: illuminated *= ((np.abs(self.u) >= radius*self.obscuration) * (np.abs(self.v) >= radius*self.obscuration)) if self._nstruts > 0: # Add the initial rotation if requested, converting to radians. rot_u, rot_v = self.u, self.v if self._strut_angle.rad != 0.: rot_u, rot_v = rotate_xy(rot_u, rot_v, -self._strut_angle) rotang = 360. * degrees / self._nstruts # Then loop through struts setting to zero the regions which lie under the strut for istrut in range(self._nstruts): rot_u, rot_v = rotate_xy(rot_u, rot_v, -rotang) illuminated *= ((np.abs(rot_u) >= radius * self._strut_thick) + (rot_v < 0.0)) return illuminated def _load_pupil_plane(self): """ Create an array of illuminated pixels with appropriate size and scale from an input image of the pupil. The basic strategy is: 1. Read in array. 2. Determine the scale. 3. Pad the input array with zeros to meet the requested pupil size. 4.
Check that the pupil plane sampling interval is at least as small as requested. 5. Optionally rotate pupil plane. """ from . import fits # Handle multiple types of input: NumPy array, galsim.Image, or string for filename with # image. if isinstance(self._pupil_plane_im, np.ndarray): # Make it into an image. self._pupil_plane_im = Image(self._pupil_plane_im) elif isinstance(self._pupil_plane_im, Image): # Make sure not to overwrite input image. self._pupil_plane_im = self._pupil_plane_im.copy() else: # Read in image of pupil plane from file. self._pupil_plane_im = fits.read(self._pupil_plane_im) # scale = pupil_plane_im.scale # Interpret as either the pixel scale in meters, or None. pp_arr = self._pupil_plane_im.array self._npix = pp_arr.shape[0] # Check FFT size if self._npix > self.gsparams.maximum_fft_size: raise GalSimFFTSizeError("Loaded pupil plane array that is too large.", self._npix) # Sanity checks if self._pupil_plane_im.array.shape[0] != self._pupil_plane_im.array.shape[1]: raise GalSimValueError("Input pupil_plane_im must be square.", self._pupil_plane_im.array.shape) if self._pupil_plane_im.array.shape[0] % 2 == 1: raise GalSimValueError("Input pupil_plane_im must have even sizes.", self._pupil_plane_im.array.shape) # Set the scale, priority is: # 1. pupil_plane_scale kwarg # 2. image.scale if not None # 3. Use diameter and farthest illuminated pixel. if self._input_pupil_plane_scale is not None: self._pupil_plane_scale = self._input_pupil_plane_scale elif self._pupil_plane_im.scale is not None: self._pupil_plane_scale = self._pupil_plane_im.scale else: # If self._pupil_plane_scale is not set yet, then figure it out from the distance # of the farthest illuminated pixel from the image center and the aperture diameter. # below is essentially np.linspace(-0.5, 0.5, self._npix) u = np.fft.fftshift(np.fft.fftfreq(self._npix)) u, v = np.meshgrid(u, u) r = np.hypot(u, v) rmax_illum = np.max(r*(self._pupil_plane_im.array > 0)) self._pupil_plane_scale = self.diam / (2.0 * rmax_illum * self._npix) self._pupil_plane_size = self._pupil_plane_scale * self._npix # Check the pupil plane size here and bump it up if necessary. if self._pupil_plane_size < self.good_pupil_size: new_npix = Image.good_fft_size(int(np.ceil( self.good_pupil_size/self._pupil_plane_scale))) pad_width = (new_npix-self._npix)//2 pp_arr = np.pad(pp_arr, [(pad_width, pad_width)]*2, mode='constant') self._npix = new_npix self._pupil_plane_size = self._pupil_plane_scale * self._npix # Check sampling interval and warn if it's not good enough. if self._pupil_plane_scale > self.good_pupil_scale: ratio = self._pupil_plane_scale / self.good_pupil_scale galsim_warn("Input pupil plane image may not be sampled well enough!\n" "Consider increasing sampling by a factor %f, and/or check " "PhaseScreenPSF outputs for signs of folding in real space."%ratio) if self._pupil_angle.rad == 0.: return pp_arr.astype(bool) else: # Rotate the pupil plane image as required based on the `pupil_angle`, being careful to # ensure that the image is one of the allowed types. We ignore the scale. b = _BoundsI(1,self._npix,1,self._npix) im = _Image(pp_arr, b, PixelScale(1.)) int_im = InterpolatedImage(im, x_interpolant='linear', calculate_stepk=False, calculate_maxk=False) int_im = int_im.rotate(self._pupil_angle) new_im = Image(pp_arr.shape[1], pp_arr.shape[0]) new_im = int_im.drawImage(image=new_im, scale=1., method='no_pixel') pp_arr = new_im.array # Restore hard edges that might have been lost during the interpolation. 
To do this, we # check the maximum value of the entries. Values after interpolation that are >half # that maximum value are kept as nonzero (True), but those that are <half the maximum # value are set to zero (False). max_pp_val = np.max(pp_arr) pp_arr[pp_arr < 0.5*max_pp_val] = 0. return pp_arr.astype(bool) @property def gsparams(self): """The `GSParams` of this object. """ return self._gsparams def withGSParams(self, gsparams=None, **kwargs): """Create a version of the current aperture with the given gsparams """ if gsparams == self.gsparams: return self from copy import copy ret = copy(self) ret._gsparams = GSParams.check(gsparams, self.gsparams, **kwargs) return ret # Used in Aperture.__str__ and OpticalPSF.__str__ def _geometry_str(self): s = "" if not self._circular_pupil: s += ", circular_pupil=False" if self.obscuration != 0.0: s += ", obscuration=%s"%self.obscuration if self._nstruts != 0: s += ", nstruts=%s"%self._nstruts if self._strut_thick != 0.05: s += ", strut_thick=%s"%self._strut_thick if self._strut_angle != 0*radians: s += ", strut_angle=%s"%self._strut_angle return s def __str__(self): s = "galsim.Aperture(diam=%r"%self.diam if self._pupil_plane_im is None: # Pupil was created geometrically, so use that here. s += self._geometry_str() s += ")" return s def _geometry_repr(self): s = "" if not self._circular_pupil: s += ", circular_pupil=False" if self.obscuration != 0.0: s += ", obscuration=%r"%self.obscuration if self._nstruts != 0: s += ", nstruts=%r"%self._nstruts if self._strut_thick != 0.05: s += ", strut_thick=%r"%self._strut_thick if self._strut_angle != 0*radians: s += ", strut_angle=%r"%self._strut_angle return s def __repr__(self): s = "galsim.Aperture(diam=%r"%self.diam if self._pupil_plane_im is None: # Pupil was created geometrically, so use that here. s += self._geometry_repr() s += ", pupil_plane_scale=%r"%self._input_pupil_plane_scale s += ", pupil_plane_size=%r"%self._input_pupil_plane_size s += ", oversampling=%r"%self._oversampling s += ", pad_factor=%r"%self._pad_factor else: # Pupil was created from image, so use that instead. # It's slightly less annoying to see an enormous stream of zeros fly by than an enormous # stream of Falses, so convert to int16. tmp = self.illuminated.astype(np.int16).tolist() s += ", pupil_plane_im=array(%r"%tmp+", dtype='int16')" s += ", pupil_plane_scale=%r"%self._pupil_plane_scale if self.gsparams != GSParams(): s += ", gsparams=%r"%self.gsparams s += ")" return s def __eq__(self, other): if self is other: return True if not (isinstance(other, Aperture) and self.diam == other.diam and self._gsparams == other._gsparams): return False if self._pupil_plane_im is not None: return (self.pupil_plane_scale == other.pupil_plane_scale and np.array_equal(self.illuminated, other.illuminated)) else: return (other._pupil_plane_im is None and self._circular_pupil == other._circular_pupil and self._obscuration == other._obscuration and self._nstruts == other._nstruts and self._strut_thick == other._strut_thick and self._strut_angle == other._strut_angle and self._input_pupil_plane_scale == other._input_pupil_plane_scale and self._input_pupil_plane_size == other._input_pupil_plane_size and self._oversampling == other._oversampling and self._pad_factor == other._pad_factor) def __hash__(self): # Cache since self.illuminated may be large. 
if not hasattr(self, '_hash'): self._hash = hash(("galsim.Aperture", self.diam, self.pupil_plane_scale)) self._hash ^= hash(tuple(self.illuminated.ravel())) return self._hash # Properties show up nicely in the interactive terminal for # >>>help(Aperture) # So we make a thin wrapper here. @property def illuminated(self): """A boolean array indicating which positions in the pupil plane are exposed to the sky. """ return self._illuminated @lazy_property def rho(self): """Unit-disk normalized pupil plane coordinate as a complex number: (x, y) => x + 1j * y. """ self._illuminated u = np.fft.fftshift(np.fft.fftfreq(self._npix, self.diam/self._pupil_plane_size/2.0)) u, v = np.meshgrid(u, u) return u + 1j * v @lazy_property def _uv(self): if not hasattr(self, '_npix'): # Need this check, since `_uv` is used by `_illuminated`, so need to make sure we # don't have an infinite loop. self._illuminated u = np.fft.fftshift(np.fft.fftfreq(self._npix, 1./self._pupil_plane_size)) u, v = np.meshgrid(u, u) return u, v @property def u(self): """Pupil horizontal coordinate array in meters.""" return self._uv[0] @property def v(self): """Pupil vertical coordinate array in meters.""" return self._uv[1] @lazy_property def u_illuminated(self): """The u values for only the `illuminated` pixels. """ return self.u[self.illuminated] @lazy_property def v_illuminated(self): """The v values for only the `illuminated` pixels. """ return self.v[self.illuminated] @lazy_property def rsqr(self): """Pupil radius squared array in meters squared.""" return self.u**2 + self.v**2 @property def diam(self): """Aperture diameter in meters""" return self._diam @property def obscuration(self): """Fraction linear obscuration of pupil.""" return self._obscuration def __getstate__(self): # Let unpickled object reconstruct cached values on-the-fly instead of including them in the # pickle. d = self.__dict__.copy() for k in ('rho', '_uv', 'rsqr', 'u_illuminated', 'v_illuminated'): d.pop(k, None) # Only reconstruct _illuminated if we made it from geometry. If loaded, it's probably # faster to serialize the array. if self._pupil_plane_im is None: d.pop('_illuminated', None) return d def samplePupil(self, photons, rng): """Set the pupil_u and pupil_v values in the PhotonArray by sampling the current aperture. """ from .random import UniformDeviate n_photons = len(photons) u = self.u_illuminated v = self.v_illuminated gen = rng.as_numpy_generator() pick = gen.choice(len(u), size=n_photons).astype(int) photons.pupil_u = u[pick] photons.pupil_v = v[pick] # Make continuous by adding +/- 0.5 pixels shifts. uscale = self.u[0, 1] - self.u[0, 0] vscale = self.v[1, 0] - self.v[0, 0] photons.pupil_u += gen.uniform(-uscale/2.,uscale/2.,size=n_photons) photons.pupil_v += gen.uniform(-vscale/2.,vscale/2.,size=n_photons) # Some quick notes for Josh: # - Relation between real-space grid with size theta and pitch dtheta (dimensions of angle) # and corresponding (fast) Fourier grid with size 2*maxk and pitch stepk (dimensions of # inverse angle): # stepk = 2*pi/theta # maxk = pi/dtheta # - Relation between aperture of size L and pitch dL (dimensions of length, not angle!) and # (fast) Fourier grid: # dL = stepk * lambda / (2 * pi) # L = maxk * lambda / pi # - Implies relation between aperture grid and real-space grid: # dL = lambda/theta # L = lambda/dtheta # # MJ: Of these four, only _sky_scale is still used. The rest are left here for informational # purposes, but nothing actually calls them. 
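    # A purely illustrative numeric check of the last pair of relations (the numbers here are
    # assumed for illustration only, not any GalSim default): with lam = 700 nm, an image
    # spanning theta = 10 arcsec with pitch dtheta = 0.02 arcsec corresponds to a pupil pitch of
    # dL = lam/theta ~ 1.4e-2 m and a pupil extent of L = lam/dtheta ~ 7.2 m.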
def _getStepK(self, lam, scale_unit=arcsec): """Return the Fourier grid spacing for this aperture at given wavelength. Parameters: lam: Wavelength in nanometers. scale_unit: Inverse units in which to return result [default: galsim.arcsec] Returns: Fourier grid spacing. """ return 2*np.pi*self.pupil_plane_scale/(lam*1e-9) * scale_unit/radians def _getMaxK(self, lam, scale_unit=arcsec): """Return the Fourier grid half-size for this aperture at given wavelength. Parameters: lam: Wavelength in nanometers. scale_unit: Inverse units in which to return result [default: galsim.arcsec] Returns: Fourier grid half-size. """ return np.pi*self.pupil_plane_size/(lam*1e-9) * scale_unit/radians def _sky_scale(self, lam, scale_unit=arcsec): """Return the image scale for this aperture at given wavelength. Parameters: lam: Wavelength in nanometers. scale_unit: Units in which to return result [default: galsim.arcsec] Returns: Image scale. """ return (lam*1e-9) / self.pupil_plane_size * radians/scale_unit def _sky_size(self, lam, scale_unit=arcsec): """Return the image size for this aperture at given wavelength. Parameters: lam: Wavelength in nanometers. scale_unit: Units in which to return result [default: galsim.arcsec] Returns: Image size. """ return (lam*1e-9) / self.pupil_plane_scale * radians/scale_unit class PhaseScreenList: """List of phase screens that can be turned into a PSF. Screens can be either atmospheric layers or optical phase screens. Generally, one would assemble a PhaseScreenList object using the function `Atmosphere`. Layers can be added, removed, appended, etc. just like items can be manipulated in a python list. For example:: # Create an atmosphere with three layers. >>> screens = galsim.PhaseScreenList([galsim.AtmosphericScreen(...), galsim.AtmosphericScreen(...), galsim.AtmosphericScreen(...)]) # Add another layer >>> screens.append(galsim.AtmosphericScreen(...)) # Remove the second layer >>> del screens[1] # Switch the first and second layer. Silly, but works... >>> screens[0], screens[1] = screens[1], screens[0] Parameters: layers: Sequence of phase screens. """ def __init__(self, *layers): from .phase_screens import AtmosphericScreen, OpticalScreen if len(layers) == 1: # First check if layers[0] is a PhaseScreenList, so we avoid nesting. if isinstance(layers[0], PhaseScreenList): self._layers = layers[0]._layers else: # Next, see if layers[0] is iterable. E.g., to catch generator expressions. try: self._layers = list(layers[0]) except TypeError: self._layers = list(layers) else: self._layers = list(layers) self._update_attrs() self._pending = [] # Pending PSFs to calculate upon first drawImage. def __len__(self): return len(self._layers) def __getitem__(self, index): try: items = self._layers[index] except TypeError: msg = "{cls.__name__} indices must be integers or slices" raise TypeError(msg.format(cls=self.__class__)) try: index + 1 # Regular in indices are the norm, so try something that works for it, # but not for slices, where we need different handling. except TypeError: # index is a slice, so items is a list. return PhaseScreenList(items) else: # index is an int, so items is just one screen. 
return items def __setitem__(self, index, layer): self._layers[index] = layer self._update_attrs() def __delitem__(self, index): del self._layers[index] self._update_attrs() def append(self, layer): self._layers.append(layer) self._update_attrs() def extend(self, layers): self._layers.extend(layers) self._update_attrs() def __str__(self): return "galsim.PhaseScreenList([%s])" % ",".join(str(l) for l in self._layers) def __repr__(self): return "galsim.PhaseScreenList(%r)" % self._layers def __eq__(self, other): return (self is other or (isinstance(other,PhaseScreenList) and self._layers == other._layers)) def __ne__(self, other): return not self == other __hash__ = None # Mutable means not hashable. def _update_attrs(self): # If any of the wrapped PhaseScreens have an rng, then eval(repr(screen_list)) will run, but # fail to round-trip to the original object. So we search for that here and set/delete a # dummy rng sentinel attribute so do_pickle() will know to skip the obj == eval(repr(obj)) # test. self.__dict__.pop('rng', None) self.dynamic = any(l.dynamic for l in self) self.reversible = all(l.reversible for l in self) self.__dict__.pop('r0_500_effective', None) def _seek(self, t): """Set all layers' internal clocks to time t.""" for layer in self: try: layer._seek(t) except AttributeError: # Time indep phase screen pass self._update_attrs() def _reset(self): """Reset phase screens back to time=0.""" for layer in self: try: layer._reset() except AttributeError: # Time indep phase screen pass self._update_attrs() def instantiate(self, pool=None, _bar=None, **kwargs): """Instantiate the screens in this `PhaseScreenList`. Parameters: pool: A multiprocessing.Pool object to use to instantiate screens in parallel. **kwargs: Keyword arguments to forward to screen.instantiate(). """ _bar = _bar if _bar else dict() # with dict() _bar.update() is a trivial no op. if pool is not None: results = [] for layer in self: try: results.append(pool.apply_async(layer.instantiate, kwds=kwargs)) except AttributeError: # OpticalScreen has no instantiate method pass _bar.update() for r in results: r.wait() else: for layer in self: try: layer.instantiate(**kwargs) except AttributeError: pass _bar.update() def _delayCalculation(self, psf): """Add psf to delayed calculation list.""" heappush(self._pending, (psf.t0, OrderedWeakRef(psf))) def _prepareDraw(self): """Calculate previously delayed PSFs.""" if not self._pending: return # See if we have any dynamic screens. If not, then we can immediately compute each PSF # in a simple loop. if not self.dynamic: for _, psfref in self._pending: psf = psfref() if psf is not None: psf._step() psf._finalize() self._pending = [] self._update_time_heap = [] return # If we do have time-evolving screens, then iteratively increment the time while being # careful to always stop at multiples of each PSF's time_step attribute to update that PSF. # Use a heap (in _pending list) to track the next time to stop at. while(self._pending): # Get and seek to next time that has a PSF update. t, psfref = heappop(self._pending) # Check if this PSF weakref is still alive psf = psfref() if psf is not None: # If it's alive, update this PSF self._seek(t) psf._step() # If that PSF's next possible update time doesn't extend past its exptime, then # push it back on the heap. 
t += psf.time_step if t < psf.t0 + psf.exptime: heappush(self._pending, (t, OrderedWeakRef(psf))) else: psf._finalize() self._pending = [] def wavefront(self, u, v, t, theta=(0.0*radians, 0.0*radians)): """ Compute cumulative wavefront due to all phase screens in `PhaseScreenList`. Wavefront here indicates the distance by which the physical wavefront lags or leads the ideal plane wave (pre-optics) or spherical wave (post-optics). Parameters: u: Horizontal pupil coordinate (in meters) at which to evaluate wavefront. Can be a scalar or an iterable. The shapes of u and v must match. v: Vertical pupil coordinate (in meters) at which to evaluate wavefront. Can be a scalar or an iterable. The shapes of u and v must match. t: Times (in seconds) at which to evaluate wavefront. Can be None, a scalar or an iterable. If None, then the internal time of the phase screens will be used for all u, v. If scalar, then the size will be broadcast up to match that of u and v. If iterable, then the shape must match the shapes of u and v. theta: Field angle at which to evaluate wavefront, as a 2-tuple of `galsim.Angle` instances. [default: (0.0*galsim.arcmin, 0.0*galsim.arcmin)] Only a single theta is permitted. Returns: Array of wavefront lag or lead in nanometers. """ if len(self._layers) > 1: return np.sum([layer.wavefront(u, v, t, theta) for layer in self], axis=0) else: return self._layers[0].wavefront(u, v, t, theta) def wavefront_gradient(self, u, v, t, theta=(0.0*radians, 0.0*radians)): """ Compute cumulative wavefront gradient due to all phase screens in `PhaseScreenList`. Parameters: u: Horizontal pupil coordinate (in meters) at which to evaluate wavefront. Can be a scalar or an iterable. The shapes of u and v must match. v: Vertical pupil coordinate (in meters) at which to evaluate wavefront. Can be a scalar or an iterable. The shapes of u and v must match. t: Times (in seconds) at which to evaluate wavefront gradient. Can be None, a scalar or an iterable. If None, then the internal time of the phase screens will be used for all u, v. If scalar, then the size will be broadcast up to match that of u and v. If iterable, then the shape must match the shapes of u and v. theta: Field angle at which to evaluate wavefront, as a 2-tuple of `galsim.Angle` instances. [default: (0.0*galsim.arcmin, 0.0*galsim.arcmin)] Only a single theta is permitted. Returns: Arrays dWdu and dWdv of wavefront lag or lead gradient in nm/m. """ if len(self._layers) > 1: return np.sum([layer.wavefront_gradient(u, v, t, theta) for layer in self], axis=0) else: return self._layers[0].wavefront_gradient(u, v, t, theta) def _wavefront(self, u, v, t, theta): if len(self._layers) > 1: return np.sum([layer._wavefront(u, v, t, theta) for layer in self], axis=0) else: return self._layers[0]._wavefront(u, v, t, theta) def _wavefront_gradient(self, u, v, t, theta): gradx, grady = self._layers[0]._wavefront_gradient(u, v, t, theta) for layer in self._layers[1:]: gx, gy = layer._wavefront_gradient(u, v, t, theta) gradx += gx grady += gy return gradx, grady def makePSF(self, lam, **kwargs): """Create a PSF from the current `PhaseScreenList`. Parameters: lam: Wavelength in nanometers at which to compute PSF. t0: Time at which to start exposure in seconds. [default: 0.0] exptime: Time in seconds over which to accumulate evolving instantaneous PSF. 
[default: 0.0] time_step: Time interval in seconds with which to sample phase screens when drawing using real-space or Fourier methods, or when using photon-shooting without the geometric optics approximation. Note that the default value of 0.025 is fairly arbitrary. For careful studies, we recommend checking that results are stable when decreasing time_step. Also note that when drawing using photon-shooting with the geometric optics approximation this keyword is ignored, as the phase screen can be sampled continuously in this case instead of at discrete intervals. [default: 0.025] flux: Flux of output PSF. [default: 1.0] theta: Field angle of PSF as a 2-tuple of `Angle` instances. [default: (0.0*galsim.arcmin, 0.0*galsim.arcmin)] interpolant: Either an Interpolant instance or a string indicating which interpolant should be used. Options are 'nearest', 'sinc', 'linear', 'cubic', 'quintic', or 'lanczosN' where N should be the integer order to use. [default: galsim.Quintic()] scale_unit: Units to use for the sky coordinates of the output profile. [default: galsim.arcsec] ii_pad_factor: Zero-padding factor by which to extend the image of the PSF when creating the ``InterpolatedImage``. See the ``InterpolatedImage`` docstring for more details. [default: 1.5] suppress_warning: If ``pad_factor`` is too small, the code will emit a warning telling you its best guess about how high you might want to raise it. However, you can suppress this warning by using ``suppress_warning=True``. [default: False] geometric_shooting: If True, then when drawing using photon shooting, use geometric optics approximation where the photon angles are derived from the phase screen gradient. If False, then first draw using Fourier optics and then shoot from the derived InterpolatedImage. [default: True] aper: `Aperture` to use to compute PSF(s). [default: None] second_kick: An optional second kick to also convolve by when using geometric photon-shooting. (This can technically be any `GSObject`, though usually it should probably be a SecondKick object). If None, then a good second kick will be chosen automatically based on ``screen_list``. If False, then a second kick won't be applied. [default: None] kcrit: Critical Fourier scale (in units of 1/r0) at which to separate low-k and high-k turbulence. The default value was chosen based on comparisons between Fourier optics and geometric optics with a second kick correction. While most values of kcrit smaller than the default produce similar results, we caution the user to compare the affected geometric PSFs against Fourier optics PSFs carefully before changing this value. [default: 0.2] fft_sign: The sign (+/-) to use in the exponent of the Fourier kernel when evaluating the Fourier optics PSF. As of version 2.3, GalSim uses a plus sign by default, which we believe to be consistent with, for example, how Zemax computes a Fourier optics PSF on DECam. Before version 2.3, the default was a negative sign. Input should be either the string '+' or the string '-'. [default: '+'] gsparams: An optional `GSParams` argument. [default: None] The following are optional keywords to use to setup the aperture if ``aper`` is not provided. Parameters: diam: Aperture diameter in meters. circular_pupil: Adopt a circular pupil? [default: True] obscuration: Linear dimension of central obscuration as fraction of aperture linear dimension. [0., 1.). [default: 0.0] nstruts: Number of radial support struts to add to the central obscuration. 
[default: 0] strut_thick: Thickness of support struts as a fraction of aperture diameter. [default: 0.05] strut_angle: `Angle` made between the vertical and the strut starting closest to it, defined to be positive in the counter-clockwise direction; must be an `Angle` instance. [default: 0. * galsim.degrees] oversampling: Optional oversampling factor *in the image plane* for the PSF eventually constructed using this `Aperture`. Setting ``oversampling < 1`` will produce aliasing in the PSF (not good). [default: 1.0] pad_factor: Additional multiple by which to extend the PSF image to avoid folding. [default: 1.0] pupil_plane_im: The GalSim.Image, NumPy array, or name of file containing the pupil plane image, to be used instead of generating one based on the obscuration and strut parameters. [default: None] pupil_angle: If ``pupil_plane_im`` is not None, rotation angle for the pupil plane (positive in the counter-clockwise direction). Must be an `Angle` instance. [default: 0. * galsim.degrees] pupil_plane_scale: Sampling interval in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. The exception is when specifying the pupil arrangement via an image, in which case this keyword can be used to indicate the sampling of that image. See also ``pad_factor`` for adjusting the pupil sampling scale. [default: None] pupil_plane_size: Size in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. See also ``oversampling`` for adjusting the pupil size. [default: None] """ return PhaseScreenPSF(self, lam, **kwargs) @lazy_property def r0_500_effective(self): """Effective r0_500 for set of screens in list that define an r0_500 attribute.""" r0_500s = np.array([l.r0_500 for l in self if hasattr(l, 'r0_500')]) if len(r0_500s) == 0: return None else: return np.sum(r0_500s**(-5./3))**(-3./5) def _getStepK(self, **kwargs): """Return an appropriate stepk for this list of phase screens. The required set of parameters depends on the types of the individual `PhaseScreen` instances in the `PhaseScreenList`. See the documentation for the individual `PhaseScreen.pupil_plane_scale` methods for more details. Returns: stepk. """ # Generically, GalSim propagates stepk for convolutions using # stepk = sum(s**-2 for s in stepks)**(-0.5) # We're not actually doing convolution between screens here, though. In fact, the right # relation for Kolmogorov screens uses exponents -5./3 and -3./5: # stepk = sum(s**(-5./3) for s in stepks)**(-3./5) # Since most of the layers in a PhaseScreenList are likely to be (nearly) Kolmogorov # screens, we'll use that relation. return np.sum([layer._getStepK(**kwargs)**(-5./3) for layer in self])**(-3./5) def __getstate__(self): d = self.__dict__.copy() d['_pending'] = [] return d class PhaseScreenPSF(GSObject): """A PSF surface brightness profile constructed by integrating over time the instantaneous PSF derived from a set of phase screens and an aperture. There are two equivalent ways to construct a PhaseScreenPSF given a `PhaseScreenList`:: >>> psf = screen_list.makePSF(...) >>> psf = PhaseScreenPSF(screen_list, ...) Computing a PSF from a phase screen also requires an `Aperture` be specified. This can be done either directly via the ``aper`` keyword, or by setting a number of keywords that will be passed to the `Aperture` constructor. 
The ``aper`` keyword always takes precedence. There are effectively three ways to draw a PhaseScreenPSF (or `GSObject` that includes a PhaseScreenPSF): 1) Fourier optics This is the default, and is performed for all drawImage methods except method='phot'. This is generally the most accurate option. For a `PhaseScreenList` that includes an `AtmosphericScreen`, however, this can be prohibitively slow. For `OpticalPSF`, though, this can sometimes be a good option. 2) Photon-shooting from an image produced using Fourier optics. This is done if geometric_shooting=False when creating the PhaseScreenPSF, and method='phot' when calling drawImage. This actually performs the same calculations as the Fourier optics option above, but then proceeds by shooting photons from that result. This can sometimes be a good option for OpticalPSFs, especially if the same OpticalPSF can be reused for many objects, since the Fourier part of the process would only be performed once in this case. 3) Photon-shooting using the "geometric approximation". This is done if geometric_shooting=True when creating the PhaseScreenPSF, and method='phot' when calling drawImage. In this case, a completely different algorithm is used to make an image. Photons are uniformly generated in the `Aperture` pupil, and then the phase gradient at that location is used to deflect each photon in the image plane. This method, which corresponds to geometric optics, is broadly accurate for phase screens that vary slowly across the aperture, and is usually several orders of magnitude or more faster than Fourier optics (depending on the flux of the object, of course, but much faster even for rather bright flux levels). One shortcoming of this method is that it neglects interference effects, i.e., diffraction. For a `PhaseScreenList` that includes at least one `AtmosphericScreen`, a correction, dubbed the "second kick", will automatically be applied to handle both the quickly varying modes of the screens and the diffraction pattern of the `Aperture`. For PhaseScreenLists without an `AtmosphericScreen`, the correction is simply an Airy function. Note that this correction can be overridden using the second_kick keyword argument, and also tuned to some extent using the kcrit keyword argument. Note also that calling drawImage on a PhaseScreenPSF that uses a `PhaseScreenList` with any uninstantiated `AtmosphericScreen` will perform that instantiation, and that the details of the instantiation depend on the drawing method used, and also the kcrit keyword argument to PhaseScreenPSF. See the `AtmosphericScreen` docstring for more details. Parameters: screen_list: `PhaseScreenList` object from which to create PSF. lam: Wavelength in nanometers at which to compute PSF. t0: Time at which to start exposure in seconds. [default: 0.0] exptime: Time in seconds over which to accumulate evolving instantaneous PSF. [default: 0.0] time_step: Time interval in seconds with which to sample phase screens when drawing using real-space or Fourier methods, or when using photon-shooting without the geometric optics approximation. Note that the default value of 0.025 is fairly arbitrary. For careful studies, we recommend checking that results are stable when decreasing time_step. Also note that when drawing using photon-shooting with the geometric optics approximation this keyword is ignored, as the phase screen can be sampled continuously in this case instead of at discrete intervals.
[default: 0.025] flux: Flux of output PSF [default: 1.0] theta: Field angle of PSF as a 2-tuple of `Angle` instances. [default: (0.0*galsim.arcmin, 0.0*galsim.arcmin)] interpolant: Either an Interpolant instance or a string indicating which interpolant should be used. Options are 'nearest', 'sinc', 'linear', 'cubic', 'quintic', or 'lanczosN' where N should be the integer order to use. [default: galsim.Quintic()] scale_unit: Units to use for the sky coordinates of the output profile. [default: galsim.arcsec] ii_pad_factor: Zero-padding factor by which to extend the image of the PSF when creating the ``InterpolatedImage``. See the ``InterpolatedImage`` docstring for more details. [default: 1.5] suppress_warning: If ``pad_factor`` is too small, the code will emit a warning telling you its best guess about how high you might want to raise it. However, you can suppress this warning by using ``suppress_warning=True``. [default: False] geometric_shooting: If True, then when drawing using photon shooting, use geometric optics approximation where the photon angles are derived from the phase screen gradient. If False, then first draw using Fourier optics and then shoot from the derived InterpolatedImage. [default: True] aper: `Aperture` to use to compute PSF(s). [default: None] second_kick: An optional second kick to also convolve by when using geometric photon-shooting. (This can technically be any `GSObject`, though usually it should probably be a SecondKick object). If None, then a good second kick will be chosen automatically based on ``screen_list``. If False, then a second kick won't be applied. [default: None] kcrit: Critical Fourier scale (in units of 1/r0) at which to separate low-k and high-k turbulence. The default value was chosen based on comparisons between Fourier optics and geometric optics with a second kick correction. While most values of kcrit smaller than the default produce similar results, we caution the user to compare the affected geometric PSFs against Fourier optics PSFs carefully before changing this value. [default: 0.2] fft_sign: The sign (+/-) to use in the exponent of the Fourier kernel when evaluating the Fourier optics PSF. As of version 2.3, GalSim uses a plus sign by default, which we believe to be consistent with, for example, how Zemax computes a Fourier optics PSF on DECam. Before version 2.3, the default was a negative sign. Input should be either the string '+' or the string '-'. [default: '+'] gsparams: An optional `GSParams` argument. [default: None] The following are optional keywords to use to setup the aperture if ``aper`` is not provided: Parameters: diam: Aperture diameter in meters. [default: None] circular_pupil: Adopt a circular pupil? [default: True] obscuration: Linear dimension of central obscuration as fraction of aperture linear dimension. [0., 1.). [default: 0.0] nstruts: Number of radial support struts to add to the central obscuration. [default: 0] strut_thick: Thickness of support struts as a fraction of aperture diameter. [default: 0.05] strut_angle: `Angle` made between the vertical and the strut starting closest to it, defined to be positive in the counter-clockwise direction; must be an `Angle` instance. [default: 0. * galsim.degrees] oversampling: Optional oversampling factor *in the image plane* for the PSF eventually constructed using this `Aperture`. Setting ``oversampling < 1`` will produce aliasing in the PSF (not good). [default: 1.0] pad_factor: Additional multiple by which to extend the PSF image to avoid folding. 
[default: 1.0] pupil_plane_im: The GalSim.Image, NumPy array, or name of file containing the pupil plane image, to be used instead of generating one based on the obscuration and strut parameters. [default: None] pupil_angle: If ``pupil_plane_im`` is not None, rotation angle for the pupil plane (positive in the counter-clockwise direction). Must be an `Angle` instance. [default: 0. * galsim.degrees] pupil_plane_scale: Sampling interval in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. The exception is when specifying the pupil arrangement via an image, in which case this keyword can be used to indicate the sampling of that image. See also ``pad_factor`` for adjusting the pupil sampling scale. [default: None] pupil_plane_size: Size in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. See also ``oversampling`` for adjusting the pupil size. [default: None] """ _has_hard_edges = False _is_axisymmetric = False _is_analytic_x = True _is_analytic_k = True _default_iipf = 1.5 def __init__(self, screen_list, lam, t0=0.0, exptime=0.0, time_step=0.025, flux=1.0, theta=(0.0*arcsec, 0.0*arcsec), interpolant=None, scale_unit=arcsec, ii_pad_factor=None, suppress_warning=False, geometric_shooting=True, aper=None, second_kick=None, kcrit=0.2, fft_sign='+', gsparams=None, _force_stepk=0., _force_maxk=0., _bar=None, **kwargs): # Hidden `_bar` kwarg can be used with astropy.console.utils.ProgressBar to print out a # progress bar during long calculations. if not isinstance(screen_list, PhaseScreenList): screen_list = PhaseScreenList(screen_list) if fft_sign not in ['+', '-']: raise GalSimValueError("Invalid fft_sign", fft_sign, allowed_values=['+','-']) self._screen_list = screen_list self.t0 = float(t0) self.lam = float(lam) self.exptime = float(exptime) self.time_step = float(time_step) if aper is None: # Check here for diameter. if 'diam' not in kwargs: raise GalSimIncompatibleValuesError( "Diameter required if aperture not specified directly.", diam=None, aper=aper) aper = Aperture(lam=lam, screen_list=self._screen_list, gsparams=gsparams, **kwargs) elif gsparams is None: gsparams = aper.gsparams else: aper = aper.withGSParams(gsparams) self.aper = aper if not isinstance(theta[0], Angle) or not isinstance(theta[1], Angle): raise TypeError("theta must be 2-tuple of galsim.Angle's.") self.theta = theta self.interpolant = interpolant if isinstance(scale_unit, str): scale_unit = AngleUnit.from_name(scale_unit) self.scale_unit = scale_unit self._gsparams = GSParams.check(gsparams) self.scale = aper._sky_scale(self.lam, self.scale_unit) self._force_stepk = _force_stepk self._force_maxk = _force_maxk self._img = None if self.exptime < 0: raise GalSimRangeError("Cannot integrate PSF for negative time.", self.exptime, 0.) self._ii_pad_factor = ii_pad_factor if ii_pad_factor is not None else self._default_iipf self._bar = _bar if _bar else dict() # with dict() _bar.update() is a trivial no op. 
self._flux = float(flux) self._suppress_warning = suppress_warning self._geometric_shooting = geometric_shooting self._kcrit = kcrit self._fft_sign = fft_sign # We'll set these more intelligently as needed below self._second_kick = second_kick self._screen_list._delayCalculation(self) self._finalized = False @lazy_property def _real_ii(self): ii = InterpolatedImage( self._img, x_interpolant=self.interpolant, _force_stepk=self._force_stepk, _force_maxk=self._force_maxk, pad_factor=self._ii_pad_factor, use_true_center=False, gsparams=self._gsparams) if not self._suppress_warning: specified_stepk = 2*np.pi/(self._img.array.shape[0]*self.scale) observed_stepk = ii.stepk if observed_stepk < specified_stepk: galsim_warn( "The calculated stepk (%g) for PhaseScreenPSF is smaller than what was used " "to build the wavefront (%g). This could lead to aliasing problems. " "Increasing pad_factor is recommended."%(observed_stepk, specified_stepk)) return ii @lazy_property def _dummy_ii(self): # If we need self._ii before we've done _prepareDraw, then build a placeholder that has # roughly the right properties. All we really need is for the stepk and maxk to be # correct, so use the force_ options to set them how we want. if self._force_stepk > 0.: stepk = self._force_stepk else: stepk = self._screen_list._getStepK(lam=self.lam, diam=self.aper.diam, obscuration=self.aper.obscuration, gsparams=self._gsparams) if self._force_maxk > 0.: maxk = self._force_maxk else: maxk = self.aper._getMaxK(self.lam, self.scale_unit) image = _Image(np.array([[self._flux]], dtype=float), _BoundsI(1, 1, 1, 1), PixelScale(1.)) interpolant = 'delta' # Use delta so it doesn't contribute to stepk return InterpolatedImage( image, pad_factor=1.0, x_interpolant=interpolant, _force_stepk=stepk, _force_maxk=maxk) @property def _ii(self): if self._finalized: return self._real_ii else: return self._dummy_ii @property def kcrit(self): """The critical Fourier scale being used for this object. """ return self._kcrit @property def fft_sign(self): """The sign (+/-) to use in the exponent of the Fourier kernel when evaluating the Fourier optics PSF. """ return self._fft_sign @lazy_property def screen_kmax(self): """The maximum k value to use in the screen. Typically `kcrit`/r0. """ r0_500 = self._screen_list.r0_500_effective if r0_500 is None: return np.inf else: r0 = r0_500 * (self.lam/500)**(6./5) return self.kcrit / r0 @lazy_property def second_kick(self): """Make a SecondKick object based on contents of screen_list and aper. """ from .airy import Airy from .second_kick import SecondKick if self._second_kick is None: r0_500 = self._screen_list.r0_500_effective if r0_500 is None: # No AtmosphericScreens in list return Airy(lam=self.lam, diam=self.aper.diam, obscuration=self.aper.obscuration, gsparams=self._gsparams) else: r0 = r0_500 * (self.lam/500.)**(6./5) return SecondKick( self.lam, r0, self.aper.diam, self.aper.obscuration, kcrit=self.kcrit, scale_unit=self.scale_unit, gsparams=self._gsparams) else: return self._second_kick @property def flux(self): """The flux of the profile. """ return self._flux @property def screen_list(self): """The `PhaseScreenList` being used for this object. 
""" return self._screen_list @doc_inherit def withGSParams(self, gsparams=None, **kwargs): if gsparams == self.gsparams: return self gsparams = GSParams.check(gsparams, self.gsparams, **kwargs) aper = self.aper.withGSParams(gsparams) ret = self.__class__.__new__(self.__class__) ret.__dict__.update(self.__dict__) # Make sure we generate fresh versions of any attrs that depend on gsparams for attr in ['second_kick', '_real_ii', '_dummy_ii']: ret.__dict__.pop(attr, None) ret._gsparams = gsparams ret.aper = aper # Make sure we mark that we need to recalculate any previously finalized InterpolatedImage ret._finalized = False ret._screen_list._delayCalculation(ret) ret._img = None return ret def __str__(self): return ("galsim.PhaseScreenPSF(%s, lam=%s, exptime=%s)" % (self._screen_list, self.lam, self.exptime)) def __repr__(self): outstr = ("galsim.PhaseScreenPSF(%r, lam=%r, exptime=%r, flux=%r, aper=%r, theta=%r, " "interpolant=%r, scale_unit=%r, fft_sign=%r, gsparams=%r)") return outstr % (self._screen_list, self.lam, self.exptime, self.flux, self.aper, self.theta, self.interpolant, self.scale_unit, self._fft_sign, self.gsparams) def __eq__(self, other): # Even if two PSFs were generated with different sets of parameters, they will act # identically if their img, interpolant, stepk, maxk, pad_factor, fft_sign and gsparams # match. return (self is other or (isinstance(other, PhaseScreenPSF) and self._screen_list == other._screen_list and self.lam == other.lam and self.aper == other.aper and self.t0 == other.t0 and self.exptime == other.exptime and self.time_step == other.time_step and self._flux == other._flux and self.interpolant == other.interpolant and self._force_stepk == other._force_stepk and self._force_maxk == other._force_maxk and self._ii_pad_factor == other._ii_pad_factor and self._fft_sign == other._fft_sign and self.gsparams == other.gsparams)) def __hash__(self): return hash(("galsim.PhaseScreenPSF", tuple(self._screen_list), self.lam, self.aper, self.t0, self.exptime, self.time_step, self._flux, self.interpolant, self._force_stepk, self._force_maxk, self._ii_pad_factor, self._fft_sign, self.gsparams)) def _prepareDraw(self): # Trigger delayed computation of all pending PSFs. self._screen_list._prepareDraw() def _step(self): """Compute the current instantaneous PSF and add it to the developing integrated PSF.""" from . import fft u = self.aper.u_illuminated v = self.aper.v_illuminated # This is where I need to make sure the screens are instantiated for FFT. self._screen_list.instantiate(check='FFT') wf = self._screen_list._wavefront(u, v, None, self.theta) expwf = np.exp((2j*np.pi/self.lam) * wf) expwf_grid = np.zeros_like(self.aper.illuminated, dtype=np.complex128) expwf_grid[self.aper.illuminated] = expwf # Note fft is '-' and ifft is '+' below if self._fft_sign == '+': ftexpwf = fft.ifft2(expwf_grid, shift_in=True, shift_out=True) else: ftexpwf = fft.fft2(expwf_grid, shift_in=True, shift_out=True) if self._img is None: self._img = np.zeros(self.aper.illuminated.shape, dtype=np.float64) self._img += np.abs(ftexpwf)**2 self._bar.update() def _finalize(self): """Take accumulated integrated PSF image and turn it into a proper GSObject.""" self._img *= self._flux / self._img.sum(dtype=float) b = _BoundsI(1,self.aper.npix,1,self.aper.npix) self._img = _Image(self._img, b, PixelScale(self.scale)) self._finalized = True def __getstate__(self): d = self.__dict__.copy() # The SBProfile is picklable, but it is pretty inefficient, due to the large images being # written as a string. 
Better to pickle the image and remake the InterpolatedImage. d.pop('_dummy_ii',None) d.pop('_real_ii',None) d.pop('second_kick',None) return d def __setstate__(self, d): self.__dict__ = d if not self._finalized: self._screen_list._delayCalculation(self) @property def _maxk(self): return self._ii.maxk @property def _stepk(self): return self._ii.stepk @property def _centroid(self): self._prepareDraw() return self._ii.centroid @property def _positive_flux(self): if self._geometric_shooting: return self._flux else: return self._ii.positive_flux @property def _negative_flux(self): if self._geometric_shooting: return 0. else: return self._ii.negative_flux @property def _flux_per_photon(self): if self._geometric_shooting: return 1. else: return self._calculate_flux_per_photon() @property def _max_sb(self): return self._ii.max_sb def _xValue(self, pos): self._prepareDraw() return self._ii._xValue(pos) def _kValue(self, kpos): self._prepareDraw() return self._ii._kValue(kpos) def _drawReal(self, image, jac=None, offset=(0.,0.), flux_scaling=1.): self._ii._drawReal(image, jac, offset, flux_scaling) def _shoot(self, photons, rng): from .photon_array import PhotonArray if not self._geometric_shooting: self._prepareDraw() return self._ii._shoot(photons, rng) if not photons.hasAllocatedPupil(): self.aper.samplePupil(photons, rng) if not photons.hasAllocatedTimes(): TimeSampler(self.t0, self.exptime).applyTo(photons, rng=rng) u = photons.pupil_u v = photons.pupil_v t = photons.time n_photons = len(photons) # This is where the screens need to be instantiated for drawing with geometric photon # shooting. self._screen_list.instantiate(kmax=self.screen_kmax, check='phot') nm_to_arcsec = 1.e-9 * radians / arcsec if self._fft_sign == '+': nm_to_arcsec *= -1 photons.x, photons.y = self._screen_list._wavefront_gradient(u, v, t, self.theta) photons.x *= nm_to_arcsec photons.y *= nm_to_arcsec photons.flux = self._flux / n_photons if self.second_kick: p2 = PhotonArray(len(photons)) self.second_kick._shoot(p2, rng) photons.convolve(p2, rng) def _drawKImage(self, image, jac=None): self._ii._drawKImage(image, jac) @property def img(self): from .deprecated import depr depr('img', 2.1, '', "This functionality has been removed.") return self._img @property def finalized(self): from .deprecated import depr depr('finalized', 2.1, "This functionality has been removed.") return self._finalized @doc_inherit def withFlux(self, flux): if self._finalized: # Then it's probably not faster to rebuild with a different flux. return self.withScaledFlux(flux / self.flux) else: return PhaseScreenPSF(self._screen_list, lam=self.lam, exptime=self.exptime, flux=flux, aper=self.aper, theta=self.theta, interpolant=self.interpolant, scale_unit=self.scale_unit, gsparams=self.gsparams) class OpticalPSF(GSObject): """A class describing aberrated PSFs due to telescope optics. Its underlying implementation uses an InterpolatedImage to characterize the profile. The diffraction effects are characterized by the diffraction angle, which is a function of the ratio lambda / D, where lambda is the wavelength of the light and D is the diameter of the telescope. The natural unit for this value is radians, which is not normally a convenient unit to use for other `GSObject` dimensions. Assuming that the other sky coordinates you are using are all in arcsec (e.g. 
the pixel scale when you draw the image, the size of the galaxy, etc.), then you should convert this to arcsec as well:: >>> lam = 700 # nm >>> diam = 4.0 # meters >>> lam_over_diam = (lam * 1.e-9) / diam # radians >>> lam_over_diam *= 206265 # Convert to arcsec >>> psf = galsim.OpticalPSF(lam_over_diam, ...) To make this process a bit simpler, we recommend instead providing the wavelength and diameter separately using the parameters ``lam`` (in nm) and ``diam`` (in m). GalSim will then convert this to any of the normal kinds of angular units using the ``scale_unit`` parameter:: >>> psf = galsim.OpticalPSF(lam=lam, diam=diam, scale_unit=galsim.arcsec, ...) When drawing images, the scale_unit should match the unit used for the pixel scale or the WCS. e.g. in this case, a pixel scale of 0.2 arcsec/pixel would be specified as ``pixel_scale=0.2``. Input aberration coefficients are assumed to be supplied in units of wavelength, and correspond to the Zernike polynomials in the Noll convention defined in Noll, J. Opt. Soc. Am. 66, 207-211(1976). For a brief summary of the polynomials, refer to http://en.wikipedia.org/wiki/Zernike_polynomials#Zernike_polynomials. By default, the aberration coefficients indicate the amplitudes of _circular_ Zernike polynomials, which are orthogonal over a circle. If you would like the aberration coefficients to instead be interpretted as the amplitudes of _annular_ Zernike polynomials, which are orthogonal over an annulus (see Mahajan, J. Opt. Soc. Am. 71, 1 (1981)), set the ``annular_zernike`` keyword argument to True. There are two ways to specify the geometry of the pupil plane, i.e., the obscuration disk size and the areas that will be illuminated outside of it. The first way is to use keywords that specify the size of the obscuration, and the nature of the support struts holding up the secondary mirror (or prime focus cage, etc.). These are taken to be rectangular obscurations extending from the outer edge of the pupil to the outer edge of the obscuration disk (or the pupil center if ``obscuration = 0.``). You can specify how many struts there are (evenly spaced in angle), how thick they are as a fraction of the pupil diameter, and what angle they start at relative to the positive y direction. The second way to specify the pupil plane configuration is by passing in an image of it. This can be useful for example if the struts are not evenly spaced or are not radially directed, as is assumed by the simple model for struts described above. In this case, keywords related to struts are ignored; moreover, the ``obscuration`` keyword is used to ensure that the images are properly sampled (so it is still needed), but the keyword is then ignored when using the supplied image of the pupil plane. Note that for complicated pupil configurations, it may be desireable to increase ``pad_factor`` for more fidelity at the expense of slower running time. The ``pupil_plane_im`` that is passed in can be rotated during internal calculations by specifying a ``pupil_angle`` keyword. If you choose to pass in a pupil plane image, it must be a square array in which the image of the pupil is centered. The areas that are illuminated should have some value >0, and the other areas should have a value of precisely zero. Based on what the OpticalPSF class thinks is the required sampling to make the PSF image, the image that is passed in of the pupil plane might be zero-padded during internal calculations. The pixel scale of the pupil plane can be specified in one of three ways. 
In descending order of priority, these are: 1. The ``pupil_plane_scale`` keyword argument (units are meters). 2. The ``pupil_plane_im.scale`` attribute (units are meters). 3. If (1) and (2) are both None, then the scale will be inferred by assuming that the illuminated pixel farthest from the image center is at a physical distance of self.diam/2. Note that if the scale is specified by either (1) or (2) above (which always includes specifying the pupil_plane_im as a filename, since the default scale then will be 1.0), then the lam_over_diam keyword must not be used, but rather the lam and diam keywords are required separately. Finally, to ensure accuracy of calculations using a pupil plane image, we recommend sampling it as finely as possible. As described above, either specify the lam/diam ratio directly in arbitrary units:: >>> optical_psf = galsim.OpticalPSF(lam_over_diam=lam_over_diam, defocus=0., ...) or, use separate keywords for the telescope diameter and wavelength in meters and nanometers, respectively:: >>> optical_psf = galsim.OpticalPSF(lam=lam, diam=diam, defocus=0., ...) Either of these options initializes ``optical_psf`` as an OpticalPSF instance. Parameters: lam_over_diam: Lambda / telescope diameter in the physical units adopted for ``scale`` (user responsible for consistency). Either ``lam_over_diam``, or ``lam`` and ``diam``, must be supplied. lam: Lambda (wavelength) in units of nanometers. Must be supplied with ``diam``, and in this case, image scales (``scale``) should be specified in units of ``scale_unit``. diam : Telescope diameter in units of meters. Must be supplied with ``lam``, and in this case, image scales (``scale``) should be specified in units of ``scale_unit``. tip: Tip in units of incident light wavelength. [default: 0] tilt: Tilt in units of incident light wavelength. [default: 0] defocus: Defocus in units of incident light wavelength. [default: 0] astig1: Astigmatism (like e2) in units of incident light wavelength. [default: 0] astig2: Astigmatism (like e1) in units of incident light wavelength. [default: 0] coma1: Coma along y in units of incident light wavelength. [default: 0] coma2: Coma along x in units of incident light wavelength. [default: 0] trefoil1: Trefoil (one of the arrows along y) in units of incident light wavelength. [default: 0] trefoil2: Trefoil (one of the arrows along x) in units of incident light wavelength. [default: 0] spher: Spherical aberration in units of incident light wavelength. [default: 0] aberrations: Optional keyword, to pass in a list, tuple, or NumPy array of aberrations in units of reference wavelength (ordered according to the Noll convention), rather than passing in individual values for each individual aberration. Note that aberrations[1] is piston (and not aberrations[0], which is unused.) This list can be arbitrarily long to handle Zernike polynomial aberrations of arbitrary order. annular_zernike: Boolean indicating that aberrations specify the amplitudes of annular Zernike polynomials instead of circular Zernike polynomials. [default: False] aper: `Aperture` object to use when creating PSF. [default: None] circular_pupil: Adopt a circular pupil? [default: True] obscuration: Linear dimension of central obscuration as fraction of pupil linear dimension, [0., 1.). This should be specified even if you are providing a ``pupil_plane_im``, since we need an initial value of obscuration to use to figure out the necessary image sampling. 
[default: 0] interpolant: Either an Interpolant instance or a string indicating which interpolant should be used. Options are 'nearest', 'sinc', 'linear', 'cubic', 'quintic', or 'lanczosN' where N should be the integer order to use. [default: galsim.Quintic()] oversampling: Optional oversampling factor for the InterpolatedImage. Setting ``oversampling < 1`` will produce aliasing in the PSF (not good). Usually ``oversampling`` should be somewhat larger than 1. 1.5 is usually a safe choice. [default: 1.5] pad_factor: Additional multiple by which to zero-pad the PSF image to avoid folding compared to what would be employed for a simple `Airy`. Note that ``pad_factor`` may need to be increased for stronger aberrations, i.e. those larger than order unity. [default: 1.5] ii_pad_factor: Zero-padding factor by which to extend the image of the PSF when creating the ``InterpolatedImage``. See the ``InterpolatedImage`` docstring for more details. [default: 1.5] suppress_warning: If ``pad_factor`` is too small, the code will emit a warning telling you its best guess about how high you might want to raise it. However, you can suppress this warning by using ``suppress_warning=True``. [default: False] geometric_shooting: If True, then when drawing using photon shooting, use geometric optics approximation where the photon angles are derived from the phase screen gradient. If False, then first draw using Fourier optics and then shoot from the derived InterpolatedImage. [default: False] flux: Total flux of the profile. [default: 1.] nstruts: Number of radial support struts to add to the central obscuration. [default: 0] strut_thick: Thickness of support struts as a fraction of pupil diameter. [default: 0.05] strut_angle: `Angle` made between the vertical and the strut starting closest to it, defined to be positive in the counter-clockwise direction; must be an `Angle` instance. [default: 0. * galsim.degrees] pupil_plane_im: The GalSim.Image, NumPy array, or name of file containing the pupil plane image, to be used instead of generating one based on the obscuration and strut parameters. [default: None] pupil_angle: If ``pupil_plane_im`` is not None, rotation angle for the pupil plane (positive in the counter-clockwise direction). Must be an `Angle` instance. [default: 0. * galsim.degrees] pupil_plane_scale: Sampling interval in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. The exception is when specifying the pupil arrangement via an image, in which case this keyword can be used to indicate the sampling of that image. See also ``pad_factor`` for adjusting the pupil sampling scale. [default: None] pupil_plane_size: Size in meters to use for the pupil plane array. In most cases, it's a good idea to leave this as None, in which case GalSim will attempt to find a good value automatically. See also ``oversampling`` for adjusting the pupil size. [default: None] scale_unit: Units to use for the sky coordinates when calculating lam/diam if these are supplied separately. Should be either a `galsim.AngleUnit` or a string that can be used to construct one (e.g., 'arcsec', 'radians', etc.). [default: galsim.arcsec] fft_sign: The sign (+/-) to use in the exponent of the Fourier kernel when evaluating the Fourier optics PSF. As of version 2.3, GalSim uses a plus sign by default, which we believe to be consistent with, for example, how Zemax computes a Fourier optics PSF on DECam. 
Before version 2.3, the default was a negative sign. Input should be either the string '+' or the string '-'. [default: '+'] gsparams: An optional `GSParams` argument. [default: None] """ _opt_params = { "diam": float, "defocus": float, "astig1": float, "astig2": float, "coma1": float, "coma2": float, "trefoil1": float, "trefoil2": float, "spher": float, "annular_zernike": bool, "circular_pupil": bool, "obscuration": float, "oversampling": float, "pad_factor": float, "suppress_warning": bool, "interpolant": str, "flux": float, "nstruts": int, "strut_thick": float, "strut_angle": Angle, "pupil_plane_im": str, "pupil_angle": Angle, "pupil_plane_scale": float, "pupil_plane_size": float, "scale_unit": str, "fft_sign": str} _single_params = [{"lam_over_diam": float, "lam": float}] _has_hard_edges = False _is_axisymmetric = False _is_analytic_x = True _is_analytic_k = True _default_iipf = 1.5 # The default ii_pad_factor, since we need to check it for the repr def __init__(self, lam_over_diam=None, lam=None, diam=None, tip=0., tilt=0., defocus=0., astig1=0., astig2=0., coma1=0., coma2=0., trefoil1=0., trefoil2=0., spher=0., aberrations=None, annular_zernike=False, aper=None, circular_pupil=True, obscuration=0., interpolant=None, oversampling=1.5, pad_factor=1.5, ii_pad_factor=None, flux=1., nstruts=0, strut_thick=0.05, strut_angle=0.*radians, pupil_plane_im=None, pupil_plane_scale=None, pupil_plane_size=None, pupil_angle=0.*radians, scale_unit=arcsec, fft_sign='+', gsparams=None, _force_stepk=0., _force_maxk=0., suppress_warning=False, geometric_shooting=False): from .phase_screens import OpticalScreen if fft_sign not in ['+', '-']: raise GalSimValueError("Invalid fft_sign", fft_sign, allowed_values=['+','-']) if isinstance(scale_unit, str): scale_unit = AngleUnit.from_name(scale_unit) # Need to handle lam/diam vs. lam_over_diam here since lam by itself is needed for # OpticalScreen. if lam_over_diam is not None: if lam is not None or diam is not None: raise GalSimIncompatibleValuesError( "If specifying lam_over_diam, then do not specify lam or diam", lam_over_diam=lam_over_diam, lam=lam, diam=diam) # For combination of lam_over_diam and pupil_plane_im with a specified scale, it's # tricky to determine the actual diameter of the pupil needed by Aperture. So for now, # we just disallow this combination. Please feel free to raise an issue at # https://github.com/GalSim-developers/GalSim/issues if you need this functionality. if pupil_plane_im is not None: if isinstance(pupil_plane_im, basestring): # Filename, therefore specific scale exists. raise GalSimIncompatibleValuesError( "If specifying lam_over_diam, then do not specify pupil_plane_im as " "as a filename.", lam_over_diam=lam_over_diam, pupil_plane_im=pupil_plane_im) elif isinstance(pupil_plane_im, Image) and pupil_plane_im.scale is not None: raise GalSimIncompatibleValuesError( "If specifying lam_over_diam, then do not specify pupil_plane_im " "with definite scale attribute.", lam_over_diam=lam_over_diam, pupil_plane_im=pupil_plane_im) elif pupil_plane_scale is not None: raise GalSimIncompatibleValuesError( "If specifying lam_over_diam, then do not specify pupil_plane_scale. ", lam_over_diam=lam_over_diam, pupil_plane_scale=pupil_plane_scale) lam = 500. # Arbitrary diam = lam*1.e-9 / lam_over_diam * radians / scale_unit else: if lam is None or diam is None: raise GalSimIncompatibleValuesError( "If not specifying lam_over_diam, then specify lam AND diam", lam_over_diam=lam_over_diam, lam=lam, diam=diam) # Make the optical screen. 
self._screen = OpticalScreen( diam=diam, defocus=defocus, astig1=astig1, astig2=astig2, coma1=coma1, coma2=coma2, trefoil1=trefoil1, trefoil2=trefoil2, spher=spher, aberrations=aberrations, obscuration=obscuration, annular_zernike=annular_zernike, lam_0=lam) # Make the aperture. if aper is None: aper = Aperture( diam, lam=lam, circular_pupil=circular_pupil, obscuration=obscuration, nstruts=nstruts, strut_thick=strut_thick, strut_angle=strut_angle, oversampling=oversampling, pad_factor=pad_factor, pupil_plane_im=pupil_plane_im, pupil_angle=pupil_angle, pupil_plane_scale=pupil_plane_scale, pupil_plane_size=pupil_plane_size, gsparams=gsparams) self.obscuration = obscuration else: self.obscuration = aper.obscuration # Save for pickling self._lam = float(lam) self._flux = float(flux) self._interpolant = interpolant self._scale_unit = scale_unit self._gsparams = GSParams.check(gsparams) self._suppress_warning = suppress_warning self._geometric_shooting = geometric_shooting self._aper = aper self._force_stepk = _force_stepk self._force_maxk = _force_maxk self._ii_pad_factor = ii_pad_factor if ii_pad_factor is not None else self._default_iipf self._fft_sign = fft_sign @lazy_property def _psf(self): psf = PhaseScreenPSF(PhaseScreenList(self._screen), lam=self._lam, flux=self._flux, aper=self._aper, interpolant=self._interpolant, scale_unit=self._scale_unit, fft_sign=self._fft_sign, gsparams=self._gsparams, suppress_warning=self._suppress_warning, geometric_shooting=self._geometric_shooting, _force_stepk=self._force_stepk, _force_maxk=self._force_maxk, ii_pad_factor=self._ii_pad_factor) psf._prepareDraw() # No need to delay an OpticalPSF. return psf def __str__(self): screen = self._screen s = "galsim.OpticalPSF(lam=%s, diam=%s" % (screen.lam_0, self._aper.diam) if any(screen.aberrations): s += ", aberrations=[" + ",".join(str(ab) for ab in screen.aberrations) + "]" if self._aper._pupil_plane_im is None: s += self._aper._geometry_str() if screen.annular_zernike: s += ", annular_zernike=True" s += ", obscuration=%r"%self.obscuration if self._flux != 1.0: s += ", flux=%s" % self._flux s += ")" return s def __repr__(self): screen = self._screen s = "galsim.OpticalPSF(lam=%r, diam=%r" % (self._lam, self._aper.diam) s += ", aper=%r"%self._aper if any(screen.aberrations): s += ", aberrations=[" + ",".join(repr(ab) for ab in screen.aberrations) + "]" if screen.annular_zernike: s += ", annular_zernike=True" s += ", obscuration=%r"%self.obscuration if self._interpolant != None: s += ", interpolant=%r"%self._interpolant if self._scale_unit != arcsec: s += ", scale_unit=%r"%self._scale_unit if self._fft_sign != '+': s += ", fft_sign='-'" if self._gsparams != GSParams(): s += ", gsparams=%r"%self._gsparams if self._flux != 1.0: s += ", flux=%r" % self._flux if self._force_stepk != 0.: s += ", _force_stepk=%r" % self._force_stepk if self._force_maxk != 0.: s += ", _force_maxk=%r" % self._force_maxk if self._ii_pad_factor != OpticalPSF._default_iipf: s += ", ii_pad_factor=%r" % self._ii_pad_factor s += ")" return s def __eq__(self, other): return (self is other or (isinstance(other, OpticalPSF) and self._lam == other._lam and self._aper == other._aper and self._screen == other._screen and self._flux == other._flux and self._interpolant == other._interpolant and self._scale_unit == other._scale_unit and self._force_stepk == other._force_stepk and self._force_maxk == other._force_maxk and self._ii_pad_factor == other._ii_pad_factor and self._fft_sign == other._fft_sign and self._gsparams == other._gsparams)) def 
__hash__(self): return hash(("galsim.OpticalPSF", self._lam, self._aper, self._screen, self._flux, self._interpolant, self._scale_unit, self._force_stepk, self._force_maxk, self._ii_pad_factor, self._fft_sign, self._gsparams)) def __getstate__(self): # The SBProfile is picklable, but it is pretty inefficient, due to the large images being # written as a string. Better to pickle the psf and remake the PhaseScreenPSF. d = self.__dict__.copy() d.pop('_psf', None) return d def __setstate__(self, d): self.__dict__ = d @property def _maxk(self): return self._psf.maxk @property def _stepk(self): return self._psf.stepk @property def _centroid(self): return self._psf.centroid @property def _positive_flux(self): return self._psf.positive_flux @property def _negative_flux(self): return self._psf.negative_flux @property def _flux_per_photon(self): return self._psf._flux_per_photon @property def _max_sb(self): return self._psf.max_sb @property def fft_sign(self): return self._fft_sign def _xValue(self, pos): return self._psf._xValue(pos) def _kValue(self, kpos): return self._psf._kValue(kpos) def _drawReal(self, image, jac=None, offset=(0.,0.), flux_scaling=1.): self._psf._drawReal(image, jac, offset, flux_scaling) def _shoot(self, photons, rng): self._psf._shoot(photons, rng) def _drawKImage(self, image, jac=None): self._psf._drawKImage(image, jac) @doc_inherit def withFlux(self, flux): screen = self._screen return OpticalPSF( lam=self._lam, diam=self._aper.diam, aper=self._aper, aberrations=screen.aberrations, annular_zernike=screen.annular_zernike, flux=flux, _force_stepk=self._force_stepk, _force_maxk=self._force_maxk, ii_pad_factor=self._ii_pad_factor, fft_sign=self._fft_sign, gsparams=self._gsparams)
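
The docstring above already shows the ``lam``/``diam`` style of construction; the following is a minimal, self-contained sketch (not part of the GalSim source) of building an aberrated OpticalPSF and rendering it. All numeric values are placeholders chosen purely for illustration.

import galsim

# Wavelength in nm and diameter in m; GalSim converts lam/diam into scale_unit (arcsec here).
psf = galsim.OpticalPSF(lam=700.0, diam=4.0,
                        defocus=0.2,           # waves of defocus
                        obscuration=0.3,       # 30% central obscuration
                        nstruts=4,             # four evenly spaced support struts
                        scale_unit=galsim.arcsec)

# Draw with a pixel scale expressed in the same angular unit as scale_unit.
image = psf.drawImage(nx=64, ny=64, scale=0.2)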
/msgraph_beta_sdk-1.0.0a9-py3-none-any.whl/msgraph/generated/device_management/user_experience_analytics_device_scope/trigger_device_scope_action/trigger_device_scope_action_request_builder.py
from __future__ import annotations from dataclasses import dataclass from kiota_abstractions.get_path_parameters import get_path_parameters from kiota_abstractions.method import Method from kiota_abstractions.request_adapter import RequestAdapter from kiota_abstractions.request_information import RequestInformation from kiota_abstractions.request_option import RequestOption from kiota_abstractions.response_handler import ResponseHandler from kiota_abstractions.serialization import Parsable, ParsableFactory from typing import Any, Callable, Dict, List, Optional, TYPE_CHECKING, Union if TYPE_CHECKING: from . import trigger_device_scope_action_post_request_body from ....models import device_scope_action_result from ....models.o_data_errors import o_data_error class TriggerDeviceScopeActionRequestBuilder(): """ Provides operations to call the triggerDeviceScopeAction method. """ def __init__(self,request_adapter: RequestAdapter, path_parameters: Optional[Union[Dict[str, Any], str]] = None) -> None: """ Instantiates a new TriggerDeviceScopeActionRequestBuilder and sets the default values. Args: pathParameters: The raw url or the Url template parameters for the request. requestAdapter: The request adapter to use to execute the requests. """ if path_parameters is None: raise Exception("path_parameters cannot be undefined") if request_adapter is None: raise Exception("request_adapter cannot be undefined") # Url template to use to build the URL for the current request builder self.url_template: str = "{+baseurl}/deviceManagement/userExperienceAnalyticsDeviceScope/triggerDeviceScopeAction" url_tpl_params = get_path_parameters(path_parameters) self.path_parameters = url_tpl_params self.request_adapter = request_adapter async def post(self,body: Optional[trigger_device_scope_action_post_request_body.TriggerDeviceScopeActionPostRequestBody] = None, request_configuration: Optional[TriggerDeviceScopeActionRequestBuilderPostRequestConfiguration] = None) -> Optional[device_scope_action_result.DeviceScopeActionResult]: """ Invoke action triggerDeviceScopeAction Args: body: The request body requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options. Returns: Optional[device_scope_action_result.DeviceScopeActionResult] """ if body is None: raise Exception("body cannot be undefined") request_info = self.to_post_request_information( body, request_configuration ) from ....models.o_data_errors import o_data_error error_mapping: Dict[str, ParsableFactory] = { "4XX": o_data_error.ODataError, "5XX": o_data_error.ODataError, } if not self.request_adapter: raise Exception("Http core is null") from ....models import device_scope_action_result return await self.request_adapter.send_async(request_info, device_scope_action_result.DeviceScopeActionResult, error_mapping) def to_post_request_information(self,body: Optional[trigger_device_scope_action_post_request_body.TriggerDeviceScopeActionPostRequestBody] = None, request_configuration: Optional[TriggerDeviceScopeActionRequestBuilderPostRequestConfiguration] = None) -> RequestInformation: """ Invoke action triggerDeviceScopeAction Args: body: The request body requestConfiguration: Configuration for the request such as headers, query parameters, and middleware options. 
Returns: RequestInformation """ if body is None: raise Exception("body cannot be undefined") request_info = RequestInformation() request_info.url_template = self.url_template request_info.path_parameters = self.path_parameters request_info.http_method = Method.POST request_info.headers["Accept"] = ["application/json"] if request_configuration: request_info.add_request_headers(request_configuration.headers) request_info.add_request_options(request_configuration.options) request_info.set_content_from_parsable(self.request_adapter, "application/json", body) return request_info @dataclass class TriggerDeviceScopeActionRequestBuilderPostRequestConfiguration(): """ Configuration for the request such as headers, query parameters, and middleware options. """ # Request headers headers: Optional[Dict[str, Union[str, List[str]]]] = None # Request options options: Optional[List[RequestOption]] = None
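
As a usage illustration only (the surrounding SDK code is generated by Kiota), the builder above could be driven roughly as follows. Obtaining an authenticated RequestAdapter is assumed and not shown, the raw URL simply restates the endpoint in ``url_template``, and the request-body import mirrors the package layout shown in the file path above (in the generated file it is only a TYPE_CHECKING import).

import asyncio
from msgraph.generated.device_management.user_experience_analytics_device_scope.trigger_device_scope_action import (
    trigger_device_scope_action_post_request_body,
)

async def trigger_scope_action(request_adapter):
    # The constructor accepts either a raw URL string or a dict of URL template parameters.
    builder = TriggerDeviceScopeActionRequestBuilder(
        request_adapter,
        "https://graph.microsoft.com/beta/deviceManagement/userExperienceAnalyticsDeviceScope/triggerDeviceScopeAction")
    body = trigger_device_scope_action_post_request_body.TriggerDeviceScopeActionPostRequestBody()
    # post() builds the RequestInformation via to_post_request_information() and maps
    # 4XX/5XX responses onto ODataError before deserializing a DeviceScopeActionResult.
    return await builder.post(body)

# asyncio.run(trigger_scope_action(adapter))  # 'adapter' must come from your own auth setup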
/pycallibri_ecg-1.0.1.tar.gz/pycallibri_ecg-1.0.1/callibri_ecg/callibri_ecg_lib.py
import ctypes import pathlib import platform import sys _libname = None _filters_lib = None if sys.platform == "win32": arc = platform.architecture() if arc[0].__contains__("64"): _libname = pathlib.Path(__file__).parent.resolve() / "libs" / "x64" / "callibri_utils-x64.dll" _filters_lib = pathlib.Path(__file__).parent.resolve() / "libs" / "x64" / "filters.dll" else: _libname = pathlib.Path(__file__).parent.resolve() / "libs" / "x86" / "callibri_utils-x86.dll" _filters_lib = pathlib.Path(__file__).parent.resolve() / "libs" / "x86" / "filters.dll" elif sys.platform.startswith("linux"): print('Add linux lib') elif sys.platform == "darwin": print('Add macos lib') else: raise FileNotFoundError("This platform (%s) is currently not supported by pycallibri-ecg-lib." % sys.platform) ctypes.windll.LoadLibrary(str(_filters_lib)) _callibri_lib = ctypes.CDLL(str(_libname)) class CallibriMath: def __init__(self, sampling_rate: int, data_window: int, nwins_for_pressure_index: int): callibri_math_lib = ctypes.POINTER(ctypes.c_void_p) self.create_callibri_math_lib = _callibri_lib.createCallibriMathLib self.create_callibri_math_lib.restype = ctypes.POINTER(callibri_math_lib) self.create_callibri_math_lib.argtypes = (ctypes.c_int, ctypes.c_int, ctypes.c_int) self.free_callibri_math_lib = _callibri_lib.freeCallibriMathLib self.free_callibri_math_lib.restype = None self.free_callibri_math_lib.argtypes = (ctypes.POINTER(callibri_math_lib),) self._init_filter = _callibri_lib.CallibriMathLibInitFilter self._init_filter.restype = None self._init_filter.argtypes = (ctypes.POINTER(callibri_math_lib),) self._push_data = _callibri_lib.CallibriMathLibPushData self._push_data.restype = None self._push_data.argtypes = (ctypes.POINTER(callibri_math_lib), ctypes.c_void_p, ctypes.c_size_t) self._process_data_arr = _callibri_lib.CallibriMathLibProcessDataArr self._process_data_arr.restype = None self._process_data_arr.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_rr = _callibri_lib.CallibriMathLibGetRR self._get_rr.restype = ctypes.c_double self._get_rr.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_pressure_index = _callibri_lib.CallibriMathLibGetPressureIndex self._get_pressure_index.restype = ctypes.c_double self._get_pressure_index.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_hr = _callibri_lib.CallibriMathLibGetHR self._get_hr.restype = ctypes.c_double self._get_hr.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_moda = _callibri_lib.CallibriMathLibGetModa self._get_moda.restype = ctypes.c_double self._get_moda.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_ampl_moda = _callibri_lib.CallibriMathLibGetAmplModa self._get_ampl_moda.restype = ctypes.c_double self._get_ampl_moda.argtypes = (ctypes.POINTER(callibri_math_lib),) self._get_variation_dist = _callibri_lib.CallibriMathLibGetVariationDist self._get_variation_dist.restype = ctypes.c_double self._get_variation_dist.argtypes = (ctypes.POINTER(callibri_math_lib),) self._initial_signal_corrupted = _callibri_lib.CallibriMathLibInitialSignalCorrupted self._initial_signal_corrupted.restype = ctypes.c_bool self._initial_signal_corrupted.argtypes = (ctypes.POINTER(callibri_math_lib),) self._reset_data_process = _callibri_lib.CallibriMathLibResetDataProcess self._reset_data_process.restype = None self._reset_data_process.argtypes = (ctypes.POINTER(callibri_math_lib),) self._set_rr_checked = _callibri_lib.CallibriMathLibSetRRchecked self._set_rr_checked.restype = None self._set_rr_checked.argtypes = 
(ctypes.POINTER(callibri_math_lib),) self._set_pressure_average = _callibri_lib.CallibriMathLibSetPressureAverage self._set_pressure_average.restype = None self._set_pressure_average.argtypes = (ctypes.POINTER(callibri_math_lib), ctypes.c_int) self._rr_detected = _callibri_lib.CallibriMathLibRRdetected self._rr_detected.restype = ctypes.c_bool self._rr_detected.argtypes = (ctypes.POINTER(callibri_math_lib),) self._clear_data = _callibri_lib.CallibriMathLibClearData self._clear_data.restype = None self._clear_data.argtypes = (ctypes.POINTER(callibri_math_lib),) self._native_ptr = self.create_callibri_math_lib(sampling_rate, data_window, nwins_for_pressure_index) def init_filter(self): self._init_filter(self._native_ptr) def push_data(self, samples: list): self._push_data(self._native_ptr, (ctypes.c_double * len(samples))(*samples), len(samples)) def process_data_arr(self): self._process_data_arr(self._native_ptr) def get_rr(self) -> float: return self._get_rr(self._native_ptr) def get_pressure_index(self) -> float: return self._get_pressure_index(self._native_ptr) def get_hr(self) -> float: return self._get_hr(self._native_ptr) def get_moda(self) -> float: return self._get_moda(self._native_ptr) def get_ampl_moda(self) -> float: return self._get_ampl_moda(self._native_ptr) def get_variation_dist(self) -> float: return self._get_variation_dist(self._native_ptr) def initial_signal_corrupted(self) -> bool: return self._initial_signal_corrupted(self._native_ptr) def reset_data_process(self): self._reset_data_process(self._native_ptr) def set_rr_checked(self): self._set_rr_checked(self._native_ptr) def set_pressure_average(self, t: int): self._set_pressure_average(self._native_ptr, ctypes.c_int(t)) def rr_detected(self) -> bool: return self._rr_detected(self._native_ptr) def clear_data(self): self._clear_data(self._native_ptr) def __del__(self): if self._native_ptr is not None: self.free_callibri_math_lib(self._native_ptr) self._native_ptr = None
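
A minimal call-sequence sketch for the wrapper above (not shipped with the package): the sampling rate, window sizes, and the random stand-in signal are placeholders, and the processing order simply follows the methods exposed by the native library.

import numpy as np

# 250 Hz sampling, half-second data window, 30 windows for the pressure index (placeholder values).
ecg_math = CallibriMath(sampling_rate=250, data_window=125, nwins_for_pressure_index=30)
ecg_math.init_filter()

signal = np.random.randn(1000)                  # stand-in for a real Callibri ECG stream
for chunk in np.array_split(signal, 40):
    ecg_math.push_data(chunk.tolist())
    ecg_math.process_data_arr()
    if ecg_math.rr_detected():
        print("RR:", ecg_math.get_rr(), "HR:", ecg_math.get_hr())
        ecg_math.set_rr_checked()

ecg_math.clear_data()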
/napari-unicell-0.0.1.post2.tar.gz/napari-unicell-0.0.1.post2/src/napari_unicell/_widget.py
from typing import TYPE_CHECKING from magicgui import magic_factory, magicgui from qtpy.QtWidgets import QHBoxLayout, QPushButton, QWidget from napari.utils.notifications import show_info if TYPE_CHECKING: import napari from typing import List import os join = os.path.join import time import numpy as np from enum import Enum import torch import monai from monai.transforms import Compose, EnsureType, Activations, AsDiscrete from .models.unicell_modules import UniCell from .utils.multi_task_sliding_window_inference import multi_task_sliding_window_inference from .utils.postprocess import watershed_post import time from skimage import io, segmentation, morphology, measure, exposure, transform import pathlib class DownSampleRate(Enum): No_DS = 'No_DS' DS2 = 'DS2' DS4 = 'DS4' DS8 = 'DS' class ModelName(Enum): UniCell = 'unicell' UniNuclei = 'uninuclei' def load_model(model_name, custom_model_path): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # elif model_name == 'swin_unet': # model = monai.networks.nets.SwinUNETR( # img_size=(256, 256), # in_channels=3, # out_channels=3, # feature_size=24, # should be divisible by 12 # spatial_dims=2 # ) # if os.path.isfile(custom_model_path): # checkpoint = torch.load(custom_model_path.resolve(), map_location=torch.device(device)) # elif os.path.isfile(join(os.path.dirname(__file__), 'work_dir/swinunetr/best_Dice_model.pth')): # checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/swinunetr/best_dice_model.pth'), map_location=torch.device(device)) # else: # torch.hub.download_url_to_file('https://zenodo.org/record/6792177/files/best_Dice_model.pth?download=1', join(os.path.dirname(__file__), 'work_dir/swinunetr/best_Dice_model.pth')) # checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/swinunetr/best_Dice_model.pth'), map_location=torch.device(device)) if model_name == 'unicell': model = UniCell(in_channels=3, out_channels=3, regress_class=1, img_size=256).to(device) if os.path.isfile(custom_model_path): checkpoint = torch.load(custom_model_path.resolve(), map_location=torch.device(device)) elif os.path.isfile(join(os.path.dirname(__file__), 'work_dir/unicell/model.pth')): checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/unicell/model.pth'), map_location=torch.device(device)) else: os.makedirs(join(os.path.dirname(__file__), 'work_dir/unicell'), exist_ok=True) torch.hub.download_url_to_file('https://zenodo.org/record/7308987/files/model.pth?download=1', join(os.path.dirname(__file__), 'work_dir/unicell/model.pth')) checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/unicell/model.pth'), map_location=torch.device(device)) elif model_name == 'uninuclei': model = UniCell(in_channels=3, out_channels=3, regress_class=1, img_size=256).to(device) if os.path.isfile(custom_model_path): checkpoint = torch.load(custom_model_path.resolve(), map_location=torch.device(device)) elif os.path.isfile(join(os.path.dirname(__file__), 'work_dir/uninuclei/model.pth')): checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/uninuclei/model.pth'), map_location=torch.device(device)) else: os.makedirs(join(os.path.dirname(__file__), 'work_dir/uninuclei'), exist_ok=True) torch.hub.download_url_to_file('https://zenodo.org/record/7308990/files/model.pth?download=1', join(os.path.dirname(__file__), 'work_dir/unicell/model.pth')) checkpoint = torch.load(join(os.path.dirname(__file__), 'work_dir/uninuclei/model.pth'), map_location=torch.device(device)) 
model.load_state_dict(checkpoint['model_state_dict']) model = model.to(device) model.eval() return model class ExampleQWidget(QWidget): # your QWidget.__init__ can optionally request the napari viewer instance # in one of two ways: # 1. use a parameter called `napari_viewer`, as done here # 2. use a type annotation of 'napari.viewer.Viewer' for any parameter def __init__(self, napari_viewer): super().__init__() self.viewer = napari_viewer btn = QPushButton("Click me!") btn.clicked.connect(self._on_click) self.setLayout(QHBoxLayout()) self.layout().addWidget(btn) def _on_click(self): print("napari has", len(self.viewer.layers), "layers") def normalize_channel(img, lower=0.1, upper=99.9): non_zero_vals = img[np.nonzero(img)] percentiles = np.percentile(non_zero_vals, [lower, upper]) if percentiles[1] - percentiles[0] > 0.001: img_norm = exposure.rescale_intensity(img, in_range=(percentiles[0], percentiles[1]), out_range='uint8') else: img_norm = img return img_norm def preprocess(img_data): if len(img_data.shape) == 2: img_data = np.repeat(np.expand_dims(img_data, axis=-1), 3, axis=-1) elif len(img_data.shape) == 3 and img_data.shape[-1] > 3: img_data = img_data[:,:, :3] else: pass pre_img_data = np.zeros(img_data.shape, dtype=np.uint8) for i in range(3): img_channel_i = img_data[:,:,i] if len(img_channel_i[np.nonzero(img_channel_i)])>0: pre_img_data[:,:,i] = normalize_channel(img_channel_i, lower=0.1, upper=99.9) return pre_img_data def unicell_seg(pre_img_data, model_name, custom_model_path): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = load_model(model_name, custom_model_path) # post_pred = Compose([EnsureType(), Activations(softmax=False), AsDiscrete(threshold=threshold)]) #%% roi_size = (256, 256) sw_batch_size = 8 with torch.no_grad(): t0 = time.time() test_npy01 = pre_img_data/np.max(pre_img_data) test_tensor = torch.from_numpy(np.expand_dims(test_npy01, 0)).permute(0,3,1,2).type(torch.FloatTensor).to(device) # include softmax; output: interior: [B, 3, H, W], dist: [B, 1, H, W] pred_interior, pred_dist = multi_task_sliding_window_inference(test_tensor, roi_size, sw_batch_size, predictor=model) pred_dist_npy = pred_dist.squeeze(1).cpu().numpy() # (B, H, W) pred_interior_npy = pred_interior.cpu().numpy()[:,1] # 1-interior (B, H, W) seg_inst = watershed_post(pred_dist_npy, pred_interior_npy) if np.max(seg_inst)<60000: test_pred_mask = seg_inst.squeeze().astype(np.int16) else: test_pred_mask = seg_inst.squeeze().astype(np.int64) t1 = time.time() # test_pred_mask, _,_ = segmentation.relabel_sequential(test_pred_mask) bw_mask = np.uint8(test_pred_mask>0.2) print(f'Prediction finished; img size = {pre_img_data.shape}; costing: {t1-t0:.2f}s') show_info(f'Prediction finished; img size = {pre_img_data.shape}; costing: {t1-t0:.2f}s') return test_pred_mask, bw_mask # @magicgui(call_button='run segmentation', layout='vertical', # model_name=dict(widget_type='ComboBox', label='select model', choices=['unicell', 'uninuclei'], value='unicell'), # custom_model_path=dict(widget_type='FileEdit', label='custom model path', value=''), # downsample_rate = dict(widget_type='SpinBox', label='downsample rate', value=1, min=1, max=8, step=2), # binary_mask = dict(widget_type='CheckBox', text='binary mask', value=False, tooltip='output binary mask') # ) @magic_factory def unicell_widget(image_layer: "napari.layers.Image", model_name: ModelName, custom_model_path: pathlib.Path, downsample_rate: DownSampleRate) -> List["napari.types.LayerDataTuple"]: print(f"you have selected 
{image_layer}") img_data = image_layer.data img_dim = len(img_data.shape) if downsample_rate.value == 'DS2': if img_dim > 2: img_data_ds = img_data[::2, ::2, :] else: img_data_ds = img_data[::2, ::2] elif downsample_rate.value == 'DS4': if img_dim > 2: img_data_ds = img_data[::4, ::4, :] else: img_data_ds = img_data[::4, ::4] elif downsample_rate.value == 'DS8': if img_dim > 2: img_data_ds = img_data[::8, ::8, :] else: img_data_ds = img_data[::8, ::8] else: img_data_ds = img_data inst_seg, bw_seg = unicell_seg(preprocess(img_data_ds), model_name.value, custom_model_path) if downsample_rate.value != 'No_DS': final_seg = transform.resize(inst_seg, (img_data.shape[0], img_data.shape[1]), order=0, preserve_range=True, anti_aliasing=False).astype(inst_seg.dtype) # final_bw = transform.resize(bw_seg, (img_data.shape[0], img_data.shape[1]), order=0, preserve_range=True, anti_aliasing=False).astype(bw_seg.dtype) else: final_seg = inst_seg # final_bw = bw_seg seg_layer = (final_seg, {"name": f"{image_layer.name}_inst"}, "labels") # bw_layer = (final_bw, {"name": f"{image_layer.name}_bw"}, "labels") return seg_layer # Uses the `autogenerate: true` flag in the plugin manifest # to indicate it should be wrapped as a magicgui to autogenerate # a widget. # def example_function_widget(image_layer: "napari.layers.Image"): # print(f"you have selected {image_layer}")
/ensmallen_graph-0.6.0-cp37-cp37m-manylinux2010_x86_64.whl/ensmallen_graph/datasets/string/thermosynechococcuselongatus.py
from typing import Dict from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph from ...ensmallen_graph import EnsmallenGraph # pylint: disable=import-error def ThermosynechococcusElongatus( directed: bool = False, verbose: int = 2, cache_path: str = "graphs/string", **additional_graph_kwargs: Dict ) -> EnsmallenGraph: """Return new instance of the Thermosynechococcus elongatus graph. The graph is automatically retrieved from the STRING repository. Parameters ------------------- directed: bool = False, Wether to load the graph as directed or undirected. By default false. verbose: int = 2, Wether to show loading bars during the retrieval and building of the graph. cache_path: str = "graphs", Where to store the downloaded graphs. additional_graph_kwargs: Dict, Additional graph kwargs. Returns ----------------------- Instace of Thermosynechococcus elongatus graph. Report --------------------- At the time of rendering these methods (please see datetime below), the graph had the following characteristics: Datetime: 2021-02-02 19:59:01.232609 The undirected graph Thermosynechococcus elongatus has 2458 nodes and 255294 weighted edges, of which none are self-loops. The graph is dense as it has a density of 0.08454 and has 3 connected components, where the component with most nodes has 2454 nodes and the component with the least nodes has 2 nodes. The graph median node degree is 193, the mean node degree is 207.72, and the node degree mode is 4. The top 5 most central nodes are 197221.22294598 (degree 1054), 197221.22295063 (degree 1025), 197221.22294240 (degree 953), 197221.22295650 (degree 929) and 197221.22294647 (degree 807). References --------------------- Please cite the following if you use the data: @article{szklarczyk2019string, title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets}, author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others}, journal={Nucleic acids research}, volume={47}, number={D1}, pages={D607--D613}, year={2019}, publisher={Oxford University Press} } Usage example ---------------------- The usage of this graph is relatively straightforward: .. code:: python # First import the function to retrieve the graph from the datasets from ensmallen_graph.datasets.string import ThermosynechococcusElongatus # Then load the graph graph = ThermosynechococcusElongatus() # Finally, you can do anything with it, for instance, compute its report: print(graph) # If you need to run a link prediction task with validation, # you can split the graph using a connected holdout as follows: train_graph, validation_graph = graph.connected_holdout( # You can use an 80/20 split the holdout, for example. train_size=0.8, # The random state is used to reproduce the holdout. random_state=42, # Wether to show a loading bar. verbose=True ) # Remember that, if you need, you can enable the memory-time trade-offs: train_graph.enable( vector_sources=True, vector_destinations=True, vector_outbounds=True ) # Consider using the methods made available in the Embiggen package # to run graph embedding or link prediction tasks. """ return AutomaticallyRetrievedGraph( graph_name="ThermosynechococcusElongatus", dataset="string", directed=directed, verbose=verbose, cache_path=cache_path, additional_graph_kwargs=additional_graph_kwargs )()
/ensmallen_graph-0.6.0-cp37-cp37m-manylinux2010_x86_64.whl/ensmallen_graph/datasets/networkrepository/pkustk07.py
from typing import Dict from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph from ...ensmallen_graph import EnsmallenGraph # pylint: disable=import-error def Pkustk07( directed: bool = False, verbose: int = 2, cache_path: str = "graphs/networkrepository", **additional_graph_kwargs: Dict ) -> EnsmallenGraph: """Return new instance of the pkustk07 graph. The graph is automatically retrieved from the NetworkRepository repository. Parameters ------------------- directed: bool = False, Wether to load the graph as directed or undirected. By default false. verbose: int = 2, Wether to show loading bars during the retrieval and building of the graph. cache_path: str = "graphs", Where to store the downloaded graphs. additional_graph_kwargs: Dict, Additional graph kwargs. Returns ----------------------- Instace of pkustk07 graph. Report --------------------- At the time of rendering these methods (please see datetime below), the graph had the following characteristics: Datetime: 2021-02-06 12:15:14.729261 The undirected graph pkustk07 has 16860 nodes and 1217832 unweighted edges, of which 16860 are self-loops. The graph is sparse as it has a density of 0.00851 and is connected, as it has a single component. The graph median node degree is 165, the mean node degree is 143.46, and the node degree mode is 165. The top 5 most central nodes are 14742 (degree 267), 14741 (degree 267), 14740 (degree 267), 14688 (degree 267) and 14687 (degree 267). References --------------------- Please cite the following if you use the data: @inproceedings{nr, title = {The Network Data Repository with Interactive Graph Analytics and Visualization}, author={Ryan A. Rossi and Nesreen K. Ahmed}, booktitle = {AAAI}, url={http://networkrepository.com}, year={2015} } Usage example ---------------------- The usage of this graph is relatively straightforward: .. code:: python # First import the function to retrieve the graph from the datasets from ensmallen_graph.datasets.networkrepository import Pkustk07 # Then load the graph graph = Pkustk07() # Finally, you can do anything with it, for instance, compute its report: print(graph) # If you need to run a link prediction task with validation, # you can split the graph using a connected holdout as follows: train_graph, validation_graph = graph.connected_holdout( # You can use an 80/20 split the holdout, for example. train_size=0.8, # The random state is used to reproduce the holdout. random_state=42, # Wether to show a loading bar. verbose=True ) # Remember that, if you need, you can enable the memory-time trade-offs: train_graph.enable( vector_sources=True, vector_destinations=True, vector_outbounds=True ) # Consider using the methods made available in the Embiggen package # to run graph embedding or link prediction tasks. """ return AutomaticallyRetrievedGraph( graph_name="Pkustk07", dataset="networkrepository", directed=directed, verbose=verbose, cache_path=cache_path, additional_graph_kwargs=additional_graph_kwargs )()
/evernote-1.25.3.tar.gz/evernote-1.25.3/lib/thrift/server/TServer.py
import logging import sys import os import traceback import threading import Queue from thrift.Thrift import TProcessor from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol class TServer: """Base interface for a server, which must have a serve method.""" """ 3 constructors for all servers: 1) (processor, serverTransport) 2) (processor, serverTransport, transportFactory, protocolFactory) 3) (processor, serverTransport, inputTransportFactory, outputTransportFactory, inputProtocolFactory, outputProtocolFactory)""" def __init__(self, *args): if (len(args) == 2): self.__initArgs__(args[0], args[1], TTransport.TTransportFactoryBase(), TTransport.TTransportFactoryBase(), TBinaryProtocol.TBinaryProtocolFactory(), TBinaryProtocol.TBinaryProtocolFactory()) elif (len(args) == 4): self.__initArgs__(args[0], args[1], args[2], args[2], args[3], args[3]) elif (len(args) == 6): self.__initArgs__(args[0], args[1], args[2], args[3], args[4], args[5]) def __initArgs__(self, processor, serverTransport, inputTransportFactory, outputTransportFactory, inputProtocolFactory, outputProtocolFactory): self.processor = processor self.serverTransport = serverTransport self.inputTransportFactory = inputTransportFactory self.outputTransportFactory = outputTransportFactory self.inputProtocolFactory = inputProtocolFactory self.outputProtocolFactory = outputProtocolFactory def serve(self): pass class TSimpleServer(TServer): """Simple single-threaded server that just pumps around one transport.""" def __init__(self, *args): TServer.__init__(self, *args) def serve(self): self.serverTransport.listen() while True: client = self.serverTransport.accept() itrans = self.inputTransportFactory.getTransport(client) otrans = self.outputTransportFactory.getTransport(client) iprot = self.inputProtocolFactory.getProtocol(itrans) oprot = self.outputProtocolFactory.getProtocol(otrans) try: while True: self.processor.process(iprot, oprot) except TTransport.TTransportException, tx: pass except Exception, x: logging.exception(x) itrans.close() otrans.close() class TThreadedServer(TServer): """Threaded server that spawns a new thread per each connection.""" def __init__(self, *args, **kwargs): TServer.__init__(self, *args) self.daemon = kwargs.get("daemon", False) def serve(self): self.serverTransport.listen() while True: try: client = self.serverTransport.accept() t = threading.Thread(target = self.handle, args=(client,)) t.setDaemon(self.daemon) t.start() except KeyboardInterrupt: raise except Exception, x: logging.exception(x) def handle(self, client): itrans = self.inputTransportFactory.getTransport(client) otrans = self.outputTransportFactory.getTransport(client) iprot = self.inputProtocolFactory.getProtocol(itrans) oprot = self.outputProtocolFactory.getProtocol(otrans) try: while True: self.processor.process(iprot, oprot) except TTransport.TTransportException, tx: pass except Exception, x: logging.exception(x) itrans.close() otrans.close() class TThreadPoolServer(TServer): """Server with a fixed size pool of threads which service requests.""" def __init__(self, *args, **kwargs): TServer.__init__(self, *args) self.clients = Queue.Queue() self.threads = 10 self.daemon = kwargs.get("daemon", False) def setNumThreads(self, num): """Set the number of worker threads that should be created""" self.threads = num def serveThread(self): """Loop around getting clients from the shared queue and process them.""" while True: try: client = self.clients.get() self.serveClient(client) except Exception, x: 
logging.exception(x) def serveClient(self, client): """Process input/output from a client for as long as possible""" itrans = self.inputTransportFactory.getTransport(client) otrans = self.outputTransportFactory.getTransport(client) iprot = self.inputProtocolFactory.getProtocol(itrans) oprot = self.outputProtocolFactory.getProtocol(otrans) try: while True: self.processor.process(iprot, oprot) except TTransport.TTransportException, tx: pass except Exception, x: logging.exception(x) itrans.close() otrans.close() def serve(self): """Start a fixed number of worker threads and put client into a queue""" for i in range(self.threads): try: t = threading.Thread(target = self.serveThread) t.setDaemon(self.daemon) t.start() except Exception, x: logging.exception(x) # Pump the socket for clients self.serverTransport.listen() while True: try: client = self.serverTransport.accept() self.clients.put(client) except Exception, x: logging.exception(x) class TForkingServer(TServer): """A Thrift server that forks a new process for each request""" """ This is more scalable than the threaded server as it does not cause GIL contention. Note that this has different semantics from the threading server. Specifically, updates to shared variables will no longer be shared. It will also not work on windows. This code is heavily inspired by SocketServer.ForkingMixIn in the Python stdlib. """ def __init__(self, *args): TServer.__init__(self, *args) self.children = [] def serve(self): def try_close(file): try: file.close() except IOError, e: logging.warning(e, exc_info=True) self.serverTransport.listen() while True: client = self.serverTransport.accept() try: pid = os.fork() if pid: # parent # add before collect, otherwise you race w/ waitpid self.children.append(pid) self.collect_children() # Parent must close socket or the connection may not get # closed promptly itrans = self.inputTransportFactory.getTransport(client) otrans = self.outputTransportFactory.getTransport(client) try_close(itrans) try_close(otrans) else: itrans = self.inputTransportFactory.getTransport(client) otrans = self.outputTransportFactory.getTransport(client) iprot = self.inputProtocolFactory.getProtocol(itrans) oprot = self.outputProtocolFactory.getProtocol(otrans) ecode = 0 try: try: while True: self.processor.process(iprot, oprot) except TTransport.TTransportException, tx: pass except Exception, e: logging.exception(e) ecode = 1 finally: try_close(itrans) try_close(otrans) os._exit(ecode) except TTransport.TTransportException, tx: pass except Exception, x: logging.exception(x) def collect_children(self): while self.children: try: pid, status = os.waitpid(0, os.WNOHANG) except os.error: pid = None if pid: self.children.remove(pid) else: break
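
To make the three constructor forms listed in the TServer docstring concrete, here is a sketch of the simplest one, ``(processor, serverTransport)``. ``ExampleService`` and ``ExampleHandler`` are hypothetical names standing in for a Thrift-compiler-generated service and its user-written handler.

from thrift.transport import TSocket

handler = ExampleHandler()                       # hypothetical handler implementing the service interface
processor = ExampleService.Processor(handler)    # hypothetical generated processor
transport = TSocket.TServerSocket(port=9090)

# Two-argument form: the transport and binary-protocol factories are filled in with the
# defaults by TServer.__init__ (see the len(args) == 2 branch above).
server = TSimpleServer(processor, transport)
server.serve()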
/Products.TinyMCE-1.4.3.tar.gz/Products.TinyMCE-1.4.3/Products/TinyMCE/skins/tinymce/plugins/xhtmlxtras/langs/hy_dlg.js
tinyMCE.addI18n('hy.xhtmlxtras_dlg',{"attribs_title":"\u054f\u0565\u0572\u0561\u0564\u0580\u0565\u056c / \u0583\u0578\u0583\u0578\u056d\u0565\u056c \u0561\u057f\u0580\u056b\u0562\u0578\u0582\u057f\u0576\u0565\u0580","option_rtl":"\u0531\u057b\u056b\u0581 \u0571\u0561\u056d","option_ltr":"\u0541\u0561\u056d\u056b\u0581 \u0561\u057b","insert_date":"\u054f\u0565\u0572\u0561\u0564\u0580\u0565\u056c \u0568\u0576\u0569\u0561\u0581\u056b\u056f \u0561\u0574\u057d\u0561\u0569\u056b\u057e\u0568 / \u056a\u0561\u0574\u0568",remove:"\u0540\u0565\u057c\u0561\u0581\u0576\u0565\u056c","title_cite_element":"Citation Element","title_abbr_element":"Abbreviation Element","title_acronym_element":"Acronym Element","title_del_element":"Deletion Element","title_ins_element":"Insertion Element","fieldset_events_tab":"Element Events","fieldset_attrib_tab":"Element Attributes","fieldset_general_tab":"\u0538\u0576\u0564\u0570\u0561\u0576\u0578\u0582\u0580 \u057a\u0561\u0580\u0561\u0574\u0565\u057f\u0580\u0565\u0580","events_tab":"\u0534\u0565\u057a\u0584\u0565\u0580","attrib_tab":"\u0531\u057f\u0580\u056b\u0562\u0578\u0582\u057f\u0576\u0565\u0580","general_tab":"\u0540\u056b\u0574\u0576\u0561\u056f\u0561\u0576","attribute_attrib_tab":"\u0531\u0568\u0580\u056b\u0562\u0578\u0582\u057f\u0576\u0565\u0580","attribute_events_tab":"\u0534\u0565\u057a\u0584\u0565\u0580","attribute_label_accesskey":"AccessKey","attribute_label_tabindex":"TabIndex","attribute_label_langcode":"\u053c\u0565\u0566\u0578\u0582","attribute_option_rtl":"\u0531\u057b\u056b\u0581 \u0571\u0561\u056d","attribute_option_ltr":"\u0541\u0561\u056d\u056b\u0581 \u0561\u057b","attribute_label_langdir":"\u054f\u0565\u0584\u057d\u057f\u056b \u0578\u0582\u0572\u0572\u0578\u0582\u0569\u0575\u0578\u0582\u0576","attribute_label_datetime":"\u0531\u0574\u057d\u0561\u0569\u056b\u057e / \u053a\u0561\u0574\u0561\u0576\u0561\u056f","attribute_label_cite":"\u0544\u0565\u056f\u0576\u0561\u0562\u0561\u0576\u0578\u0582\u0569\u0575\u0578\u0582\u0576","attribute_label_style":"\u0548\u0573","attribute_label_class":"\u0534\u0561\u057d","attribute_label_id":"ID","attribute_label_title":"\u054e\u0565\u0580\u0576\u0561\u0563\u056b\u0580"});
/zensols.zotsite-0.8.1-py3-none-any.whl/zensols/zotsite/resources/site/lib/js/bootstrap-treeview.min.js
!function(a,b,c,d){"use strict";var e="treeview",f={};f.settings={injectStyle:!0,levels:2,expandIcon:"glyphicon glyphicon-plus",collapseIcon:"glyphicon glyphicon-minus",emptyIcon:"glyphicon",nodeIcon:"",selectedIcon:"",checkedIcon:"glyphicon glyphicon-check",uncheckedIcon:"glyphicon glyphicon-unchecked",color:d,backColor:d,borderColor:d,onhoverColor:"#F5F5F5",selectedColor:"#FFFFFF",selectedBackColor:"#428bca",searchResultColor:"#D9534F",searchResultBackColor:d,enableLinks:!1,highlightSelected:!0,highlightSearchResults:!0,showBorder:!0,showIcon:!0,showCheckbox:!1,showTags:!1,multiSelect:!1,onNodeChecked:d,onNodeCollapsed:d,onNodeDisabled:d,onNodeEnabled:d,onNodeExpanded:d,onNodeSelected:d,onNodeUnchecked:d,onNodeUnselected:d,onSearchComplete:d,onSearchCleared:d},f.options={silent:!1,ignoreChildren:!1},f.searchOptions={ignoreCase:!0,exactMatch:!1,revealResults:!0};var g=function(b,c){return this.$element=a(b),this.elementId=b.id,this.styleId=this.elementId+"-style",this.init(c),{options:this.options,init:a.proxy(this.init,this),remove:a.proxy(this.remove,this),getNode:a.proxy(this.getNode,this),getParent:a.proxy(this.getParent,this),getSiblings:a.proxy(this.getSiblings,this),getSelected:a.proxy(this.getSelected,this),getUnselected:a.proxy(this.getUnselected,this),getExpanded:a.proxy(this.getExpanded,this),getCollapsed:a.proxy(this.getCollapsed,this),getChecked:a.proxy(this.getChecked,this),getUnchecked:a.proxy(this.getUnchecked,this),getDisabled:a.proxy(this.getDisabled,this),getEnabled:a.proxy(this.getEnabled,this),selectNode:a.proxy(this.selectNode,this),unselectNode:a.proxy(this.unselectNode,this),toggleNodeSelected:a.proxy(this.toggleNodeSelected,this),collapseAll:a.proxy(this.collapseAll,this),collapseNode:a.proxy(this.collapseNode,this),expandAll:a.proxy(this.expandAll,this),expandNode:a.proxy(this.expandNode,this),toggleNodeExpanded:a.proxy(this.toggleNodeExpanded,this),revealNode:a.proxy(this.revealNode,this),checkAll:a.proxy(this.checkAll,this),checkNode:a.proxy(this.checkNode,this),uncheckAll:a.proxy(this.uncheckAll,this),uncheckNode:a.proxy(this.uncheckNode,this),toggleNodeChecked:a.proxy(this.toggleNodeChecked,this),disableAll:a.proxy(this.disableAll,this),disableNode:a.proxy(this.disableNode,this),enableAll:a.proxy(this.enableAll,this),enableNode:a.proxy(this.enableNode,this),toggleNodeDisabled:a.proxy(this.toggleNodeDisabled,this),search:a.proxy(this.search,this),clearSearch:a.proxy(this.clearSearch,this)}};g.prototype.init=function(b){this.tree=[],this.nodes=[],b.data&&("string"==typeof b.data&&(b.data=a.parseJSON(b.data)),this.tree=a.extend(!0,[],b.data),delete 
b.data),this.options=a.extend({},f.settings,b),this.destroy(),this.subscribeEvents(),this.setInitialStates({nodes:this.tree},0),this.render()},g.prototype.remove=function(){this.destroy(),a.removeData(this,e),a("#"+this.styleId).remove()},g.prototype.destroy=function(){this.initialized&&(this.$wrapper.remove(),this.$wrapper=null,this.unsubscribeEvents(),this.initialized=!1)},g.prototype.unsubscribeEvents=function(){this.$element.off("click"),this.$element.off("nodeChecked"),this.$element.off("nodeCollapsed"),this.$element.off("nodeDisabled"),this.$element.off("nodeEnabled"),this.$element.off("nodeExpanded"),this.$element.off("nodeSelected"),this.$element.off("nodeUnchecked"),this.$element.off("nodeUnselected"),this.$element.off("searchComplete"),this.$element.off("searchCleared")},g.prototype.subscribeEvents=function(){this.unsubscribeEvents(),this.$element.on("click",a.proxy(this.clickHandler,this)),"function"==typeof this.options.onNodeChecked&&this.$element.on("nodeChecked",this.options.onNodeChecked),"function"==typeof this.options.onNodeCollapsed&&this.$element.on("nodeCollapsed",this.options.onNodeCollapsed),"function"==typeof this.options.onNodeDisabled&&this.$element.on("nodeDisabled",this.options.onNodeDisabled),"function"==typeof this.options.onNodeEnabled&&this.$element.on("nodeEnabled",this.options.onNodeEnabled),"function"==typeof this.options.onNodeExpanded&&this.$element.on("nodeExpanded",this.options.onNodeExpanded),"function"==typeof this.options.onNodeSelected&&this.$element.on("nodeSelected",this.options.onNodeSelected),"function"==typeof this.options.onNodeUnchecked&&this.$element.on("nodeUnchecked",this.options.onNodeUnchecked),"function"==typeof this.options.onNodeUnselected&&this.$element.on("nodeUnselected",this.options.onNodeUnselected),"function"==typeof this.options.onSearchComplete&&this.$element.on("searchComplete",this.options.onSearchComplete),"function"==typeof this.options.onSearchCleared&&this.$element.on("searchCleared",this.options.onSearchCleared)},g.prototype.setInitialStates=function(b,c){if(b.nodes){c+=1;var d=b,e=this;a.each(b.nodes,function(a,b){b.nodeId=e.nodes.length,b.parentId=d.nodeId,b.hasOwnProperty("selectable")||(b.selectable=!0),b.state=b.state||{},b.state.hasOwnProperty("checked")||(b.state.checked=!1),b.state.hasOwnProperty("disabled")||(b.state.disabled=!1),b.state.hasOwnProperty("expanded")||(!b.state.disabled&&c<e.options.levels&&b.nodes&&b.nodes.length>0?b.state.expanded=!0:b.state.expanded=!1),b.state.hasOwnProperty("selected")||(b.state.selected=!1),e.nodes.push(b),b.nodes&&e.setInitialStates(b,c)})}},g.prototype.clickHandler=function(b){this.options.enableLinks||b.preventDefault();var c=a(b.target),d=this.findNode(c);if(d&&!d.state.disabled){var e=c.attr("class")?c.attr("class").split(" "):[];-1!==e.indexOf("expand-icon")?(this.toggleExpandedState(d,f.options),this.render()):-1!==e.indexOf("check-icon")?(this.toggleCheckedState(d,f.options),this.render()):(d.selectable?this.toggleSelectedState(d,f.options):this.toggleExpandedState(d,f.options),this.render())}},g.prototype.findNode=function(a){var b=a.closest("li.list-group-item").attr("data-nodeid"),c=this.nodes[b];return c||console.log("Error: node does not 
exist"),c},g.prototype.toggleExpandedState=function(a,b){a&&this.setExpandedState(a,!a.state.expanded,b)},g.prototype.setExpandedState=function(b,c,d){c!==b.state.expanded&&(c&&b.nodes?(b.state.expanded=!0,d.silent||this.$element.trigger("nodeExpanded",a.extend(!0,{},b))):c||(b.state.expanded=!1,d.silent||this.$element.trigger("nodeCollapsed",a.extend(!0,{},b)),b.nodes&&!d.ignoreChildren&&a.each(b.nodes,a.proxy(function(a,b){this.setExpandedState(b,!1,d)},this))))},g.prototype.toggleSelectedState=function(a,b){a&&this.setSelectedState(a,!a.state.selected,b)},g.prototype.setSelectedState=function(b,c,d){c!==b.state.selected&&(c?(this.options.multiSelect||a.each(this.findNodes("true","g","state.selected"),a.proxy(function(a,b){this.setSelectedState(b,!1,d)},this)),b.state.selected=!0,d.silent||this.$element.trigger("nodeSelected",a.extend(!0,{},b))):(b.state.selected=!1,d.silent||this.$element.trigger("nodeUnselected",a.extend(!0,{},b))))},g.prototype.toggleCheckedState=function(a,b){a&&this.setCheckedState(a,!a.state.checked,b)},g.prototype.setCheckedState=function(b,c,d){c!==b.state.checked&&(c?(b.state.checked=!0,d.silent||this.$element.trigger("nodeChecked",a.extend(!0,{},b))):(b.state.checked=!1,d.silent||this.$element.trigger("nodeUnchecked",a.extend(!0,{},b))))},g.prototype.setDisabledState=function(b,c,d){c!==b.state.disabled&&(c?(b.state.disabled=!0,this.setExpandedState(b,!1,d),this.setSelectedState(b,!1,d),this.setCheckedState(b,!1,d),d.silent||this.$element.trigger("nodeDisabled",a.extend(!0,{},b))):(b.state.disabled=!1,d.silent||this.$element.trigger("nodeEnabled",a.extend(!0,{},b))))},g.prototype.render=function(){this.initialized||(this.$element.addClass(e),this.$wrapper=a(this.template.list),this.injectStyle(),this.initialized=!0),this.$element.empty().append(this.$wrapper.empty()),this.buildTree(this.tree,0)},g.prototype.buildTree=function(b,c){if(b){c+=1;var d=this;a.each(b,function(b,e){for(var f=a(d.template.item).addClass("node-"+d.elementId).addClass(e.state.checked?"node-checked":"").addClass(e.state.disabled?"node-disabled":"").addClass(e.state.selected?"node-selected":"").addClass(e.searchResult?"search-result":"").attr("data-nodeid",e.nodeId).attr("style",d.buildStyleOverride(e)),g=0;c-1>g;g++)f.append(d.template.indent);var h=[];if(e.nodes?(h.push("expand-icon"),h.push(e.state.expanded?d.options.collapseIcon:d.options.expandIcon)):h.push(d.options.emptyIcon),f.append(a(d.template.icon).addClass(h.join(" "))),d.options.showIcon){var h=["node-icon"];h.push(e.icon||d.options.nodeIcon),e.state.selected&&(h.pop(),h.push(e.selectedIcon||d.options.selectedIcon||e.icon||d.options.nodeIcon)),f.append(a(d.template.icon).addClass(h.join(" ")))}if(d.options.showCheckbox){var h=["check-icon"];h.push(e.state.checked?d.options.checkedIcon:d.options.uncheckedIcon),f.append(a(d.template.icon).addClass(h.join(" ")))}return f.append(d.options.enableLinks?a(d.template.link).attr("href",e.href).append(e.text):e.text),d.options.showTags&&e.tags&&a.each(e.tags,function(b,c){f.append(a(d.template.badge).append(c))}),d.$wrapper.append(f),e.nodes&&e.state.expanded&&!e.state.disabled?d.buildTree(e.nodes,c):void 0})}},g.prototype.buildStyleOverride=function(a){if(a.state.disabled)return"";var b=a.color,c=a.backColor;return 
this.options.highlightSelected&&a.state.selected&&(this.options.selectedColor&&(b=this.options.selectedColor),this.options.selectedBackColor&&(c=this.options.selectedBackColor)),this.options.highlightSearchResults&&a.searchResult&&!a.state.disabled&&(this.options.searchResultColor&&(b=this.options.searchResultColor),this.options.searchResultBackColor&&(c=this.options.searchResultBackColor)),"color:"+b+";background-color:"+c+";"},g.prototype.injectStyle=function(){this.options.injectStyle&&!c.getElementById(this.styleId)&&a('<style type="text/css" id="'+this.styleId+'"> '+this.buildStyle()+" </style>").appendTo("head")},g.prototype.buildStyle=function(){var a=".node-"+this.elementId+"{";return this.options.color&&(a+="color:"+this.options.color+";"),this.options.backColor&&(a+="background-color:"+this.options.backColor+";"),this.options.showBorder?this.options.borderColor&&(a+="border:1px solid "+this.options.borderColor+";"):a+="border:none;",a+="}",this.options.onhoverColor&&(a+=".node-"+this.elementId+":not(.node-disabled):hover{background-color:"+this.options.onhoverColor+";}"),this.css+a},g.prototype.template={list:'<ul class="list-group"></ul>',item:'<li class="list-group-item"></li>',indent:'<span class="indent"></span>',icon:'<span class="icon"></span>',link:'<a href="#" style="color:inherit;"></a>',badge:'<span class="badge"></span>'},g.prototype.css=".treeview .list-group-item{cursor:pointer}.treeview span.indent{margin-left:10px;margin-right:10px}.treeview span.icon{width:12px;margin-right:5px}.treeview .node-disabled{color:silver;cursor:not-allowed}",g.prototype.getNode=function(a){return this.nodes[a]},g.prototype.getParent=function(a){var b=this.identifyNode(a);return this.nodes[b.parentId]},g.prototype.getSiblings=function(a){var b=this.identifyNode(a),c=this.getParent(b),d=c?c.nodes:this.tree;return d.filter(function(a){return a.nodeId!==b.nodeId})},g.prototype.getSelected=function(){return this.findNodes("true","g","state.selected")},g.prototype.getUnselected=function(){return this.findNodes("false","g","state.selected")},g.prototype.getExpanded=function(){return this.findNodes("true","g","state.expanded")},g.prototype.getCollapsed=function(){return this.findNodes("false","g","state.expanded")},g.prototype.getChecked=function(){return this.findNodes("true","g","state.checked")},g.prototype.getUnchecked=function(){return this.findNodes("false","g","state.checked")},g.prototype.getDisabled=function(){return this.findNodes("true","g","state.disabled")},g.prototype.getEnabled=function(){return this.findNodes("false","g","state.disabled")},g.prototype.selectNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setSelectedState(a,!0,b)},this)),this.render()},g.prototype.unselectNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setSelectedState(a,!1,b)},this)),this.render()},g.prototype.toggleNodeSelected=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.toggleSelectedState(a,b)},this)),this.render()},g.prototype.collapseAll=function(b){var c=this.findNodes("true","g","state.expanded");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setExpandedState(a,!1,b)},this)),this.render()},g.prototype.collapseNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setExpandedState(a,!1,b)},this)),this.render()},g.prototype.expandAll=function(b){if(b=a.extend({},f.options,b),b&&b.levels)this.expandLevels(this.tree,b.levels,b);else{var 
c=this.findNodes("false","g","state.expanded");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setExpandedState(a,!0,b)},this))}this.render()},g.prototype.expandNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setExpandedState(a,!0,b),a.nodes&&b&&b.levels&&this.expandLevels(a.nodes,b.levels-1,b)},this)),this.render()},g.prototype.expandLevels=function(b,c,d){d=a.extend({},f.options,d),a.each(b,a.proxy(function(a,b){this.setExpandedState(b,c>0?!0:!1,d),b.nodes&&this.expandLevels(b.nodes,c-1,d)},this))},g.prototype.revealNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){for(var c=this.getParent(a);c;)this.setExpandedState(c,!0,b),c=this.getParent(c)},this)),this.render()},g.prototype.toggleNodeExpanded=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.toggleExpandedState(a,b)},this)),this.render()},g.prototype.checkAll=function(b){var c=this.findNodes("false","g","state.checked");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setCheckedState(a,!0,b)},this)),this.render()},g.prototype.checkNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setCheckedState(a,!0,b)},this)),this.render()},g.prototype.uncheckAll=function(b){var c=this.findNodes("true","g","state.checked");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setCheckedState(a,!1,b)},this)),this.render()},g.prototype.uncheckNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setCheckedState(a,!1,b)},this)),this.render()},g.prototype.toggleNodeChecked=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.toggleCheckedState(a,b)},this)),this.render()},g.prototype.disableAll=function(b){var c=this.findNodes("false","g","state.disabled");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setDisabledState(a,!0,b)},this)),this.render()},g.prototype.disableNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setDisabledState(a,!0,b)},this)),this.render()},g.prototype.enableAll=function(b){var c=this.findNodes("true","g","state.disabled");this.forEachIdentifier(c,b,a.proxy(function(a,b){this.setDisabledState(a,!1,b)},this)),this.render()},g.prototype.enableNode=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setDisabledState(a,!1,b)},this)),this.render()},g.prototype.toggleNodeDisabled=function(b,c){this.forEachIdentifier(b,c,a.proxy(function(a,b){this.setDisabledState(a,!a.state.disabled,b)},this)),this.render()},g.prototype.forEachIdentifier=function(b,c,d){c=a.extend({},f.options,c),b instanceof Array||(b=[b]),a.each(b,a.proxy(function(a,b){d(this.identifyNode(b),c)},this))},g.prototype.identifyNode=function(a){return"number"==typeof a?this.nodes[a]:a},g.prototype.search=function(b,c){c=a.extend({},f.searchOptions,c),this.clearSearch({render:!1});var d=[];if(b&&b.length>0){c.exactMatch&&(b="^"+b+"$");var e="g";c.ignoreCase&&(e+="i"),d=this.findNodes(b,e),a.each(d,function(a,b){b.searchResult=!0})}return c.revealResults?this.revealNode(d):this.render(),this.$element.trigger("searchComplete",a.extend(!0,{},d)),d},g.prototype.clearSearch=function(b){b=a.extend({},{render:!0},b);var c=a.each(this.findNodes("true","g","searchResult"),function(a,b){b.searchResult=!1});b.render&&this.render(),this.$element.trigger("searchCleared",a.extend(!0,{},c))},g.prototype.findNodes=function(b,c,d){c=c||"g",d=d||"text";var e=this;return a.grep(this.nodes,function(a){var f=e.getNodeValue(a,d);return"string"==typeof f?f.match(new RegExp(b,c)):void 
0})},g.prototype.getNodeValue=function(a,b){var c=b.indexOf(".");if(c>0){var e=a[b.substring(0,c)],f=b.substring(c+1,b.length);return this.getNodeValue(e,f)}return a.hasOwnProperty(b)?a[b].toString():d};var h=function(a){b.console&&b.console.error(a)};a.fn[e]=function(b,c){var d;return this.each(function(){var f=a.data(this,e);"string"==typeof b?f?a.isFunction(f[b])&&"_"!==b.charAt(0)?(c instanceof Array||(c=[c]),d=f[b].apply(f,c)):h("No such method : "+b):h("Not initialized, can not call method : "+b):"boolean"==typeof b?d=f:a.data(this,e,new g(this,a.extend(!0,{},b)))}),d||this}}(jQuery,window,document);
PypiClean
/ActiveReign-1.0.5.tar.gz/ActiveReign-1.0.5/ar3/ops/query/arg_parser.py
import argparse
from os import path
from getpass import getpass


def file_exists(parser, filename):
    if not path.exists(filename):
        parser.error("Input file not found: {}".format(filename))
    return [x.strip() for x in open(filename)]


def query_args(sub_parser):
    query_parser = sub_parser.add_parser("query", help='- Perform LDAP queries on domain')

    # Output / Display Options
    query_parser.add_argument('-t', dest='timeout', type=int, default=3, help='Connection Timeout')
    query_parser.add_argument('-srv', '--ldap-srv', dest='ldap_srv', type=str, default='', help='LDAP Server')

    qtypes = query_parser.add_argument_group("Query Types")
    qtypes.add_argument('--users', dest="users", action='store_true', help="Query domain users")
    qtypes.add_argument('--groups', dest="groups", action='store_true', help="Query domain groups")
    qtypes.add_argument('--computers', dest="computers", action='store_true', help="Query domain computers")
    qtypes.add_argument('--domain', dest="qdomain", action='store_true', help="Query domain information")
    qtypes.add_argument('--trust', dest="trust", action='store_true', help="Enumerate domain trust relationships")
    qtypes.add_argument('--reversible-encryption', dest="reversible_encryption", action='store_true',
                        help="Lookup users with reversible encryption")
    qtypes.add_argument('--pass-never-expire', dest="pass_never_expire", action='store_true',
                        help="Lookup users whose password never expires")
    qtypes.add_argument('--pass-not-required', dest="pass_not_required", action='store_true',
                        help="Lookup users with password not required")
    qtypes.add_argument('--recon', dest="recon", action='store_true',
                        help="Perform recon on the domain and populate the AR3 database for enumeration")
    qtypes.add_argument('--custom', dest="custom", type=str, default='', help="Perform custom query")

    qoptions = query_parser.add_argument_group("Query Options")
    qoptions.add_argument('-q', '--query', dest='query', type=str, default='',
                          help='Specify user, computer, or group to query')
    qoptions.add_argument('-a', dest='attrs', type=str, default='', help='Specify attrs to query')
    qoptions.add_argument('--all', dest='all', action='store_true',
                          help='Enumerate all users (even disabled) or all groups & members')

    auth = query_parser.add_argument_group("Query Authentication")
    auth.add_argument('-id', dest='cred_id', type=int, help='Use creds from db for queries')
    auth.add_argument('-u', dest='user', type=str, default='', required=False, help='Set username (Default=null)')
    auth.add_argument('-d', dest='domain', type=str, default='', help='Domain Name')

    query_pwd = auth.add_mutually_exclusive_group(required=False)
    query_pwd.add_argument('-H', '-hashes', dest='hash', type=str, default='', help='Use Hash for authentication')
    query_pwd.add_argument('-p', dest='passwd', type=str, default='', help='Set password (Default=null)')

    outdata = query_parser.add_argument_group("Output Options")
    outdata.add_argument('-v', '--verbose', dest="verbose", action='store_true',
                         help="Show attribute fields and values")
    outdata.add_argument('--data-only', dest="data_only", action='store_true',
                         help="Show data only (Copy/Paste Format)")
    outdata.add_argument('--parse', dest="parse", action='store_true',
                         help="Parse text fields for sensitive information")

    # Hidden Args: Required for execution methods to work but not applicable to the operational mode
    query_parser.add_argument('--local-auth', dest="local_auth", action='store_true', help=argparse.SUPPRESS)


def parse_attrs(attrs):
    if not attrs:
        return []
    else:
        return attrs.split(",")


def query_arg_mods(args, db_obj, loggers):
    logger = loggers['console']
    args.attrs = parse_attrs(args.attrs)

    if args.hash:
        args.passwd.append(False)
    elif not args.passwd and args.user:
        args.passwd = [getpass("Enter password, or continue with null-value: ")]

    if args.cred_id and not args.user:
        enum_user = db_obj.extract_user(args.cred_id)
        if enum_user:
            args.user = enum_user[0][0]
            args.passwd = enum_user[0][1]
            args.hash = enum_user[0][2]
            args.domain = enum_user[0][3]
        else:
            logger.fail("Unable to gather credentials from db, try again")
            exit(1)

    if args.hash:
        logger.status(['Query Authentication',
                       '{}\\{} (Password: None) (Hash: True)'.format(args.domain, args.user)])
    else:
        logger.status(['Query Authentication',
                       '{}\\{} (Password: {}****) (Hash: False)'.format(args.domain, args.user, args.passwd[:1])])
    return args
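For context, a minimal sketch of how the `query_args` sub-parser above could be wired into a top-level parser. The import path simply mirrors the file layout shown above (`ar3.ops.query.arg_parser`); the sample argv values (user, domain, password) are invented purely for illustration and are not taken from the package.

```python
import argparse

# Hypothetical usage sketch, assuming the package layout shown above.
from ar3.ops.query.arg_parser import query_args

parser = argparse.ArgumentParser(prog="ar3")
subparsers = parser.add_subparsers(dest="mode")
query_args(subparsers)  # registers the "query" sub-command defined above

# Example invocation with made-up values:
args = parser.parse_args(["query", "--users", "-u", "admin", "-d", "corp.local", "-p", "S3cret!"])
print(args.users, args.user, args.domain)  # True admin corp.local
```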
PypiClean
/mindspore_gpu-1.10.0-cp39-cp39-manylinux1_x86_64.whl/mindspore/ops/_op_impl/aicpu/scatter.py
"""Scatter op""" from mindspore.ops.op_info_register import op_info_register, AiCPURegOp, DataType scatter_op_info = AiCPURegOp("Scatter") \ .fusion_type("OPAQUE") \ .input(0, "target", "required") \ .input(1, "dim", "required") \ .input(2, "index", "required") \ .input(3, "src", "required") \ .output(0, "output", "required") \ .dtype_format(DataType.I8_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.I8_Default, DataType.I8_Default) \ .dtype_format(DataType.I16_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.I16_Default, DataType.I16_Default) \ .dtype_format(DataType.I32_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.I32_Default, DataType.I32_Default) \ .dtype_format(DataType.I64_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.I64_Default, DataType.I64_Default) \ .dtype_format(DataType.U8_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.U8_Default, DataType.U8_Default) \ .dtype_format(DataType.U16_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.U16_Default, DataType.U16_Default) \ .dtype_format(DataType.U32_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.U32_Default, DataType.U32_Default) \ .dtype_format(DataType.U64_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.U64_Default, DataType.U64_Default) \ .dtype_format(DataType.F16_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.F16_Default, DataType.F16_Default) \ .dtype_format(DataType.F32_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.F32_Default, DataType.F32_Default) \ .dtype_format(DataType.F64_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.F64_Default, DataType.F64_Default) \ .dtype_format(DataType.BOOL_Default, DataType.I32_Default, \ DataType.I32_Default, DataType.BOOL_Default, DataType.BOOL_Default) \ .dtype_format(DataType.I8_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.I8_Default, DataType.I8_Default) \ .dtype_format(DataType.I16_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.I16_Default, DataType.I16_Default) \ .dtype_format(DataType.I32_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.I32_Default, DataType.I32_Default) \ .dtype_format(DataType.I64_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.I64_Default, DataType.I64_Default) \ .dtype_format(DataType.U8_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.U8_Default, DataType.U8_Default) \ .dtype_format(DataType.U16_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.U16_Default, DataType.U16_Default) \ .dtype_format(DataType.U32_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.U32_Default, DataType.U32_Default) \ .dtype_format(DataType.U64_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.U64_Default, DataType.U64_Default) \ .dtype_format(DataType.F16_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.F16_Default, DataType.F16_Default) \ .dtype_format(DataType.F32_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.F32_Default, DataType.F32_Default) \ .dtype_format(DataType.F64_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.F64_Default, DataType.F64_Default) \ .dtype_format(DataType.BOOL_Default, DataType.I32_Default, \ DataType.I64_Default, DataType.BOOL_Default, DataType.BOOL_Default) \ .get_op_info() @op_info_register(scatter_op_info) def _scatter_aicpu(): """Scatter AiCPU register""" return
PypiClean
/Sutekh-2.0.0.tar.gz/Sutekh-2.0.0/sutekh/base/core/BaseFilters.py
# pylint: disable=super-init-not-called, abstract-method, too-many-lines # the base classes don't have useful __init__ methods, so we # generally don't call __init__ when creating a new filter # not every abstract method is immediately overridden # the module is long, but keeping the filters together is the best # option """Define all the filters provided in sutekh""" # pylint: disable=deprecated-module # We need string.punctation for best_guess_filter import string # pylint: enable=deprecated-module from sqlobject import (SQLObjectNotFound, AND, OR, NOT, LIKE, func, sqlhub, IN as SQLOBJ_IN) from sqlobject.sqlbuilder import (Table, Alias, LEFTJOINOn, Select, SQLTrueClause as TRUE) from .BaseTables import (AbstractCard, CardType, Expansion, RarityPair, PhysicalCardSet, PhysicalCard, Artist, Keyword, Printing, MapPhysicalCardToPhysicalCardSet) from .BaseAdapters import (IAbstractCard, IPhysicalCardSet, IRarityPair, IExpansion, ICardType, IRarity, IArtist, IPrinting, IPrintingName, IKeyword) # Compability Patches # pylint: disable=invalid-name # IN name is from SQLObject def IN(oCol, oListOrSelect): """Check explicitly for empty lists passed to the IN operator. Some databases engines (MySQL) don't handle them so just return False instead. """ if not oListOrSelect: return False return SQLOBJ_IN(oCol, oListOrSelect) # pylint: enable=invalid-name # Filter Base Class class Filter: """Base class for all filters""" types = () @classmethod def get_values(cls): """Used by GUI tools and FilterParser to get/check acceptable values""" # We can't do this as an attribute, since we need a database connection # to fill in the values most times raise NotImplementedError # pragma: no cover # pylint: disable=no-self-use # children need to be able to override this. def involves(self, _oCardSet): """Return true if the filter results change when oCardSet changes""" return self.is_physical_card_only() def select(self, cCardClass): """cCardClass.select(...) applying the filter to the selection.""" return cCardClass.select(self._get_expression(), join=self._get_joins()) def _get_expression(self): """Actual filter expression""" raise NotImplementedError # pragma: no cover def _get_joins(self): """joins needed by the filter""" raise NotImplementedError # pragma: no cover def is_physical_card_only(self): """Return true if this filter only operates on physical cards. Mainly used to handle various corner cases in the gui.""" return 'PhysicalCard' in self.types and \ 'AbstractCard' not in self.types # Collections of Filters class FilterBox(Filter, list): """Base class for filter collections.""" # pylint: disable=protected-access # we delibrately access protected members def _get_joins(self): """The joins required for the composite filter This is the union of the joins of the subfilters """ aJoins = [] for oSubFilter in self: aJoins.extend(oSubFilter._get_joins()) return aJoins def _get_types(self): """Get types for a composite filter. 
This is the intersection of the types of the subfilters """ aTypes = [] if self: for sType in self[0].types: iLen = len([x for x in self if sType in x.types]) if iLen == len(self): aTypes.append(sType) return aTypes def involves(self, oCardSet): """Return true if any of the child results change with oCardSet""" bResult = False for oSubFilter in self: bResult = bResult or oSubFilter.involves(oCardSet) return bResult # We allow protected access here too types = property(fget=lambda self: self._get_types(), doc="types supported by this filter") class FilterAndBox(FilterBox): """AND a list of filters.""" # pylint: disable=protected-access # we intentinally access protected members def _get_expression(self): """Combine filters with AND""" return AND(*[x._get_expression() for x in self]) class FilterOrBox(FilterBox): """OR a list of filters.""" # pylint: disable=protected-access # we intentinally access protected members def _get_expression(self): """Combine filters with OR""" return OR(*[x._get_expression() for x in self]) # NOT Filter class FilterNot(Filter): """NOT (negate) a filter.""" def __init__(self, oSubFilter): self.__oSubFilter = oSubFilter if self.__oSubFilter is None: # Can happen if we're given a filter without values set # We use NotNull, so we end up matching everything self.__oSubFilter = NotNullFilter() def _get_joins(self): """Joins for not is null, as they are used in the sub-select""" return [] # pylint: disable=protected-access # we are delibrately accesing protected members her # and in _get_expression types = property(fget=lambda self: self.__oSubFilter.types, doc="types supported by this filter") def _get_expression(self): """The expression for the NOT filter. We generate a suitable subselect from self._oSubFilter, and negate the results of that. """ # pylint: disable=no-member # SQLObject methods not detected by pylint oExpression = self.__oSubFilter._get_expression() aJoins = self.__oSubFilter._get_joins() if 'AbstractCard' in self.__oSubFilter.types: return NOT(IN(AbstractCard.q.id, Select(AbstractCard.q.id, oExpression, join=aJoins))) if 'PhysicalCard' in self.__oSubFilter.types: return NOT(IN(PhysicalCard.q.id, Select(PhysicalCard.q.id, oExpression, join=aJoins))) if 'PhysicalCardSet' in self.__oSubFilter.types: return NOT(IN(PhysicalCardSet.q.id, Select(PhysicalCardSet.q.id, oExpression, join=aJoins))) raise RuntimeError("FilterNot unable to handle sub-filter type.") class CachedFilter(Filter): """A filter which caches joins and expression lookups""" def __init__(self, oFilter): # pylint: disable=protected-access # We delibrately access the protected members here, as that's # the point self._oSubFilter = oFilter self._oExpression = oFilter._get_expression() self._aJoins = oFilter._get_joins() def _get_expression(self): return self._oExpression def _get_joins(self): return self._aJoins # pylint: disable=protected-access # we are delibrately accesing protected members her types = property(fget=lambda self: self._oSubFilter.types, doc="types supported by this filter") # Null Filter class NullFilter(Filter): """Return everything.""" types = ('AbstractCard', 'PhysicalCard', 'PhysicalCardSet') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): return TRUE # SQLite doesn't like True. Postgres doesn't like 1. 
def _get_joins(self): return [] # NotNullFilter class NotNullFilter(NullFilter): """Return nothing""" # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): return NOT(TRUE) # See Null Filter # Base Classes for Common Filter Idioms class SingleFilter(Filter): """Base class for filters on single items which connect to AbstractCard via a mapping table. Sub-class should set self._oMapTable, self._oMapField and self._oId. """ # pylint: disable=missing-docstring # don't need docstrings for _get_expression & _get_joins def _get_joins(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return [LEFTJOINOn(None, self._oMapTable, AbstractCard.q.id == self._oMapTable.q.abstract_card_id)] def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return self._oIdField == self._oId class MultiFilter(Filter): """Base class for filters on multiple items which connect to AbstractCard via a mapping table. Sub-class should set self._oMapTable, self._oMapField and self._aIds. """ # pylint: disable=missing-docstring # don't need docstrings for _get_expression & _get_joins def _get_joins(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return [LEFTJOINOn(None, self._oMapTable, AbstractCard.q.id == self._oMapTable.q.abstract_card_id)] def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return IN(self._oIdField, self._aIds) class DirectFilter(Filter): """Base class for filters which query AbstractTable directly.""" # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_joins(self): return [] # Useful utiltiy function for filters using with def split_list(aList): """Split a list of 'X with Y' strings into (X, Y) tuples""" aResults = [] for sWithString in aList: try: sVal1, sVal2 = sWithString.split(' with ') aResults.append((sVal1, sVal2)) except ValueError: return [] return aResults def make_table_alias(sTable): """In order to allow multiple filters to be AND together, filters need to create aliases of mapping tables so that, for example: FilterAndBox([DisciplineFilter('dom'), DisciplineFilter('obf')]) produces a list of cards which have both dominate and obfuscate rather than an empty list. The two discipline filters above need to join the abstract card table with two different copies of the mapping table to discipline pairs. 
""" return Alias(sTable) class ExpansionFilter(MultiFilter): """Filter AbstractCard on Expansion name""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, sExpansion): self._aIds = [oP.id for oP in IExpansion(sExpansion).pairs] self._oMapTable = make_table_alias('abs_rarity_pair_map') self._oIdField = self._oMapTable.q.rarity_pair_id class MultiExpansionFilter(MultiFilter): """Filter AbstractCard on multiple Expansion names""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, aExpansions): oPairs = [] for sExp in aExpansions: oPairs += IExpansion(sExp).pairs self._aIds = [oP.id for oP in oPairs] self._oMapTable = make_table_alias('abs_rarity_pair_map') self._oIdField = self._oMapTable.q.rarity_pair_id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return sorted([x.name for x in Expansion.select() if x.name[:5] != 'Promo']) class ExpansionRarityFilter(SingleFilter): """Filter on Expansion & Rarity combo """ types = ('AbstractCard', 'PhysicalCard') def __init__(self, tExpanRarity): """ We use a tuple for Expansion and Rarity here to keep the same calling convention as for the Multi Filter""" sExpansion, sRarity = tExpanRarity self._oId = IRarityPair((IExpansion(sExpansion), IRarity(sRarity))).id self._oMapTable = make_table_alias('abs_rarity_pair_map') self._oIdField = self._oMapTable.q.rarity_pair_id class MultiExpansionRarityFilter(MultiFilter): """Filter on multiple Expansion & Rarity combos""" keyword = "Expansion_with_Rarity" description = "Expansion with Rarity" helptext = "a list of expansions and rarities (each element specified" \ " as an expansion with associated rarity).\nReturns all matching" \ " cards." iswithfilter = True islistfilter = True types = ('AbstractCard', 'PhysicalCard') def __init__(self, aExpansionRarities): """ Called with a list of Expansion + Rarity pairs""" self._aIds = [] if isinstance(aExpansionRarities[0], str): aValues = split_list(aExpansionRarities) else: aValues = aExpansionRarities for sExpansion, sRarity in aValues: self._aIds.append(IRarityPair((IExpansion(sExpansion), IRarity(sRarity))).id) self._oMapTable = make_table_alias('abs_rarity_pair_map') self._oIdField = self._oMapTable.q.rarity_pair_id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): aExpansions = [x.name for x in Expansion.select() if x.name[:5] != 'Promo'] aExpansions.sort() aResults = [] for sExpan in aExpansions: oExpansion = IExpansion(sExpan) aRarities = [x.rarity.name for x in RarityPair.selectBy(expansion=oExpansion)] for sRarity in aRarities: aResults.append(sExpan + ' with ' + sRarity) return aResults class PrintingFilter(DirectFilter): """Filter on Printing Names""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, sPrinting): """We filter for cards which appeared in a specific printing""" # This is a bit messy, but we extract all the physical cards # that belong to the printing, then filter on their abstract # card id's self._aIds = set() oPrinting = IPrinting(sPrinting) for oCard in PhysicalCard.selectBy(printing=oPrinting): self._aIds.add(oCard.abstractCardID) # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject confuses pylint return IN(AbstractCard.q.id, self._aIds) class MultiPrintingFilter(DirectFilter): """Filter on multiple Printings""" keyword 
= "Printing" description = "Non-Default Printing" helptext = "a list of printings.\nReturns all cards that have appeared " \ "in the specific printings." islistfilter = True types = ('AbstractCard', 'PhysicalCard') def __init__(self, aPrintings): """ Called with a list of Printing Names""" self._aIds = set() # See comments on PrintingFilter for sPrint in aPrintings: oPrinting = IPrinting(sPrint) for oCard in PhysicalCard.selectBy(printing=oPrinting): self._aIds.add(oCard.abstractCardID) # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): """We restrict ourselves to non-standard printings, since the standard ones are covered by the expansion filters""" aExpPrint = [IPrintingName(x) for x in Printing.select() if x.expansion.name[:5] != 'Promo' and x.name is not None] return sorted(aExpPrint) # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject confuses pylint return IN(AbstractCard.q.id, self._aIds) class CardTypeFilter(SingleFilter): """Filter on card type""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, sCardType): self._oId = ICardType(sCardType).id self._oMapTable = make_table_alias('abs_type_map') self._oIdField = self._oMapTable.q.card_type_id class MultiCardTypeFilter(MultiFilter): """Filter on multiple card types""" keyword = "CardType" description = "Card Type" helptext = "a list of card types.\nReturns all cards of the given types" islistfilter = True types = ('AbstractCard', 'PhysicalCard') def __init__(self, aCardTypes): self._aIds = [ICardType(x).id for x in aCardTypes] self._oMapTable = make_table_alias('abs_type_map') self._oIdField = self._oMapTable.q.card_type_id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return sorted([x.name for x in CardType.select()]) class ArtistFilter(SingleFilter): """Filter on Card's artist""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, sArtist): self._oId = IArtist(sArtist).id self._oMapTable = make_table_alias('abs_artist_map') self._oIdField = self._oMapTable.q.artist_id class MultiArtistFilter(MultiFilter): """Filter on multiple artists""" keyword = "Artist" islistfilter = True description = "Artist" helptext = "a list of artists\nReturns all cards where one or more of" \ " the specified artists has created art for the card." types = ('AbstractCard', 'PhysicalCard') def __init__(self, aArtists): self._aIds = [IArtist(x).id for x in aArtists] self._oMapTable = make_table_alias('abs_artist_map') self._oIdField = self._oMapTable.q.artist_id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return sorted([x.name for x in Artist.select()]) class KeywordFilter(SingleFilter): """Filter on Card's keyword""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, sKeyword): self._oId = IKeyword(sKeyword).id self._oMapTable = make_table_alias('abs_keyword_map') self._oIdField = self._oMapTable.q.keyword_id class MultiKeywordFilter(MultiFilter): """Filter on multiple keywords""" keyword = "Keyword" islistfilter = True description = "Keyword" helptext = "a list of keywords\nReturns all cards where one or more of" \ " the specified keywords is associated with the card." 
types = ('AbstractCard', 'PhysicalCard') def __init__(self, aKeywords): self._aIds = [IKeyword(x).id for x in aKeywords] self._oMapTable = make_table_alias('abs_keyword_map') self._oIdField = self._oMapTable.q.keyword_id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return sorted([x.keyword for x in Keyword.select()]) class BaseCardTextFilter(DirectFilter): """Base for filters on Card Text This defines the basics of a card text filter, without any special logic for dealing with specially formatted text.""" keyword = "CardText" description = "Card Text" helptext = "the desired card text to search for (% and _ can be used as " \ "wildcards).\nReturns all cards whose text contains this string." istextentry = True types = ('AbstractCard', 'PhysicalCard') def __init__(self, sPattern): self._sPattern = sPattern.lower() # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return LIKE(func.LOWER(AbstractCard.q.text), '%' + self._sPattern + '%') class CardNameFilter(DirectFilter): """Filter on the name of the card""" keyword = "CardName" description = "Card Name" helptext = "the text to be matched against card names (% and _ can be " \ "used as wildcards).\nReturns all cards whose name contains " \ "this string" istextentry = True types = ('AbstractCard', 'PhysicalCard') def __init__(self, sPattern): self.__sPattern = sPattern.lower() # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return LIKE(AbstractCard.q.canonicalName, '%' + self.__sPattern + '%') class PhysicalCardFilter(Filter): """Filter for converting a filter on abstract cards to a filter on physical cards.""" def __init__(self): # Specifies Physical Cards, intended to be anded with other filters pass # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_joins(self): # This is one of the filters allowed to # pass the AbstractCard table as a joining table. # The join is needed so filtering on abstract card properties can work # pylint: disable=no-member # SQLObject methods not detected by pylint oTable = Table('physical_card') return [LEFTJOINOn(None, AbstractCard, AbstractCard.q.id == oTable.abstract_card_id)] def _get_expression(self): return TRUE # SQLite doesn't like True. Postgres doesn't like 1. class AbstractCardFilter(Filter): """Filter for converting a filter on physical cards to a filter on abstract cards.""" # Not used in the gui, as it's quite fragile due to database differences. # Kept for documentation purposes and for use when directly using the # Filters. # Because of how SQL handles NULLs, combining this filter with # FilterNot(PhysicalX) will still only match cards in the PhysicalCard # list. This is hard to fix, partly due to the database differences # mentioned. # # FilterBox([AbstractCardFilter, PhysicalCardFilter, X]) is almost # certainly not going to do the right thing, due to the multiple joins # involved. We should never do that. 
def __init__(self): # speficies AbstractCards, intended to be and'ed with other filters pass # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_joins(self): # pylint: disable=no-member # SQLObject methods not detected by pylint oTable = Table('abstract_card') return [LEFTJOINOn(None, PhysicalCard, PhysicalCard.q.abstractCardID == oTable.id)] def _get_expression(self): return TRUE # See PhysicalCardFilter class CardSetMultiCardCountFilter(DirectFilter): """Filter on number of cards in the Physical Card Set""" keyword = "CardCount" description = "Card Count" helptext = "a list of card numbers from a chosen card set (filters on " \ "number of cards in the Card Set).\nReturns a list of cards " \ "that have the chosen counts in the given card set." isfromfilter = True islistfilter = True types = ('PhysicalCard',) def __init__(self, aData): # aData is a list or tuple of the form (aCounts, sCardSetName) # Selects cards with a count in the range specified by aCounts from # the Physical Card Set sCardSetName # We rely on the joins to limit this to the appropriate card sets # pylint: disable=no-member # SQLObject methods not detected by pylint aIds = [] try: aCounts, aCardSetName = aData if not isinstance(aCardSetName, list): aCardSetName = [aCardSetName] for sCardSetName in aCardSetName: try: oCS = IPhysicalCardSet(sCardSetName) aIds.append(oCS.id) except SQLObjectNotFound: aCounts = [] except ValueError: aCounts = [] # strip whitespace before comparing stuff # aCounts may be a single string, so we can't use 'for x in aCounts' aCounts = {x.strip() for x in list(aCounts)} self._oFilters = [] self._aCardSetIds = aIds self._oZeroQuery = None if '0' in aCounts: aCounts.remove('0') self._oZeroQuery = Select( PhysicalCard.q.abstractCardID, where=IN(MapPhysicalCardToPhysicalCardSet.q.physicalCardSetID, aIds), join=LEFTJOINOn( PhysicalCard, MapPhysicalCardToPhysicalCardSet, PhysicalCard.q.id == MapPhysicalCardToPhysicalCardSet.q.physicalCardID), groupBy=PhysicalCard.q.abstractCardID, having=func.COUNT(PhysicalCard.q.abstractCardID) > 0) if '>30' in aCounts: aCounts.remove('>30') oGreater30Query = Select( PhysicalCard.q.abstractCardID, where=IN(MapPhysicalCardToPhysicalCardSet.q.physicalCardSetID, aIds), join=LEFTJOINOn( PhysicalCard, MapPhysicalCardToPhysicalCardSet, PhysicalCard.q.id == MapPhysicalCardToPhysicalCardSet.q.physicalCardID), groupBy=(PhysicalCard.q.abstractCardID, MapPhysicalCardToPhysicalCardSet.q.physicalCardSetID), having=func.COUNT(PhysicalCard.q.abstractCardID) > 30) self._oFilters.append(oGreater30Query) if aCounts: # SQLite doesn't like strings here, so convert to int oCountFilter = Select( PhysicalCard.q.abstractCardID, where=IN(MapPhysicalCardToPhysicalCardSet.q.physicalCardSetID, aIds), join=LEFTJOINOn( PhysicalCard, MapPhysicalCardToPhysicalCardSet, PhysicalCard.q.id == MapPhysicalCardToPhysicalCardSet.q.physicalCardID), groupBy=(PhysicalCard.q.abstractCardID, MapPhysicalCardToPhysicalCardSet.q.physicalCardSetID), having=IN(func.COUNT(PhysicalCard.q.abstractCardID), [int(x) for x in aCounts])) self._oFilters.append(oCountFilter) # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): # Should this have a more staggered range split? 0..20, 20-30, # 30-40, >40 type thing? 
aCardSets = [x.name for x in PhysicalCardSet.select().orderBy('name')] aValues = [str(x) for x in range(0, 31)] + ['>30'] return (aValues, aCardSets) def _get_expression(self): # We duplicate subselect logic here, rather than letting the database # handle it, because mysql handles this case very poorly, resulting in # horrible performance. This approach, while ugly, is at least # reasonably fast on all the databases we're concerned with. # We create the actual filters here, which filter for cards with the # correct numbers as we can't create the lists in __init__ since # the numbers can change between calls to _get_expression # pylint: disable=no-member # SQLObject methods not detected by pylint aFinalFilters = [] oConn = sqlhub.processConnection if self._oZeroQuery: oQuery = oConn.sqlrepr(self._oZeroQuery) aNonZeroIds = oConn.queryAll(oQuery) aFinalFilters.append(NOT(IN(PhysicalCard.q.abstractCardID, aNonZeroIds))) if self._oFilters: for oFilter in self._oFilters: # OR(*self._oFilters) doesn't do what I expected here, so # we manually fiddle stuff to get the right result oQuery = oConn.sqlrepr(oFilter) aIds = oConn.queryAll(oQuery) aFinalFilters.append(IN(PhysicalCard.q.abstractCardID, aIds)) return OR(*aFinalFilters) def involves(self, oCardSet): return oCardSet.id in self._aCardSetIds class PhysicalExpansionFilter(DirectFilter): """Filter PhysicalCard based on the PhysicalCard expansion""" types = ('PhysicalCard',) # We must be calling this with a PhysicalCardFilter for sensible results, # so we don't need any special join magic def __init__(self, sExpansion): self._aPrintings = [] if sExpansion is not None: iId = IExpansion(sExpansion).id # Find all the printings with this ID self._aPrintings = [x.id for x in Printing.selectBy(expansion=iId)] # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): oTable = Table('physical_card') if self._aPrintings: return IN(oTable.printing_id, self._aPrintings) # None case # pylint: disable=singleton-comparison # Must be a comparison so SQLObject generates the correct SQL return oTable.printing_id == None class MultiPhysicalExpansionFilter(DirectFilter): """Filter PhysicalCard based on a list of PhysicalCard expansions""" keyword = "PhysicalExpansion" description = "Physical Expansion" helptext = "a list of expansions.\nSelects cards with their expansion " \ "set to the chosen expansions.\nThis will return all the " \ "printings in a given expansion." 
types = ('PhysicalCard',) islistfilter = True __sUnspec = ' Unspecified Expansion' def __init__(self, aExpansions): self._aIds = [] self.__bOrUnspec = False for sExpansion in aExpansions: if sExpansion is not None and sExpansion != self.__sUnspec: iId = IExpansion(sExpansion).id self._aIds.extend([x.id for x in Printing.selectBy(expansion=iId)]) else: self.__bOrUnspec = True # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): aExpansions = [cls.__sUnspec] aExpansions.extend(sorted([x.name for x in Expansion.select() if x.name[:5] != 'Promo'])) return aExpansions def _get_expression(self): oTable = Table('physical_card') # None in the IN statement doesn't do the right thing for me # pylint: disable=singleton-comparison # == None syntax required for SQLObject if self.__bOrUnspec and self._aIds: return OR(IN(oTable.printing_id, self._aIds), oTable.printing_id == None) if self.__bOrUnspec: # Psycopg2 doesn't like IN(a, []) constructions return oTable.printing_id == None return IN(oTable.printing_id, self._aIds) class PhysicalPrintingFilter(DirectFilter): """Filter PhysicalCard based on the PhysicalCard printing""" types = ('PhysicalCard',) # We must be calling this with a PhysicalCardFilter for sensible results, # so we don't need any special join magic def __init__(self, sExpPrint): self._iPrintID = None if sExpPrint is not None: oPrinting = IPrinting(sExpPrint) self._iPrintID = oPrinting.id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): oTable = Table('physical_card') return oTable.printing_id == self._iPrintID class MultiPhysicalPrintingFilter(DirectFilter): """Filter PhysicalCard based on a list of PhysicalCard printings.""" keyword = "PhysicalPrinting" description = "Physical Printing" helptext = "a list of printings.\nSelects cards with their printing " \ "set to the chosen printings.\nThis will only return cards " \ "with the specified printings, and will exclude cards from the " \ "same expansion that aren't part of the given printing." 
types = ('PhysicalCard',) islistfilter = True __sUnspec = ' Unspecified Expansion' def __init__(self, aPrintings): self._aIds = [] self.__bOrUnspec = False for sExpPrint in aPrintings: if sExpPrint is not None and sExpPrint != self.__sUnspec: iId = IPrinting(sExpPrint).id self._aIds.append(iId) else: self.__bOrUnspec = True # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): aExpPrint = [cls.__sUnspec] aExpPrint.extend([IPrintingName(x) for x in Printing.select() if x.expansion.name[:5] != 'Promo']) return sorted(aExpPrint) def _get_expression(self): oTable = Table('physical_card') # None in the IN statement doesn't do the right thing for me # pylint: disable=singleton-comparison # == None syntax required for SQLObject if self.__bOrUnspec and self._aIds: return OR(IN(oTable.printing_id, self._aIds), oTable.printing_id == None) if self.__bOrUnspec: # Psycopg2 doesn't like IN(a, []) constructions return oTable.printing_id == None return IN(oTable.printing_id, self._aIds) class PhysicalCardSetFilter(Filter): """Filter on Physical Card Set membership""" types = ('PhysicalCard',) def __init__(self, sName): # Select cards belonging to a PhysicalCardSet self.__iCardSetId = IPhysicalCardSet(sName).id self.__oTable = Table('physical_map') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_joins(self): # The join on the AbstractCard table is needed to enable filtering # physical card sets on abstract card propeties, since the base class # for physical card sets is the mapping table. # This is one of the only filters allowed to join like this # pylint: disable=no-member # SQLObject methods not detected by pylint return [ LEFTJOINOn(None, PhysicalCard, PhysicalCard.q.id == self.__oTable.physical_card_id), LEFTJOINOn(None, AbstractCard, AbstractCard.q.id == PhysicalCard.q.abstractCardID), ] def _get_expression(self): return self.__oTable.physical_card_set_id == self.__iCardSetId def involves(self, oCardSet): return oCardSet.id == self.__iCardSetId class MultiPhysicalCardSetFilter(Filter): """Filter on a list of Physical Card Sets""" keyword = "Card_Sets" description = "Card Sets" helptext = "a list of card sets names\nSelects cards in the " \ "specified sets." 
islistfilter = True types = ('PhysicalCard',) # We don't need the join as in PhysicalCardSetFilter, because this is # never the base filter in the gui def __init__(self, aNames): # Select cards belonging to the PhysicalCardSet self.__aCardSetIds = [] for sName in aNames: try: self.__aCardSetIds.append(IPhysicalCardSet(sName).id) except SQLObjectNotFound: # May happen if config has been edited, or pointed to new # database and so forth, convert to a more informative error raise RuntimeError( "Unable to load Card Set (%s) for filter" % sName) self.__oTable = make_table_alias('physical_map') self.__oPT = Table('physical_card') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): aNames = [] for oCS in PhysicalCardSet.select(): aNames.append(oCS.name) return aNames def _get_joins(self): return [LEFTJOINOn(None, self.__oTable, self.__oPT.id == self.__oTable.q.physical_card_id)] def _get_expression(self): return IN(self.__oTable.q.physical_card_set_id, self.__aCardSetIds) def involves(self, oCardSet): return oCardSet.id in self.__aCardSetIds class MultiPhysicalCardSetMapFilter(Filter): """Filter on a list of Physical Card Sets""" # This does the same join magic as for PhysicalCardSetFilter, so # it can be used for checking other card sets def __init__(self, aNames): # Select cards belonging to the PhysicalCardSet self.__aCardSetIds = [] for sName in aNames: self.__aCardSetIds.append(IPhysicalCardSet(sName).id) self.__oTable = Table('physical_map') # pylint: disable=missing-docstring # don't need docstrings for get_values & _get_joins def _get_joins(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return [ LEFTJOINOn(None, PhysicalCard, PhysicalCard.q.id == self.__oTable.physical_card_id), LEFTJOINOn(None, AbstractCard, AbstractCard.q.id == PhysicalCard.q.abstractCardID), ] def _get_expression(self): return IN(self.__oTable.physical_card_set_id, self.__aCardSetIds) class PhysicalCardSetInUseFilter(Filter): """Filter on a membership of Physical Card Sets marked in use""" keyword = "SetsInUse" description = "In the 'In Use' children of" helptext = "list of card sets\nSelects cards in the Card Sets marked " \ "as in use that are children of the given card sets." islistfilter = True types = ('PhysicalCard',) def __init__(self, aParCardSets): # Select cards belonging to the PhysicalCardSet in use self.__aCardSetIds = [] for oCS in PhysicalCardSet.select(): if oCS.inuse and oCS.parent and oCS.parent.name in aParCardSets: self.__aCardSetIds.append(oCS.id) self.__oTable = make_table_alias('physical_map') self.__oPT = Table('physical_card') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): aInUseCardSets = PhysicalCardSet.selectBy(inuse=True) aParents = set() for oSet in aInUseCardSets: if oSet.parent: aParents.add(oSet.parent.name) return list(aParents) def _get_joins(self): return [LEFTJOINOn(None, self.__oTable, self.__oPT.id == self.__oTable.q.physical_card_id)] def _get_expression(self): return IN(self.__oTable.q.physical_card_set_id, self.__aCardSetIds) def involves(self, oCardSet): return oCardSet.id in self.__aCardSetIds class SpecificCardFilter(DirectFilter): """This filter matches a single card. It is used in the GUI to test if a card is in the filter results set. 
""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, oCard): self.__iCardId = IAbstractCard(oCard).id # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return AbstractCard.q.id == self.__iCardId class SpecificCardIdFilter(DirectFilter): """This filter matches a single card by id.""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, iCardId): self.__iCardId = iCardId # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return AbstractCard.q.id == self.__iCardId class MultiSpecificCardIdFilter(DirectFilter): """This filter matches multiple cards by id.""" types = ('AbstractCard', 'PhysicalCard') def __init__(self, aCardIds): self.__aCardIds = aCardIds # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return IN(AbstractCard.q.id, self.__aCardIds) class SpecificPhysCardIdFilter(DirectFilter): """This filter matches a single physical card by id. It is used in the GUI to test if a card is in the filter results set. """ types = ('PhysicalCard',) def __init__(self, iCardId): self.__iCardId = iCardId # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint return PhysicalCard.q.id == self.__iCardId # Card Set Filters # These filters are designed to select card sets from the database # rather than cards, hence they aren't intended to be joined # base filters, to be subclassed to PhysicalCardSet or AbstractClassSet # as needed class CardSetNameFilter(DirectFilter): """Filters on Card Set Name""" keyword = "CardSetName" description = "Card Set Name" helptext = "the text to be matched against card set names. " \ "(% and _ can be used as wildcards.)\nReturns all card sets " \ "whose name contains the given string." istextentry = True types = ('PhysicalCardSet',) def __init__(self, sPattern): self.__sPattern = sPattern.lower() self.oTable = Table('physical_card_set') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): return LIKE(func.LOWER(self.oTable.name), '%' + self.__sPattern + '%') class CardSetDescriptionFilter(DirectFilter): """Base class for CardSet filters on Card Set Description""" keyword = "CardSetDescription" description = "Card Set Description" helptext = "the text to be matched against card set description. " \ "(% and _ can be used as wildcards.)\nReturns all card sets " \ "containing the given string in the description." 
istextentry = True types = ('PhysicalCardSet',) def __init__(self, sPattern): self.__sPattern = sPattern.lower() # Subclasses will replace this with the correct table self.oTable = Table('physical_card_set') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): return LIKE(func.LOWER(self.oTable.comment), '%' + self.__sPattern + '%') class CardSetAuthorFilter(DirectFilter): """Base class for CardSet filters on Card Set Author""" keyword = "CardSetAuthor" description = "Card Set Author" helptext = "the text to be matched against card set Author. " \ "(% and _ can be used as wildcards.)\nReturns all card sets "\ "whose author includes the given string." istextentry = True types = ('PhysicalCardSet',) def __init__(self, sPattern): self.__sPattern = sPattern.lower() # Subclasses will replace this with the correct table self.oTable = Table('physical_card_set') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): return LIKE(func.LOWER(self.oTable.author), '%' + self.__sPattern + '%') class CardSetAnnotationsFilter(DirectFilter): """Base class for CardSet filters on Card Set Annotations""" keyword = "CardSetAnnotations" description = "Card Set Annotations" helptext = "the text to be matched against card set annotations. " \ "(% and _ can be used as wildcards.)\nReturns all card sets " \ "where the annotations contain the given string." istextentry = True types = ('PhysicalCardSet',) def __init__(self, sPattern): self.__sPattern = sPattern.lower() # Subclasses will replace this with the correct table self.oTable = Table('physical_card_set') # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return '' def _get_expression(self): return LIKE(func.LOWER(self.oTable.annotations), '%' + self.__sPattern + '%') class ParentCardSetFilter(MultiFilter): """Filters on Parent's Card Set""" keyword = "ParentCardSet" description = "Parent Card Set" helptext = "a list names of the parent card sets.\n" \ "Returns all card sets with one of the selected card sets " \ "as a parent." islistfilter = True types = ('PhysicalCardSet',) def __init__(self, aCardSets): # pylint: disable=no-member # SQLObject methods not detected by pylint self._aIds = [IPhysicalCardSet(x).id for x in aCardSets] self._oIdField = PhysicalCardSet.q.parent # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return [x.name for x in PhysicalCardSet.select().orderBy('name')] # Override _get_joins, since we don't join def _get_joins(self): return [] class CSPhysicalCardSetInUseFilter(DirectFilter): """Filter Physical Card Set on inuse status""" keyword = "CSSetsInUse" description = "Card Set Marked as in Use" helptext = "This filter takes no parameters\nSelects those Card Sets " \ " in the Card Set List that are marked as in use." 
types = ('PhysicalCardSet',) # pylint: disable=missing-docstring # don't need docstrings for _get_expression, get_values & _get_joins @classmethod def get_values(cls): return None def _get_expression(self): # pylint: disable=no-member # SQLObject methods not detected by pylint # pylint: disable=singleton-comparison # == True syntax required for SQLObject return PhysicalCardSet.q.inuse == True def best_guess_filter(sName): """Create a filter for selecting close matches to a card name.""" # Set the filter on the Card List to one the does a # Best guess search sFilterString = ' ' + sName.lower() + ' ' # Kill the's in the string sFilterString = sFilterString.replace(' the ', ' ') # Kill commas, as possible issues sFilterString = sFilterString.replace(',', ' ') # Free style punctuation for sPunc in string.punctuation: sFilterString = sFilterString.replace(sPunc, '_') # Stolen semi-concept from soundex - replace vowels with wildcards # Should these be %'s ?? # (Should at least handle the Rotscheck variation as it stands) sFilterString = sFilterString.replace('a', '_') sFilterString = sFilterString.replace('e', '_') sFilterString = sFilterString.replace('i', '_') sFilterString = sFilterString.replace('o', '_') sFilterString = sFilterString.replace('u', '_') # Normalise spaces and Wildcard spaces sFilterString = ' '.join(sFilterString.split()) sFilterString = sFilterString.replace(' ', '%') # Add % on outside sFilterString = '%' + sFilterString + '%' return CardNameFilter(sFilterString) def make_illegal_filter(): """Creates a filter that excludes not legal for tournament play cards. Function to handle the case that the keyword isn't in the database.""" try: # We use MultiKeywordFilter to work around a performance # oddity of sqlite, where IN(a, b) outperforms a == b # for large sets oLegalFilter = FilterNot(MultiKeywordFilter(['not for legal play'])) except SQLObjectNotFound: # Fallback to no filter return NullFilter() return oLegalFilter
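# --- Illustrative sketch (not part of the original module) -------------------
# The pattern transformation performed by best_guess_filter above can be
# checked in isolation; the card name 'Theo Bell' below is only a hypothetical
# example input.
if __name__ == '__main__':
    import string as _string

    def _guess_pattern(sName):
        """Reproduce the wildcard pattern best_guess_filter feeds to CardNameFilter."""
        sPattern = ' ' + sName.lower() + ' '
        sPattern = sPattern.replace(' the ', ' ').replace(',', ' ')
        for sPunc in _string.punctuation:
            sPattern = sPattern.replace(sPunc, '_')
        for sVowel in 'aeiou':
            sPattern = sPattern.replace(sVowel, '_')
        sPattern = ' '.join(sPattern.split()).replace(' ', '%')
        return '%' + sPattern + '%'

    # 'Theo Bell' becomes '%th__%b_ll%', so minor spelling variations still
    # match the SQL LIKE expression built by CardNameFilter.
    print(_guess_pattern('Theo Bell'))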
PypiClean
/fortuna_uq-0.1.0.tar.gz/fortuna_uq-0.1.0/fortuna/model/resnet.py
from functools import partial from typing import Any, Callable, Sequence, Tuple import flax.linen as nn import jax.numpy as jnp from fortuna.typing import Array ModuleDef = Any class ResNetBlock(nn.Module): """ Residual network block. Attributes ---------- filters: int Number of filters. conv: ModuleDef Convolution module. norm: ModuleDef Normalization module. activation: Callable Activation function. strides: Tuple[int, int] Strides. """ filters: int conv: ModuleDef norm: ModuleDef activation: Callable strides: Tuple[int, int] = (1, 1) @nn.compact def __call__(self, x: jnp.ndarray,) -> jnp.ndarray: """ Block forward pass. Parameters ---------- x: jnp.ndarray Block inputs. Returns ------- jnp.ndarray Block outputs. """ residual = x y = self.conv(self.filters, (3, 3), self.strides)(x) y = self.norm()(y) y = self.activation(y) y = self.conv(self.filters, (3, 3))(y) y = self.norm(scale_init=nn.initializers.zeros)(y) if residual.shape != y.shape: residual = self.conv(self.filters, (1, 1), self.strides, name="conv_proj")( residual ) residual = self.norm(name="norm_proj")(residual) return self.activation(residual + y) class BottleneckResNetBlock(nn.Module): """ Bottleneck residual network block. Attributes ---------- filters: int Number of filters. conv: ModuleDef Convolution module. norm: ModuleDef Normalization module. activation: Callable Activation function. strides: Tuple[int, int] Strides. """ filters: int conv: ModuleDef norm: ModuleDef activation: Callable strides: Tuple[int, int] = (1, 1) @nn.compact def __call__(self, x: jnp.ndarray) -> jnp.ndarray: """ Bottleneck block forward pass. Parameters ---------- x: jnp.ndarray Block inputs. Returns ------- jnp.ndarray Block outputs. """ residual = x y = self.conv(self.filters, (1, 1))(x) y = self.norm()(y) y = self.activation(y) y = self.conv(self.filters, (3, 3), self.strides)(y) y = self.norm()(y) y = self.activation(y) y = self.conv(self.filters * 4, (1, 1))(y) y = self.norm(scale_init=nn.initializers.zeros)(y) if residual.shape != y.shape: residual = self.conv( self.filters * 4, (1, 1), self.strides, name="conv_proj" )(residual) residual = self.norm(name="norm_proj")(residual) return self.activation(residual + y) class DeepFeatureExtractorSubNet(nn.Module): """ Deep feature extractor subnetwork. Attributes ---------- stage_sizes: Sequence[int] Sizes for each stage. block_cls: ModuleDef Block class. num_filters: int Number of filters. dtype: Any Layers' dtype. activation: Callable Activation function. conv: ModuleDef Convolution module. """ stage_sizes: Sequence[int] block_cls: ModuleDef num_filters: int = 64 dtype: Any = jnp.float32 activation: Callable = nn.relu conv: ModuleDef = nn.Conv @nn.compact def __call__(self, x: Array, train: bool = True) -> jnp.ndarray: """ Deep feature extractor subnetwork forward pass. Parameters ---------- x: Array Input data. train: bool Whether the call is performed during training. Returns ------- jnp.ndarray Deep feature extractor representation. 
""" conv = partial(self.conv, use_bias=False, dtype=self.dtype) norm = partial( nn.BatchNorm, use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=self.dtype, ) x = conv( self.num_filters, (7, 7), (2, 2), padding=[(3, 3), (3, 3)], name="conv_init" )(x) x = norm(name="bn_init")(x) x = nn.relu(x) x = nn.max_pool(x, (3, 3), strides=(2, 2), padding="SAME") for i, block_size in enumerate(self.stage_sizes): for j in range(block_size): strides = (2, 2) if i > 0 and j == 0 else (1, 1) x = self.block_cls( self.num_filters * 2 ** i, strides=strides, conv=conv, norm=norm, activation=self.activation, )(x) x = jnp.mean(x, axis=(1, 2)) return x class OutputSubNet(nn.Module): """ Output subnetwork. Attributes ---------- output_dim: int Output dimension. """ output_dim: int dtype: Any = jnp.float32 @nn.compact def __call__(self, x: jnp.ndarray, train: bool = True) -> jnp.ndarray: """ Output subnetwork forward pass. Parameters ---------- x: jnp.ndarray Deep feature extractor representation. train: bool Whether the call is performed during training. Returns ------- jnp.ndarray Output of the subnetwork. """ x = nn.Dense(self.output_dim, dtype=self.dtype)(x) x = jnp.asarray(x, self.dtype) return x class ResNet(nn.Module): """ Deep feature extractor subnetwork. Attributes ---------- stage_sizes: Sequence[int] Sizes for each stage. block_cls: ModuleDef Block class. output_dim: int Output dimension. num_filters: int Number of filters. dtype: Any Layers' dtype. activation: Callable Activation function. conv: ModuleDef Convolution module. """ stage_sizes: Sequence[int] block_cls: ModuleDef output_dim: int num_filters: int = 64 dtype: Any = jnp.float32 activation: Callable = nn.relu conv: ModuleDef = nn.Conv def setup(self): self.dfe_subnet = DeepFeatureExtractorSubNet( stage_sizes=self.stage_sizes, block_cls=self.block_cls, num_filters=self.num_filters, dtype=self.dtype, activation=self.activation, conv=self.conv, ) self.output_subnet = OutputSubNet(output_dim=self.output_dim, dtype=self.dtype) def __call__(self, x: Array, train: bool = True) -> jnp.ndarray: """ Forward pass. Parameters ---------- x: Array Input data. train: bool Whether the call is performed during training. Returns ------- jnp.ndarray Outputs. """ x = self.dfe_subnet(x, train) x = self.output_subnet(x, train) return x ResNet18 = partial(ResNet, stage_sizes=[2, 2, 2, 2], block_cls=ResNetBlock) ResNet34 = partial(ResNet, stage_sizes=[3, 4, 6, 3], block_cls=ResNetBlock) ResNet50 = partial(ResNet, stage_sizes=[3, 4, 6, 3], block_cls=BottleneckResNetBlock) ResNet101 = partial(ResNet, stage_sizes=[3, 4, 23, 3], block_cls=BottleneckResNetBlock) ResNet152 = partial(ResNet, stage_sizes=[3, 8, 36, 3], block_cls=BottleneckResNetBlock) ResNet200 = partial(ResNet, stage_sizes=[3, 24, 36, 3], block_cls=BottleneckResNetBlock)
PypiClean
/alipay-python-3.3.17.tar.gz/alipay-python-3.3.17/alipay/aop/api/domain/KoubeiRetailWmsSupplierQueryModel.py
import json from alipay.aop.api.constant.ParamConstants import * from alipay.aop.api.domain.OperateContext import OperateContext class KoubeiRetailWmsSupplierQueryModel(object): def __init__(self): self._operate_context = None self._supplier_ids = None @property def operate_context(self): return self._operate_context @operate_context.setter def operate_context(self, value): if isinstance(value, OperateContext): self._operate_context = value else: self._operate_context = OperateContext.from_alipay_dict(value) @property def supplier_ids(self): return self._supplier_ids @supplier_ids.setter def supplier_ids(self, value): if isinstance(value, list): self._supplier_ids = list() for i in value: self._supplier_ids.append(i) def to_alipay_dict(self): params = dict() if self.operate_context: if hasattr(self.operate_context, 'to_alipay_dict'): params['operate_context'] = self.operate_context.to_alipay_dict() else: params['operate_context'] = self.operate_context if self.supplier_ids: if isinstance(self.supplier_ids, list): for i in range(0, len(self.supplier_ids)): element = self.supplier_ids[i] if hasattr(element, 'to_alipay_dict'): self.supplier_ids[i] = element.to_alipay_dict() if hasattr(self.supplier_ids, 'to_alipay_dict'): params['supplier_ids'] = self.supplier_ids.to_alipay_dict() else: params['supplier_ids'] = self.supplier_ids return params @staticmethod def from_alipay_dict(d): if not d: return None o = KoubeiRetailWmsSupplierQueryModel() if 'operate_context' in d: o.operate_context = d['operate_context'] if 'supplier_ids' in d: o.supplier_ids = d['supplier_ids'] return o
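# --- Usage sketch (illustrative only; the supplier ids below are made up) ----
# Serialise the query model into a plain dict and rebuild an equivalent
# object from that dict.
if __name__ == "__main__":
    model = KoubeiRetailWmsSupplierQueryModel()
    model.supplier_ids = ["2088000000000001", "2088000000000002"]

    params = model.to_alipay_dict()
    print(json.dumps(params))  # {"supplier_ids": ["2088000000000001", ...]}

    rebuilt = KoubeiRetailWmsSupplierQueryModel.from_alipay_dict(params)
    print(rebuilt.supplier_ids)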
PypiClean
/cloud-scheduler-1.13.2.tar.gz/cloud-scheduler-1.13.2/cloudscheduler/configstatus.py
import os import sys import ConfigParser # Cloud Scheduler Status Options Module. # Set default values info_server_port = 8111 def setup(path=None): """Setup cloudscheduler using config file. setup will look for a configuration file specified on the command line, or in ~/.cloudscheduler.conf or /etc/cloudscheduler.conf """ global info_server_port homedir = os.path.expanduser('~') # Find config file if not path: if os.path.exists(homedir + "/.cloudscheduler/cloud_scheduler_status.conf"): path = homedir + "/.cloudscheduler/cloud_scheduler_status.conf" elif os.path.exists("/etc/cloudscheduler/cloud_scheduler_status.conf"): path = "/etc/cloudscheduler/cloud_scheduler_status.conf" elif os.path.exists("/usr/local/share/cloud-scheduler/cloud_scheduler_status.conf"): path = "/usr/local/share/cloud-scheduler/cloud_scheduler_status.conf" else: print >> sys.stderr, "Configuration file problem: There doesn't " \ "seem to be a configuration file. " \ "You can specify one with the --config-file parameter, " \ "or put one in ~/.cloudscheduler/cloud_scheduler_status.conf or "\ "/etc/cloudscheduler/cloud_scheduler_status.conf "\ "Running in full default value mode." return # Read config file config_file = ConfigParser.ConfigParser() try: config_file.read(path) except IOError: print >> sys.stderr, "Configuration file problem: There was a " \ "problem reading %s. Check that it is readable," \ "and that it exists. " % path raise except ConfigParser.ParsingError: print >> sys.stderr, "Configuration file problem: Couldn't " \ "parse your file. Check for spaces before or after variables." raise except: print "Configuration file problem: There is something wrong with " \ "your config file." raise if config_file.has_option("global", "info_server_port"): try: info_server_port = config_file.getint("global", "info_server_port") except ValueError: print "Configuration file problem: info_server_port must be an " \ "integer value." sys.exit(1)
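# --- Example configuration (a sketch; the port number is arbitrary) ----------
# setup() above looks for a file such as
# ~/.cloudscheduler/cloud_scheduler_status.conf containing a [global] section:
#
#     [global]
#     info_server_port = 8112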
PypiClean
/Arky-1.3.1.tar.gz/Arky-1.3.1/arky/ark/aip11.py
from .. import slots from .. import rest from .. import cfg from . import __PY3__ from . import crypto from . import init import struct C = 0.0001*100000000 rest.POST.createEndpoint(rest.POST, rest.post, "/peer/transactions/v1") class Payload: @staticmethod def setArkPerByteFees(value): global C C = value @staticmethod def get(typ, **kw): return crypto.hexlify(getattr(Payload, "type%d"%typ)(**kw)) @staticmethod def type0(**kw): try: recipientId = crypto.base58.b58decode_check(kw["recipientId"]) except: raise Exception("no recipientId defined") return struct.pack("<QI21s" if __PY3__ else ("<QI"+21*"c"), kw.get("amount", 0), kw.get("expiration", 0), recipientId ) @staticmethod def type1(**kw): if "secondSecret" in kw: secondPublicKey = crypto.getKeys(kw["secondSecret"])["publicKey"] elif "secondPublicKey" in kw: secondPublicKey = kw["secondPublicKey"] else: raise Exception("no secondSecret or secondPublicKey given") return struct.pack("<33s", crypto.unhexlify(secondPublicKey)) if __PY3__ else \ struct.pack(33*"c", secondPublicKey) @staticmethod def type2(**kw): username = kw.get("username", False) if username: length = len(username) if 3 <= length <= 255: return struct.pack("<B%ds"%length, length, username.encode()) if __PY3__ else \ struct.pack("<B" + length*"c", length, username) else: raise Exception("bad username length [3-255]: %s" % username) else: raise Exception("no username defined") @staticmethod def type3(**kw): pass def getHeaders(**kw): if "secret" in kw: publicKey = crypto.getKeys(kw["secret"])["publicKey"] elif "publicKey" in kw: publicKey = kw["publicKey"] else: raise Exception("Can not initialize transaction (no secret or publicKey given)") header = struct.pack("<BBBBI", kw.get("head", 0xff), kw.get("version", 0x02), kw.get("network", int(cfg.marker, base=16)), kw.get("type", 0), int(slots.getTime()) ) header += struct.pack("<33s", crypto.unhexlify(publicKey)) if __PY3__ else \ struct.pack(33*"c", publicKey) header += struct.pack("<Q", kw.get("fees", 0)) vendorField = kw.get("vendorField", "") n = min(255, len(vendorField)) header += struct.pack("<B", n) if n > 0: header += struct.pack("<%ss"%n, crypto.unhexlify(publicKey[:n])) if __PY3__ else \ struct.pack(n*"c", publicKey[:n]) return crypto.hexlify(header) def bakePayload(**kw): if "publicKey" in kw and "privateKey" in kw: keys = {} keys["publicKey"] = kw["publicKey"] keys["privateKey"] = kw["privateKey"] elif "secret" in kw: keys = crypto.getKeys(kw["secret"]) else: raise Exception("Can not initialize transaction (no secret or keys given)") payload = Payload.get(kw.get("type", 0), **kw) kw["fees"] = int((kw.get("type", 0) + len(payload) + 47) * C) header = getHeaders(**kw) payload = header + payload payload += crypto.getSignatureFromBytes(crypto.unhexlify(payload), keys["privateKey"]) if kw.get("secondSecret", False): secondKeys = crypto.getKeys(kw["secondSecret"]) payload += crypto.getSignatureFromBytes(crypto.unhexlify(payload), secondKeys["privateKey"]) elif kw.get("secondPrivateKey", False): payload += crypto.getSignatureFromBytes(crypto.unhexlify(payload), kw["secondPrivateKey"]) # identify payload payload += crypto.getIdFromBytes(crypto.unhexlify(payload)) return payload # This function is a high-level broadcasting for a single tx def sendTransaction(**kw): tx = bakePayload(**dict([k,v] for k,v in kw.items() if v)) result = rest.POST.peer.transactions.v1(peer=cfg.peers[0], transactions=[tx]) success = 1 if result["success"] else 0 for peer in cfg.peers[1:]: if rest.POST.peer.transactions.v1(peer=peer, 
transactions=[tx])["success"]: success += 1 result["broadcast"] = "%.1f%%" % (100.*success/len(cfg.peers)) return result ####################### ## basic transaction ## ####################### # def sendToken(amount, recipientId, vendorField, secret, secondSecret=None): # return sendTransaction( # amount=amount, # recipientId=recipientId, # vendorField=VendorField, # secret=secret, # secondSecret=secondSecret # ) # def registerSecondPublicKey(secondPublicKey, secret, secondSecret=None): # keys = crypto.getKeys(secret) # return sendTransaction( # type=1, # publicKey=keys["publicKey"], # privateKey=keys["privateKey"], # secondSecret=secondSecret, # asset={"signature":{"publicKey":secondPublicKey}} # ) # def registerSecondPassphrase(secondPassphrase, secret, secondSecret=None): # secondKeys = crypto.getKeys(secondPassphrase) # return registerSecondPublicKey(secondKeys["publicKey"], secret, secondSecret) # def registerDelegate(username, secret, secondSecret=None): # keys = crypto.getKeys(secret) # return sendTransaction( # type=2, # publicKey=keys["publicKey"], # privateKey=keys["privateKey"], # secondSecret=secondSecret, # asset={"delegate":{"username":username, "publicKey":keys["publicKey"]}} # ) # def upVoteDelegate(usernames, secret, secondSecret=None): # keys = crypto.getKeys(secret) # req = rest.GET.api.delegates.get(username=usernames[-1]) # if req["success"]: # return sendTransaction( # type=3, # publicKey=keys["publicKey"], # recipientId=crypto.getAddress(keys["publicKey"]), # privateKey=keys["privateKey"], # secondSecret=secondSecret, # asset={"votes":["+%s"%req["delegate"]["publicKey"]]} # ) # def downVoteDelegate(usernames, secret, secondSecret=None): # keys = crypto.getKeys(secret) # req = rest.GET.api.delegates.get(username=usernames[-1]) # if req["success"]: # return sendTransaction( # type=3, # publicKey=keys["publicKey"], # recipientId=crypto.getAddress(keys["publicKey"]), # privateKey=keys["privateKey"], # secondSecret=secondSecret, # asset={"votes":["-%s"%req["delegate"]["publicKey"]]} # )
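# --- Wire-format sketch (illustrative; the delegate name is made up) ---------
# Payload.type2 above packs a delegate registration as one length byte followed
# by the raw username bytes. Reproduced with plain struct for a made-up name:
#
#     import struct
#     name = b"arky_delegate"
#     packed = struct.pack("<B%ds" % len(name), len(name), name)
#     assert packed == b"\x0darky_delegate"   # 0x0d == 13, the username length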
PypiClean
/pyutopia-plugins-common-3.1.0.10.tar.gz/pyutopia-plugins-common-3.1.0.10/utopia/plugins/common/ieee.py
import urllib2
import urlparse

import utopia.citation

from lxml import etree
from StringIO import StringIO


class IEEEResolver(utopia.citation.Resolver):
    """Resolve PDF link from an IEEE page"""

    def resolve(self, citations, document = None):
        citation = {}
        if not utopia.citation.has_link(citations, {'mime': 'application/pdf'}, {'whence': 'ieee'}):
            resolved_links = utopia.citation.filter_links(citations, {'resolved_url': None})
            for link in resolved_links:
                url = link['resolved_url']
                if 'ieeexplore.ieee.org' in url:
                    parser = etree.HTMLParser()
                    resource = urllib2.urlopen(url, timeout=12)
                    html = resource.read()
                    dom = etree.parse(StringIO(html), parser)

                    # look for the PDF link
                    download_pdf_urls = dom.xpath('//a[@id="full-text-pdf"]/@href')
                    for pdf_url in download_pdf_urls:
                        pdf_url = urlparse.urljoin(url, pdf_url)
                        if pdf_url != resource.geturl(): # Check for cyclic references
                            # follow the link and find the iframe src
                            resource = urllib2.urlopen(pdf_url, timeout=12)
                            html = resource.read()
                            dom = etree.parse(StringIO(html), parser)
                            # pull the PDF URL out of the embedded frame
                            download_pdf_urls = dom.xpath("//frame[contains(@src, 'pdf')]/@src")
                            for pdf_url in download_pdf_urls:
                                pdf_url = urlparse.urljoin(url, pdf_url)
                                citation.setdefault('links', [])
                                citation['links'].append({
                                    'url': pdf_url,
                                    'mime': 'application/pdf',
                                    'type': 'article',
                                    'title': 'Download article from IEEEXplore',
                                })
        return citation

    def provenance(self):
        return {'whence': 'ieee'}

    def purposes(self):
        return 'dereference'

    def weight(self):
        return 103
PypiClean
/custom-awscli-1.27.51.tar.gz/custom-awscli-1.27.51/awscli/examples/redshift/describe-orderable-cluster-options.rst
Describing All Orderable Cluster Options ---------------------------------------- This example returns descriptions of all orderable cluster options. By default, the output is in JSON format. Command:: aws redshift describe-orderable-cluster-options Result:: { "OrderableClusterOptions": [ { "NodeType": "dw.hs1.8xlarge", "AvailabilityZones": [ { "Name": "us-east-1a" }, { "Name": "us-east-1b" }, { "Name": "us-east-1c" } ], "ClusterVersion": "1.0", "ClusterType": "multi-node" }, { "NodeType": "dw.hs1.xlarge", "AvailabilityZones": [ { "Name": "us-east-1a" }, { "Name": "us-east-1b" }, { "Name": "us-east-1c" } ], "ClusterVersion": "1.0", "ClusterType": "multi-node" }, { "NodeType": "dw.hs1.xlarge", "AvailabilityZones": [ { "Name": "us-east-1a" }, { "Name": "us-east-1b" }, { "Name": "us-east-1c" } ], "ClusterVersion": "1.0", "ClusterType": "single-node" } ], "ResponseMetadata": { "RequestId": "f6000035-64cb-11e2-9135-ff82df53a51a" } } You can also obtain the same information in text format using the ``--output text`` option. Command:: aws redshift describe-orderable-cluster-options --output text Result:: dw.hs1.8xlarge 1.0 multi-node us-east-1a us-east-1b us-east-1c dw.hs1.xlarge 1.0 multi-node us-east-1a us-east-1b us-east-1c dw.hs1.xlarge 1.0 single-node us-east-1a us-east-1b us-east-1c RESPONSEMETADATA e648696b-64cb-11e2-bec0-17624ad140dd
PypiClean
/OpenCoweb-1.0.tar.gz/OpenCoweb-1.0/coweb/bot/wrapper/object.py
import tornado.ioloop # std lib import logging import time import weakref import functools # coweb from .base import BotWrapperBase log = logging.getLogger('coweb.bot') class ObjectBotWrapper(BotWrapperBase): def __init__(self, manager, botClass, serviceName, serviceToken, appData): self.serviceName = serviceName self.appData = appData self._serviceToken = serviceToken self._manager = weakref.proxy(manager) self._bot = botClass(self, serviceName, appData) self._ioLoop = tornado.ioloop.IOLoop.instance() # asynchronously inform local manager we're ready self.add_callback(self._manager.on_bot_ready, serviceName, serviceToken, self) def on_message(self, mtdName, *args): '''Proxy messages from manager to bot impl.''' try: mtd = getattr(self._bot, mtdName) except AttributeError: # bot isn't listening for this message type return # keep sync with manager so we can catch exceptions, else exception # fires in context of original request which is wrong, it's a bot # error not a client error try: mtd(*args) except Exception: log.exception('bot error') def reply(self, replyToken, data): '''Sends a private reply to a requestor.''' self._manager.on_bot_response(self.serviceName, replyToken, data) def publish(self, data): '''Sends a public reply to subscribes on a bot subchannel.''' self._manager.on_bot_publish(self.serviceName, data) def add_callback(self, callback, *args, **kwargs): '''Schedule a callback in the main loop.''' f = functools.partial(callback, *args, **kwargs) self._ioLoop.add_callback(f) def add_timer(self, delay, callback, *args, **kwargs): '''Add a one-shot timer that schedules a main loop callback.''' f = functools.partial(callback, *args, **kwargs) return self._ioLoop.add_timeout(time.time() + delay, f) def remove_timer(self, timer): '''Remove a one-shot timer.''' self._ioLoop.remove_timeout(timer)
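# --- Sketch of a bot implementation this wrapper could host ------------------
# Only the constructor signature and the reply() helper are taken from the
# wrapper above; the on_request method name and its arguments are assumptions
# used purely for illustration of how on_message proxies calls.
class _EchoBotExample(object):
    def __init__(self, botWrapper, serviceName, appData):
        # the wrapper passes itself, the service name and app data to the bot
        self._wrapper = botWrapper

    def on_request(self, data, replyToken, username):
        # hypothetical handler: echo a private request straight back
        # to whoever sent it
        self._wrapper.reply(replyToken, data)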
PypiClean
/socialoauth-0.3.3.tar.gz/socialoauth-0.3.3/example/helper.py
import json
import random
import hashlib


class SingletonGuard(type):
    def __init__(self, name, parent, class_dict):
        super(SingletonGuard, self).__init__(name, parent, class_dict)
        self.instance = None

    def __call__(self, *args, **kwargs):
        if self.instance is None:
            self.instance = super(SingletonGuard, self).__call__(*args, **kwargs)
        return self.instance


class UserStorage(object):
    __metaclass__ = SingletonGuard

    def __init__(self):
        # Our own system's user ID, simulating a database auto-increment primary key
        self.ID = 0

        # Maps each social site's uid to our own internal ID
        self.table = {}

        # User information
        self.user = {}

    def get_uid(self, site_name, site_uid):
        # site_name is the name of the social site
        # site_uid is the authorized user's uid on that site
        # Look up the internal UID of this authorized user in our own database
        return self.table.get(site_name, {}).get(site_uid, None)

    def bind_new_user(self, site_name, site_uid):
        self.ID += 1
        if site_name in self.table:
            self.table[site_name][site_uid] = self.ID
        else:
            self.table[site_name] = {site_uid: self.ID}
        return self.ID

    def get_user(self, inner_uid):
        return self.user[inner_uid]

    def set_user(self, inner_uid, **kwargs):
        self.user[inner_uid] = kwargs


def gen_session_id():
    key = '%0.10f' % random.random()
    return hashlib.sha1(key).hexdigest()


class Session(object):
    __metaclass__ = SingletonGuard

    uid_session_keys = {}

    def __init__(self):
        self._sessions = {}

    @classmethod
    def make_session_id(cls, uid):
        if uid not in cls.uid_session_keys:
            cls.uid_session_keys[uid] = gen_session_id()
        return cls.uid_session_keys[uid]

    @classmethod
    def refresh_session_id(cls, uid):
        cls.uid_session_keys[uid] = gen_session_id()
        return cls.uid_session_keys[uid]

    def get(self, key):
        if key not in self._sessions:
            return {}
        return json.loads(self._sessions[key])

    def set(self, key, **kwargs):
        self._sessions[key] = json.dumps(kwargs)

    def update(self, key, **kwargs):
        s = self.get(key)
        if not s:
            self.set(key, **kwargs)
        else:
            s.update(kwargs)
            self.set(key, **s)

    def rem(self, key):
        if key in self._sessions:
            del self._sessions[key]
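# --- Usage sketch (illustrative values; assumes Python 2, like the module) ---
# Map a social-network uid onto a local user id, then cache it in the
# singleton session store.
if __name__ == '__main__':
    storage = UserStorage()
    inner_uid = storage.get_uid('weibo', '12345')
    if inner_uid is None:
        inner_uid = storage.bind_new_user('weibo', '12345')
    storage.set_user(inner_uid, name='demo user')

    session = Session()
    sid = Session.make_session_id(inner_uid)
    session.set(sid, uid=inner_uid)
    print(session.get(sid))  # {u'uid': 1}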
PypiClean
/wxPython-zombie-3.1.5.6.tar.gz/wxPython-zombie-3.1.5.6/wx/lib/agw/balloontip.py
import wx import time import wx.adv from wx.lib.buttons import GenButton # Define The Values For The BalloonTip Frame Shape BT_ROUNDED = 1 """ :class:`BalloonTip` will have a rounded rectangular shape. """ BT_RECTANGLE = 2 """ :class:`BalloonTip` will have a rectangular shape. """ # Define The Value For The BalloonTip Destruction Behavior BT_LEAVE = 3 """ :class:`BalloonTip` will be destroyed when the user moves the mouse outside the target window. """ BT_CLICK = 4 """ :class:`BalloonTip` will be destroyed when the user click on :class:`BalloonTip`. """ BT_BUTTON = 5 """ :class:`BalloonTip` will be destroyed when the user click on the close button. """ # --------------------------------------------------------------- # Class BalloonFrame # --------------------------------------------------------------- # This Class Is Called By The Main BalloonTip Class, And It Is # Responsible For The Frame Creation/Positioning On Screen # Depending On Target Control/Window, The Frame Can Position # Itself To NW (Default), NE, SW, SE. The Switch On Positioning # Is Done By Calculating The Absolute Position Of The Target # Control/Window Plus/Minus The BalloonTip Size. The Pointing # Arrow Is Positioned Accordingly. # --------------------------------------------------------------- class BalloonFrame(wx.Frame): """ This class is called by the main :class:`BalloonTip` class, and it is responsible for the frame creation/positioning on screen depending on target control/window, the frame can position itself to NW (default), NE, SW, SE. The switch on positioning is done by calculating the absolute position of the target control/window plus/minus the balloontip size. The pointing arrow is positioned accordingly. """ def __init__(self, parent, id=wx.ID_ANY, pos=wx.DefaultPosition, size=wx.DefaultSize, classparent=None): """ Default class constructor. Used internally. Do not call directly this class in your application! 
""" wx.Frame.__init__(self, None, -1, "BalloonTip", pos, size, style=wx.FRAME_SHAPED | wx.SIMPLE_BORDER | wx.FRAME_NO_TASKBAR | wx.STAY_ON_TOP) self._parent = classparent self._toptitle = self._parent._toptitle self._topicon = self._parent._topicon self._message = self._parent._message self._shape = self._parent._shape self._tipstyle = self._parent._tipstyle self._ballooncolour = self._parent._ballooncolour self._balloonmsgcolour = self._parent._balloonmsgcolour self._balloonmsgfont = self._parent._balloonmsgfont if self._toptitle != "": self._balloontitlecolour = self._parent._balloontitlecolour self._balloontitlefont = self._parent._balloontitlefont panel = wx.Panel(self, -1) sizer = wx.BoxSizer(wx.VERTICAL) self.panel = panel subsizer = wx.BoxSizer(wx.VERTICAL) hsizer = wx.BoxSizer(wx.HORIZONTAL) subsizer.Add((0,20), 0, wx.EXPAND) if self._topicon is not None: stb = wx.StaticBitmap(panel, -1, self._topicon) hsizer.Add(stb, 0, wx.EXPAND | wx.LEFT | wx.RIGHT | wx.TOP, 10) self._balloonbmp = stb if self._toptitle != "": stt = wx.StaticText(panel, -1, self._toptitle) stt.SetFont(wx.Font(9, wx.FONTFAMILY_SWISS, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD, False)) if self._topicon is None: hsizer.Add((10,0), 0, wx.EXPAND) hsizer.Add(stt, 1, wx.EXPAND | wx.TOP, 10) self._balloontitle = stt self._balloontitle.SetForegroundColour(self._balloontitlecolour) self._balloontitle.SetFont(self._balloontitlefont) if self._tipstyle == BT_BUTTON: self._closebutton = GenButton(panel, -1, "X", style=wx.NO_BORDER) self._closebutton.SetMinSize((16,16)) self._closebutton.SetFont(wx.Font(9, wx.FONTFAMILY_SWISS, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD, False)) self._closebutton.Bind(wx.EVT_ENTER_WINDOW, self.OnEnterButton) self._closebutton.Bind(wx.EVT_LEAVE_WINDOW, self.OnLeaveButton) self._closebutton.SetUseFocusIndicator(False) if self._toptitle != "": hsizer.Add(self._closebutton, 0, wx.TOP | wx.RIGHT, 5) else: hsizer.Add((10,0), 1, wx.EXPAND) hsizer.Add(self._closebutton, 0, wx.ALIGN_RIGHT | wx.TOP | wx.RIGHT, 5) if self._topicon is not None or self._toptitle != "" \ or self._tipstyle == BT_BUTTON: subsizer.Add(hsizer, 0, wx.EXPAND | wx.BOTTOM, 5) self._firstline = line = wx.StaticLine(panel, -1, style=wx.LI_HORIZONTAL) if self._topicon is not None or self._toptitle != "" \ or self._tipstyle == BT_BUTTON: subsizer.Add(self._firstline, 0, wx.EXPAND | wx.LEFT | wx.RIGHT | wx.BOTTOM, 10) else: subsizer.Add(self._firstline, 0, wx.EXPAND | wx.LEFT | wx.RIGHT | wx.BOTTOM | wx.TOP, 10) mainstt = wx.StaticText(panel, -1, self._message) self._balloonmsg = mainstt self._balloonmsg.SetForegroundColour(self._balloonmsgcolour) self._balloonmsg.SetFont(self._balloonmsgfont) subsizer.Add(self._balloonmsg, 1, wx.EXPAND | wx.LEFT | wx.RIGHT | wx.BOTTOM, 10) self._secondline = wx.StaticLine(panel, -1, style=wx.LI_HORIZONTAL) subsizer.Add(self._secondline, 0, wx.EXPAND | wx.LEFT | wx.RIGHT, 10) subsizer.Add((0,0),1) panel.SetSizer(subsizer) sizer.Add(panel, 1, wx.EXPAND) self.SetSizerAndFit(sizer) sizer.Layout() if self._tipstyle == BT_CLICK: if self._toptitle != "": self._balloontitle.Bind(wx.EVT_LEFT_DOWN, self.OnClose) if self._topicon is not None: self._balloonbmp.Bind(wx.EVT_LEFT_DOWN, self.OnClose) self._balloonmsg.Bind(wx.EVT_LEFT_DOWN, self.OnClose) self.panel.Bind(wx.EVT_LEFT_DOWN, self.OnClose) elif self._tipstyle == BT_BUTTON: self._closebutton.Bind(wx.EVT_BUTTON, self.OnClose) self.panel.SetBackgroundColour(self._ballooncolour) if wx.Platform == "__WXGTK__": self.Bind(wx.EVT_WINDOW_CREATE, self.SetBalloonShape) else: 
self.SetBalloonShape() self.Show(True) def SetBalloonShape(self, event=None): """ Sets the balloon shape. :param `event`: on wxGTK, a :class:`wx.WindowCreateEvent` event to process. """ size = self.GetSize() pos = self.GetPosition() dc = wx.MemoryDC(wx.Bitmap(1,1)) textlabel = self._balloonmsg.GetLabel() textfont = self._balloonmsg.GetFont() textextent = dc.GetFullTextExtent(textlabel, textfont) boxheight = size.y - textextent[1]*len(textlabel.split("\n")) boxwidth = size.x position = wx.GetMousePosition() xpos = position[0] ypos = position[1] if xpos > 20 and ypos > 20: # This Is NW Positioning positioning = "NW" xpos = position[0] - boxwidth + 20 ypos = position[1] - boxheight - 20 elif xpos <= 20 and ypos <= 20: # This Is SE Positioning positioning = "SE" xpos = position[0] - 20 ypos = position[1] elif xpos > 20 and ypos <= 20: # This Is SW Positioning positioning = "SW" xpos = position[0] - boxwidth + 20 ypos = position[1] else: # This Is NE Positioning positioning = "NE" xpos = position[0] ypos = position[1] - boxheight + 20 bmp = wx.Bitmap(size.x,size.y) dc = wx.BufferedDC(None, bmp) dc.SetBackground(wx.BLACK_BRUSH) dc.Clear() dc.SetPen(wx.Pen(wx.BLACK, 1, wx.PENSTYLE_TRANSPARENT)) if self._shape == BT_ROUNDED: dc.DrawRoundedRectangle(0, 20, boxwidth, boxheight-20, 12) elif self._shape == BT_RECTANGLE: dc.DrawRectangle(0, 20, boxwidth, boxheight-20) if positioning == "NW": dc.DrawPolygon(((boxwidth-40, boxheight), (boxwidth-20, boxheight+20), (boxwidth-20, boxheight))) elif positioning == "SE": dc.DrawPolygon(((20, 20), (20, 0), (40, 20))) elif positioning == "SW": dc.DrawPolygon(((boxwidth-40, 20), (boxwidth-20, 0), (boxwidth-20, 20))) else: dc.DrawPolygon(((20, boxheight), (20, boxheight+20), (40, boxheight))) r = wx.Region(bmp, wx.BLACK) self.hasShape = self.SetShape(r) if self._tipstyle == BT_BUTTON: colour = self.panel.GetBackgroundColour() self._closebutton.SetBackgroundColour(colour) self.SetPosition((xpos, ypos)) def OnEnterButton(self, event): """ Handles the ``wx.EVT_ENTER_WINDOW`` for the :class:`BalloonTip` button. When the :class:`BalloonTip` is created with the `tipstyle` = ``BT_BUTTON``, this event provide some kind of 3D effect when the mouse enters the button area. :param `event`: a :class:`MouseEvent` event to be processed. """ button = event.GetEventObject() colour = button.GetBackgroundColour() red = colour.Red() green = colour.Green() blue = colour.Blue() if red < 30: red = red + 30 if green < 30: green = green + 30 if blue < 30: blue = blue + 30 colour = wx.Colour(red-30, green-30, blue-30) button.SetBackgroundColour(colour) button.SetForegroundColour(wx.WHITE) button.Refresh() event.Skip() def OnLeaveButton(self, event): """ Handles the ``wx.EVT_LEAVE_WINDOW`` for the :class:`BalloonTip` button. When the :class:`BalloonTip` is created with the `tipstyle` = ``BT_BUTTON``, this event provide some kind of 3D effect when the mouse enters the button area. :param `event`: a :class:`MouseEvent` event to be processed. """ button = event.GetEventObject() colour = self.panel.GetBackgroundColour() button.SetBackgroundColour(colour) button.SetForegroundColour(wx.BLACK) button.Refresh() event.Skip() def OnClose(self, event): """ Handles the ``wx.EVT_CLOSE`` event for :class:`BalloonTip`. :param `event`: a :class:`CloseEvent` event to be processed. 
""" if isinstance(self._parent._widget, wx.adv.TaskBarIcon): self._parent.taskbarcreation = 0 self._parent.taskbartime.Stop() del self._parent.taskbartime del self._parent.BalloonFrame self.Destroy() # --------------------------------------------------------------- # Class BalloonTip # --------------------------------------------------------------- # This Is The Main BalloonTip Implementation # --------------------------------------------------------------- class BalloonTip(object): """ :class:`BalloonTip` is a class that allows you to display tooltips in a balloon style window. This is the main class implementation. """ def __init__(self, topicon=None, toptitle="", message="", shape=BT_ROUNDED, tipstyle=BT_LEAVE): """ Default class constructor. :param `topicon`: an icon that will be displayed on the top-left part of the :class:`BalloonTip` frame. If set to ``None``, no icon will be displayed; :type `topicon`: :class:`wx.Bitmap` or ``None`` :param string `toptitle`: a title that will be displayed on the top part of the :class:`BalloonTip` frame. If set to an empty string, no title will be displayed; :param string `message`: the tip message that will be displayed. It can not be set to an empty string; :param integer `shape`: the :class:`BalloonTip` shape. It can be one of the following: ======================= ========= ==================================== Shape Flag Hex Value Description ======================= ========= ==================================== ``BT_ROUNDED`` 0x1 :class:`BalloonTip` will have a rounded rectangular shape. ``BT_RECTANGLE`` 0x2 :class:`BalloonTip` will have a rectangular shape. ======================= ========= ==================================== :param integer `tipstyle`: the :class:`BalloonTip` destruction behavior. It can be one of: ======================= ========= ==================================== Tip Flag Hex Value Description ======================= ========= ==================================== ``BT_LEAVE`` 0x3 :class:`BalloonTip` will be destroyed when the user moves the mouse outside the target window. ``BT_CLICK`` 0x4 :class:`BalloonTip` will be destroyed when the user click on :class:`BalloonTip`. ``BT_BUTTON`` 0x5 :class:`BalloonTip` will be destroyed when the user click on the close button. ======================= ========= ==================================== :raise: `Exception` in the following cases: - The `message` parameter is an empty string; - The `shape` parameter has an invalid value (i.e., it's not one of ``BT_ROUNDED``, ``BT_RECTANGLE``); - The `tipstyle` parameter has an invalid value (i.e., it's not one of ``BT_LEAVE``, ``BT_CLICK``, ``BT_BUTTON``). 
""" self._shape = shape self._topicon = topicon self._toptitle = toptitle self._message = message self._tipstyle = tipstyle app = wx.GetApp() self._runningapp = app self._runningapp.__tooltipenabled__ = True if self._message == "": raise Exception("\nERROR: You Should At Least Set The Message For The BalloonTip") if self._shape not in [BT_ROUNDED, BT_RECTANGLE]: raise Exception('\nERROR: BalloonTip Shape Should Be One Of "BT_ROUNDED", "BT_RECTANGLE"') if self._tipstyle not in [BT_LEAVE, BT_CLICK, BT_BUTTON]: raise Exception('\nERROR: BalloonTip TipStyle Should Be One Of "BT_LEAVE", '\ '"BT_CLICK", "BT_BUTTON"') self.SetStartDelay() self.SetEndDelay() self.SetBalloonColour() if toptitle != "": self.SetTitleFont() self.SetTitleColour() if topicon is not None: self.SetBalloonIcon(topicon) self.SetMessageFont() self.SetMessageColour() def SetTarget(self, widget): """ Sets the target control/window for the :class:`BalloonTip`. :param `widget`: any subclass of :class:`wx.Window`. """ self._widget = widget if isinstance(widget, wx.adv.TaskBarIcon): self._widget.Bind(wx.adv.EVT_TASKBAR_MOVE, self.OnTaskBarMove) self._widget.Bind(wx.EVT_WINDOW_DESTROY, self.OnDestroy) self.taskbarcreation = 0 else: self._widget.Bind(wx.EVT_ENTER_WINDOW, self.OnWidgetEnter) self._widget.Bind(wx.EVT_LEAVE_WINDOW, self.OnWidgetLeave) self._widget.Bind(wx.EVT_MOTION, self.OnWidgetMotion) self._widget.Bind(wx.EVT_WINDOW_DESTROY, self.OnDestroy) def GetTarget(self): """ Returns the target window for the :class:`BalloonTip`. :return: An instance of :class:`wx.Window`. :raise: `Exception` if the :meth:`~BalloonTip.SetTarget` method has not previously called. """ if not hasattr(self, "_widget"): raise Exception("\nERROR: BalloonTip Target Has Not Been Set") return self._widget def SetStartDelay(self, delay=1): """ Sets the delay time after which the :class:`BalloonTip` is created. :param integer `delay`: the number of milliseconds after which :class:`BalloonTip` is created. :raise: `Exception` if `delay` is less than ``1`` milliseconds. """ if delay < 1: raise Exception("\nERROR: Delay Time For BalloonTip Creation Should Be Greater Than 1 ms") self._startdelaytime = delay def GetStartDelay(self): """ Returns the delay time after which the :class:`BalloonTip` is created. :return: the delay time, in milliseconds. """ return self._startdelaytime def SetEndDelay(self, delay = 1000000): """ Sets the delay time after which the BalloonTip is destroyed. :param integer `delay`: the number of milliseconds after which :class:`BalloonTip` is destroyed. :raise: `Exception` if `delay` is less than ``1`` milliseconds. """ if delay < 1: raise Exception("\nERROR: Delay Time For BalloonTip Destruction Should Be Greater Than 1 ms") self._enddelaytime = delay def GetEndDelay(self): """ Returns the delay time after which the :class:`BalloonTip` is destroyed. :return: the delay time, in milliseconds. """ return self._enddelaytime def OnWidgetEnter(self, event): """ Handles the ``wx.EVT_ENTER_WINDOW`` for the target control/window and starts the :class:`BalloonTip` timer for creation. :param `event`: a :class:`MouseEvent` event to be processed. """ if hasattr(self, "BalloonFrame"): if self.BalloonFrame: return if not self._runningapp.__tooltipenabled__: return self.showtime = wx.Timer(self._widget) self._widget.Bind(wx.EVT_TIMER, self.NotifyTimer, self.showtime) self.showtime.Start(self._startdelaytime) event.Skip() def OnWidgetLeave(self, event): """ Handles the ``wx.EVT_LEAVE_WINDOW`` for the target control/window. 
:param `event`: a :class:`MouseEvent` event to be processed. :note: If the BalloonTip `tipstyle` is set to ``BT_LEAVE``, the :class:`BalloonTip` is destroyed. """ if hasattr(self, "showtime"): if self.showtime: self.showtime.Stop() del self.showtime if hasattr(self, "BalloonFrame"): if self.BalloonFrame: if self._tipstyle == BT_LEAVE: endtime = time.time() if endtime - self.starttime > 0.1: try: self.BalloonFrame.Destroy() except: pass else: event.Skip() else: event.Skip() else: event.Skip() def OnTaskBarMove(self, event): """ Handles the mouse motion inside the taskbar icon area. :param `event`: a :class:`MouseEvent` event to be processed. """ if not hasattr(self, "BalloonFrame"): if self.taskbarcreation == 0: self.mousepos = wx.GetMousePosition() self.currentmousepos = self.mousepos self.taskbartime = wx.Timer(self._widget) self._widget.Bind(wx.EVT_TIMER, self.TaskBarTimer, self.taskbartime) self.taskbartime.Start(100) self.showtime = wx.Timer(self._widget) self._widget.Bind(wx.EVT_TIMER, self.NotifyTimer, self.showtime) self.showtime.Start(self._startdelaytime) if self.taskbarcreation == 0: self.taskbarcreation = 1 return event.Skip() def OnWidgetMotion(self, event): """ Handle the mouse motion inside the target. This prevents the annoying behavior of :class:`BalloonTip` to display when the user does something else inside the window. The :class:`BalloonTip` window is displayed only when the mouse does *not* move for the start delay time. :param `event`: a :class:`MouseEvent` event to be processed. """ if hasattr(self, "BalloonFrame"): if self.BalloonFrame: return if hasattr(self, "showtime"): if self.showtime: self.showtime.Start(self._startdelaytime) event.Skip() def NotifyTimer(self, event): """ The creation timer has expired. Creates the :class:`BalloonTip` frame. :param `event`: a :class:`wx.TimerEvent` to be processed. """ self.BalloonFrame = BalloonFrame(self._widget, classparent=self) self.BalloonFrame.Show(True) self.starttime = time.time() if hasattr(self, "showtime"): self.showtime.Stop() del self.showtime self.destroytime = wx.Timer(self._widget) self._widget.Bind(wx.EVT_TIMER, self.NotifyTimer, self.destroytime) self.destroytime.Start(self._enddelaytime) def TaskBarTimer(self, event): """ This timer check periodically the mouse position. If the current mouse position is sufficiently far from the coordinates it had when entered the taskbar icon and the :class:`BalloonTip` style is ``BT_LEAVE``, the :class:`BalloonTip` frame is destroyed. :param `event`: a :class:`wx.TimerEvent` to be processed. """ self.currentmousepos = wx.GetMousePosition() mousepos = self.mousepos if abs(self.currentmousepos[0] - mousepos[0]) > 30 or \ abs(self.currentmousepos[1] - mousepos[1]) > 30: if hasattr(self, "BalloonFrame"): if self._tipstyle == BT_LEAVE: try: self.BalloonFrame.Destroy() self.taskbartime.Stop() del self.taskbartime del self.BalloonFrame self.taskbarcreation = 0 except: pass def DestroyTimer(self, event): """ The destruction timer has expired. Destroys the :class:`BalloonTip` frame. :param `event`: a :class:`wx.TimerEvent` to be processed. """ self.destroytime.Stop() del self.destroytime try: self.BalloonFrame.Destroy() except: pass def SetBalloonShape(self, shape=BT_ROUNDED): """ Sets the :class:`BalloonTip` frame shape. :param integer `shape`: should be one of ``BT_ROUNDED`` or ``BT_RECTANGLE``. 
:raise: `Exception` if the `shape` parameter is not a valid value (i.e., it's not one of ``BT_ROUNDED``, ``BT_RECTANGLE``); """ if shape not in [BT_ROUNDED, BT_RECTANGLE]: raise Exception('\nERROR: BalloonTip Shape Should Be One Of "BT_ROUNDED", "BT_RECTANGLE"') self._shape = shape def GetBalloonShape(self): """ Returns the :class:`BalloonTip` frame shape. :return: An integer, one of ``BT_ROUNDED``, ``BT_RECTANGLE``. """ return self._shape def SetBalloonIcon(self, icon): """ Sets the :class:`BalloonTip` top-left icon. :param `icon`: an instance of :class:`wx.Bitmap`. :raise: `Exception` if the `icon` bitmap is not a valid :class:`wx.Bitmap`. """ if icon.IsOk(): self._topicon = icon else: raise Exception("\nERROR: Invalid Image Passed To BalloonTip") def GetBalloonIcon(self): """ Returns the :class:`BalloonTip` top-left icon. :return: An instance of :class:`wx.Bitmap`. """ return self._topicon def SetBalloonTitle(self, title=""): """ Sets the :class:`BalloonTip` top title. :param string `title`: a string to use as a :class:`BalloonTip` title. """ self._toptitle = title def GetBalloonTitle(self): """ Returns the :class:`BalloonTip` top title. :return: A string containing the top title. """ return self._toptitle def SetBalloonMessage(self, message): """ Sets the :class:`BalloonTip` tip message. :param string `message`: a string identifying the main message body of :class:`BalloonTip`. :raise: `Exception` if the message is an empty string. :note: The :class:`BalloonTip` message should never be empty. """ if len(message.strip()) < 1: raise Exception("\nERROR: BalloonTip Message Can Not Be Empty") self._message = message def GetBalloonMessage(self): """ Returns the :class:`BalloonTip` tip message. :return: A string containing the main message. """ return self._message def SetBalloonTipStyle(self, tipstyle=BT_LEAVE): """ Sets the :class:`BalloonTip` `tipstyle` parameter. :param integer `tipstyle`: one of the following bit set: ============== ========== ===================================== Tip Style Hex Value Description ============== ========== ===================================== ``BT_LEAVE`` 0x3 :class:`BalloonTip` will be destroyed when the user moves the mouse outside the target window. ``BT_CLICK`` 0x4 :class:`BalloonTip` will be destroyed when the user click on :class:`BalloonTip`. ``BT_BUTTON`` 0x5 :class:`BalloonTip` will be destroyed when the user click on the close button. ============== ========== ===================================== :raise: `Exception` if the `tipstyle` parameter has an invalid value (i.e., it's not one of ``BT_LEAVE``, ``BT_CLICK``, ``BT_BUTTON``). """ if tipstyle not in [BT_LEAVE, BT_CLICK, BT_BUTTON]: raise Exception('\nERROR: BalloonTip TipStyle Should Be One Of "BT_LEAVE", '\ '"BT_CLICK", "BT_BUTTON"') self._tipstyle = tipstyle def GetBalloonTipStyle(self): """ Returns the :class:`BalloonTip` `tipstyle` parameter. :return: An integer representing the style. :see: :meth:`~BalloonTip.SetBalloonTipStyle` """ return self._tipstyle def SetBalloonColour(self, colour=None): """ Sets the :class:`BalloonTip` background colour. :param `colour`: a valid :class:`wx.Colour` instance. """ if colour is None: colour = wx.Colour(255, 250, 205) self._ballooncolour = colour def GetBalloonColour(self): """ Returns the :class:`BalloonTip` background colour. :return: An instance of :class:`wx.Colour`. """ return self._ballooncolour def SetTitleFont(self, font=None): """ Sets the font for the top title. :param `font`: a valid :class:`wx.Font` instance. 
""" if font is None: font = wx.Font(9, wx.FONTFAMILY_SWISS, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD, False) self._balloontitlefont = font def GetTitleFont(self): """ Returns the font for the top title. :return: An instance of :class:`wx.Font`. """ return self._balloontitlefont def SetMessageFont(self, font=None): """ Sets the font for the tip message. :param `font`: a valid :class:`wx.Font` instance. """ if font is None: font = wx.Font(8, wx.FONTFAMILY_SWISS, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_NORMAL, False) self._balloonmsgfont = font def GetMessageFont(self): """ Returns the font for the tip message. :return: An instance of :class:`wx.Font`. """ return self._balloonmsgfont def SetTitleColour(self, colour=None): """ Sets the colour for the top title. :param `colour`: a valid :class:`wx.Colour` instance. """ if colour is None: colour = wx.BLACK self._balloontitlecolour = colour def GetTitleColour(self): """ Returns the colour for the top title. :return: An instance of :class:`wx.Colour`. """ return self._balloontitlecolour def SetMessageColour(self, colour=None): """ Sets the colour for the tip message. :param `colour`: a valid :class:`wx.Colour` instance. """ if colour is None: colour = wx.BLACK self._balloonmsgcolour = colour def GetMessageColour(self): """ Returns the colour for the tip message. :return: An instance of :class:`wx.Colour`. """ return self._balloonmsgcolour def OnDestroy(self, event): """ Handles the target destruction, specifically handling the ``wx.EVT_WINDOW_DESTROY`` event. :param `event`: a :class:`wx.WindowDestroyEvent` event to be processed. """ if hasattr(self, "BalloonFrame"): if self.BalloonFrame: try: if isinstance(self._widget, wx.adv.TaskBarIcon): self._widget.Unbind(wx.adv.EVT_TASKBAR_MOVE) self.taskbartime.Stop() del self.taskbartime else: self._widget.Unbind(wx.EVT_MOTION) self._widget.Unbind(wx.EVT_LEAVE_WINDOW) self._widget.Unbind(wx.EVT_ENTER_WINDOW) self.BalloonFrame.Destroy() except: pass del self.BalloonFrame def EnableTip(self, enable=True): """ Enable/disable globally the :class:`BalloonTip`. :param bool `enable`: ``True`` to enable :class:`BalloonTip`, ``False`` otherwise. """ self._runningapp.__tooltipenabled__ = enable if __name__ == '__main__': import wx class MyFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent, -1, "BalloonTip Demo") panel = wx.Panel(self) # Let's suppose that in your application you have a wx.TextCtrl defined as: mytextctrl = wx.TextCtrl(panel, -1, "I am a textctrl", pos=(100, 100)) # You can define your BalloonTip as follows: tipballoon = BalloonTip(topicon=None, toptitle="textctrl", message="this is a textctrl", shape=BT_ROUNDED, tipstyle=BT_LEAVE) # Set the BalloonTip target tipballoon.SetTarget(mytextctrl) # Set the BalloonTip background colour tipballoon.SetBalloonColour(wx.WHITE) # Set the font for the balloon title tipballoon.SetTitleFont(wx.Font(9, wx.FONTFAMILY_SWISS, wx.FONTSTYLE_NORMAL, wx.FONTWEIGHT_BOLD, False)) # Set the colour for the balloon title tipballoon.SetTitleColour(wx.BLACK) # Leave the message font as default tipballoon.SetMessageFont() # Set the message (tip) foreground colour tipballoon.SetMessageColour(wx.LIGHT_GREY) # Set the start delay for the BalloonTip tipballoon.SetStartDelay(1000) # Set the time after which the BalloonTip is destroyed tipballoon.SetEndDelay(3000) # our normal wxApp-derived class, as usual app = wx.App(0) frame = MyFrame(None) app.SetTopWindow(frame) frame.Show() app.MainLoop()
PypiClean
/alibi-detect-0.11.4.tar.gz/alibi-detect-0.11.4/alibi_detect/cd/tensorflow/mmd_online.py
from tqdm import tqdm import numpy as np import tensorflow as tf from typing import Any, Callable, Optional, Union from alibi_detect.cd.base_online import BaseMultiDriftOnline from alibi_detect.utils.tensorflow.kernels import GaussianRBF from alibi_detect.utils.tensorflow import zero_diag, quantile, subset_matrix from alibi_detect.utils.frameworks import Framework class MMDDriftOnlineTF(BaseMultiDriftOnline): online_state_keys: tuple = ('t', 'test_stats', 'drift_preds', 'test_window', 'k_xy') def __init__( self, x_ref: Union[np.ndarray, list], ert: float, window_size: int, preprocess_fn: Optional[Callable] = None, x_ref_preprocessed: bool = False, kernel: Callable = GaussianRBF, sigma: Optional[np.ndarray] = None, n_bootstraps: int = 1000, verbose: bool = True, input_shape: Optional[tuple] = None, data_type: Optional[str] = None ) -> None: """ Online maximum Mean Discrepancy (MMD) data drift detector using preconfigured thresholds. Parameters ---------- x_ref Data used as reference distribution. ert The expected run-time (ERT) in the absence of drift. For the multivariate detectors, the ERT is defined as the expected run-time from t=0. window_size The size of the sliding test-window used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift. preprocess_fn Function to preprocess the data before computing the data drift metrics. x_ref_preprocessed Whether the given reference data `x_ref` has been preprocessed yet. If `x_ref_preprocessed=True`, only the test data `x` will be preprocessed at prediction time. If `x_ref_preprocessed=False`, the reference data will also be preprocessed. kernel Kernel used for the MMD computation, defaults to Gaussian RBF kernel. sigma Optionally set the GaussianRBF kernel bandwidth. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If `sigma` is not specified, the 'median heuristic' is adopted whereby `sigma` is set as the median pairwise distance between reference samples. n_bootstraps The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT. verbose Whether or not to print progress during configuration. input_shape Shape of input data. data_type Optionally specify the data type (tabular, image or time-series). Added to metadata. """ super().__init__( x_ref=x_ref, ert=ert, window_size=window_size, preprocess_fn=preprocess_fn, x_ref_preprocessed=x_ref_preprocessed, n_bootstraps=n_bootstraps, verbose=verbose, input_shape=input_shape, data_type=data_type ) self.backend = Framework.TENSORFLOW.value self.meta.update({'backend': self.backend}) # initialize kernel if isinstance(sigma, np.ndarray): sigma = tf.convert_to_tensor(sigma) self.kernel = kernel(sigma) if kernel == GaussianRBF else kernel # compute kernel matrix for the reference data self.k_xx = self.kernel(self.x_ref, self.x_ref, infer_sigma=(sigma is None)) self._configure_thresholds() self._configure_ref_subset() # self.initialise_state() called inside here def _initialise_state(self) -> None: """ Initialise online state (the stateful attributes updated by `score` and `predict`). This method relies on attributes defined by `_configure_ref_subset`, hence must be called afterwards. 
""" super()._initialise_state() self.test_window = tf.gather(self.x_ref, self.init_test_inds) self.k_xy = self.kernel(tf.gather(self.x_ref, self.ref_inds), self.test_window) def _configure_ref_subset(self): """ Configure the reference data split. If the randomly selected split causes an initial detection, further splits are attempted. """ etw_size = 2 * self.window_size - 1 # etw = extended test window rw_size = self.n - etw_size # rw = ref window# # Make split and ensure it doesn't cause an initial detection mmd_init = None while mmd_init is None or mmd_init >= self.get_threshold(0): # Make split perm = tf.random.shuffle(tf.range(self.n)) self.ref_inds, self.init_test_inds = perm[:rw_size], perm[-self.window_size:] # Compute initial mmd to check for initial detection self._initialise_state() # to set self.test_window and self.k_xtc self.k_xx_sub = subset_matrix(self.k_xx, self.ref_inds, self.ref_inds) self.k_xx_sub_sum = tf.reduce_sum(zero_diag(self.k_xx_sub)) / (rw_size * (rw_size - 1)) k_yy = self.kernel(self.test_window, self.test_window) mmd_init = ( self.k_xx_sub_sum + tf.reduce_sum(zero_diag(k_yy)) / (self.window_size * (self.window_size - 1)) - 2 * tf.reduce_mean(self.k_xy) ) def _configure_thresholds(self): """ Configure the test statistic thresholds via bootstrapping. """ # Each bootstrap sample splits the reference samples into a sub-reference sample (x) # and an extended test window (y). The extended test window will be treated as W overlapping # test windows of size W (so 2W-1 test samples in total) w_size = self.window_size etw_size = 2 * w_size - 1 # etw = extended test window rw_size = self.n - etw_size # rw = ref window perms = [tf.random.shuffle(tf.range(self.n)) for _ in range(self.n_bootstraps)] x_inds_all = [perm[:-etw_size] for perm in perms] y_inds_all = [perm[-etw_size:] for perm in perms] if self.verbose: print("Generating permutations of kernel matrix..") # Need to compute mmd for each bs for each of W overlapping windows # Most of the computation can be done once however # We avoid summing the rw_size^2 submatrix for each bootstrap sample by instead computing the full # sum once and then subtracting the relavent parts (k_xx_sum = k_full_sum - 2*k_xy_sum - k_yy_sum). 
# We also reduce computation of k_xy_sum from O(nW) to O(W) by caching column sums k_full_sum = tf.reduce_sum(zero_diag(self.k_xx)) k_xy_col_sums_all = [ tf.reduce_sum(subset_matrix(self.k_xx, x_inds, y_inds), axis=0) for x_inds, y_inds in (tqdm(zip(x_inds_all, y_inds_all), total=self.n_bootstraps) if self.verbose else zip(x_inds_all, y_inds_all)) ] k_xx_sums_all = [( k_full_sum - tf.reduce_sum(zero_diag(subset_matrix(self.k_xx, y_inds, y_inds))) - 2 * tf.reduce_sum(k_xy_col_sums) ) / (rw_size * (rw_size - 1)) for y_inds, k_xy_col_sums in zip(y_inds_all, k_xy_col_sums_all)] k_xy_col_sums_all = [k_xy_col_sums / (rw_size * w_size) for k_xy_col_sums in k_xy_col_sums_all] # Now to iterate through the W overlapping windows thresholds = [] p_bar = tqdm(range(w_size), "Computing thresholds") if self.verbose else range(w_size) for w in p_bar: y_inds_all_w = [y_inds[w:w + w_size] for y_inds in y_inds_all] # test windows of size W mmds = [( k_xx_sum + tf.reduce_sum(zero_diag(subset_matrix(self.k_xx, y_inds_w, y_inds_w))) / (w_size * (w_size - 1)) - 2 * tf.reduce_sum(k_xy_col_sums[w:w + w_size])) for k_xx_sum, y_inds_w, k_xy_col_sums in zip(k_xx_sums_all, y_inds_all_w, k_xy_col_sums_all) ] mmds = tf.stack(mmds, axis=0) # an mmd for each bootstrap sample # Now we discard all bootstrap samples for which mmd is in top (1/ert)% and record the thresholds thresholds.append(quantile(mmds, 1 - self.fpr)) y_inds_all = [y_inds_all[i] for i in range(len(y_inds_all)) if mmds[i] < thresholds[-1]] k_xx_sums_all = [ k_xx_sums_all[i] for i in range(len(k_xx_sums_all)) if mmds[i] < thresholds[-1] ] k_xy_col_sums_all = [ k_xy_col_sums_all[i] for i in range(len(k_xy_col_sums_all)) if mmds[i] < thresholds[-1] ] self.thresholds = thresholds def _update_state(self, x_t: np.ndarray): # type: ignore[override] """ Update online state based on the provided test instance. Parameters ---------- x_t The test instance. """ self.t += 1 kernel_col = self.kernel(tf.gather(self.x_ref, self.ref_inds), x_t) self.test_window = tf.concat([self.test_window[(1 - self.window_size):], x_t], axis=0) self.k_xy = tf.concat([self.k_xy[:, (1 - self.window_size):], kernel_col], axis=1) def score(self, x_t: Union[np.ndarray, Any]) -> float: """ Compute the test-statistic (squared MMD) between the reference window and test window. Parameters ---------- x_t A single instance to be added to the test-window. Returns ------- Squared MMD estimate between reference window and test window. """ x_t = super()._preprocess_xt(x_t) self._update_state(x_t) k_yy = self.kernel(self.test_window, self.test_window) mmd = ( self.k_xx_sub_sum + tf.reduce_sum(zero_diag(k_yy)) / (self.window_size * (self.window_size - 1)) - 2 * tf.reduce_mean(self.k_xy) ) return mmd.numpy()
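# --- Usage sketch (random data, purely illustrative) -------------------------
# Configure the online detector on reference samples, then score a stream one
# instance at a time. predict() lives on the shared online base class; the
# 'data'/'is_drift' keys read below follow the detectors' usual return layout
# and should be treated as an assumption of this sketch.
if __name__ == "__main__":
    x_ref = np.random.randn(500, 10).astype(np.float32)
    cd = MMDDriftOnlineTF(x_ref, ert=200, window_size=20,
                          n_bootstraps=2000, verbose=False)

    stream = np.random.randn(100, 10).astype(np.float32)
    for x_t in stream:
        pred = cd.predict(x_t)
        if pred['data']['is_drift']:
            print('Drift detected at t = {}'.format(cd.t))
            break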
PypiClean
/scikit_ued-2.1.14-py3-none-any.whl/skued/affine.py
import math import numpy as np # standard basis e1, e2, e3 = np.eye(3) def affine_map(array): """ Extends 3x3 transform matrices to 4x4, i.e. general affine transforms. Parameters ---------- array : ndarray, shape {(3,3), (4,4)} Transformation matrix. If shape = (4,4), returned intact. Returns ------- extended : ndarray, shape (4,4) Extended array Raises ------ ValueError : If the transformation matrix is neither 3x3 or 4x4 """ if array.shape == (4, 4): # Already the right shape return array elif array.shape == (3, 3): extended_matrix = np.zeros(shape=(4, 4), dtype=array.dtype) extended_matrix[-1, -1] = 1 extended_matrix[:3, :3] = array return extended_matrix else: raise ValueError( "Array shape not 3x3 or 4x4, and thus is not a transformation matrix." ) def transform(matrix, array): """ Applies a matrix transform on an array. Parameters ---------- matrix : ndarray, shape {(3,3), (4,4)} Transformation matrix. array : ndarray, shape {(3,), (3,3), (4,4)} Array to be transformed. Either a 1x3 vector, or a transformation matrix in 3x3 or 4x4 shape. Returns ------- transformed : ndarray Transformed array, either a 1D vector or a 4x4 transformation matrix Raises ------ ValueError : If the transformation matrix is neither 3x3 or 4x4 """ if matrix.shape not in [(3, 3), (4, 4)]: raise ValueError( f"Input matrix is neither a 3x3 or 4x4 matrix, but \ rather of shape {matrix.shape}." ) matrix = affine_map(matrix) # Case of a vector (e.g. position vector): if array.ndim == 1: extended_vector = np.array([0, 0, 0, 1], dtype=array.dtype) extended_vector[:3] = array return np.dot(matrix, extended_vector)[:3] else: array = affine_map(array) return np.dot(matrix, array) def translation_matrix(direction): """ Return matrix to translate by direction vector. Parameters ---------- direction : array_like, shape (3,) Returns ------- translation : `~numpy.ndarray`, shape (4,4) 4x4 translation matrix. """ matrix = np.eye(4) matrix[:3, 3] = np.asarray(direction)[:3] return matrix def change_of_basis(basis1, basis2=(e1, e2, e3)): """ Returns the matrix that goes from one basis to the other. Parameters ---------- basis1 : list of array_like, shape (3,) First basis basis2 : list of array_like, shape (3,), optional Second basis. By default, this is the standard basis Returns ------- cob : `~numpy.ndarray`, shape (3,3) Change-of-basis matrix that, applied to `basis`, will return `basis2`. """ # Calculate the transform that goes from basis 1 to standard basis basis1 = [np.asarray(vector).reshape(3, 1) for vector in basis1] basis1_to_standard = np.hstack(tuple(basis1)) # Calculate the transform that goes from standard basis to basis2 basis2 = [np.asarray(vector).reshape(3, 1) for vector in basis2] standard_to_basis2 = np.linalg.inv(np.hstack(tuple(basis2))) return np.dot(standard_to_basis2, basis1_to_standard) def is_basis(basis): """ Returns true if the set of vectors forms a basis. This is done by checking whether basis vectors are independent via an eigenvalue calculation. Parameters ---------- basis : list of array-like, shape (3,) Returns ------- out : bool Whether or not the basis is valid. """ return 0 not in np.linalg.eigvals(np.asarray(basis)) def is_rotation_matrix(matrix): """ Checks whether a matrix is orthogonal with unit determinant (1 or -1), properties of rotation matrices. Parameters ---------- matrix : ndarray, shape {(3,3), (4,4)} Rotation matrix candidate. 
If (4,4) matrix is provided, only the top-left block matrix of (3,) is checked Returns ------- result : bool If True, input could be a rotation matrix. """ # TODO: is this necessary? should a composite transformation # of translation and rotation return True? # if matrix.shape == (4,4): # matrix = matrix[:3,:3] is_orthogonal = np.allclose(np.linalg.inv(matrix), np.transpose(matrix)) unit_determinant = np.allclose(abs(np.linalg.det(matrix)), 1) return is_orthogonal and unit_determinant def rotation_matrix(angle, axis=(0, 0, 1)): """ Return matrix to rotate about axis defined by direction around the origin [0,0,0]. To combine rotation and translations, see http://www.euclideanspace.com/maths/geometry/affine/matrix4x4/index.htm Parameters ---------- angle : float Rotation angle [rad] axis : array-like of length 3 Axis about which to rotate Returns ------- matrix : `~numpy.ndarray`, shape (3,3) Rotation matrix. See also -------- translation_rotation_matrix """ sina, cosa = math.sin(angle), math.cos(angle) # Make sure direction is a numpy vector of unit length direction = np.asarray(axis) direction = direction / np.linalg.norm(direction) # rotation matrix around unit vector R = np.diag([cosa, cosa, cosa]) R += np.outer(direction, direction) * (1.0 - cosa) direction *= sina R += np.array( [ [0.0, -direction[2], direction[1]], [direction[2], 0.0, -direction[0]], [-direction[1], direction[0], 0.0], ] ) return R def translation_rotation_matrix(angle, axis, translation): """ Returns a 4x4 matrix that includes a rotation and a translation. Parameters ---------- angle : float Rotation angle [rad] axis : array-like of length 3 Axis about which to rotate translation : array_like, shape (3,) Translation vector Returns ------- matrix : `~numpy.ndarray`, shape (4,4) Affine transform matrix. """ rmat = affine_map(rotation_matrix(angle=angle, axis=axis)) rmat[:3, 3] = np.asarray(translation) return rmat def change_basis_mesh(xx, yy, zz, basis1, basis2): """ Changes the basis of meshgrid arrays. Parameters ---------- xx, yy, zz : ndarrays Arrays of equal shape, such as produced by numpy.meshgrid. basis1 : list of ndarrays, shape(3,) Basis of the mesh basis2 : list of ndarrays, shape(3,) Basis in which to express the mesh Returns ------- XX, YY, ZZ : `~numpy.ndarray` """ # Build coordinate array row-wise changed = np.empty(shape=(3, xx.size), dtype=float) linearized = np.empty(shape=(3, xx.size), dtype=float) linearized[0, :] = xx.ravel() linearized[1, :] = yy.ravel() linearized[2, :] = zz.ravel() # Change the basis at each row COB = change_of_basis(basis1, basis2) np.dot(COB, linearized, out=changed) return ( changed[0, :].reshape(xx.shape), changed[1, :].reshape(yy.shape), changed[2, :].reshape(zz.shape), ) def minimum_image_distance(xx, yy, zz, lattice): """ Returns a periodic array according to the minimum image convention. Parameters ---------- xx, yy, zz : ndarrays Arrays of equal shape, such as produced by numpy.meshgrid. 
lattice : list of ndarrays, shape(3,) Basis of the mesh Returns ------- r : `~numpy.ndarray` Minimum image distance over the lattice """ COB = change_of_basis(np.eye(3), lattice) linearized = np.empty(shape=(3, xx.size), dtype=float) # In the standard basis ulinearized = np.empty_like(linearized) # In the unitcell basis linearized[0, :] = xx.ravel() linearized[1, :] = yy.ravel() linearized[2, :] = zz.ravel() # Go to unitcell basis, where the cell is cubic of side length 1 np.dot(COB, linearized, out=ulinearized) ulinearized -= np.rint(ulinearized) np.dot(np.linalg.inv(COB), ulinearized, out=linearized) return np.reshape(np.linalg.norm(linearized, axis=0), xx.shape)
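# Minimal illustrative usage sketch of the helpers above; all values are arbitrary
# examples rather than library defaults.
if __name__ == "__main__":
    # A 90-degree rotation about z maps the x-axis onto the y-axis.
    rot = rotation_matrix(np.pi / 2, axis=(0, 0, 1))
    assert is_rotation_matrix(rot)
    assert np.allclose(transform(rot, np.array([1.0, 0.0, 0.0])), [0.0, 1.0, 0.0])

    # Rotation and translation combined into a single 4x4 affine transform.
    affine = translation_rotation_matrix(np.pi / 2, axis=(0, 0, 1), translation=(1, 2, 3))
    assert affine.shape == (4, 4)

    # Change of basis from a uniformly scaled basis back to the standard basis.
    cob = change_of_basis([2 * e1, 2 * e2, 2 * e3])
    assert np.allclose(cob, 2 * np.eye(3))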
PypiClean
/django-tinymce4-3.0.2.tar.gz/django-tinymce4-3.0.2/tinymce/static/tinymce/langs/gd.js
tinymce.addI18n('gd',{ "Cut": "Gearr \u00e0s", "Heading 5": "Ceann-sgr\u00ecobhadh 5", "Header 2": "Bann-cinn 2", "Your browser doesn't support direct access to the clipboard. Please use the Ctrl+X\/C\/V keyboard shortcuts instead.": "Chan eil am brabhsair agad a' cur taic ri inntrigeadh d\u00ecreach dhan st\u00f2r-bh\u00f2rd. Cleachd ath-ghoiridean a' mheur-chl\u00e0ir, Ctrl+X\/V\/V 'nan \u00e0ite.", "Heading 4": "Ceann-sgr\u00ecobhadh 4", "Div": "Div", "Heading 2": "Ceann-sgr\u00ecobhadh 2", "Paste": "Cuir ann", "Close": "D\u00f9in", "Font Family": "Teaghlach a' chrutha-chl\u00f2", "Pre": "Pre", "Align right": "Co-thaobhaich ris an l\u00e0imh dheas", "New document": "Sgr\u00ecobhainn \u00f9r", "Blockquote": "Bloc-luaidh", "Numbered list": "Liosta \u00e0ireamhaichte", "Heading 1": "Ceann-sgr\u00ecobhadh 1", "Headings": "Ceann-sgr\u00ecobhaidhean", "Increase indent": "Meudaich an eag", "Formats": "F\u00f2rmatan", "Headers": "Bannan-cinn", "Select all": "Tagh na h-uile", "Header 3": "Bann-cinn 3", "Blocks": "Blocaichean", "Undo": "Neo-dh\u00e8an", "Strikethrough": "Loidhne troimhe", "Bullet list": "Liosta pheilearaichte", "Header 1": "Bann-cinn 1", "Superscript": "Os-sgr\u00ecobhte", "Clear formatting": "Falamhaich am f\u00f2rmatadh", "Font Sizes": "Meudan nan cruthan-chl\u00f2", "Subscript": "Bun-sgr\u00ecobhte", "Header 6": "Bann-cinn 6", "Redo": "Ath-dh\u00e8an", "Paragraph": "Paragraf", "Ok": "Ceart ma-th\u00e0", "Bold": "Trom", "Code": "C\u00f2d", "Italic": "Eadailteach", "Align center": "Co-thaobhaich ris a' mheadhan", "Header 5": "Bann-cinn 5", "Heading 6": "Ceann-sgr\u00ecobhadh 6", "Heading 3": "Ceann-sgr\u00ecobhadh 3", "Decrease indent": "Lughdaich an eag", "Header 4": "Bann-cinn 4", "Paste is now in plain text mode. Contents will now be pasted as plain text until you toggle this option off.": "Ma chuireas tu rud ann a-nis, th\u00e8id an t-susbaint a chur ann mar theacsa lom gus an cuir thu dheth an roghainn seo a-rithist.", "Underline": "Fo-loidhne", "Cancel": "Sguir dheth", "Justify": "Blocaich", "Inline": "Taobh a-staigh na loidhne", "Copy": "D\u00e8an lethbhreac", "Align left": "Co-thaobhaich ris an l\u00e0imh chl\u00ec", "Visual aids": "Taic l\u00e8irsinne", "Lower Greek": "Litrichean Greugach beaga", "Square": "Ce\u00e0rnag", "Default": "Bun-roghainn", "Lower Alpha": "Aibidileach is beag", "Circle": "Cearcall", "Disc": "Diosga", "Upper Alpha": "Aibidileach is m\u00f2r", "Upper Roman": "\u00c0ireamhan R\u00f2manach is m\u00f2r", "Lower Roman": "\u00c0ireamhan R\u00f2manach is beag", "Name": "Ainm", "Anchor": "Acair", "You have unsaved changes are you sure you want to navigate away?": "Tha atharraichean gun s\u00e0bhaladh agad, a bheil thu cinnteach gu bheil thu airson gluasad air falbh?", "Restore last draft": "Aisig an dreach mu dheireadh", "Special character": "Caractar s\u00f2nraichte", "Source code": "An c\u00f2d t\u00f9sail", "Color": "Dath", "Right to left": "Deas gu cl\u00ec", "Left to right": "Cl\u00ec gu deas", "Emoticons": "Samhlaidhean-gn\u00f9ise", "Robots": "Robotairean", "Document properties": "Roghainnean na sgr\u00ecobhainne", "Title": "Tiotal", "Keywords": "Faclan-luirg", "Encoding": "C\u00f2dachadh", "Description": "Tuairisgeul", "Author": "\u00d9ghdar", "Fullscreen": "L\u00e0n-sgr\u00ecn", "Horizontal line": "Loidhne ch\u00f2mhnard", "Horizontal space": "\u00c0ite c\u00f2mhnard", "Insert\/edit image": "Cuir a-steach\/Deasaich an dealbh", "General": "Coitcheann", "Advanced": "Adhartach", "Source": "T\u00f9s", "Border": "Iomall", "Constrain proportions": 
"Cuingich na co-r\u00e8irean", "Vertical space": "\u00c0ite inghearach", "Image description": "Tuairisgeul an deilbh", "Style": "Stoidhle", "Dimensions": "Meudachd", "Insert image": "Cuir a-steach dealbh", "Insert date\/time": "Cuir a-steach ceann-l\u00e0\/\u00e0m", "Remove link": "Thoir air falbh an ceangal", "Url": "URL", "Text to display": "An teacsa a th\u00e8id a shealltainn", "Anchors": "Acraichean", "Insert link": "Cuir a-steach ceangal", "New window": "Uinneag \u00f9r", "None": "Chan eil gin", "The URL you entered seems to be an external link. Do you want to add the required http:\/\/ prefix?": "Tha coltas gu bheil an URL a chuir thu a-steach 'na cheangal ris an taobh a-muigh. A bheil thu airson an ro-leasachan http:\/\/ a chur ris? Tha feum air.", "Target": "Targaid", "The URL you entered seems to be an email address. Do you want to add the required mailto: prefix?": "Tha coltas gu bheil an URL a chuir thu a-steach 'na she\u00f2ladh puist-d. A bheil thu airson an ro-leasachan mailto: a chur ris? Tha feum air.", "Insert\/edit link": "Cuir a-steach\/Deasaich an ceangal", "Insert\/edit video": "Cuir a-steach\/Deasaich a' video", "Poster": "P\u00f2stair", "Alternative source": "Roghainn eile de th\u00f9s", "Paste your embed code below:": "Cuir an c\u00f2d leabachaidh agad a-steach gu h-\u00ecosal:", "Insert video": "Cuir a-steach video", "Embed": "Leabaich", "Nonbreaking space": "Be\u00e0rn neo-bhristidh", "Page break": "Briseadh-duilleige", "Paste as text": "Cuir ann mar theacsa", "Preview": "Ro-shealladh", "Print": "Cl\u00f2-bhuail", "Save": "S\u00e0bhail", "Could not find the specified string.": "Cha b' urrainn dhuinn na dh'iarr thu a lorg.", "Replace": "Cuir 'na \u00e0ite", "Next": "Air adhart", "Whole words": "Faclan sl\u00e0na", "Find and replace": "Lorg is cuir 'na \u00e0ite", "Replace with": "Cuir na leanas 'na \u00e0ite", "Find": "Lorg", "Replace all": "Cuir an \u00e0ite na h-uile", "Match case": "Maids litrichean m\u00f2ra 's beaga", "Prev": "Air ais", "Spellcheck": "Dearbhaich an litreachadh", "Finish": "Cr\u00ecochnaich", "Ignore all": "Leig seachad na h-uile", "Ignore": "Leig seachad", "Add to Dictionary": "Cuir ris an fhaclair", "Insert row before": "Cuir a-steach r\u00e0gh roimhe", "Rows": "R\u00e0ghan", "Height": "\u00c0irde", "Paste row after": "Cuir ann r\u00e0gh 'na dh\u00e8idh", "Alignment": "Co-thaobhadh", "Border color": "Dath an iomaill", "Column group": "Buidheann cholbhan", "Row": "R\u00e0gh", "Insert column before": "Cuir a-steach colbh roimhe", "Split cell": "Sgoilt an cealla", "Cell padding": "Padadh nan ceallan", "Cell spacing": "Be\u00e0rnadh nan ceallan", "Row type": "Se\u00f2rsa an r\u00e0igh", "Insert table": "Cuir a-steach cl\u00e0r", "Body": "Bodhaig", "Caption": "Caipsean", "Footer": "Bann-coise", "Delete row": "Sguab \u00e0s an r\u00e0gh", "Paste row before": "Cuir ann r\u00e0gh roimhe", "Scope": "Sg\u00f2p", "Delete table": "Sguab \u00e0s an cl\u00e0r", "H Align": "Co-thaobhadh c\u00f2mhnard", "Top": "Barr", "Header cell": "Cealla a' bhanna-chinn", "Column": "Colbh", "Row group": "Buidheann r\u00e0ghan", "Cell": "Cealla", "Middle": "Meadhan", "Cell type": "Se\u00f2rsa a' chealla", "Copy row": "D\u00e8an lethbhreac dhen r\u00e0gh", "Row properties": "Roghainnean an r\u00e0igh", "Table properties": "Roghainnean a' chl\u00e0ir", "Bottom": "Bonn", "V Align": "Co-thaobhadh inghearach", "Header": "Bann-cinn", "Right": "Deas", "Insert column after": "Cuir a-steach colbh 'na dh\u00e8idh", "Cols": "Colbhan", "Insert row after": "Cuir a-steach r\u00e0gh 'na 
dh\u00e8idh", "Width": "Leud", "Cell properties": "Roghainnean a' chealla", "Left": "Cl\u00ec", "Cut row": "Gearr \u00e0s an r\u00e0gh", "Delete column": "Sguab \u00e0s an colbh", "Center": "Meadhan", "Merge cells": "Co-aonaich na ceallan", "Insert template": "Cuir a-steach teamplaid", "Templates": "Teamplaidean", "Background color": "Dath a\u2019 ch\u00f9laibh", "Custom...": "Gn\u00e0thaichte...", "Custom color": "Dath gn\u00e0thaichte", "No color": "Gun dath", "Text color": "Dath an teacsa", "Show blocks": "Seall na blocaichean", "Show invisible characters": "Seall na caractaran do-fhaicsinneach", "Words: {0}": "Faclan: {0}", "Insert": "Cuir a-steach", "File": "Faidhle", "Edit": "Deasaich", "Rich Text Area. Press ALT-F9 for menu. Press ALT-F10 for toolbar. Press ALT-0 for help": "Raon Rich Text. Br\u00f9th ALT-F9 airson a' chl\u00e0ir-thaice. Br\u00f9th ALT-F10 airson a' bh\u00e0r-inneal. Br\u00f9th ALT-0 airson na cobharach.", "Tools": "Innealan", "View": "Sealladh", "Table": "Cl\u00e0r", "Format": "F\u00f2rmat" });
PypiClean
/zapp-0.0.2.tar.gz/zapp-0.0.2/README.rst
.. Introduction ============ Build ``zipapp`` single file Python applications easily. Usage ===== Standalone application ---------------------- .. code:: zapp ~/bin/myapp myapp.cli:main 'myapp==1.2.3' 'mylib==3.2.1' python3 -m zapp ~/bin/myapp myapp.cli:main 'myapp==1.2.3' 'mylib==3.2.1' zapp toolmaker.pyz toolmaker.cli:main toolmaker zapp pipdeptree.pyz pipdeptree:main pipdeptree zapp ~/bin/httpie httpie.__main__:main httpie # Without requirements zapp zipfile.pyz zipfile:main Library ------- .. code:: import zapp zapp.core.build_zapp( [ 'myapp==1.2.3', 'mylib==3.2.1', ], 'myapp.cli:main', 'myapp.pyz', ) Setuptools command ------------------ .. code:: python3 setup.py bdist_zapp --entry-point myapp.cli:main Details ======= Similar applications -------------------- * Shiv https://shiv.readthedocs.io * Pex https://pex.readthedocs.io Hacking ======= This project makes extensive use of `tox`_, `pytest`_, and `GNU Make`_. Development environment ----------------------- Use following command to create a Python virtual environment with all necessary dependencies:: tox --recreate -e develop This creates a Python virtual environment in the ``.tox/develop`` directory. It can be activated with the following command:: . .tox/develop/bin/activate Run test suite -------------- In a Python virtual environment run the following command:: make review Outside of a Python virtual environment run the following command:: tox --recreate Build and package ----------------- In a Python virtual environment run the following command:: make package Outside of a Python virtual environment run the following command:: tox --recreate -e package .. Links .. _`GNU Make`: https://www.gnu.org/software/make/ .. _`pytest`: https://pytest.org/ .. _`tox`: https://tox.readthedocs.io/ .. EOF
PypiClean
/pyfalco-3.0.2.tar.gz/pyfalco-3.0.2/falco/thinfilm.py
import numpy as np from os.path import isfile import os import numpy as np import falco from falco.check import real_scalar, real_positive_scalar,\ real_nonnegative_scalar, scalar_integer,\ positive_scalar_integer, real_array,\ oneD_array, twoD_array, twoD_square_array def calc_complex_occulter(lam, aoi, t_Ti, t_Ni_vec, t_PMGI_vec, d0, pol, flagOPD=False, SUBSTRATE='FS'): """ Calculate the thin-film complex transmission and reflectance. Calculates the thin-film complex transmission and reflectance for the provided combinations of metal and dielectric thicknesses and list of wavelengths. Parameters ---------- lam : float Wavelength in meters. aoi : flaot Angle of incidence in degrees. t_Ti : float Titanium thickness in meters. Titanium goes only between fused silica and nickel layers. t_Ni_vec : array_like 1-D array of nickel thicknesses in meters. Nickel goes between titanium and PMGI layers. t_PMGI_vec : array_like 1-D array of PMGI thicknesses in meters. d0 : float Reference height for all phase offsets. Must be larger than the stack of materials, not including the substrate. Units of meters. pol : {0, 1, 2} Polarization state to compute values for. 0 for TE(s) polarization, 1 for TM(p) polarization, 2 for mean of s and p polarizations flagOPD : bool, optional Flag to use the OPD convention. The default is False. SUBSTRATE : str, optional Material to use as the substrate. The default is 'FS'. Returns ------- tCoef : numpy ndarray 2-D array of complex transmission amplitude values. rCoef : numpy ndarray 2-D array of complex reflection amplitude values. """ real_positive_scalar(lam, 'lam', TypeError) real_nonnegative_scalar(aoi, 'theta', TypeError) real_nonnegative_scalar(t_Ti, 't_Ti', TypeError) oneD_array(t_Ni_vec, 't_Ni_vec', ValueError) oneD_array(t_PMGI_vec, 't_PMGI_vec', ValueError) # if len(t_Ti) != len(t_Ni_vec) or len(t_Ni_vec) != len(t_PMGI_vec): # raise ValueError('Ti, Ni, and PMGI thickness vectors must all ' + # 'have same length.') scalar_integer(pol, 'pol', TypeError) lam_nm = lam * 1.0e9 # m --> nm lam_um = lam * 1.0e6 # m --> microns lam_um2 = lam_um * lam_um theta = aoi * (np.pi/180.) 
# deg --> rad # Define Material Properties # --------------------------------------------- # Substrate properties if SUBSTRATE.upper() in ('FS', 'FUSEDSILICA'): A1 = 0.68374049400 A2 = 0.42032361300 A3 = 0.58502748000 B1 = 0.00460352869 B2 = 0.01339688560 B3 = 64.49327320000 n_substrate = np.sqrt(1 + A1*lam_um2/(lam_um2 - B1) + A2*lam_um2/(lam_um2 - B2) + A3*lam_um2/(lam_um2 - B3)) elif SUBSTRATE.upper() in ('N-BK7', 'NBK7', 'BK7'): B1 = 1.03961212 B2 = 0.231792344 B3 = 1.01046945 C1 = 0.00600069867 C2 = 0.0200179144 C3 = 103.560653 n_substrate = np.sqrt(1 + (B1*lam_um2/(lam_um2 - C1)) + (B2*lam_um2/(lam_um2 - C2)) + (B3*lam_um2/(lam_um2 - C3))) # Dielectric properties npmgi = 1.524 + 5.176e-03/lam_um**2 + 2.105e-4/lam_um**4 Ndiel = len(t_PMGI_vec) # Metal layer properties # Titanium base layer under the nickel Nmetal = len(t_Ni_vec) t_Ti_vec = t_Ti * np.ones(Nmetal) t_Ti_vec[np.asarray(t_Ni_vec) < 1e-10] = 0 # no Ti where no Ni # from D Moody titanium = np.array([ [397, 2.08, 2.95], [413, 2.14, 2.98], [431, 2.21, 3.01], [451, 2.27, 3.04], [471, 2.3, 3.1], [496, 2.36, 3.19], [521, 2.44, 3.2], [549, 2.54, 3.43], [582, 2.6, 3.58], [617, 2.67, 3.74], [659, 2.76, 3.84], [704, 2.86, 3.96], [756, 3.00, 4.01], [821, 3.21, 4.01], [892, 3.29, 3.96], [984, 3.35, 3.97], [1088, 3.5, 4.02], [1216, 3.62, 4.15] ]) lam_ti = titanium[:, 0] # nm n_ti = titanium[:, 1] k_ti = titanium[:, 2] nti = np.interp(lam_nm, lam_ti, n_ti) kti = np.interp(lam_nm, lam_ti, k_ti) # Nickel localpath = os.path.dirname(os.path.abspath(__file__)) fnNickel = os.path.join(localpath, 'data', 'nickel_data_from_Palik_via_Bala_wvlNM_n_k.txt') vnickel = np.loadtxt(fnNickel, delimiter="\t", unpack=False, comments="#") lam_nickel = vnickel[:, 0] # nm n_nickel = vnickel[:, 1] k_nickel = vnickel[:, 2] nnickel = np.interp(lam_nm, lam_nickel, n_nickel) knickel = np.interp(lam_nm, lam_nickel, k_nickel) # Compute the complex transmission # tCoef = np.zeros((Nmetal, ), dtype=complex) # initialize # rCoef = np.zeros((Nmetal, ), dtype=complex) # initialize # for ii in range(Nmetal): # dni = t_Ni_vec[ii] # dti = t_Ti_vec[ii] # dpm = t_PMGI_vec[ii] # nvec = np.array([1, 1, npmgi, nnickel-1j*knickel, nti-1j*kti, # n_substrate], dtype=complex) # dvec = np.array([d0-dpm-dni-dti, dpm, dni, dti]) # # Choose polarization # if(pol == 2): # Mean of the two # [dummy1, dummy2, rr0, tt0] = solver(nvec, dvec, theta, # lam, False) # [dummy1, dummy2, rr1, tt1] = solver(nvec, dvec, theta, # lam, True) # rr = (rr0+rr1)/2. # tt = (tt0+tt1)/2. 
# elif(pol == 0 or pol == 1): # [dumm1, dummy2, rr, tt] = solver(nvec, dvec, theta, lam, # bool(pol)) # else: # raise ValueError('Wrong input value for polarization.') # # Choose phase convention # if not flagOPD: # tCoef[ii] = np.conj(tt) # Complex field transmission coef # rCoef[ii] = np.conj(rr) # Complex field reflection coef # else: # OPD phase convention # tCoef[ii] = tt # Complex field transmission coeffient # rCoef[ii] = rr # Complex field reflection coeffient # Compute the complex transmission tCoef = np.zeros((Ndiel, Nmetal), dtype=complex) # initialize rCoef = np.zeros((Ndiel, Nmetal), dtype=complex) # initialize for jj in range(Ndiel): dpm = t_PMGI_vec[jj] for ii in range(Nmetal): dni = t_Ni_vec[ii] dti = t_Ti_vec[ii] nvec = np.array([1, 1, npmgi, nnickel-1j*knickel, nti-1j*kti, n_substrate], dtype=complex) dvec = np.array([d0-dpm-dni-dti, dpm, dni, dti]) # Choose polarization if(pol == 2): # Mean of the two [dummy1, dummy2, rr0, tt0] = solver(nvec, dvec, theta, lam, False) [dummy1, dummy2, rr1, tt1] = solver(nvec, dvec, theta, lam, True) rr = (rr0+rr1)/2. tt = (tt0+tt1)/2. elif(pol == 0 or pol == 1): [dumm1, dummy2, rr, tt] = solver(nvec, dvec, theta, lam, bool(pol)) else: raise ValueError('Wrong input value for polarization.') # Choose phase convention if not flagOPD: tCoef[jj, ii] = np.conj(tt) # Complex field transmission coef rCoef[jj, ii] = np.conj(rr) # Complex field reflection coef else: # OPD phase convention tCoef[jj, ii] = tt # Complex field transmission coeffient rCoef[jj, ii] = rr # Complex field reflection coeffient return tCoef, rCoef def solver(n, d0, theta, lam, tetm=False): """ Solve the thin film equations for the given materials. Parameters ---------- n : array_like index of refraction for each layer. n(1) = index of incident medium n(N) = index of transmission medium then length(n) must be >= 2 d0 : array_like thickness of each layer, not counting incident medium or transmission medium. length(d) = length(n)-2. theta : float angle of incidence [radians]. lam : float wavelength. units of lam must be same as d0. tetm : bool, optional False => TE, True => TM. The default is False. Returns ------- R : numpy ndarray normalized reflected intensity coefficient T : numpy ndarray normalized transmitted intensity coefficient rr : numpy ndarray complex field reflection coefficient tt : numpy ndarray complex field transmission coefficient """ oneD_array(n, 'n', ValueError) oneD_array(d0, 'd0', ValueError) N = len(n) if not (len(d0) == N-2): raise ValueError('n and d size mismatch') pass real_nonnegative_scalar(theta, 'theta', TypeError) real_positive_scalar(lam, 'lam', TypeError) if not type(tetm) == bool: raise TypeError('tetm must be a boolean.') # np.hstac d = np.hstack((0, d0.reshape(len(d0, )), 0)) kx = 2*np.pi*n[0]*np.sin(theta)/lam # sign agrees with measurement convention: kz = -np.sqrt((2*np.pi*n/lam)**2 - kx*kx) if tetm: kzz = kz/(n*n) else: kzz = kz eep = np.exp(-1j*kz*d) eem = np.exp(1j*kz*d) i1 = np.arange(N-1) i2 = np.arange(1, N) tin = 0.5*(kzz[i1] + kzz[i2])/kzz[i1] ri = (kzz[i1] - kzz[i2])/(kzz[i1] + kzz[i2]) A = np.eye(2, dtype=complex) for i in range(N-1): A = A @ np.array(tin[i]*np.array([[eep[i], ri[i]*eep[i]], [ri[i]*eem[i], eem[i]]])) rr = A[1, 0] / A[0, 0] tt = 1 / A[0, 0] # transmitted power flux (Poynting vector . 
surface) depends on index of # the substrate and angle R = np.abs(rr)**2 if tetm: Pn = np.real((kz[-1]/(n[-1]**2)) / (kz[0]/(n[0]**2))) else: Pn = np.real((kz[-1]/kz[0])) pass T = Pn*np.abs(tt)**2 tt = np.sqrt(Pn)*tt return [R, T, rr, tt] def gen_complex_trans_table(mp, flagRefl=False, SUBSTRATE='FS'): """ Calculate 3-D look-up table for thin film transmission data. Calculate thin-film complex transmission data cube. The three dimensions are for metal thickness, dielectric thickness, and wavelength. Parameters ---------- mp : ModelParameters Model parameters object. flagRefl : TYPE, optional Compute the thin film properties in reflection. The default is False. SUBSTRATE : TYPE, optional Change the substrate material. The default is 'FS'. The only other option is 'BK7'. Returns ------- complexTransCompact : numpy ndarray Complex transmission datacube for FALCO's compact model. complexTransFull : numpy ndarray Complex transmission datacube for FALCO's full model. """ localpath = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) mp.F3.metal = 'Ni' mp.F3.diel = 'PMGI' fn_compact = ('ct_cube_%s_Ti%.1fnm_%s_%.1fto%.1fby%.2f_%s_%.1fto%.1fby%.2f_wvl%dnm_BW%.1fN%d_%.1fdeg_compact.npy' % (SUBSTRATE, mp.t_Ti_nm, mp.F3.metal, np.min(mp.t_metal_nm_vec), np.max(mp.t_metal_nm_vec), mp.dt_metal_nm, mp.F3.diel, np.min(mp.t_diel_nm_vec), np.max(mp.t_diel_nm_vec), mp.dt_diel_nm, 1e9*mp.lambda0, 100*mp.fracBW, mp.Nsbp, mp.aoi)) fn_cube_compact = os.path.join(localpath, 'data', 'material', fn_compact) fn_full = ('ct_cube_%s_Ti%.1fnm_%s_%.1fto%.1fby%.2f_%s_%.1fto%.1fby%.2f_wvl%dnm_BW%.1f_%dN%d_%.1fdeg_full.npy' % (SUBSTRATE, mp.t_Ti_nm, mp.F3.metal, np.min(mp.t_metal_nm_vec), np.max(mp.t_metal_nm_vec), mp.dt_metal_nm, mp.F3.diel, np.min(mp.t_diel_nm_vec), np.max(mp.t_diel_nm_vec), mp.dt_diel_nm, (1e9*mp.lambda0), 100*mp.fracBW, mp.Nsbp, mp.Nwpsbp, mp.aoi)) fn_cube_full = os.path.join(localpath, 'data', 'material', fn_full) if(flagRefl): fn_cube_compact = fn_cube_compact[0:-4] + '_refl.mat' fn_cube_full = fn_cube_full[0:-4] + '_refl.mat' t_Ti_m = 1e-9*mp.t_Ti_nm # Static base layer of titanium beneath nickel. aoi = mp.aoi Nsbp = mp.Nsbp t_diel_m_vec = 1e-9*mp.t_diel_nm_vec # PMGI thickness range t_metal_m_vec = 1e-9*mp.t_metal_nm_vec # nickel thickness range Nmetal = len(mp.t_metal_nm_vec) Ndiel = len(mp.t_diel_nm_vec) # Compact Model: Load the data if it has been generated before; otherwise generate it. if(isfile(fn_cube_compact)): complexTransCompact = np.load(fn_cube_compact) print('Loaded complex transmission datacube for compact model: %s' % fn_cube_compact) else: print('Computing thin film equations for compact model:') complexTransCompact = np.zeros((Ndiel, Nmetal, mp.Nsbp), dtype=complex) sbp_centers = mp.sbp_centers # Parallel/distributed computing # To be completed later # Regular (serial) computing for si in range(Nsbp): lam = sbp_centers[si] d0 = lam * mp.F3.d0fac # Max thickness of PMGI + Ni [tCoef, rCoef] = calc_complex_occulter(lam, aoi, t_Ti_m, t_metal_m_vec, t_diel_m_vec, d0, 2) if(flagRefl): complexTransCompact[:, :, si] = rCoef else: complexTransCompact[:, :, si] = tCoef pass print('\tDone computing wavelength %d of %d.\n' % (si, Nsbp)) # Save out for future use np.save(fn_cube_compact, complexTransCompact) print('Saved complex transmission datacube: %s' % fn_cube_compact) pass # Full Model: Load the data if it has been generated before; otherwise generate it. 
if isfile(fn_cube_full): complexTransFull = np.load(fn_cube_full) print('Loaded complex transmission datacube for full model: %s' % fn_cube_full) else: print('Computing thin film equations for full model:') if mp.Nwpsbp == 1: complexTransFull = complexTransCompact else: complexTransFull = np.zeros((Ndiel, Nmetal, mp.Nsbp*mp.Nwpsbp), dtype=complex) lambdas = mp.full.lambdas # Parallel/distributed computing # To be completed later # Regular (serial) computing for li in range(len(lambdas)): lam = lambdas[li] d0 = lam * mp.F3.d0fac # Max thickness of PMGI + Ni [tCoef, rCoef] = calc_complex_occulter(lam, aoi, t_Ti_m, t_metal_m_vec, t_diel_m_vec, d0, 2) if(flagRefl): complexTransFull[:, :, li] = rCoef else: complexTransFull[:, :, li] = tCoef pass print('\tDone computing wavelength %d of %d.\n' % (li, len(lambdas))) # Save out for future use np.save(fn_cube_full, complexTransFull) print('Saved complex transmission datacube: %s\n' % fn_cube_full) return complexTransCompact, complexTransFull
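# Minimal illustrative sketch of the low-level solver above for a single lossless coating
# between air and glass; the indices and thickness are arbitrary example values. For real
# (lossless) refractive indices the reflected and transmitted intensities should sum to ~1.
if __name__ == "__main__":
    wavelength = 550e-9  # meters
    n_stack = np.array([1.0, 1.38, 1.52])   # incident medium, coating, substrate
    film_thicknesses = np.array([100e-9])   # one layer, same length units as wavelength
    R, T, rr, tt = solver(n_stack, film_thicknesses, theta=0.0, lam=wavelength, tetm=False)
    assert abs((R + T) - 1.0) < 1e-6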
PypiClean
/dsin100daysv34-6.0.1.tar.gz/dsin100daysv34-6.0.1/notebook/static/components/codemirror/mode/cmake/cmake.js
(function(mod) { if (typeof exports == "object" && typeof module == "object") mod(require("../../lib/codemirror")); else if (typeof define == "function" && define.amd) define(["../../lib/codemirror"], mod); else mod(CodeMirror); })(function(CodeMirror) { "use strict"; CodeMirror.defineMode("cmake", function () { var variable_regex = /({)?[a-zA-Z0-9_]+(})?/; function tokenString(stream, state) { var current, prev, found_var = false; while (!stream.eol() && (current = stream.next()) != state.pending) { if (current === '$' && prev != '\\' && state.pending == '"') { found_var = true; break; } prev = current; } if (found_var) { stream.backUp(1); } if (current == state.pending) { state.continueString = false; } else { state.continueString = true; } return "string"; } function tokenize(stream, state) { var ch = stream.next(); // Have we found a variable? if (ch === '$') { if (stream.match(variable_regex)) { return 'variable-2'; } return 'variable'; } // Should we still be looking for the end of a string? if (state.continueString) { // If so, go through the loop again stream.backUp(1); return tokenString(stream, state); } // Do we just have a function on our hands? // In 'cmake_minimum_required (VERSION 2.8.8)', 'cmake_minimum_required' is matched if (stream.match(/(\s+)?\w+\(/) || stream.match(/(\s+)?\w+\ \(/)) { stream.backUp(1); return 'def'; } if (ch == "#") { stream.skipToEnd(); return "comment"; } // Have we found a string? if (ch == "'" || ch == '"') { // Store the type (single or double) state.pending = ch; // Perform the looping function to find the end return tokenString(stream, state); } if (ch == '(' || ch == ')') { return 'bracket'; } if (ch.match(/[0-9]/)) { return 'number'; } stream.eatWhile(/[\w-]/); return null; } return { startState: function () { var state = {}; state.inDefinition = false; state.inInclude = false; state.continueString = false; state.pending = false; return state; }, token: function (stream, state) { if (stream.eatSpace()) return null; return tokenize(stream, state); } }; }); CodeMirror.defineMIME("text/x-cmake", "cmake"); });
PypiClean
/cctbx_base-2020.8-0_py38h167b89d-cp38-cp38m-manylinux2010_x86_64.whl/rstbx/slip_viewer/uc_frame.py
from __future__ import absolute_import, division, print_function # -*- Mode: Python; c-basic-offset: 2; indent-tabs-mode: nil; tab-width: 8 -*- # # $Id import wx, math class UCSettingsFrame(wx.MiniFrame): def __init__(self, *args, **kwds): super(UCSettingsFrame, self).__init__(*args, **kwds) szr = wx.BoxSizer(wx.VERTICAL) self.phil_params = args[0].params panel = UCSettingsPanel(self) self.SetSizer(szr) szr.Add(panel, 1, wx.EXPAND) szr.Fit(panel) self.panel = panel self.sizer = szr self.Fit() self.Bind(wx.EVT_CLOSE, lambda evt : self.Destroy(), self) class UCSettingsPanel(wx.Panel): def __init__(self, *args, **kwds): super(UCSettingsPanel, self).__init__(*args, **kwds) self.phil_params = args[0].phil_params from wx.lib.agw.floatspin import EVT_FLOATSPIN, FloatSpin # Needed to draw and delete the rings. XXX Applies to # calibration_frame as well? self._pyslip = self.GetParent().GetParent().pyslip sizer = wx.BoxSizer(wx.VERTICAL) self.SetSizer(sizer) # Number of decimal digits for distances. self.digits = 2 # Wavelength control. beam = self._pyslip.tiles.raw_image.get_beam() self._wavelength = beam.get_wavelength() # Unit cell controls. if self.phil_params.calibrate_unitcell.unitcell is not None: self._cell = list(self.phil_params.calibrate_unitcell.unitcell.parameters()) else: self._cell = [4.18,4.72,58.38,89.44,89.63,75.85] if self.phil_params.calibrate_unitcell.spacegroup is not None: self._spacegroup = self.phil_params.calibrate_unitcell.spacegroup else: self._spacegroup = "P1" self._cell_control_names = ["uc_a_ctrl","uc_b_ctrl","uc_c_ctrl", "uc_alpha_ctrl","uc_beta_ctrl","uc_gamma_ctrl"] box = wx.BoxSizer(wx.HORIZONTAL) self.uc_a = FloatSpin( self, digits=self.digits, name=self._cell_control_names[0], value=self._cell[0]) box.Add(self.uc_a, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="a"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_a) self.uc_alpha = FloatSpin( self, digits=self.digits, name=self._cell_control_names[3], value=self._cell[3]) box.Add(self.uc_alpha, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="alpha"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_alpha) sizer.Add(box) box = wx.BoxSizer(wx.HORIZONTAL) self.uc_b = FloatSpin( self, digits=self.digits, name=self._cell_control_names[1], value=self._cell[1]) box.Add(self.uc_b, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="b"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_b) self.uc_beta = FloatSpin( self, digits=self.digits, name=self._cell_control_names[4], value=self._cell[4]) box.Add(self.uc_beta, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="beta"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_beta) sizer.Add(box) box = wx.BoxSizer(wx.HORIZONTAL) self.uc_c = FloatSpin( self, digits=self.digits, name=self._cell_control_names[2], value=self._cell[2]) box.Add(self.uc_c, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="c"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_c) self.uc_gamma = FloatSpin( self, digits=self.digits, name=self._cell_control_names[5], value=self._cell[5]) box.Add(self.uc_gamma, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | 
wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="gamma"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCell, self.uc_gamma) sizer.Add(box) # Space group control box = wx.BoxSizer(wx.HORIZONTAL) self.space_group_ctrl = wx.TextCtrl( self, name="space group", value=self._spacegroup) box.Add(self.space_group_ctrl, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="Space group"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(wx.EVT_TEXT, self.OnSpaceGroup, self.space_group_ctrl) sizer.Add(box) # Distance control img = self.GetParent().GetParent()._img box = wx.BoxSizer(wx.HORIZONTAL) self.distance_ctrl = FloatSpin( self, digits=self.digits, name="Detector Distance", value=img.get_detector_distance()) self.distance_ctrl.SetIncrement(0.5) box.Add(self.distance_ctrl, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) txtd = wx.StaticText(self, label="Detector Distance") box.Add(txtd, 0, wx.ALL|wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpin, self.distance_ctrl) sizer.Add(box) # Wavelength control img = self.GetParent().GetParent()._img box = wx.BoxSizer(wx.HORIZONTAL) self.wavelength_ctrl = FloatSpin( self, digits=4, name="Wavelength", value=img.get_wavelength()) self.wavelength_ctrl.SetIncrement(0.05) box.Add(self.wavelength_ctrl, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) txtw = wx.StaticText(self, label="Wavelength") box.Add(txtw, 0, wx.ALL|wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpin, self.wavelength_ctrl) sizer.Add(box) # d_min control if self.phil_params.calibrate_unitcell.d_min is not None: self.d_min = self.phil_params.calibrate_unitcell.d_min else: self.d_min = 3.5 box = wx.BoxSizer(wx.HORIZONTAL) self.d_min_ctrl = FloatSpin( self, digits=self.digits, name="d_min", value=self.d_min) box.Add(self.d_min_ctrl, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) txtd = wx.StaticText(self, label="Highest resolution for ring display") box.Add(txtd, 0, wx.ALL|wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpin, self.d_min_ctrl) sizer.Add(box) # Centering controls. 
self._center = [0, 0] box = wx.BoxSizer(wx.HORIZONTAL) self.spinner_fast = FloatSpin( self, digits=self.digits, name="fast_ctrl", value=self._center[0]) box.Add(self.spinner_fast, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="Center fast"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCenter, self.spinner_fast) self.spinner_slow = FloatSpin( self, digits=self.digits, name="slow_ctrl", value=self._center[1]) box.Add(self.spinner_slow, 0, wx.RIGHT | wx.TOP | wx.BOTTOM | wx.ALIGN_CENTER_VERTICAL, 5) box.Add(wx.StaticText(self, label="Center slow"), 0, wx.ALL | wx.ALIGN_CENTER_VERTICAL, 5) self.Bind(EVT_FLOATSPIN, self.OnSpinCenter, self.spinner_slow) sizer.Add(box) self.DrawRings() def __del__(self): if (hasattr(self, "_ring_layer") and self._ring_layer is not None): self._pyslip.DeleteLayer(self._ring_layer) def OnSpinCenter(self, event): obj = event.EventObject name = obj.GetName() if (name == "fast_ctrl"): self._center[0] = obj.GetValue() elif (name == "slow_ctrl"): self._center[1] = obj.GetValue() self.DrawRings() def OnSpinCell(self, event): obj = event.EventObject name = obj.GetName() self._cell[self._cell_control_names.index(name)] = obj.GetValue() self.DrawRings() def OnSpin(self, event): self.DrawRings() def OnSpaceGroup(self, event): obj = event.EventObject self._spacegroup = obj.GetValue() self.DrawRings() def _draw_rings_layer(self, dc, data, map_rel): """Draw a points layer. dc the device context to draw on data an iterable of point tuples: (x, y, place, radius, colour, x_off, y_off, pdata) map_rel points relative to map if True, MUST BE TRUE for lightweight Assumes all points are the same colour, saving 100's of ms. """ assert map_rel is True if len(data)==0: return (lon, lat, place, radius, colour, x_off, y_off, pdata) = data[0] scale = 2**self._pyslip.tiles.zoom_level # Draw points on map/view, using transparency if implemented. 
try: dc = wx.GCDC(dc) except NotImplementedError: pass dc.SetPen(wx.Pen(colour)) dc.SetBrush(wx.Brush(colour, wx.TRANSPARENT)) for (lon, lat, place, radius, colour, x_off, y_off, pdata) in data: (x, y) = self._pyslip.ConvertGeo2View((lon, lat)) dc.DrawCircle(x, y, radius * scale) def DrawRings(self): from cctbx.crystal import symmetry import cctbx.miller frame = self.GetParent().GetParent() try: uc = symmetry(unit_cell=self._cell, space_group_symbol=str(self._spacegroup)) hkl_list = cctbx.miller.build_set(uc, False, d_min=self.d_min_ctrl.GetValue()) except Exception as e: frame.update_statusbar(str(e)) return frame.update_statusbar("%d %d %d %d %d %d, "%tuple(self._cell) + "number of indices: %d"%len(hkl_list.indices())) spacings = list(hkl_list.d_spacings()) print("Printing spacings, len: %s"%len(spacings)) def cmp(a,b): if a[1] > b[1]: return 1 elif a[1] < b[1]: return -1 return 0 spacings = sorted(spacings, cmp=cmp, reverse=True) for d in spacings: print(d) detector = self._pyslip.tiles.raw_image.get_detector() beam = self._pyslip.tiles.raw_image.get_beam() wavelength = float(self.wavelength_ctrl.GetValue()) distance = float(self.distance_ctrl.GetValue()) pixel_size = detector[0].get_pixel_size()[0] # FIXME assumes square pixels, and that all panels use same pixel size twotheta = hkl_list.two_theta(wavelength = wavelength) L_mm = [] L_pixels = [] for tt in twotheta: L_mm.append(distance * math.tan(tt[1])) for lmm in L_mm: L_pixels.append(lmm/pixel_size) xrayframe = self.GetParent().GetParent() panel_id, beam_pixel_fast, beam_pixel_slow = xrayframe.get_beam_center_px() if len(detector) > 1: beam_pixel_slow, beam_pixel_fast = xrayframe.pyslip.tiles.flex_image.tile_readout_to_picture( panel_id, beam_pixel_slow - 0.5, beam_pixel_fast - 0.5) center = self._pyslip.tiles.picture_fast_slow_to_map_relative( beam_pixel_fast + self._center[0], beam_pixel_slow + self._center[1]) # XXX Transparency? ring_data = [(center[0], center[1], {"colour": "red", "radius": pxl}) for pxl in L_pixels] # Remove the old ring layer, and draw a new one. if (hasattr(self, "_ring_layer") and self._ring_layer is not None): self._pyslip.DeleteLayer(self._ring_layer) self._ring_layer = None self._ring_layer = self._pyslip.AddPointLayer( ring_data, map_rel=True, visible=True, show_levels=[-3, -2, -1, 0, 1, 2, 3, 4, 5], renderer=self._draw_rings_layer, name="<ring_layer>")
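# Minimal illustrative sketch of the geometry used in DrawRings above: Bragg's law gives
# the scattering angle for a d-spacing, and the ring radius on the detector is
# distance * tan(2*theta) converted to pixels. The numbers are arbitrary example values,
# not instrument defaults.
if __name__ == "__main__":
    wavelength_A = 1.0      # Angstrom
    distance_mm = 200.0     # sample-to-detector distance
    pixel_size_mm = 0.172
    for d_spacing_A in (10.0, 5.0, 3.5):
        two_theta = 2.0 * math.asin(wavelength_A / (2.0 * d_spacing_A))
        radius_px = distance_mm * math.tan(two_theta) / pixel_size_mm
        print("d = %.2f A -> ring radius = %.1f px" % (d_spacing_A, radius_px))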
PypiClean
/pyelasticache_client-0.1.3.tar.gz/pyelasticache_client-0.1.3/pymemcache/pool.py
import collections
import contextlib
import sys
import threading

import six


class ObjectPool(object):
    """A pool of objects that release/creates/destroys as needed."""

    def __init__(self, obj_creator,
                 after_remove=None, max_size=None,
                 lock_generator=None):
        self._used_objs = collections.deque()
        self._free_objs = collections.deque()
        self._obj_creator = obj_creator
        if lock_generator is None:
            self._lock = threading.Lock()
        else:
            self._lock = lock_generator()
        self._after_remove = after_remove
        max_size = max_size or 2 ** 31
        if not isinstance(max_size, six.integer_types) or max_size < 0:
            raise ValueError('"max_size" must be a positive integer')
        self.max_size = max_size

    @property
    def used(self):
        return tuple(self._used_objs)

    @property
    def free(self):
        return tuple(self._free_objs)

    @contextlib.contextmanager
    def get_and_release(self, destroy_on_fail=False):
        obj = self.get()
        try:
            yield obj
        except Exception:
            exc_info = sys.exc_info()
            if not destroy_on_fail:
                self.release(obj)
            else:
                self.destroy(obj)
            six.reraise(exc_info[0], exc_info[1], exc_info[2])
        self.release(obj)

    def get(self):
        with self._lock:
            if not self._free_objs:
                curr_count = len(self._used_objs)
                if curr_count >= self.max_size:
                    raise RuntimeError("Too many objects,"
                                       " %s >= %s" % (curr_count, self.max_size))
                obj = self._obj_creator()
                self._used_objs.append(obj)
                return obj
            else:
                obj = self._free_objs.pop()
                self._used_objs.append(obj)
                return obj

    def destroy(self, obj, silent=True):
        was_dropped = False
        with self._lock:
            try:
                self._used_objs.remove(obj)
                was_dropped = True
            except ValueError:
                if not silent:
                    raise
        if was_dropped and self._after_remove is not None:
            self._after_remove(obj)

    def release(self, obj, silent=True):
        with self._lock:
            try:
                self._used_objs.remove(obj)
                self._free_objs.append(obj)
            except ValueError:
                if not silent:
                    raise

    def clear(self):
        if self._after_remove is not None:
            needs_destroy = []
            with self._lock:
                needs_destroy.extend(self._used_objs)
                needs_destroy.extend(self._free_objs)
                self._free_objs.clear()
                self._used_objs.clear()
            for obj in needs_destroy:
                self._after_remove(obj)
        else:
            with self._lock:
                self._free_objs.clear()
                self._used_objs.clear()
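# Minimal illustrative usage sketch of ObjectPool; the dict "connections" below are
# hypothetical stand-ins for real client objects.
if __name__ == "__main__":
    created = []

    def make_conn():
        conn = {"id": len(created)}
        created.append(conn)
        return conn

    pool = ObjectPool(make_conn, after_remove=lambda c: print("destroyed", c), max_size=2)

    # On success the object goes back to the free list; with destroy_on_fail=True a
    # raising block destroys it instead.
    with pool.get_and_release() as conn:
        print("using", conn)

    assert len(pool.free) == 1 and len(pool.used) == 0
    pool.clear()  # destroys the remaining free object via after_remove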
PypiClean
/dask-image-2023.8.1.tar.gz/dask-image-2023.8.1/dask_image/ndmorph/_utils.py
import numbers import dask.array as da import numpy as np from ..dispatch._dispatch_ndmorph import dispatch_binary_structure from ..ndfilters._utils import (_get_depth, _get_depth_boundary, _get_origin, _update_wrapper) _update_wrapper = _update_wrapper _get_depth_boundary = _get_depth_boundary _get_origin = _get_origin _get_depth = _get_depth def _get_structure(image, structure): # Create square connectivity as default if structure is None: generate_binary_structure = dispatch_binary_structure(image) structure = generate_binary_structure(image.ndim, 1) elif hasattr(structure, 'ndim'): if structure.ndim != image.ndim: raise RuntimeError( "`structure` must have the same rank as `image`." ) if not issubclass(structure.dtype.type, np.bool8): structure = (structure != 0) else: raise TypeError("`structure` must be an array.") return structure def _get_iterations(iterations): if not isinstance(iterations, numbers.Integral): raise TypeError("`iterations` must be of integral type.") if iterations < 1: raise NotImplementedError( "`iterations` must be equal to 1 or greater not less." ) return iterations def _get_dtype(a): # Get the dtype of a value or an array. # Even handle non-NumPy types. return getattr(a, "dtype", np.dtype(type(a))) def _get_mask(image, mask): if mask is None: mask = True mask_type = _get_dtype(mask).type if isinstance(mask, (np.ndarray, da.Array)): if mask.shape != image.shape: raise RuntimeError("`mask` must have the same shape as `image`.") if not issubclass(mask_type, np.bool8): mask = (mask != 0) elif issubclass(mask_type, np.bool8): mask = bool(mask) else: raise TypeError("`mask` must be a Boolean or an array.") return mask def _get_border_value(border_value): if not isinstance(border_value, numbers.Integral): raise TypeError("`border_value` must be of integral type.") border_value = (border_value != 0) return border_value def _get_brute_force(brute_force): if brute_force is not False: if brute_force is True: raise NotImplementedError( "`brute_force` other than `False` is not yet supported." ) else: raise TypeError( "`brute_force` must be `bool`." ) return brute_force
PypiClean
/quinteng-chaoyue-1.0.0.tar.gz/quinteng-chaoyue-1.0.0/quinteng/synthesis/evolution/qdrift.py
from typing import Union, Optional, Callable
import numpy as np

from quinteng.circuit.quantumcircuit import QuantumCircuit
from quinteng.quantum_info.operators import SparsePauliOp, Pauli
from quinteng.utils import algorithm_globals

from .product_formula import ProductFormula
from .lie_trotter import LieTrotter


class QDrift(ProductFormula):
    r"""The QDrift Trotterization method, which selects each term in the
    Trotterization randomly, with a probability proportional to its weight. Based on the work
    of Earl Campbell in Ref. [1].

    References:
        [1]: E. Campbell, "A random compiler for fast Hamiltonian simulation" (2018).
             `arXiv:quant-ph/1811.08017 <https://arxiv.org/abs/1811.08017>`_
    """

    def __init__(
        self,
        reps: int = 1,
        insert_barriers: bool = False,
        cx_structure: str = "chain",
        atomic_evolution: Optional[
            Callable[[Union[Pauli, SparsePauliOp], float], QuantumCircuit]
        ] = None,
    ) -> None:
        r"""
        Args:
            reps: The number of times to repeat the Trotterization circuit.
            insert_barriers: Whether to insert barriers between the atomic evolutions.
            cx_structure: How to arrange the CX gates for the Pauli evolutions, can be
                "chain", where next neighbor connections are used, or "fountain", where all
                qubits are connected to one.
            atomic_evolution: A function to construct the circuit for the evolution of a single
                Pauli string. By default, a single Pauli evolution is decomposed into a CX chain
                and a single qubit Z rotation.
        """
        super().__init__(1, reps, insert_barriers, cx_structure, atomic_evolution)
        self.sampled_ops = None

    def synthesize(self, evolution):
        # get operators and time to evolve
        operators = evolution.operator
        time = evolution.time

        if not isinstance(operators, list):
            pauli_list = [(Pauli(op), coeff) for op, coeff in operators.to_list()]
            coeffs = [np.real(coeff) for op, coeff in operators.to_list()]
        else:
            pauli_list = [(op, 1) for op in operators]
            coeffs = [1 for op in operators]

        # We artificially make the weights positive
        weights = np.abs(coeffs)
        lambd = np.sum(weights)

        num_gates = int(np.ceil(2 * (lambd ** 2) * (time ** 2) * self.reps))
        # The protocol calls for the removal of the individual coefficients,
        # and multiplication by a constant evolution time.
        evolution_time = lambd * time / num_gates

        self.sampled_ops = algorithm_globals.random.choice(
            np.array(pauli_list, dtype=object),
            size=(num_gates,),
            p=weights / lambd,
        )
        # Update the coefficients of sampled_ops
        self.sampled_ops = [(op, evolution_time) for op, coeff in self.sampled_ops]

        # pylint: disable=cyclic-import
        from quinteng.circuit.library.pauli_evolution import PauliEvolutionGate
        from quinteng.opflow import PauliOp

        # Build the evolution circuit using the LieTrotter synthesis with the sampled operators
        lie_trotter = LieTrotter(
            insert_barriers=self.insert_barriers, atomic_evolution=self.atomic_evolution
        )
        evolution_circuit = PauliEvolutionGate(
            sum(PauliOp(op) for op, coeff in self.sampled_ops),
            time=evolution_time,
            synthesis=lie_trotter,
        ).definition

        return evolution_circuit
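# Minimal illustrative usage sketch, assuming quinteng mirrors the Qiskit-style
# SparsePauliOp.from_list / PauliEvolutionGate interface it appears to be derived from;
# the operator and evolution time are arbitrary example values.
if __name__ == "__main__":
    from quinteng.circuit.library.pauli_evolution import PauliEvolutionGate

    hamiltonian = SparsePauliOp.from_list([("XX", 1.0), ("ZI", 0.5), ("IZ", 0.5)])
    gate = PauliEvolutionGate(hamiltonian, time=1.0, synthesis=QDrift(reps=2))
    print(gate.definition)  # randomly sampled product-formula circuit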
PypiClean
/file_type_identifier-0.2.3.tar.gz/file_type_identifier-0.2.3/fti/getters.py
import httpx

from . import FileTypes
from .file_type_getters import (
    get_file_types_by_content,
    get_file_types_by_content_disposition,
    get_file_types_by_mime,
    get_file_types_by_url,
)


def get_file_types(url: str, timeout: float = 5) -> set[FileTypes]:
    if result := get_file_types_by_url(url):
        return result

    try:
        with httpx.stream("GET", url, follow_redirects=True, timeout=httpx.Timeout(timeout)) as response:
            content_disposition = response.headers.get("content-disposition")
            if content_disposition and (result := get_file_types_by_content_disposition(content_disposition)):
                return result

            content_type = response.headers.get("content-type")
            if content_type and (result := get_file_types_by_mime(content_type)):
                return result

            content = next(response.iter_bytes(64))
            if result := get_file_types_by_content(content):
                return result
    except (httpx.UnsupportedProtocol, httpx.ConnectError):
        raise ValueError("Wrong url!")
    except httpx.TimeoutException:
        raise ValueError("Timeout error!")
    except httpx.RequestError:
        raise ValueError("Unexpected error!")

    return set()


async def get_file_types_async(url: str, timeout: float = 5) -> set[FileTypes]:
    if result := get_file_types_by_url(url):
        return result

    client = httpx.AsyncClient(timeout=httpx.Timeout(timeout))
    try:
        async with client.stream("GET", url, follow_redirects=True) as response:
            content_disposition = response.headers.get("content-disposition")
            if content_disposition and (result := get_file_types_by_content_disposition(content_disposition)):
                return result

            content_type = response.headers.get("content-type")
            if content_type and (result := get_file_types_by_mime(content_type)):
                return result

            content = await response.aiter_bytes(64).__anext__()
            if result := get_file_types_by_content(content):
                return result
    except (httpx.UnsupportedProtocol, httpx.ConnectError):
        raise ValueError("Wrong url!")
    except httpx.TimeoutException:
        raise ValueError("Timeout error!")
    except httpx.RequestError:
        raise ValueError("Unexpected error!")
    finally:
        await client.aclose()

    return set()
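# Minimal illustrative usage sketch; the URL below is a hypothetical placeholder, and
# network failures surface as ValueError per the handlers above.
if __name__ == "__main__":
    example_url = "https://example.com/report.pdf"  # hypothetical URL
    try:
        print(get_file_types(example_url, timeout=5))
    except ValueError as exc:
        print("Could not identify file type:", exc)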
PypiClean
/PolTools-1.0.7.tar.gz/PolTools-1.0.7/main_programs/TES_fold_change_heatmap.py
import glob import os import sys import argparse import multiprocessing from PolTools.main_programs import TES_heatmap from PolTools.utils.constants import generate_heatmap_location from PolTools.utils.heatmap_utils.generate_heatmap import generate_heatmap, Ticks, make_ticks_matrix from PolTools.utils.heatmap_utils.make_log_two_fold_change_matrix import make_log_two_fold_change_matrix from PolTools.utils.make_random_filename import generate_random_filename from PolTools.utils.nested_multiprocessing_pool import NestedPool from PolTools.utils.remove_files import remove_files from PolTools.utils.heatmap_utils.set_matrix_bounds import set_matrix_bounds from PolTools.main_programs.gene_body_fold_change_heatmap import combine_images def normalize_matrix(filename): # Make the sum of all the values 1 and save to the same file matrix = [] with open(filename) as file: for line in file: matrix.append([float(val) for val in line.split()]) matrix_sum = sum(sum(matrix, [])) with open(filename, 'w') as file: for row in matrix: file.write( "\t".join([str(val / matrix_sum) for val in row]) + "\n" ) def get_fold_change_matrix(numerator_seq_files_data, denominator_seq_files_data, matrix_params, filenames, max_threads, norm): # We use max_threads / 2 because we will be running two instances of the combined threads_per_heatmap = int(max_threads / 2) # Make sure that if the user only wants to run on one thread that it does not default to 0 if threads_per_heatmap == 0: threads_per_heatmap = 1 numerator_args = (numerator_seq_files_data, matrix_params, filenames, threads_per_heatmap) denominator_args = (denominator_seq_files_data, matrix_params, filenames, threads_per_heatmap) with NestedPool(max_threads) as pool: numerator_matrix_filename, denominator_matrix_filename = pool.starmap(TES_heatmap.get_matrix, [numerator_args, denominator_args]) # Normalize if necessary if norm: normalize_matrix(numerator_matrix_filename) normalize_matrix(denominator_matrix_filename) # Make the fold change matrix log_two_fold_change_matrix_filename = make_log_two_fold_change_matrix(numerator_matrix_filename, denominator_matrix_filename) remove_files(numerator_matrix_filename, denominator_matrix_filename) return log_two_fold_change_matrix_filename def set_max_fold_change(fold_change_matrix_filename, max_fold_change): return set_matrix_bounds(fold_change_matrix_filename, -1 * max_fold_change, max_fold_change) def make_ticks_image(width, interval_size, tick_params): minor_ticks_bp, major_ticks_bp = tick_params # Make the tick marks t = Ticks(minor_tick_mark_interval_size=(minor_ticks_bp / interval_size), major_tick_mark_interval_size=(major_ticks_bp / interval_size)) ticks_matrix = make_ticks_matrix(width, 50, 1, t) # Write to a file ticks_matrix_filename = generate_random_filename() with open(ticks_matrix_filename, 'w') as file: for row in ticks_matrix: file.write("\t".join([str(val) for val in row]) + "\n") ticks_image_filename = generate_random_filename().replace(".bed", ".tiff") os.system("/usr/bin/Rscript " + generate_heatmap_location + " " + " ".join([ticks_matrix_filename, "gray", ticks_image_filename, "2.2"])) remove_files(ticks_matrix_filename) return ticks_image_filename def get_args(args): def positive_int(num): try: val = int(num) if val <= 0: raise Exception("Go to the except") except: raise argparse.ArgumentTypeError(num + " must be positive") return val def positive_float(num): try: val = float(num) if val <= 0: raise Exception("Go to the except") except: raise argparse.ArgumentTypeError(num + " must be positive") return 
val parser = argparse.ArgumentParser(prog='PolTools TES_fold_change_heatmap', description="Generate a heatmap of 3' ends for each gene sorted by gene length " + "aligned by the transcription end site\n" + "More information can be found at " + "https://geoffscollins.github.io/PolTools/TES_fold_change_heatmap.html") parser.add_argument('truQuant_output_file', metavar='truQuant_output_file', type=str, help='truQuant output file which ends in -truQuant_output.txt') parser.add_argument('--numerator', nargs=2, action='append', metavar=('seq_file', 'spike_in'), required=True, help='Provide the sequencing file with its correction factor. You can supply ' 'more than one sequencing file by adding multiple --numerator arguments.') parser.add_argument('--denominator', nargs=2, action='append', metavar=('seq_file', 'spike_in'), required=True, help='Provide the sequencing file with its correction factor. You can supply ' 'more than one sequencing file by adding multiple -denominator arguments.') parser.add_argument('output_prefix', metavar='output_prefix', type=str, help='Prefix for the output filename') parser.add_argument('-w', '--width', metavar='width', dest='width', type=positive_int, default=2_000, help='Width of the heatmap in pixels') parser.add_argument('-e', '--height', metavar='height', dest='height', type=positive_int, default=2_000, help='Height of the heatmap in pixels') parser.add_argument('-d', '--downstream_distance', metavar='downstream_distance', dest='downstream_distance', type=positive_int, default=50_000, help='Distance downstream from the transcription end site') parser.add_argument('-u', '--upstream_distance', metavar='upstream_distance', dest='upstream_distance', type=positive_int, default=50_000, help='Distance upstream of the start of the gene body') parser.add_argument('-b', '--bp_width', metavar='bp_width', dest='bp_width', default=400_000, type=positive_int, help='Total number of base pairs shown on the heatmap. This number must be greater than the ' + 'upstream distance + distance past TES.') parser.add_argument('-m', '--max_log2_fc', metavar='max_log2_fc', dest='max_log2_fc', type=positive_float, default=None, help='Max log2 fold change of the heatmap') parser.add_argument('--minor_ticks', metavar='minor_ticks', dest='minor_ticks', type=positive_int, default=10_000, help='Distance between minor ticks (bp)') parser.add_argument('--major_ticks', metavar='major_ticks', dest='major_ticks', type=positive_int, default=50_000, help='Distance between major ticks (bp)') parser.add_argument('-t', '--threads', dest='threads', metavar='threads', type=positive_int, nargs='?', default=multiprocessing.cpu_count()) parser.add_argument('--norm', dest='norm', metavar='norm', action='store_true') parser.set_defaults(norm=False) args = parser.parse_args(args) truQuant_output_file = args.truQuant_output_file output_filename_prefix = args.output_prefix width = args.width height = args.height downstream_distance = args.downstream_distance upstream_distance = args.upstream_distance bp_width = args.bp_width max_log2_fc = args.max_log2_fc minor_ticks = args.minor_ticks major_ticks = args.major_ticks max_threads = args.threads # Find all regions to blacklist tsr_file = glob.glob(truQuant_output_file.replace("-truQuant_output.txt", "") + "*TSR.tab") if not tsr_file: sys.stderr.write("No tsrFinder file was found. Exiting ...\n") sys.exit(1) if len(tsr_file) != 1: sys.stderr.write("More than one tsrFinder file was found for this run of truQuant. 
Exiting ...\n") sys.exit(1) tsr_file = tsr_file[0] numerator_seq_files_data = [] denominator_seq_files_data = [] for dataset in args.numerator: seq_file, corr_factor = dataset corr_factor = positive_float(corr_factor) numerator_seq_files_data.append((seq_file, corr_factor)) if not os.path.isfile(seq_file): sys.stderr.write("File " + seq_file + " was not found.\n") sys.exit(1) for dataset in args.denominator: seq_file, corr_factor = dataset corr_factor = positive_float(corr_factor) denominator_seq_files_data.append((seq_file, corr_factor)) if not os.path.isfile(seq_file): sys.stderr.write("File " + seq_file + " was not found.\n") sys.exit(1) # If the interval size is not an integer, then we can't use it if bp_width % width: sys.stderr.write( "The heatmap width in px must be a factor of the base pair width (bp width / px width must be an integer)") sys.exit(1) interval_size = int(bp_width / width) matrix_params = (upstream_distance, downstream_distance, bp_width, width, height, interval_size) heatmap_params = (bp_width, width, height, max_log2_fc, interval_size, minor_ticks, major_ticks) filenames = (truQuant_output_file, tsr_file, output_filename_prefix) return numerator_seq_files_data, denominator_seq_files_data, matrix_params, heatmap_params, filenames, max_threads, args.norm def main(args): numerator_seq_files_data, denominator_seq_files_data, matrix_params, heatmap_params, filenames, max_threads, norm = get_args(args) # Get the fold change matrix fold_change_matrix = get_fold_change_matrix(numerator_seq_files_data, denominator_seq_files_data, matrix_params, filenames, max_threads, norm) # Now plot! bp_width, width, height, max_log2_fc, interval_size, minor_ticks, major_ticks = heatmap_params output_prefix = filenames[-1] output_filename = output_prefix + "_max_" + str(max_log2_fc) + "_width_" + str(bp_width) + \ "bp_fold_change_TES_heatmap" only_heatmap_filename = generate_random_filename(".tiff") negative_log2_value = -1 * max_log2_fc if max_log2_fc else None generate_heatmap(fold_change_matrix, 'red/blue', only_heatmap_filename, 2.2, negative_log2_value, max_log2_fc) tick_params = (minor_ticks, major_ticks) ticks_image_filename = make_ticks_image(width, interval_size, tick_params) combine_images(ticks_image_filename, only_heatmap_filename, output_filename) remove_files(fold_change_matrix, ticks_image_filename, only_heatmap_filename) if __name__ == '__main__': main(sys.argv[1:])
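# Example invocation (a sketch appended for illustration; the file names are
# hypothetical and the module path assumes this script is installed as
# PolTools.main_programs.TES_fold_change_heatmap). A matching "*TSR.tab" file
# produced by tsrFinder must sit next to the truQuant output, and bp_width must
# be divisible by the pixel width (400000 / 2000 = 200 bp per pixel here):
#
#   python -m PolTools.main_programs.TES_fold_change_heatmap \
#       sample-truQuant_output.txt sample_output \
#       --numerator treated.bed 1.0 \
#       --denominator control.bed 1.0 \
#       --width 2000 --bp_width 400000 --threads 4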
PypiClean
/dsin100daysv30-6.0.1.tar.gz/dsin100daysv30-6.0.1/notebook/static/components/MathJax/localization/pt/MathMenu.js
MathJax.Localization.addTranslation("pt","MathMenu",{version:"2.7.5",isLoaded:true,strings:{Show:"Mostrar f\u00F3rmulas como",MathMLcode:"C\u00F3digo MathML",OriginalMathML:"MathML original",TeXCommands:"Comandos TeX",AsciiMathInput:"Entrada AsciiMathML",Original:"Formato original",ErrorMessage:"Mensagem de erro",Annotation:"Anota\u00E7\u00E3o",TeX:"TeX",StarMath:"StarMath",Maple:"Maple",ContentMathML:"MathML do conte\u00FAdo",OpenMath:"OpenMath",texHints:"Mostrar dicas de TeX em MathML",Settings:"Configura\u00E7\u00F5es das f\u00F3rmulas",ZoomTrigger:"Ativador do zoom",Hover:"Passar o rato",Click:"Clique",DoubleClick:"Duplo clique",NoZoom:"Sem zoom",TriggerRequires:"O ativador requer:",Option:"Op\u00E7\u00E3o",Alt:"Alt",Command:"Comando",Control:"Control",Shift:"Shift",ZoomFactor:"Fator de zoom",Renderer:"Renderizador matem\u00E1tico",MPHandles:"Deixe que o MathPlayer resolva:",MenuEvents:"Eventos de menu",MouseEvents:"Eventos do rato",MenuAndMouse:"Eventos do rato e de menu",FontPrefs:"Prefer\u00EAncias de fontes",ForHTMLCSS:"Para HTML-CSS:",Auto:"Autom\u00E1tico",TeXLocal:"TeX (local)",TeXWeb:"TeX (web)",TeXImage:"TeX (imagem)",STIXLocal:"STIX (local)",STIXWeb:"STIX (web)",AsanaMathWeb:"Asana Math (web)",GyrePagellaWeb:"Gyre Pagella (web)",GyreTermesWeb:"Gyre Termes (web)",LatinModernWeb:"Latin Modern (web)",NeoEulerWeb:"Neo Euler (web)",ContextMenu:"Menu de contexto",Browser:"Navegador",Scale:"Redimensionar todas as f\u00F3rmulas ...",Discoverable:"Destacar ao passar com o rato",Locale:"L\u00EDngua",LoadLocale:"Carregar a partir de URL ...",About:"Sobre o MathJax",Help:"Ajuda do MathJax",localTeXfonts:"a usar fontes TeX locais",webTeXfonts:"a usar fontes TeX da web",imagefonts:"a usar fontes feitas com imagens",localSTIXfonts:"a usar fontes STIX",webSVGfonts:"a usar fontes SVG da web",genericfonts:"a usar fontes unicode gen\u00E9ricas",wofforotffonts:"fontes WOFF ou OTF",eotffonts:"fontes EOT",svgfonts:"fontes SVG",WebkitNativeMMLWarning:"N\u00E3o parece haver suporte nativo ao MathML no seu navegador, ent\u00E3o a mudan\u00E7a para MathML pode tornar ileg\u00EDveis as f\u00F3rmulas matem\u00E1ticas da p\u00E1gina.",MSIENativeMMLWarning:"O Internet Explorer requer o plugin MathPlayer para processar MathML.",OperaNativeMMLWarning:"O suporte ao MathML no Opera \u00E9 limitado, ent\u00E3o a mudan\u00E7a para MathML pode piorar a renderiza\u00E7\u00E3o de algumas express\u00F5es.",SafariNativeMMLWarning:"O suporte ao MathML nativo do seu navegador n\u00E3o implementa todos os recursos usados pelo MathJax, ent\u00E3o algumas express\u00F5es podem n\u00E3o ser exibidas adequadamente.",FirefoxNativeMMLWarning:"O suporte ao MathML nativo do seu navegador n\u00E3o implementa todos os recursos usados pelo MathJax, ent\u00E3o algumas express\u00F5es podem n\u00E3o ser exibidas adequadamente.",MSIESVGWarning:"N\u00E3o h\u00E1 uma implementa\u00E7\u00E3o de SVG nas vers\u00F5es do Internet Explorer anteriores ao IE9 ou quando ele est\u00E1 emulando o IE8 ou as vers\u00F5es anteriores. A mudan\u00E7a para SVG far\u00E1 com que as f\u00F3rmulas n\u00E3o sejam exibidas adequadamente.",LoadURL:"Carregar os dados de tradu\u00E7\u00E3o a partir desta URL:",BadURL:"A URL deve ser para um um ficheiro de JavaScript que defina os dados de tradu\u00E7\u00E3o do MathJax. 
Os nomes dos ficheiros de Javascript devem terminar com '.js'",BadData:"Falha ao carregar os dados de tradu\u00E7\u00E3o de %1",SwitchAnyway:"Mudar para este renderizador mesmo assim?\n\n(Pressione OK para mudar, CANCELAR para continuar com o renderizador atual)",ScaleMath:"Redimensionar todas as f\u00F3rmulas matem\u00E1ticas (em rela\u00E7\u00E3o ao texto \u00E0 sua volta) em",NonZeroScale:"A escala n\u00E3o deve ser zero",PercentScale:"A escala deve ser uma percentagem (por exemplo, 120%%)",IE8warning:"Isto desabilitar\u00E1 o menu MathJax e os recursos de zoom, mas voc\u00EA poder\u00E1 usar Alt-Clique em uma express\u00E3o para obter o menu MathJax em vez disso.\n\nDeseja realmente alterar as configura\u00E7\u00F5es do MathPlayer?",IE9warning:"O menu de contexto do MathJax ser\u00E1 desabilitado, mas pode usar Alt-Clique numa express\u00E3o para obter o menu MathJax em vez disso.",NoOriginalForm:"Sem uma forma original dispon\u00EDvel",Close:"Fechar",EqSource:"C\u00F3digo de equa\u00E7\u00E3o MathJax",CloseAboutDialog:"Fechar caixa sobre MathJax",FastPreview:"Pr\u00E9-visualiza\u00E7\u00E3o r\u00E1pida",AssistiveMML:"MAthML assistiva",InTabOrder:"Incluir na ordem da guia"}});MathJax.Ajax.loadComplete("[MathJax]/localization/pt/MathMenu.js");
PypiClean
/OAuthClientUser-0.1.2.tar.gz/OAuthClientUser-0.1.2/OAuthUser/authentication.py
import json from datetime import datetime, timedelta from django.core.exceptions import ObjectDoesNotExist from django.contrib.auth import get_user_model from django.conf import settings from rest_framework.exceptions import AuthenticationFailed from rest_framework.authentication import BaseAuthentication, get_authorization_header from .http_utils import get_account_info from .models import TUserAccessToken, TUserExtra class OAuthAccessTokenAuthentication(BaseAuthentication): def authenticate(self, request): auth_header = get_authorization_header(request) if auth_header in ['', b'', None]: print('No HTTP AUTHORIZATION HEADER found.') return None auth = [str(a, encoding='utf-8') if isinstance(a, bytes) else a for a in auth_header.split(b' ')] if auth is None or not isinstance(auth, list) or len(auth) < 2 or 'bearer' != auth[0].lower(): print('Not Bearer Token Authorization') return None access_token = auth[1] dt_now = datetime.now() saved = TUserAccessToken.objects.filter(access_token=access_token) valid = saved.filter(recheck_after__lte=dt_now) if valid.exists(): # 验证成功 user = valid.first().user return user, access_token else: if not saved.exists(): saved.delete() status, response = get_account_info(settings.OAUTH_ACCOUNT_URL, 'Bearer', access_token) if status != 200: print(status) print(response) raise AuthenticationFailed account_info = json.loads(response) username = account_info.get('username') user_model = get_user_model() try: user = user_model.objects.select_related('extra').get(username=username) except ObjectDoesNotExist: user = user_model.objects.create(username=username) remote_privileges_list = account_info.get('privileges', []) if not hasattr(user, 'extra'): TUserExtra.objects.create( user=user, full_name=account_info.get('full_name'), phone_number=account_info.get('mobile'), access_token=access_token, token_type='Bearer', expires_in=600, remote_privileges='|'.join(remote_privileges_list)) else: user.extra.full_name = account_info.get('full_name') user.extra.access_token = access_token user.extra.token_type = 'Bearer' user.extra.expires_in = 600 user.extra.remote_privileges = '|'.join(remote_privileges_list) user.extra.save() TUserAccessToken.objects.create( access_token=access_token, user=user, recheck_after=dt_now + timedelta(minutes=10)) return user, access_token
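# Configuration sketch (not part of this module; the values are hypothetical).
# To activate this class it would typically be registered with Django REST
# Framework, and the OAUTH_ACCOUNT_URL consumed above must point at the OAuth
# provider's account-info endpoint, e.g. in settings.py:
#
#   REST_FRAMEWORK = {
#       'DEFAULT_AUTHENTICATION_CLASSES': [
#           'OAuthUser.authentication.OAuthAccessTokenAuthentication',
#       ],
#   }
#   OAUTH_ACCOUNT_URL = 'https://oauth.example.com/api/account/'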
PypiClean
/picoapi-0.1.5.tar.gz/picoapi-0.1.5/README.md
Picoapi
=======

A wrapper around FastAPI to simplify microservice creation. Very opinionated but
also simple to fork if you would like to add your own version of service
registration and configuration to FastAPI.

Usage
=====

Create a .env file or export the following variables:

| ENV variable           | Required (default) | Default             | Description                                                                                 | Examples                                       | Implemented |
|------------------------|--------------------|---------------------|---------------------------------------------------------------------------------------------|------------------------------------------------|-------------|
| API_HOST               | Yes                |                     | The host of the API                                                                         | myhost\|123.123.123.123\|localhost\|127.0.0.1  |             |
| API_PORT               | No                 | 8888                | The port of the API                                                                         | 8080                                           |             |
| API_BIND               | Yes                |                     | The interface descriptor to bind to                                                         | 127.0.0.1\|0.0.0.0                             |             |
| API_TITLE              | Yes                |                     | The FastAPI title                                                                           | Example API Name                               |             |
| API_DESCRIPTION        | No                 | A brief Description | The FastAPI description                                                                     | An Example of a short description about an API |             |
| API_REGISTER_PATH      | Yes                |                     | The url of the Keeper registration endpoint (similar to consul)                             | http://keeper:8100/register                    |             |
| API_HEALTH_PATH        | No                 |                     | The path relative to this FastAPI to call for health checks                                 | /health                                        |             |
| API_HEALTH_INTERVAL    | No                 | 300                 | The frequency to perform health checks (in seconds)                                         | 300                                            |             |
| API_VERSION            | No                 | 0.0.1               | The version of this FastAPI                                                                 | 0.0.1-alpha                                    |             |
| API_TAGS               | No                 |                     | The tags for this microservice, used as part of discovery, delimited with ":" (like $PATH)  | servicetag1:servicetag2:servicetag3            |             |
| API_CORS_ALLOW_ORIGINS | No                 | *                   | The CORS allowed origins, delimited with "!"                                                |                                                | No          |
| API_CORS_ALLOW_METHODS | No                 | *                   | The CORS allowed methods, delimited with "!"                                                |                                                | No          |
| API_CORS_ALLOW_HEADERS | No                 | *                   | The CORS allowed headers, delimited with "!"                                                |                                                | No          |

Example .env file:

```bash
# API config
# ==========
API_BIND="0.0.0.0"
API_HOST="localhost"
API_PORT="8888"
API_TITLE="test"
API_DESCRIPTION="test description"
API_VERSION="0.0.1-alpha"

# microservice registration
# =========================
API_KEEPER_URL="http://localhost:8100/register"
API_HEALTH_PATH="/health"
API_HEALTH_INTERVAL="300"
```

Authors & Contributors
======================

- [Patrick Coffey](https://github.com/schlerp) - Author
- [Asanga Abeyaratne](https://github.com/asaabey) - Contributor
PypiClean
/reprep-z6-6.0.5.tar.gz/reprep-z6-6.0.5/src/reprep/plot_utils/spines.py
def turn_off_all_axes(pylab): turn_off_bottom_and_top(pylab) turn_off_left_and_right(pylab) def turn_off_bottom_and_top(pylab): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["bottom", "top"]: spine.set_color("none") # don't draw spine pylab.xticks([], []) def turn_off_right(pylab): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["right"]: spine.set_color("none") # don't draw spine ax.yaxis.set_ticks_position("left") def turn_off_top(pylab): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["top"]: spine.set_color("none") # don't draw spine ax.yaxis.set_ticks_position("bottom") def turn_off_left_and_right(pylab): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["left", "right"]: spine.set_color("none") # don't draw spine pylab.yticks([], []) def set_left_spines_outward(pylab, offset=10): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["left"]: spine.set_position(("outward", offset)) def set_thick_ticks(pylab, markersize=3, markeredgewidth=1): ax = pylab.gca() for l in ax.get_xticklines() + ax.get_yticklines(): l.set_markersize(markersize) l.set_markeredgewidth(markeredgewidth) def set_spines_outward(pylab, outward_offset=10): ax = pylab.gca() for loc, spine in ax.spines.items(): if loc in ["left", "bottom"]: spine.set_position(("outward", outward_offset)) elif loc in ["right", "top"]: spine.set_color("none") # don't draw spine else: raise ValueError("unknown spine location: %s" % loc) # turn off ticks where there is no spine ax.xaxis.set_ticks_position("bottom") ax.yaxis.set_ticks_position("left") def set_spines_look_A( pylab, outward_offset=10, linewidth=2, markersize=3, markeredgewidth=1 ): """ Taken from http://matplotlib.sourceforge.net/examples/pylab_examples /spine_placement_demo.html """ ax = pylab.gca() set_spines_outward(pylab, outward_offset) set_thick_ticks(pylab, markersize, markeredgewidth) try: # f = pylab.gcf() # ax.get_frame().set_linewidth(linewidth) [i.set_linewidth(linewidth) for i in ax.spines.items()] except BaseException as e: # print('set_linewidth() not working in matplotlib 1.3.1: %s' % e) pass # ax.get_frame().set_linewidth(linewidth) # for l in ax1.yaxis.get_minorticklines()+ax1.xaxis.get_minorticklines(): # # l.set_markersize(3) # # l.set_markeredgewidth(1.2)
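# Usage sketch (illustrative only, assumes matplotlib is available; not part of
# the library). The helpers above all take the pylab module itself as argument:
#
#   from matplotlib import pylab
#
#   pylab.figure()
#   pylab.plot([0, 1, 2, 3], [0, 1, 4, 9])
#   set_spines_look_A(pylab, outward_offset=10)  # detach left/bottom spines, thicken ticks
#   turn_off_right(pylab)                        # draw y ticks only on the left axis
#   pylab.savefig("spines_demo.png")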
PypiClean
/slither_analyzer-0.9.6-py3-none-any.whl/slither/tools/possible_paths/__main__.py
import sys import logging from argparse import ArgumentParser, Namespace from crytic_compile import cryticparser from slither import Slither from slither.core.declarations import FunctionContract from slither.utils.colors import red from slither.tools.possible_paths.possible_paths import ( find_target_paths, resolve_functions, ResolveFunctionException, ) logging.basicConfig() logging.getLogger("Slither").setLevel(logging.INFO) def parse_args() -> Namespace: """ Parse the underlying arguments for the program. :return: Returns the arguments for the program. """ parser: ArgumentParser = ArgumentParser( description="PossiblePaths", usage="possible_paths.py filename [contract.function targets]", ) parser.add_argument( "filename", help="The filename of the contract or truffle directory to analyze." ) parser.add_argument("targets", nargs="+") cryticparser.init(parser) return parser.parse_args() def main() -> None: # ------------------------------ # PossiblePaths.py # Usage: python3 possible_paths.py filename targets # Example: python3 possible_paths.py contract.sol contract1.function1 contract2.function2 contract3.function3 # ------------------------------ # Parse all arguments args = parse_args() # Perform slither analysis on the given filename slither = Slither(args.filename, **vars(args)) try: targets = resolve_functions(slither, args.targets) except ResolveFunctionException as resolvefunction: print(red(resolvefunction)) sys.exit(-1) # Print out all target functions. print("Target functions:") for target in targets: if isinstance(target, FunctionContract): print(f"- {target.contract_declarer.name}.{target.full_name}") else: pass # TODO implement me print("\n") # Obtain all paths which reach the target functions. reaching_paths = find_target_paths(slither, targets) reaching_functions = {y for x in reaching_paths for y in x if y not in targets} # Print out all function names which can reach the targets. print("The following functions reach the specified targets:") for function_desc in sorted([f"{f.canonical_name}" for f in reaching_functions]): print(f"- {function_desc}") print("\n") # Format all function paths. reaching_paths_str = [ " -> ".join([f"{f.canonical_name}" for f in reaching_path]) for reaching_path in reaching_paths ] # Print a sorted list of all function paths which can reach the targets. print("The following paths reach the specified targets:") for reaching_path in sorted(reaching_paths_str): print(f"{reaching_path}\n") if __name__ == "__main__": main()
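# Example invocation (a sketch; the contract file and target functions are
# hypothetical). Since this file is the package's __main__, it can be run with
# "python -m"; Slither may also expose it as a console script, so check the
# installed entry points for the exact command name:
#
#   python3 -m slither.tools.possible_paths contract.sol \
#       ContractA.deposit ContractB.withdraw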
PypiClean
/sage-conf-10.0b0.tar.gz/sage-conf-10.0b0/sage_root/build/pkgs/_prereq/SPKG.rst
_prereq: Represents system packages required for installing SageMath from source
================================================================================

Description
-----------

This dummy package represents the minimal requirements (system packages)
for installing SageMath from source.

In addition to standard :wikipedia:`POSIX <POSIX>` utilities and the
:wikipedia:`bash <Bash_(Unix_shell)>` shell, the following standard
command-line development tools must be installed on your computer:

- **make**: GNU make, version 3.80 or later. Version 3.82 or later is recommended.
- **m4**: GNU m4 1.4.2 or later (non-GNU or older versions might also work).
- **perl**: version 5.8.0 or later.
- **ar** and **ranlib**: can be obtained as part of GNU binutils.
- **tar**: GNU tar version 1.17 or later, or BSD tar (as provided on macOS).
- **python**: Python 3.4 or later, or Python 2.7.
  (This range of versions is a minimal requirement for internal purposes of the
  SageMath build system, which is referred to as ``sage-bootstrap-python``.)

Other versions of these may work, but they are untested.

On macOS, suitable versions of all of these tools are provided by the Xcode
Command Line Tools. To install them, open a terminal window and run
``xcode-select --install``; then click "Install" in the pop-up window. If the
Xcode Command Line Tools are already installed, you may want to check if they
need to be updated by typing ``softwareupdate -l``.

On Linux, ``ar`` and ``ranlib`` are in the
`binutils <https://www.gnu.org/software/binutils/>`_ package. The other
programs are usually located in packages with their respective names.

On Redhat-derived systems not all perl components are installed by default and
you might have to install the ``perl-ExtUtils-MakeMaker`` package.

To check if you have the above prerequisites installed, for example ``perl``,
type::

    $ command -v perl

or::

    $ which perl

on the command line. If it gives an error (or returns nothing), then either
``perl`` is not installed, or it is installed but not in your
:wikipedia:`PATH <PATH_%28variable%29>`.
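To check all of the listed tools in one go, a short shell loop along these lines
can be used (a sketch; adjust the command names, e.g. ``python`` vs. ``python3``,
to your system)::

    $ for tool in make m4 perl ar ranlib tar python; do command -v "$tool" >/dev/null || echo "$tool is missing"; done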
PypiClean
/p360_contact_manager-1.2.0-py3-none-any.whl/p360_contact_manager/usecases/synchronize.py
"""Synchronize data with p360 through SynchronizeEnterprise api endpoint.""" import json import logging from typing import Callable from attr import dataclass from returns.curry import partial from returns.pipeline import flow, is_successful from returns.pointfree import bind from returns.result import ResultE, safe from typing_extensions import final @final @dataclass(frozen=True, slots=True) class Synchronize(object): """Synchronize worklist data with p360.""" _worklist: str _error_margin: int _synchronize_enterprise: Callable _read: Callable _write: Callable _output: str = 'result_synchronize.json' _log = logging.getLogger('usecases.Synchronize') def __call__(self) -> ResultE[bool]: """Read worklist, synchronize to p360, write result file.""" return flow( self._read(self._worklist, 'r'), bind(safe(json.loads)), bind(self._handle_worklist), bind(safe(json.dumps)), bind(partial(self._write, file_path=self._output)), ) @safe def _handle_worklist(self, worklist: list) -> dict: """Handle the input worklist file. Loop enterprises call synchronize endpoint if okay, put to okay, if bad, put to bad with error_message continue """ sync_result: dict = { 'errors': 0, 'synchronized': [], 'failed': [], } for ent in worklist: payload = ent['payload'] ent_no = payload['parameter']['EnterpriseNumber'] self._log.info('Current enterprise: %s', ent_no) self._log.info('brreg url: %s', ent['brreg_url']) sync = self._synchronize_enterprise(payload) self._log.info(sync) if is_successful(sync): sync_result['synchronized'].append(ent_no) continue sync_result['errors'] += 1 sync_result['failed'].append( { 'enterprise_number': ent_no, 'payload': payload, 'error_message': str(sync.failure()), }, ) if sync_result['errors'] > self._error_margin: self._log.error('Exceeded error margin, stopping execution') return sync_result return sync_result
PypiClean
/midtools-1.0.3.tar.gz/midtools-1.0.3/bin/create-reads.py
from __future__ import print_function import sys from random import uniform, normalvariate from math import log10 from dark.reads import ( Read, addFASTACommandLineOptions, parseFASTACommandLineOptions) from midtools.mutate import mutateRead from midtools.utils import s def makeRead(genome, meanLength, sdLength, minReadLength, maxReadLength, id_, rate, circularGenome): """ Make a read, according to various parameters and constraints regarding its length. Note that when circularGenome is False, reads generated using this method will not in fact have a mean length of C{meanLength}. This is because they are sometimes truncated at the start and end of the genome. @param genome: The C{str} genome to base the read on. @param meanLength: The C{float} mean read length. @param sdLength: The C{float} standard deviation of the read lengths. @param minReadLength: The C{int} minimum read length. @param maxReadLength: The C{int} maximum read length. @param id_: The C{str} read id. @param rate: The per-base C{float} mutation rate. @param circularGenome: If C{True}, the genome will be treated as circular. Reads that would otherwise be truncated by running into the end of the genome will continue with bases from the start of the genome. """ genomeLen = len(genome) length = -1 while (0 >= length > genomeLen or length < minReadLength or length > maxReadLength): length = int(normalvariate(meanLength, sdLength) + 0.5) if circularGenome: offset = int(uniform(0.0, genomeLen)) sequence = genome[offset:offset + length] # If we didn't get enough from the end of the genome, take whatever # else we need from its start. if len(sequence) < length: sequence += genome[0:length - len(sequence)] assert len(sequence) == length else: # For symmetry, we calculate an offset that allows the read to # overlap (by at least minReadLength bases) with the start or end # of the genome. If that happens, we truncate the read. offset = int(uniform(-(length - 1) + minReadLength, genomeLen - minReadLength)) if offset < 0: sequence = genome[:offset + length] else: sequence = genome[offset:offset + length] assert maxReadLength >= len(sequence) >= minReadLength, ( 'maxReadLength=%d, len(sequence)=%d, minReadLength=%d ' 'readLength=%d offset=%d' % (maxReadLength, len(sequence), minReadLength, length, offset)) read = Read(id_, sequence) mutationOffsets = () if rate == 0.0 else mutateRead(read, rate) return read, offset, mutationOffsets if __name__ == '__main__': import argparse parser = argparse.ArgumentParser( formatter_class=argparse.ArgumentDefaultsHelpFormatter, description='Create DNA reads.') parser.add_argument( '--idPrefix', default='read-', help=('The prefix for the created read ids. The read number ' 'will be appended.')) parser.add_argument( '--count', type=int, default=100, help='The number of reads to create') parser.add_argument( '--minReadLength', type=int, default=10, help='The minimum length read to create') parser.add_argument( '--maxReadLength', type=int, default=None, help=('The maximum length read to create. 
Defaults to the genome ' 'length')) parser.add_argument( '--rate', type=float, default=0.0, help='The per-base mutation rate to use') parser.add_argument( '--meanLength', type=float, default=100.0, help='The mean read length') parser.add_argument( '--sdLength', type=float, default=10.0, help='The standard deviation of read length') parser.add_argument( '--verbose', action='store_true', default=False, help='Print (to stderr) information about the created reads.') parser.add_argument( '--fastaReads', action='store_true', default=False, help='Make the reads be FASTA instead of FASTQ') parser.add_argument( '--qualityChar', default='I', help=('The quality character to use for all quality scores when ' '--fastq is used')) parser.add_argument( '--circularGenome', action='store_true', default=False, help=('If specified, reads will wrap around the genome (currently not ' 'compatible with --alignReads).')) parser.add_argument( '--printGenome', action='store_true', default=False, help='If specified, print the genome as the first sequence.') parser.add_argument( '--alignReads', action='store_true', default=False, help=('If specified, print the reads aligned (with "-" characters) ' 'to the genome.')) addFASTACommandLineOptions(parser) args = parser.parse_args() reads = list(parseFASTACommandLineOptions(args)) # There should only be one "read", the sequence we are to create other # reads from. assert len(reads) == 1, ( 'FASTA input contained %d sequence%s (expected just one).' % ( len(reads), s(len(reads)))) genome = reads[0] genomeLen = len(genome) meanLength = args.meanLength if meanLength > genomeLen: raise ValueError('The mean read length (%d) is greater than the ' 'genome length (%d)' % (int(meanLength), genomeLen)) if meanLength <= 0: raise ValueError('The mean read length must be greater than zero') sdLength = args.sdLength if sdLength <= 0.0: raise ValueError('The read length standard deviation must be > 0.0') rate = args.rate if not (0.0 <= rate <= 1.0): raise ValueError('The read mutation rate must be in [0.0, 1.0]') minReadLength = args.minReadLength if minReadLength <= 0: raise ValueError('The minimum read length must be positive') maxReadLength = args.maxReadLength if maxReadLength is None: maxReadLength = genomeLen elif maxReadLength <= 0: raise ValueError('The maximum read length must be positive') if minReadLength > maxReadLength: raise ValueError( 'The minimum read length cannot exceed the maximum read length') alignReads = args.alignReads circularGenome = args.circularGenome if circularGenome and alignReads: raise ValueError( 'You cannot specify both --circularGenome and --alignReads') idPrefix = args.idPrefix verbose = args.verbose genomeSequence = genome.sequence readCountWidth = int(log10(args.count)) + 1 genomeLengthWidth = int(log10(genomeLen)) + 1 if args.printGenome: print(genome.toString('fasta'), end='') fastq, format_ = (False, 'fasta') if args.fastaReads else (True, 'fastq') qualityChar = args.qualityChar for i in range(args.count): id_ = '%s%0*d' % (idPrefix, readCountWidth, i + 1) read, offset, mutationOffsets = makeRead( genomeSequence, meanLength, sdLength, minReadLength, maxReadLength, id_, rate, circularGenome) read.id = read.id + '-length-%0*d-offset-%0*d' % ( genomeLengthWidth, len(read), genomeLengthWidth, offset) if mutationOffsets: read.id = read.id + '-mutations-at-%s' % ( ','.join(map(str, sorted(mutationOffsets)))) else: read.id = read.id + '-no-mutations' if verbose: print('Created read of length %d with %d mutations' % (len(read), len(mutationOffsets)), 
file=sys.stderr) if alignReads: sequence = ('-' * offset) + read.sequence if len(sequence) < genomeLen: sequence += '-' * (genomeLen - len(sequence)) read.sequence = sequence[:genomeLen] if fastq: read.quality = qualityChar * len(read.sequence) print(read.toString(format_), end='')
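# Example invocation (a sketch; "genome.fasta" is hypothetical, and the options
# for reading the input FASTA come from dark.reads.addFASTACommandLineOptions,
# so run with --help to see the exact input flags for your installation).
# The generated reads are written to standard output:
#
#   create-reads.py --count 50 --meanLength 100 --sdLength 10 \
#       --minReadLength 20 --rate 0.01 --circularGenome \
#       < genome.fasta > reads.fastq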
PypiClean
/huobi-client-pundix-2.0.0.tar.gz/huobi-client-pundix-2.0.0/huobi/model/generic/symbol.py
class Symbol: """ The Huobi supported symbols. :member base_currency: The base currency in a trading symbol. quote_currency: The quote currency in a trading symbol. price_precision: The quote currency precision when quote price (decimal places). amount_precision: The base currency precision when quote amount (decimal places). symbol_partition: The trading section, possible values: [main,innovation,bifurcation]. symbol: The symbol, like "btcusdt". state : trade status, maybe one in [online,offline,suspend] value_precision : value precision min_order_amt : minimum volume limit only used in limit-order and sell-market order max_order_amt : Maximum volume min_order_value : Minimum order amount leverage_ratio : Leverage ratio for symbol limit_order_min_order_amt: Minimum order amount of limit order in base currency (NEW) limit_order_max_order_amt: Max order amount of limit order in base currency (NEW) sell_market_min_order_amt: Minimum order amount of sell-market order in base currency (NEW) sell_market_max_order_amt: Max order amount of sell-market order in base currency (NEW) buy_market_max_order_amt: Max order value of buy-market order in quote currency (NEW) max_order_value: Max order value of limit order and buy-market order in usdt (NEW) """ def __init__(self): self.base_currency = "" self.quote_currency = "" self.price_precision = 0 self.amount_precision = 0 self.symbol_partition = "" self.symbol = "" self.state = "" self.value_precision = 0 self.min_order_amt = "" self.max_order_amt = "" self.min_order_value = "" self.leverage_ratio = 0 self.limit_order_min_order_amt = 0 self.limit_order_max_order_amt = 0 self.sell_market_min_order_amt = 0 self.sell_market_max_order_amt = 0 self.buy_market_max_order_value = 0 self.max_order_value = 0 def print_object(self, format_data=""): from huobi.utils.print_mix_object import PrintBasic PrintBasic.print_basic(self.base_currency, format_data + "Base Currency") PrintBasic.print_basic(self.quote_currency, format_data + "Quote Currency") PrintBasic.print_basic(self.price_precision, format_data + "Price Precision") PrintBasic.print_basic(self.amount_precision, format_data + "Amount Precision") PrintBasic.print_basic(self.symbol_partition, format_data + "Symbol Partition") PrintBasic.print_basic(self.symbol, format_data + "Symbol") PrintBasic.print_basic(self.state, format_data + "State") PrintBasic.print_basic(self.value_precision, format_data + "Value Precision") PrintBasic.print_basic(self.min_order_amt, format_data + "Min Order Amount") PrintBasic.print_basic(self.max_order_amt, format_data + "Max Order Amount") PrintBasic.print_basic(self.min_order_value, format_data + "Min Order Value") PrintBasic.print_basic(self.leverage_ratio, format_data + "Leverage Ratio") PrintBasic.print_basic(self.limit_order_min_order_amt, format_data + "Minimum order amount (Limit Order)") PrintBasic.print_basic(self.limit_order_max_order_amt, format_data + "Max order amount (Limit Order)") PrintBasic.print_basic(self.sell_market_min_order_amt, format_data + "Min order amount (Sell Market Order)") PrintBasic.print_basic(self.sell_market_max_order_amt, format_data + "Max order amount (Sell Market Order)") PrintBasic.print_basic(self.buy_market_max_order_value, format_data + "Max order value (Buy Market Order)") PrintBasic.print_basic(self.max_order_value, format_data + "Max order value (In USDT)")
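# Usage sketch (illustrative values only; Symbol objects are normally populated
# by the Huobi client from the exchange's symbol metadata rather than by hand):
#
#   symbol = Symbol()
#   symbol.symbol = "btcusdt"
#   symbol.base_currency = "btc"
#   symbol.quote_currency = "usdt"
#   symbol.price_precision = 2
#   symbol.amount_precision = 6
#   symbol.state = "online"
#   symbol.print_object()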
PypiClean
/mars-gym-0.1.0.tar.gz/mars-gym-0.1.0/docs/quick_start.rst
Quick Start ================================ In this tutorial we will present a simple example using MARS. Each module will be explained in a superficial way with focus on the application. .. image:: ../images/img2.jpg :width: 700 :align: center Three main components make the framework: * The first one is a highly customizable module where the consumer can ingest and process a massive amount of **data** for learning using spark jobs. * The second component was designed for **training** purposes. It holds an extensible module built on top of PyTorch to design learning architectures. It also has an OpenAI Gym environment that ingests the processed dataset to simulate the targeted marketplace. * Finally, the last component is an **evaluation** module that provides a set of distinct perspectives on the agent’s performance. It presents not only traditional recommendation metrics but also off-policy evaluations, to account for the bias induced from the historical data representation. All code used in this guide is designed to ilustrate how each class must be implemented. They will vary according to each project, but the consistent and necessary methods are displayed here. Some examples can be found at the :code:`samples` folder. These are used to run our examples, so make sure this folder is located in the same place you'll be running the commands from the following sections. Dataset ******* MARS provides some datasets preprocessed as examples to test the framework. They are real datasets, which contain interaction data between users and items and the metadata of the items to be recommended. * Trivago Dataset - http://recsys.trivago.cloud/challenge/dataset/ * Yoochose Dataset - http://2015.recsyschallenge.com/challenge.html .. code-block:: python >>> from mars_gym.data import utils >>> utils.datasets() ['random', 'yoochoose', 'processed_yoochoose', 'trivago_rio', 'processed_trivago_rio'] >>> df, df_meta = utils.load_dataset('processed_trivago_rio') >>> df.head() session_id user_id timestamp action_type item_id impressions list_reference_item pos_item_id clicked 0 05fe82b496fb9 M1Z13DD0P2KH 1541422443 clickout item 4304686 ['109351', '150138', '4345728', '105014', '478'... ['', '', '', '', ''] 7 1.0 1 05fe82b496fb9 M1Z13DD0P2KH 1541422474 clickout item 960255 ['1475717', '5196406', '104880', '109351', '68'... ['4304686', '', '', '', ''] 20 1.0 2 05fe82b496fb9 M1Z13DD0P2KH 1541423039 clickout item 2188598 ['104558', '326781', '104786', '1223390', '206'... ['4304686', '960255', '', '', ''] 9 1.0 3 05fe82b496fb9 M1Z13DD0P2KH 1541424631 clickout item 8459162 ['105014', '5659850', '478121', '109351', '956'... ['4304686', '960255', '2188598', '', ''] 23 1.0 4 05fe82b496fb9 M1Z13DD0P2KH 1541424685 interaction info 8459162 NaN ['4304686', '960255', '2188598', '8459162', ''] -1 0.0 >>> df_meta[['list_metadata']].head() list_metadata 0 [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, ... 1 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 2 [0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, ... 3 [0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, ... 4 [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, ... .. note:: The :code:`load_dataset()` function will download and load only the datasets in :code:`mars_gym.data.utils.datasets()` Prepare data ************ The Data Engineering Module is responsible for preprocessing the data and setting up interactions and metadata for the simulation module. It uses `Luigi <https://github.com/spotify/luigi>`_ as a pipeline tool. 
:code:`BasePrepareDataFrames` is the main class responsible for validating and preparing the data. .. code-block:: python from mars_gym.data.utils import DownloadDataset class PrepareInteractionData(luigi.Task): def requires(self): return DownloadDataset(dataset="processed_trivago_rio", output_path=OUTPUT_PATH) def output(self): return luigi.LocalTarget(os.path.join(DATASET_DIR, "dataset.csv",)) def run(self): os.makedirs(DATASET_DIR, exist_ok=True) df = pd.read_csv(self.input()[0].path) # .... transform dataset df.to_csv(self.output().path) class PrepareMetaData(luigi.Task): def requires(self): return DownloadDataset(dataset="processed_trivago_rio", output_path=OUTPUT_PATH) def output(self): return luigi.LocalTarget(os.path.join(DATASET_DIR, "metadata.csv",)) def run(self): os.makedirs(DATASET_DIR, exist_ok=True) df = pd.read_csv(self.input()[1].path) # .... transform dataset df.to_csv(self.output().path, index="item_id") The class inherited from :code:`BasePrepareDataFrames` is the one we will use from within MARS. It is necessary to implement 4 methods in this class. The :code:`timestamp_property`, which is a feature that defines the temporal order, :code:`dataset_dir`, which is the local path where the dataset will be saved, :code:`read_data_frame_path`, which is the local path of the interaction dataset and :code:`metadata_data_frame_path`, which is the local path of the metadata dataset. .. code-block:: python from mars_gym.data.task import BasePrepareDataFrames class PrepareTrivagoDataFrame(BasePrepareDataFrames): def requires(self): return ( PrepareInteractionData(), PrepareMetaData(), ) @property def timestamp_property(self) -> str: return "timestamp" @property def dataset_dir(self) -> str: return DATASET_DIR @property def read_data_frame_path(self) -> pd.DataFrame: return self.input()[0].path @property def metadata_data_frame_path(self) -> Optional[str]: return self.input()[1].path It is possible to test this pipeline before the simulation. Since this is a Luigi task it will give you summary about its success or failure, the commands to test are the following: .. code-block:: python >>> from samples.trivago_simple.data import PrepareTrivagoDataFrame >>> import luigi >>> job = PrepareTrivagoDataFrame() >>> luigi.build([job], local_scheduler=True) .... INFO: Worker Worker(salt=154256821, workers=1, host=user-pc, username=user, pid=16527) was stopped. Shutting down Keep-Alive thread INFO: ===== Luigi Execution Summary ===== Scheduled 4 tasks of which: * 4 ran successfully: - 1 DownloadDataset(output_path=output, dataset=processed_trivago_rio) - 1 PrepareInteractionData() - 1 PrepareMetaData() - 1 PrepareTrivagoDataFrame(...) This progress looks :) because there were no failed tasks or missing dependencies ===== Luigi Execution Summary ===== >>> [o.path for o in job.output()] ['.../train_cc25c002c7.csv', '.../val_cc25c002c7.csv', '.../test_cc25c002c7.csv', '.../metadata.csv'] The :code:`BasePrepareDataFrames` is highly configurable and parameterizable. In general, the output of this job is the split and processed datasets to be used by MARS. * `DATASET_DIR/train_cc25c002c7.csv` * `DATASET_DIR/val_cc25c002c7.csv` * `DATASET_DIR/test_cc25c002c7.csv` * `DATASET_DIR/metadata.csv` Configuration ************* Before the simulation, we need to prepare a configuration file with the design parameters and contextual information to be used in the model. We need to define a variable as an instance of :code:`ProjectConfig` .. 
code-block:: python from mars_gym.data.dataset import InteractionsDataset from mars_gym.meta_config import * from samples.trivago_rio import data trivago_rio = ProjectConfig( base_dir=data.BASE_DIR, prepare_data_frames_task=data.PrepareTrivagoDataFrame, dataset_class=InteractionsDataset, user_column=Column("user_id", IOType.INDEXABLE), item_column=Column("item_id", IOType.INDEXABLE), other_input_columns=[ Column("pos_item_id", IOType.NUMBER), Column("list_reference_item", IOType.INDEXABLE_ARRAY, same_index_as="item_id"), ], metadata_columns=[Column("list_metadata", IOType.INT_ARRAY),], output_column=Column("clicked", IOType.NUMBER), available_arms_column_name="impressions" ) * :code:`base_dir`: Local path where the dataset and files generated by the data engineer module will be saved * :code:`prepare_data_frames_task`: Class inherited from BasePrepareDataFrames. This defines the data engineer pipeline. * :code:`dataset_class`: This class defines how the dataset will be used in the simulation module. MARS already implements different types. * :code:`user_column`: Column that identifies the user * :code:`item_column`: Column that identifies the item * :code:`other_input_columns`: Columns that will be used as input for the model and context * :code:`metadata_columns`: Metadata columns that will be used as input for the model and context * :code:`output_column`: Reward column, the column that defines wether the recommendation was sucessful or not * :code:`available_arms_column_name`: Name of the column with items available for recommendation at the time of interaction. This column must contain a list of items the same type as :code:`item_column`. If this information is not available, MARS will randomly generate the items. .. note:: We recommend creating a `config.py` file with all project definitions. It is common to have several different configurations to experiment. Model and Simulation ******************** The Recommendation Agent is composed of Reward Estimator and a Recommendation Policy. The model is trained using the rewards from the environment and the policy chooses actions (recommendations) using the context received, again, from the environment. Reward Estimator ################ In order to implement a Reward Estimator ρ(x, a) we use a Pytorch Model that will estimate a reward in a contextual bandit problem. It uses the context 'x' (all information passed from environment) and the available actions 'a' to estimate a reward for each action. .. .. image:: ../images/math_reward_estimator.png .. :width: 300 .. :align: center Model ##### The model needs to inherit from RecommenderModule. This class receives through its constructor the :code:`ProjectConfig` and a :code:`Dict` with IndexMapping for all categorical variables. The model is a Pytorch :code:`nn.Module` and receives in the foward function all context defined in :code:`ProjectConfig` (:code:`user_column`, :code:`item_column`, :code:`other_input_columns`, and :code:`metadata_columns`). .. code-block:: python import luigi from typing import Dict, Any import torch import torch.nn as nn from mars_gym.meta_config import ProjectConfig from mars_gym.model.abstract import RecommenderModule class SimpleLinearModel(RecommenderModule): def __init__( self, project_config: ProjectConfig, index_mapping: Dict[str, Dict[Any, int]], ): """ build model architecture """ super().__init__(project_config, index_mapping) #... 
def forward( self, user_ids: torch.Tensor, item_ids: torch.Tensor, pos_item_id: torch.Tensor, list_reference_item: torch.Tensor, list_metadata: torch.Tensor, ): """ build forward """ pass This model will be trained using the Counterfactual Risk Minimization (CRM) [`1 <https://www.cs.cornell.edu/people/tj/publications/swaminathan_joachims_15b.pdf>`_] to reduce bias that came from the dataset. Everything about this training can be parameterized and easily altered. .. .. image:: ../images/math_crm_loss.png .. :width: 400 .. :align: center * [`1 <https://www.cs.cornell.edu/people/tj/publications/swaminathan_joachims_15b.pdf>`_] Adith Swaminathan and Thorsten Joachims. 2015. Counterfactual Risk Minimization: Learning from Logged Bandit Feedback. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (Lille, France) (ICML’15). JMLR.org, 814–823. **Model Example** This is an example of a simple linear model used in the trivago samples: .. code-block:: python class SimpleLinearModel(RecommenderModule): def __init__( self, project_config: ProjectConfig, index_mapping: Dict[str, Dict[Any, int]], n_factors: int, metadata_size: int, window_hist_size: int, ): super().__init__(project_config, index_mapping) self.user_embeddings = nn.Embedding(self._n_users, n_factors) self.item_embeddings = nn.Embedding(self._n_items, n_factors) # user + item + flatten hist + position + metadata num_dense = 2 * n_factors + window_hist_size * n_factors + 1 + metadata_size self.dense = nn.Sequential( nn.Linear(num_dense, 500), nn.SELU(), nn.Linear(500, 1), ) def flatten(self, input: torch.Tensor): return input.view(input.size(0), -1) def forward( self, user_ids: torch.Tensor, item_ids: torch.Tensor, pos_item_id: torch.Tensor, list_reference_item: torch.Tensor, list_metadata: torch.Tensor, ): user_emb = self.user_embeddings(user_ids) item_emb = self.item_embeddings(item_ids) history_items_emb = self.item_embeddings(list_reference_item) x = torch.cat( ( user_emb, item_emb, self.flatten(history_items_emb), pos_item_id.float().unsqueeze(1), list_metadata.float(), ), dim=1, ) x = self.dense(x) return torch.sigmoid(x) Recommendation Policy ##################### We need to implement a Recommendation Policy π(a|x), this is a bandit strategy 'π' that will choose an action 'a' based on the context 'x'. .. image:: ../images/math_policy_recommendation.png :width: 100 :align: center **Bandit** The Bandit needs to be inherited from BanditPolicy. We need to implement the :code:`._select_idx(...)` function. This method is called by the environment to receive an action given the context. .. code-block:: python from mars_gym.model.bandit import BanditPolicy from typing import Dict, Any, List, Tuple, Union class BasePolicy(BanditPolicy): def __init__(self, reward_model: nn.Module, seed: int = 42): """ Initialize bandit information and params """ super().__init__(reward_model) def _select_idx( self, arm_indices: List[int], arm_contexts: Tuple[np.ndarray, ...] = None, arm_scores: List[float] = None, pos: int = 0, ) -> Union[int, Tuple[int, float]]: """ Choose the index of arm selected in turn """ return action * :code:`arm_indices`: Available actions at the time of interaction (same as :code:`available_arms_column_name`) * :code:`arm_contexts`: Context information at the time of interaction * :code:`arm_scores`: Estimated reward, that came from Reward Estimator, for each action. **Example of Epsilon-Greedy Policy** .. 
code-block:: python class EGreedyPolicy(BanditPolicy): def __init__(self, reward_model: nn.Module, seed: int = 42): super().__init__(reward_model) self._rng = RandomState(seed) def _select_idx( self, arm_indices: List[int], arm_contexts: Tuple[np.ndarray, ...] = None, arm_scores: List[float] = None, pos: int = 0, ) -> Union[int, Tuple[int, float]]: n_arms = len(arm_indices) arm_probas = np.ones(n_arms) / n_arms if self._rng.choice([True, False], p=[self._epsilon, 1.0 - self._epsilon]): action = self._rng.choice(len(arm_indices), p=arm_probas) else: action = int(np.argmax(arm_scores)) return action Simulation ########## MARS-Gym simulates the dynamics of the marketplace. This includes several processes. The framework filters only successful interactions. They are the only ones that tell us what the users really want, thus they are used to compose the rewards. Each simulation step is an interaction, with observations being the user's metadata, and actions being the items to recommend. The sequence of steps follows the sequence of interactions in the filtered ground-truth dataset to maintain the temporal dynamic. Finally, the interactions between the proposed agent and the environment generate new interaction logs that are used in subsequent steps. .. image:: ../images/img3.jpg :width: 700 :align: center For simulation, we use the :code:`InteractionTraining` class. This class is a Gym implementation and receives as parameters the information about the project (:code:`ProjectConfig`), reward estimator (:code:`RecommenderModule`), bandit policy (:code:`BanditPolicy`) and other training parameters. .. code-block:: python >>> from mars_gym.simulation.interaction import InteractionTraining >>> >>> job_train = InteractionTraining( >>> project="samples.trivago_simple.config.trivago_rio", >>> recommender_module_class="samples.trivago_simple.simulation.SimpleLinearModel", >>> recommender_extra_params={ >>> "n_factors": 10, >>> "metadata_size": 148, >>> "window_hist_size": 5, >>> }, >>> bandit_policy_class="samples.trivago_simple.simulation.EGreedyPolicy", >>> bandit_policy_params={ >>> "epsilon": 0.1, >>> "seed": 42 >>> }, >>> test_size=0.1, >>> obs_batch_size=100, >>> num_episodes=1, >>> ) >>> >>> luigi.build([job_train], local_scheduler=True) ... ... 0/100(t): 100%|████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 30.32it/s, loss=0.0025, running_loss=0.0024] 1/100(t): 100%|█████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 45.82it/s, loss=0.003, running_loss=0.0028] ... ... 10/100(v): 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 81.10it/s, val_loss=0.2949] Interaction Stats (75.36%) count mean std dataset all 7300.0 0.044110 0.205353 train 5840.0 0.042808 0.202442 valid 1460.0 0.049315 0.216599 Saving logs... Saving test set predictions... 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 4063441.72it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 3831989.55it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:16<00:00, 151.33it/s] INFO: Informed scheduler that task InteractionTraining____samples_trivago____epsilon___0_1__4fc1370d9d has status DONE 2020-06-22 08:41:37,842 : INFO : Informed scheduler that task InteractionTraining____samples_trivago____epsilon___0_1__4fc1370d9d has status DONE DEBUG: Asking scheduler for work... The best way to run is in **Script Mode**: .. code-block:: console $ mars-gym run interaction \ --project samples.trivago_simple.config.trivago_rio \ --recommender-module-class samples.trivago_simple.simulation.SimpleLinearModel \ --recommender-extra-params '{"n_factors": 10, "metadata_size": 148, "window_hist_size": 5}' \ --bandit-policy-class samples.trivago_simple.simulation.EGreedyPolicy \ --bandit-policy-params '{"epsilon": 0.1}' \ --obs-batch-size 100 ... ... Interaction Stats (75.36%) count mean std dataset all 7300.0 0.044110 0.205353 train 5840.0 0.042808 0.202442 valid 1460.0 0.049315 0.216599 Saving logs... Saving test set predictions... 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 4063441.72it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 3831989.55it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:16<00:00, 151.33it/s] INFO: Informed scheduler that task InteractionTraining____samples_trivago____epsilon___0_1__4fc1370d9d has status DONE 2020-06-22 08:41:37,842 : INFO : Informed scheduler that task InteractionTraining____samples_trivago____epsilon___0_1__4fc1370d9d has status DONE DEBUG: Asking scheduler for work... .. note:: Make sure you have downloaded the dataset to be processed. In this script specifically, we are using the "processed_trivago_rio" dataset Each simulation generates artifacts for evaluation and metadata that can be used to deploy models in another environment: * ../params.json * ../sim-datalog.csv * ../index_mapping.pkl * ../bandit.pkl * ../weights.pt * ../test_set_predictions.csv Supervised Learning ################### It is also possible to use MARS-gym for supervised learning. It is useful for validating and testing the reward model before using it in a simulation. In such cases, we can use :code:`SupervisedModelTraining` class with similar parameters. .. code-block:: console $ mars-gym run supervised \ --project samples.trivago_simple.config.trivago_rio \ --recommender-module-class samples.trivago_simple.simulation.SimpleLinearModel \ --recommender-extra-params '{"n_factors": 10, "metadata_size": 148, "window_hist_size": 5}' \ --early-stopping-min-delta 0.0001 --negative-proportion 0.8 \ --learning-rate 0.0001 --epochs 50 --batch-size 100 --metrics='["loss"]' ... ... 
DEBUG: Checking if SupervisedModelTraining(project=samples.trivago_simple.config.trivago_rio, sample_size=-1, minimum_interactions=5, session_test_size=0.1, test_size=0.2, dataset_split_method=time, test_split_type=random, val_size=0.2, n_splits=5, split_index=0, data_frames_preparation_extra_params={}, sampling_strategy=none, balance_fields=[], sampling_proportions={}, use_sampling_in_validation=False, eq_filters={}, neq_filters={}, isin_filters={}, seed=42, observation=, negative_proportion=0.8, recommender_module_class=samples.trivago_simple.simulation.SimpleLinearModel, recommender_extra_params={"n_factors": 10, "metadata_size": 148, "window_hist_size": 5}, device=cuda, batch_size=100, epochs=50, optimizer=adam, optimizer_params={}, learning_rate=0.0001, loss_function=mse, loss_function_params={}, gradient_norm_clipping=0.0, gradient_norm_clipping_type=2, early_stopping_patience=5, early_stopping_min_delta=0.0001, monitor_metric=val_loss, monitor_mode=min, generator_workers=0, pin_memory=False, policy_estimator_extra_params={}, metrics=["loss"], bandit_policy_class=mars_gym.model.bandit.ModelPolicy, bandit_policy_params={}) is complete ... 20/50(t): 100%|████████████████████████████████████████████████████████████████| 388/388 [00:01<00:00, 242.70it/s, loss=0.129, running_loss=0.1277] 20/50(v): 100%|███████████████████████████████████████████████████████████████████████████████████| 97/97 [00:00<00:00, 323.86it/s, val_loss=0.125] 21/50(t): 100%|████████████████████████████████████████████████████████████████| 388/388 [00:01<00:00, 201.85it/s, loss=0.1291, running_loss=0.129] 21/50(v): 100%|██████████████████████████████████████████████████████████████████████████████████| 97/97 [00:00<00:00, 323.73it/s, val_loss=0.1252] Saving test set predictions... 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 3655489.13it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:00<00:00, 3219842.88it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 2422/2422 [00:13<00:00, 181.27it/s] ... .. image:: ../images/supervised_learning/history.jpg :width: 400 :align: center ------------------------------------- Evaluation ********** We have a specific command for evaluation. This task implements three rating categories: Rank Metrics, Fairness Metrics, and Off-policy Metrics. Before the evaluation, it is necessary to run a simulation or supervised training, after this we will use the :code:`task_id` provided by luigi, and also used as the folder name in :code:`output/interaction/InteractionTraining/results/task_id`. For evaluation, we use the :code:`mars-gym evaluate` command, which has the :code:`mars-gym evaluate interaction` and :code:`mars-gym evaluate supervised` variants. Each evaluation command generates many artifacts with metrics and metadata that can be used by the Evaluation Platform. * EVALUATION_DIR/metrics.json * EVALUATION_DIR/rank_metrics.csv * EVALUATION_DIR/df_offpolicy.csv * EVALUATION_DIR/fairness_df.csv * EVALUATION_DIR/fairness_metrics.csv Rank Metrics ############ By default, every run of :code:`mars-gym evaluate` will compute Rank Metrics, such as: * nDCG * Mean Average Precision .. code-block:: console $ mars-gym evaluate interaction \ --model-task-id InteractionTraining____samples_trivago____epsilon___0_1__3fe8c849e3 .. 
image:: ../images/dataviz/rank.png :width: 500 Notice that each evaluation command will receive its own :code:`task_id` preceded by the training's :code:`task_id`. Off-policy Metrics ################## For off-policy evaluation, MARS-Gym uses three main estimators [`3 <https://dl.acm.org/doi/10.5555/3104482.3104620>`_]: * Direct Method * Inverse Propensity Score * Doubly Robust All of which can be seen and compared with our Evaluation Platform. In order to run these metrics, just add the flag :code:`--offpolicy-eval` to the command: .. code-block:: console $ mars-gym evaluate interaction \ --model-task-id InteractionTraining____samples_trivago____epsilon___0_1__3fe8c849e3 \ --offpolicy-eval .. image:: ../images/dataviz/off.png :width: 500 [`3 <https://dl.acm.org/doi/10.5555/3104482.3104620>`_] Miroslav Dudík, John Langford, and Lihong Li. 2011. Doubly Robust Policy Evaluation and Learning. InProceedings of the 28th InternationalConference on International Conference on Machine Learning(Bellevue, Washington, USA)(ICML’11). Omnipress, Madison, WI, USA, 1097–1104. Fairness Metrics ################ In MARS-Gym, we consider three perspectives to measure fairness [`2 <https://doi.org/10.1145/3038912.3052660>`_]: * **Disparate Treatment** .. image:: ../images/dataviz/treatment.png :width: 500 * **Disparate Impact** .. image:: ../images/dataviz/impact.png :width: 500 * **Disparate Mistreatment** .. image:: ../images/dataviz/mistreatment.png :width: 500 To calculate the metrics of fairness, you need to pass the parameter :code:`--fairness-columns`, this parameter receives an array of attributes according to which the metrics will be computed. Ex: .. code-block:: console $ mars-gym evaluate interaction \ --model-task-id InteractionTraining____samples_trivago____epsilon___0_1__3fe8c849e3 \ --fairness-columns '["pos_item_id"]' [`2 <https://doi.org/10.1145/3038912.3052660>`_] Zafar et. al, 2017. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment https://doi.org/10.1145/3038912.3052660 Evaluation Platform ################### The Evaluation Platform is a web application that centralizes all views of the evaluation metrics. .. image:: ../images/dataviz/image1.png :width: 800 :align: center It is an external service made with `Streamlit <https://www.streamlit.io/>`_ library. To start the service, use this command: .. code-block:: console $ mars-gym viz You can now view your Streamlit app in your browser. Local URL: http://localhost:8501 In this platform you'll be able to select experiments, metrics, and visualize them in a number of ways, including the iteraction results from training. .. .. image:: ../images/dataviz/image2.png .. :width: 700
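For reference, the off-policy estimators listed in the Off-policy Metrics section
above are usually written in the following textbook forms (notation only, included
here because the corresponding figures are not reproduced in this page; MARS-Gym's
exact implementation may differ). Here :math:`\hat{\rho}` is the reward estimator,
:math:`\pi` the evaluated policy, :math:`\mu` the logging policy and :math:`r_i`
the observed reward:

.. math::

    \hat{V}_{DM}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \sum_{a} \pi(a \mid x_i)\, \hat{\rho}(x_i, a)

    \hat{V}_{IPS}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\, r_i

    \hat{V}_{DR}(\pi) = \hat{V}_{DM}(\pi) + \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \left( r_i - \hat{\rho}(x_i, a_i) \right)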
PypiClean
/ais_dom-2023.7.2-py3-none-any.whl/homeassistant/components/unifi/device_tracker.py
from __future__ import annotations from collections.abc import Callable, Mapping from dataclasses import dataclass from datetime import timedelta import logging from typing import Any, Generic import aiounifi from aiounifi.interfaces.api_handlers import ItemEvent from aiounifi.interfaces.clients import Clients from aiounifi.interfaces.devices import Devices from aiounifi.models.api import ApiItemT from aiounifi.models.client import Client from aiounifi.models.device import Device from aiounifi.models.event import Event, EventKey from homeassistant.components.device_tracker import ScannerEntity, SourceType from homeassistant.config_entries import ConfigEntry from homeassistant.core import Event as core_Event, HomeAssistant, callback from homeassistant.helpers.dispatcher import async_dispatcher_connect from homeassistant.helpers.entity_platform import AddEntitiesCallback import homeassistant.util.dt as dt_util from .const import DOMAIN as UNIFI_DOMAIN from .controller import UniFiController from .entity import ( HandlerT, UnifiEntity, UnifiEntityDescription, async_device_available_fn, ) LOGGER = logging.getLogger(__name__) CLIENT_TRACKER = "client" DEVICE_TRACKER = "device" CLIENT_CONNECTED_ATTRIBUTES = [ "_is_guest_by_uap", "ap_mac", "authorized", "essid", "ip", "is_11r", "is_guest", "note", "qos_policy_applied", "radio", "radio_proto", "vlan", ] CLIENT_STATIC_ATTRIBUTES = [ "mac", "name", "oui", ] CLIENT_CONNECTED_ALL_ATTRIBUTES = CLIENT_CONNECTED_ATTRIBUTES + CLIENT_STATIC_ATTRIBUTES WIRED_CONNECTION = (EventKey.WIRED_CLIENT_CONNECTED,) WIRED_DISCONNECTION = (EventKey.WIRED_CLIENT_DISCONNECTED,) WIRELESS_CONNECTION = ( EventKey.WIRELESS_CLIENT_CONNECTED, EventKey.WIRELESS_CLIENT_ROAM, EventKey.WIRELESS_CLIENT_ROAM_RADIO, EventKey.WIRELESS_GUEST_CONNECTED, EventKey.WIRELESS_GUEST_ROAM, EventKey.WIRELESS_GUEST_ROAM_RADIO, ) WIRELESS_DISCONNECTION = ( EventKey.WIRELESS_CLIENT_DISCONNECTED, EventKey.WIRELESS_GUEST_DISCONNECTED, ) @callback def async_client_allowed_fn(controller: UniFiController, obj_id: str) -> bool: """Check if client is allowed.""" if not controller.option_track_clients: return False client = controller.api.clients[obj_id] if client.mac not in controller.wireless_clients: if not controller.option_track_wired_clients: return False elif ( client.essid and controller.option_ssid_filter and client.essid not in controller.option_ssid_filter ): return False return True @callback def async_client_is_connected_fn(controller: UniFiController, obj_id: str) -> bool: """Check if device object is disabled.""" client = controller.api.clients[obj_id] if controller.wireless_clients.is_wireless(client) and client.is_wired: if not controller.option_ignore_wired_bug: return False # Wired bug in action if ( not client.is_wired and client.essid and controller.option_ssid_filter and client.essid not in controller.option_ssid_filter ): return False if ( dt_util.utcnow() - dt_util.utc_from_timestamp(client.last_seen or 0) > controller.option_detection_time ): return False return True @callback def async_device_heartbeat_timedelta_fn( controller: UniFiController, obj_id: str ) -> timedelta: """Check if device object is disabled.""" device = controller.api.devices[obj_id] return timedelta(seconds=device.next_interval + 60) @dataclass class UnifiEntityTrackerDescriptionMixin(Generic[HandlerT, ApiItemT]): """Device tracker local functions.""" heartbeat_timedelta_fn: Callable[[UniFiController, str], timedelta] ip_address_fn: Callable[[aiounifi.Controller, str], str] is_connected_fn: 
Callable[[UniFiController, str], bool] hostname_fn: Callable[[aiounifi.Controller, str], str | None] @dataclass class UnifiTrackerEntityDescription( UnifiEntityDescription[HandlerT, ApiItemT], UnifiEntityTrackerDescriptionMixin[HandlerT, ApiItemT], ): """Class describing UniFi device tracker entity.""" ENTITY_DESCRIPTIONS: tuple[UnifiTrackerEntityDescription, ...] = ( UnifiTrackerEntityDescription[Clients, Client]( key="Client device scanner", has_entity_name=True, allowed_fn=async_client_allowed_fn, api_handler_fn=lambda api: api.clients, available_fn=lambda controller, obj_id: controller.available, device_info_fn=lambda api, obj_id: None, event_is_on=(WIRED_CONNECTION + WIRELESS_CONNECTION), event_to_subscribe=( WIRED_CONNECTION + WIRED_DISCONNECTION + WIRELESS_CONNECTION + WIRELESS_DISCONNECTION ), heartbeat_timedelta_fn=lambda controller, _: controller.option_detection_time, is_connected_fn=async_client_is_connected_fn, name_fn=lambda client: client.name or client.hostname, object_fn=lambda api, obj_id: api.clients[obj_id], supported_fn=lambda controller, obj_id: True, unique_id_fn=lambda controller, obj_id: f"{obj_id}-{controller.site}", ip_address_fn=lambda api, obj_id: api.clients[obj_id].ip, hostname_fn=lambda api, obj_id: api.clients[obj_id].hostname, ), UnifiTrackerEntityDescription[Devices, Device]( key="Device scanner", has_entity_name=True, icon="mdi:ethernet", allowed_fn=lambda controller, obj_id: controller.option_track_devices, api_handler_fn=lambda api: api.devices, available_fn=async_device_available_fn, device_info_fn=lambda api, obj_id: None, event_is_on=None, event_to_subscribe=None, heartbeat_timedelta_fn=async_device_heartbeat_timedelta_fn, is_connected_fn=lambda ctrlr, obj_id: ctrlr.api.devices[obj_id].state == 1, name_fn=lambda device: device.name or device.model, object_fn=lambda api, obj_id: api.devices[obj_id], supported_fn=lambda controller, obj_id: True, unique_id_fn=lambda controller, obj_id: obj_id, ip_address_fn=lambda api, obj_id: api.devices[obj_id].ip, hostname_fn=lambda api, obj_id: None, ), ) async def async_setup_entry( hass: HomeAssistant, config_entry: ConfigEntry, async_add_entities: AddEntitiesCallback, ) -> None: """Set up device tracker for UniFi Network integration.""" controller: UniFiController = hass.data[UNIFI_DOMAIN][config_entry.entry_id] controller.register_platform_add_entities( UnifiScannerEntity, ENTITY_DESCRIPTIONS, async_add_entities ) class UnifiScannerEntity(UnifiEntity[HandlerT, ApiItemT], ScannerEntity): """Representation of a UniFi scanner.""" entity_description: UnifiTrackerEntityDescription _event_is_on: tuple[EventKey, ...] _ignore_events: bool _is_connected: bool @callback def async_initiate_state(self) -> None: """Initiate entity state. Initiate is_connected. 
""" description = self.entity_description self._event_is_on = description.event_is_on or () self._ignore_events = False self._is_connected = description.is_connected_fn(self.controller, self._obj_id) if self.is_connected: self.controller.async_heartbeat( self.unique_id, dt_util.utcnow() + description.heartbeat_timedelta_fn(self.controller, self._obj_id), ) @property def is_connected(self) -> bool: """Return true if the device is connected to the network.""" return self._is_connected @property def hostname(self) -> str | None: """Return hostname of the device.""" return self.entity_description.hostname_fn(self.controller.api, self._obj_id) @property def ip_address(self) -> str: """Return the primary ip address of the device.""" return self.entity_description.ip_address_fn(self.controller.api, self._obj_id) @property def mac_address(self) -> str: """Return the mac address of the device.""" return self._obj_id @property def source_type(self) -> SourceType: """Return the source type, eg gps or router, of the device.""" return SourceType.ROUTER @property def unique_id(self) -> str: """Return a unique ID.""" return self._attr_unique_id @callback def _make_disconnected(self, *_: core_Event) -> None: """No heart beat by device.""" self._is_connected = False self.async_write_ha_state() @callback def async_update_state(self, event: ItemEvent, obj_id: str) -> None: """Update entity state. Remove heartbeat check if controller state has changed and entity is unavailable. Update is_connected. Schedule new heartbeat check if connected. """ description = self.entity_description if event == ItemEvent.CHANGED: # Prioritize normal data updates over events self._ignore_events = True elif event == ItemEvent.ADDED and not self.available: # From unifi.entity.async_signal_reachable_callback # Controller connection state has changed and entity is unavailable # Cancel heartbeat self.controller.async_heartbeat(self.unique_id) return if is_connected := description.is_connected_fn(self.controller, self._obj_id): self._is_connected = is_connected self.controller.async_heartbeat( self.unique_id, dt_util.utcnow() + description.heartbeat_timedelta_fn(self.controller, self._obj_id), ) @callback def async_event_callback(self, event: Event) -> None: """Event subscription callback.""" if event.mac != self._obj_id or self._ignore_events: return if event.key in self._event_is_on: self.controller.async_heartbeat(self.unique_id) self._is_connected = True self.async_write_ha_state() return self.controller.async_heartbeat( self.unique_id, dt_util.utcnow() + self.entity_description.heartbeat_timedelta_fn( self.controller, self._obj_id ), ) async def async_added_to_hass(self) -> None: """Register callbacks.""" await super().async_added_to_hass() self.async_on_remove( async_dispatcher_connect( self.hass, f"{self.controller.signal_heartbeat_missed}_{self.unique_id}", self._make_disconnected, ) ) async def async_will_remove_from_hass(self) -> None: """Disconnect object when removed.""" await super().async_will_remove_from_hass() self.controller.async_heartbeat(self.unique_id) @property def extra_state_attributes(self) -> Mapping[str, Any] | None: """Return the client state attributes.""" if self.entity_description.key != "Client device scanner": return None client = self.entity_description.object_fn(self.controller.api, self._obj_id) raw = client.raw attributes_to_check = CLIENT_STATIC_ATTRIBUTES if self.is_connected: attributes_to_check = CLIENT_CONNECTED_ALL_ATTRIBUTES attributes = {k: raw[k] for k in attributes_to_check if k in raw} 
return attributes
PypiClean
/ppb-1.1rc3.tar.gz/ppb-1.1rc3/examples/external_event_loop_integration/README.md
# External Event Loop Integration

This example demonstrates embedding ppb in an external event loop, in this case Twisted.

To run this example, you'll need to install the dependencies listed in the requirements.txt in this directory.

Otherwise, its behavior is similar to the keyboard_and_mouse_controls example: a spaceship in the center of the screen, facing down, can be controlled with the arrow keys or "WASD". You can fire a laser beam with your primary mouse button or the space bar. When a laser hits one of the enemy ships (round), the laser and the ship are removed from play.

Additionally, if you navigate to localhost:8080 in your browser, you will see the number of enemies still in play. This demonstrates the interaction between ppb and a web server.
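The authoritative code lives in this directory, but the core idea can be sketched as follows. This is a hedged, minimal sketch: the `GameEngine.start()`/`loop_once()` calls, the 60 Hz tick rate, and the enemy-counting logic are assumptions for illustration, not copied from the example.

```python
# Minimal sketch, not the example's actual code; see the source in this directory for that.
import ppb
from twisted.internet import reactor, task
from twisted.web.resource import Resource
from twisted.web.server import Site


class Game(ppb.BaseScene):
    """Scene that would hold the player ship, enemies, and lasers."""


class EnemyCount(Resource):
    """Tiny web page reporting how many objects are still in the current scene."""

    isLeaf = True

    def __init__(self, engine):
        super().__init__()
        self.engine = engine

    def render_GET(self, request):
        count = sum(1 for _ in self.engine.current_scene.children)  # illustrative counting
        return str(count).encode("utf-8")


engine = ppb.GameEngine(Game)
engine.start()                                     # assumed: set the engine up without blocking
task.LoopingCall(engine.loop_once).start(1 / 60)   # assumed: let Twisted drive ppb's frames
reactor.listenTCP(8080, Site(EnemyCount(engine)))
reactor.run()
```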
PypiClean
/c65faucet-1.0.59.tar.gz/c65faucet-1.0.59/faucet/__main__.py
# Copyright (C) 2015 Brad Cowie, Christopher Lorier and Joe Stringer. # Copyright (C) 2015 Research and Education Advanced Network New Zealand Ltd. # Copyright (C) 2015--2019 The Contributors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os import sys from pbr.version import VersionInfo if sys.version_info < (3,) or sys.version_info < (3, 5): raise ImportError( """You are trying to run faucet on python {py} Faucet is not compatible with python {py}, please upgrade to python 3.5 or newer.""".format( py=".".join([str(v) for v in sys.version_info[:3]]) ) ) RYU_OPTIONAL_ARGS = [ ("ca-certs", "CA certificates"), ( "config-dir", """Path to a config directory to pull `*.conf` files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via previous --config-file, arguments hence over-ridden options in the directory take precedence.""", ), ( "config-file", """Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. Defaults to None.""", "/etc/faucet/ryu.conf", ), ("ctl-cert", "controller certificate"), ("ctl-privkey", "controller private key"), ("default-log-level", "default log level"), ("log-config-file", "Path to a logging config file to use"), ("log-dir", "log file directory"), ("log-file", "log file name"), ("log-file-mode", "default log file permission"), ("observe-links", "observe link discovery events"), ("ofp-listen-host", "openflow listen host (default 0.0.0.0)"), ("ofp-ssl-listen-port", "openflow ssl listen port (default: 6653)"), ( "ofp-switch-address-list", """list of IP address and port pairs (default empty). e.g., "127.0.0.1:6653,[::1]:6653""", ), ( "ofp-switch-connect-interval", "interval in seconds to connect to switches (default 1)", ), ("ofp-tcp-listen-port", "openflow tcp listen port (default: 6653)"), ("pid-file", "pid file name"), ("user-flags", "Additional flags file for user applications"), ] def parse_args(sys_args): """Parse Faucet/Gauge arguments. 
Returns: argparse.Namespace: command line arguments """ args = argparse.ArgumentParser(prog="faucet", description="Faucet SDN Controller") args.add_argument("--gauge", action="store_true", help="run Gauge instead") args.add_argument( "-v", "--verbose", action="store_true", help="produce verbose output" ) args.add_argument( "-V", "--version", action="store_true", help="print version and exit" ) args.add_argument("--use-stderr", action="store_true", help="log to standard error") args.add_argument("--use-syslog", action="store_true", help="output to syslog") args.add_argument( "--ryu-app-lists", action="append", help="add Ryu app (can be specified multiple times)", metavar="APP", ) for ryu_arg in RYU_OPTIONAL_ARGS: if len(ryu_arg) >= 3: args.add_argument( "--ryu-%s" % ryu_arg[0], help=ryu_arg[1], default=ryu_arg[2] ) else: args.add_argument("--ryu-%s" % ryu_arg[0], help=ryu_arg[1]) return args.parse_args(sys_args) def print_version(): """Print version number and exit.""" version = VersionInfo("c65faucet").semantic_version().release_string() message = "c65faucet %s" % version print(message) def build_ryu_args(argv): args = parse_args(argv[1:]) # Checking version number? if args.version: print_version() return [] prog = os.path.basename(argv[0]) ryu_args = [] # Handle log location if args.use_stderr: ryu_args.append("--use-stderr") if args.use_syslog: ryu_args.append("--use-syslog") # Verbose output? if args.verbose: ryu_args.append("--verbose") for arg, val in vars(args).items(): if not val or not arg.startswith("ryu"): continue if arg == "ryu_app_lists": continue if arg == "ryu_config_file" and not os.path.isfile(val): continue arg_name = arg.replace("ryu_", "").replace("_", "-") ryu_args.append("--%s=%s" % (arg_name, val)) # Running Faucet or Gauge? if args.gauge or os.path.basename(prog) == "gauge": ryu_args.append("faucet.gauge") else: ryu_args.append("faucet.faucet") # Check for additional Ryu apps. if args.ryu_app_lists: ryu_args.extend(args.ryu_app_lists) # Replace current process with ryu-manager from PATH (no PID change). ryu_args.insert(0, "osken-manager") return ryu_args def main(): """Main program.""" ryu_args = build_ryu_args(sys.argv) if ryu_args: os.execvp(ryu_args[0], ryu_args) if __name__ == "__main__": main()
PypiClean
/INDIpy-0.4.0.tar.gz/INDIpy-0.4.0/indi/routing/router.py
import logging
from typing import Dict, List, Optional, Union, cast

from indi.message import EnableBLOB, IndiMessage, NewBLOBVector, const
from indi.routing import Client, Device

logger = logging.getLogger(__name__)

SenderType = Optional[Union[Client, Device]]


class Router:
    """Message router

    Passes messages between device drivers and client connections.
    """

    _instance = None

    DEFAULT_BLOB_POLICY = const.BLOBEnable.NEVER

    def __init__(self) -> None:
        self.clients: List[Client] = []
        self.devices: List[Device] = []
        self.blob_routing: Dict[
            SenderType, Dict[Optional[str], const.BLOBEnableType]
        ] = {}

    @classmethod
    def instance(cls):
        if not cls._instance:
            cls._instance = cls()
        return cls._instance

    def register_device(self, device: Device):
        self.devices.append(device)

    def register_client(self, client: Client):
        logger.debug("Router: registering client %s", client)
        self.clients.append(client)
        self.blob_routing[client] = {}

    def unregister_client(self, client: Client):
        logger.debug("Router: unregistering client %s", client)
        if client in self.clients:
            self.clients.remove(client)
        if client in self.blob_routing:
            del self.blob_routing[client]

    def process_message(self, message: IndiMessage, sender: SenderType = None):
        is_blob = isinstance(message, NewBLOBVector)

        if message.from_client:
            if isinstance(message, EnableBLOB):
                self.process_enable_blob(message, sender)

            for device in self.devices:
                if not device == sender and device.accepts(message.device):
                    device.message_from_client(message)

        if message.from_device:
            for client in self.clients:
                if not client == sender:
                    device_name = getattr(message, "device")
                    client_blob_policy = self.blob_routing.get(client, {}).get(
                        device_name, self.DEFAULT_BLOB_POLICY
                    )

                    if (
                        is_blob
                        and client_blob_policy
                        in (
                            const.BLOBEnable.ALSO,
                            const.BLOBEnable.ONLY,
                        )
                    ) or (not is_blob and client_blob_policy == const.BLOBEnable.NEVER):
                        client.message_from_device(message)

    def process_enable_blob(self, message: EnableBLOB, sender: SenderType):
        self.blob_routing[sender][message.device] = message.value
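# A brief, duck-typed usage sketch of the Router above. The endpoint classes here are
# hypothetical stand-ins for illustration; real clients and devices come from indi.routing
# and implement the same methods the router calls.

# Hypothetical endpoints; the router only relies on the methods shown below.
from indi.routing.router import Router


class PrintingClient:
    def message_from_device(self, message):
        print("to client:", message)


class DummyDevice:
    def accepts(self, device_name):
        return device_name in (None, "DUMMY")

    def message_from_client(self, message):
        print("to device:", message)


router = Router.instance()
router.register_device(DummyDevice())
router.register_client(PrintingClient())
# router.process_message(message, sender=...) now fans an IndiMessage out to the other
# side, honouring each client's enableBLOB policy for BLOB vectors.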
PypiClean
/js_sdk-12.0.0.tar.gz/js_sdk-12.0.0/jumpscale/tools/qrcode/__init__.py
import pyqrcode


def _get_qr_object(content, version, mode, encoding):
    return pyqrcode.create(content, version=version, mode=mode, encoding=encoding)


def png_get(content, file_path, version=None, mode=None, encoding=None, scale=1):
    """Writes QR code to file in PNG format

    Args:
        content (str): content to be encoded
        file_path (str): file path to write to
        version (int, optional): data capacity of the code, automatic if not specified. Defaults to None.
        mode (str, optional): how the content will be encoded, automatic if not specified. Defaults to None.
        encoding (str, optional): Encoding of the specified content string. Defaults to None.
        scale (int, optional): scale of the QR code relative to the module. Defaults to 1.
    """
    qr_object = _get_qr_object(content, version, mode, encoding)
    qr_object.png(file_path, scale=scale)


def svg_get(content, file_path, version=None, mode=None, encoding=None, scale=1, title=None):
    """Writes QR code to file in SVG format

    Args:
        content (str): content to be encoded
        file_path (str): file path to write to
        version (int, optional): data capacity of the code, automatic if not specified. Defaults to None.
        mode (str, optional): how the content will be encoded, automatic if not specified. Defaults to None.
        encoding (str, optional): Encoding of the specified content string. Defaults to None.
        scale (int, optional): scale of the QR code relative to the module. Defaults to 1.
        title (str, optional): title of the SVG. Defaults to None.
    """
    qr_object = _get_qr_object(content, version, mode, encoding)
    qr_object.svg(file_path, scale=scale, title=title)


def base64_get(content, version=None, mode=None, encoding=None, scale=1):
    """Returns the base64-encoded PNG of the QR code

    Args:
        content (str): content to be encoded
        version (int, optional): data capacity of the code, automatic if not specified. Defaults to None.
        mode (str, optional): how the content will be encoded, automatic if not specified. Defaults to None.
        encoding (str, optional): Encoding of the specified content string. Defaults to None.
        scale (int, optional): scale of the QR code relative to the module. Defaults to 1.
    """
    qr_object = _get_qr_object(content, version, mode, encoding)
    return qr_object.png_as_base64_str(scale=scale)
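# A short usage sketch for the helpers above. The content string and output paths are
# arbitrary examples, and the import path simply follows this file's location
# (jumpscale/tools/qrcode/__init__.py).

# Arbitrary example values, for illustration only.
from jumpscale.tools import qrcode

qrcode.png_get("https://example.com", "/tmp/example_qr.png", scale=4)
qrcode.svg_get("https://example.com", "/tmp/example_qr.svg", scale=4, title="Example QR")
encoded = qrcode.base64_get("https://example.com", scale=4)
print(encoded[:40], "...")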
PypiClean
/dfipy-2.0.1.tar.gz/dfipy-2.0.1/dfi/validate.py
import json import logging from datetime import datetime from typing import List, Optional, Union import pandas as pd import requests from dfi import models _logger = logging.getLogger(__name__) class DFIDataFrameColumnsNameError(Exception): """Raised when the column names of a dataframe are not as expected""" class DFIDataCSVConversionError(Exception): """Raised when the user tries to ingest into DFI a csv that is wrongly formatted""" class DFIDataJSONConversionError(Exception): """Raised when the user tries to ingest into DFI a json that is wrongly formatted""" class DFIInputDataError(Exception): """Raised when the user tries to ingest into DFI a piece of data wrongly formatted""" class DFIInputValueError(Exception): """Raised when the user passes a wrong input value to the DFI query""" class DFIInputValueOutOfBoundError(Exception): """Raised when the user passes a wrong input value to the DFI query""" class DFIResponseError(Exception): """Raised when an error propagated back from the HTTP API""" class DFIResponseWarning(Warning): """Raised when some exceptional case is propagated back from the HTTP API""" def bounding_box(list_bbox: Optional[List[float]]) -> None: """ Check input list of coordinates correspond to a bounding box, with lon, lat within the range. :param list_bbox: a bbox :returns: `None` :raises `DFIInputValueError`: If `polygon` is ill-formed. :raises `DFIInputValueOutOfBoundError`: If not -180.0 < longitude <= 180.0 or not -90 < latitude <= 90.0. """ if list_bbox is None: return if len(list_bbox) != 4: raise DFIInputValueError(f"Input bounding box parameters must be a list of 4 floats. User passed {list_bbox}") for value in list_bbox: if not isinstance(value, float): raise DFIInputValueError(f"Input value {value} of type {type(value)} must be a float.") min_lng, min_lat, max_lng, max_lat = list_bbox if not -180 < min_lng <= 180: raise DFIInputValueOutOfBoundError(f"Input min longitude {min_lng} is out of range.") if not -180 < max_lng <= 180: raise DFIInputValueOutOfBoundError(f"Input max longitude {max_lng} is out of range.") if not -90 < min_lat < 90: raise DFIInputValueOutOfBoundError(f"Input min latitude {min_lat} is out of range.") if not -90 < max_lat < 90: raise DFIInputValueOutOfBoundError(f"Input max latitude {max_lat} is out of range.") def data(list_data: List[dict]) -> None: """ Check that the input is a list of dict with correct keys. :param list_data: a list of dictionaries with keys "coordinate", "time", "id", "payload". :returns: `None` :raises `DFIInputDataError`: if the input dictionaries do not have the correct keys. :raises `DFIInputValueOutOfBoundError`: if the input coordinates are out of bound. """ expected_keys = {"coordinate", "time", "id", "payload"} for dict_data in list_data: if not expected_keys.issubset(set(dict_data.keys())): raise DFIInputDataError(f"Keys expected {expected_keys}. Found instead {set(dict_data.keys())}") if not isinstance(dict_data["coordinate"], (list, tuple)): raise DFIInputDataError(f"Coordinates passed are {dict_data['coordinate']}. Expecting a tuple or a list.") if len(dict_data["coordinate"]) != 2: raise DFIInputDataError( f"Coordinates passed are {dict_data['coordinate']}. Expecting a tuple or a list with two elements." 
) lng, lat = dict_data["coordinate"] if not -180 < lng <= 180: raise DFIInputValueOutOfBoundError(f"Input max longitude {lng} is out of range.") if not -90 < lat < 90: raise DFIInputValueOutOfBoundError(f"Input min latitude {lat} is out of range.") def df_hexes(df_h3: pd.DataFrame) -> None: """ Check the column names are correct. :param df_h3: A dataframe with a `"hex_id"` column. :returns: `None` :raises `DFIDataFrameColumnsNameError`: If a column name is not in `["entity_id", "latitude", "longitude", "timestamp", "hex_id"]` """ for col_name in ["entity_id", "latitude", "longitude", "timestamp", "hex_id"]: if col_name not in df_h3.columns: raise DFIDataFrameColumnsNameError(f"Column name {col_name} expected in df_records but not found.") def df_hexes_heatmap(df_h3: pd.DataFrame) -> None: """ Check the column names are correct. :param df_h3: A dataframe with a `"hex_id"` column. :returns: `None` :raises `DFIDataFrameColumnsNameError`: If a column name is not in `["entity_id", "latitude", "longitude", "timestamp", "hex_id", "period_start", "period end"]` """ for col_name in ["entity_id", "latitude", "longitude", "timestamp", "hex_id", "period_start", "period end"]: if col_name not in df_h3.columns: raise DFIDataFrameColumnsNameError(f"Column name {col_name} expected in df_records but not found.") def df_records(df_rec: pd.DataFrame) -> None: """ Check the column names are correct. :param h3_res: A dataframe of records :returns `None`: :raises `DFIDataFrameColumnsNameError`: If a column name is not in `["entity_id", "latitude", "longitude", "timestamp"]` """ for col_name in ["entity_id", "latitude", "longitude", "timestamp"]: if col_name not in df_rec.columns: raise DFIDataFrameColumnsNameError(f"Column name {col_name} expected in df_records but not found.") def entities(input_entities: Optional[List[str]]) -> None: """ Validate a given list of entities is a list of strings. :param `input_entities`: a list of entity ids :returns: `None` :raises `DFIInputValueError`: If `input_entities` is not a list of string or is empty or has duplicates. """ if input_entities is None: return if not isinstance(input_entities, list): raise DFIInputValueError(f"Entities must be a list of strings. Received {input_entities}") if len(input_entities) == 0: raise DFIInputValueError("Entities must be a list of strings. Received an empty list") if len(set(input_entities)) < len(input_entities): duplicates_found = set([x for x in input_entities if input_entities.count(x) > 1]) raise DFIInputValueError(f"Entities list must not contain duplicates. Duplicates found {duplicates_found}") def h3_resolution(h3_res: int) -> None: """ check the input is within an acceptable range. :param `h3_res`: An H3 resolution :returns: `None` :raises `DFIInputValueOutOfBoundError`: If not 1<= h3_res <= 15 """ if (h3_res < 1) or (h3_res > 15): raise DFIInputValueOutOfBoundError( f"Resolution is incorrect. It must be between 1 and 15. User passed {h3_res}" ) def list_polygons_response(vert: Optional[models.Polygon], resp: requests.models.Response) -> None: """ :param vert: :returns: `None` :raises `DFIResponseError`: If there was an error querying the DFI API. :::{TODO} Display error from API. ::: """ if vert is None: msg = f"Polygon list can not be retrieved from the json response: {resp.json()}" _logger.error(msg) raise DFIResponseError(msg) def polygon(poly: Optional[models.Polygon]) -> None: """ Check input list of coordinates correspond to a list of vertices, or a bounding box. :param poly: A list of vertices or bbox. 
:returns `None`: :raises `DFIInputValueError`: If `polygon` is ill-formed. """ if poly is None: return if not isinstance(poly, (list, tuple)): raise DFIInputValueError(f"Polygon {poly} must be of type list or tuple.") if len(poly) == 0: raise DFIInputValueError(f"Given polygon {poly} is empty.") if not isinstance(poly[0], (list, tuple, float)): raise DFIInputValueError(f"Polygon {poly} must be a list of tuples or floats.") if isinstance(poly[0], float): bounding_box(poly) if isinstance(poly[0], (list, tuple)): vertices(poly) def response( resp: requests.models.Response, url: str, headers: dict, params: dict, payload: Optional[dict] = None, ) -> None: """ Log the response of a request with the given parameters. Raise an error if status code is not 20x. :param resp: a response object :param url: the queried url :param headers: request headers :param params: request params :param payload: request payload :returns: `None` :raises `DFIResponseError`: If there was an error querying the DFI API. :::{TODO} Display error from API. ::: """ # prevent from showing the user token to terminal and logs headers = headers.copy() headers["X-API-TOKEN"] = "Bearer XXX" msg = f"""Response status code {resp.status_code}. Query URL: {url}, HEADER: {json.dumps(headers, sort_keys=True, indent=4)}, PARAMS: {json.dumps(params, sort_keys=True, indent=4)} """ if payload is not None: msg += f"PAYLOAD: {json.dumps(payload, sort_keys=True, indent=4)}" if int(resp.status_code / 10) != 20: msg += f" Status Code {resp.status_code}. " msg += resp.text _logger.error(msg) raise DFIResponseError(msg) _logger.debug(msg) def time_interval(time_interv: Optional[models.TimeInterval] = None) -> None: """ Validate input datetimes are both given and compatible. :param time_interv: - a tuple of start time and end time bounds e.g. `(start_time, end_time)` :returns: `None` :raises `DFIInputValueError`: If `time_interv` is ill-formed. """ if time_interv is None: return if len(time_interv) != 2: msg = f"Time interval is not an interval with two dates. User passed {time_interv}" raise DFIInputValueError(msg) start_time, end_time = time_interv if start_time is None and end_time is None: return if (start_time is None and end_time is not None) or (start_time is not None and end_time is None): msg = ( "start_time and end_time must be both initialised or both None. " f"User passed start_time={start_time}, end_time={end_time}" ) raise DFIInputValueError(msg) if not isinstance(start_time, datetime): msg = f"Start time should be of type datetime. User passed {start_time}" raise DFIInputValueError(msg) if not isinstance(end_time, datetime): msg = f"End time should be of type datetime. User passed {end_time}" raise DFIInputValueError(msg) if not start_time < end_time: msg = f"Start time {start_time} happened after than end time {end_time}." raise DFIInputValueError(msg) def url_s3(url_object: Union[str, List[str]]) -> None: """ Validate input S3 URL, which can be a string or a list of strings. :param url_object: - a string with an S3 URL or a list or S3 UR:. :returns: `None` :raises `DFIInputValueError`: if the types are not as expected. """ msg = f"Given URL can be either a list of strings or a list. User passed {url_object}" if not isinstance(url_object, (str, list)): raise DFIInputValueError(msg) if isinstance(url_object, list): for url_item in url_object: if not isinstance(url_item, str): raise DFIInputValueError(msg) def vertices(input_vertices: Optional[List[List[float]]]) -> None: """ Check input list of vertices correspond to a polygon. 
It does not check if the polygon is simple. :param input_vertices: a list of polygon vertices :returns: `None` :raises `DFIInputValueError`: If `input_vertices` is ill-formed. """ if input_vertices is None: return if len(input_vertices) < 3: raise DFIInputValueError(f"A polygon can not have less than 3 vertices. User passed {input_vertices}.") for vertex in input_vertices: if not len(vertex) == 2: raise DFIInputValueError(f"Length of each vertex must be 2. User passed {vertex}") if not isinstance(vertex[0], float) or not isinstance(vertex[1], float): raise DFIInputValueError( f"Coordinates must be of type float." f" User passed {vertex} of types ({type(vertex[0])}, {type(vertex[1])})" ) lng, lat = vertex if not -180 < lng <= 180: raise DFIInputValueOutOfBoundError(f"Input longitude {lng} is out of range.") if not -90 < lat < 90: raise DFIInputValueOutOfBoundError(f"Input latitude {lat} is out of range.") if not input_vertices[0] == input_vertices[-1]: raise DFIInputValueError("First and last vertices are expected to be identical points.") def vertices_response(vert: Optional[models.Polygon], resp: requests.models.Response) -> None: """ :param vert: :returns: `None` :raises `DFIResponseError`: If there was an error querying the DFI API. :::{TODO} Display error from API. ::: """ if vert is None: msg = f"Polygon vertices can not be retrieved from the json response: {resp.json()}" _logger.error(msg) raise DFIResponseError(msg)
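# A brief usage sketch of the validators in this module. The values are arbitrary
# examples, and the import path follows the file location (dfi/validate.py).

# Arbitrary example inputs; each validator raises a DFIInputValue* error on bad input.
from datetime import datetime

from dfi import validate

validate.bounding_box([-0.51, 51.28, 0.33, 51.69])  # min_lng, min_lat, max_lng, max_lat
validate.entities(["vehicle-1", "vehicle-2"])
validate.time_interval((datetime(2023, 1, 1), datetime(2023, 2, 1)))

try:
    validate.bounding_box([-200.0, 51.28, 0.33, 51.69])
except validate.DFIInputValueOutOfBoundError as err:
    print(err)  # "Input min longitude -200.0 is out of range."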
PypiClean
/scs_core-2.8.9-py3-none-any.whl/scs_core/control/control_receipt.py
import hashlib from collections import OrderedDict from scs_core.control.command import Command from scs_core.data.datetime import LocalizedDatetime from scs_core.data.json import JSONable, JSONify from scs_core.sample.sample import Sample # -------------------------------------------------------------------------------------------------------------------- class ControlReceipt(JSONable): """ classdocs """ VERSION = 2.0 # ---------------------------------------------------------------------------------------------------------------- @classmethod def construct_from_jdict(cls, jdict): if not jdict: return None tag = jdict.get('tag') attn = jdict.get('attn') try: version = round(float(jdict.get('ver')), 1) except (TypeError, ValueError): version = None rec = LocalizedDatetime.construct_from_iso8601(jdict.get('rec')) command = Command.construct_from_jdict(jdict.get('cmd')) omd = jdict.get('omd') digest = jdict.get('digest') datum = cls(tag, attn, rec, command, omd, digest, version=version) return datum @classmethod def construct_from_datum(cls, datum, rec, command, key): digest = ControlReceipt.__hash(datum.attn, rec, command, datum.digest, key, cls.VERSION) return cls(datum.attn, datum.tag, rec, command, datum.digest, digest, version=cls.VERSION) # ---------------------------------------------------------------------------------------------------------------- @classmethod def __hash(cls, tag, rec, command, omd, key, version): rec_iso8601 = rec.as_iso8601(include_millis=Sample.INCLUDE_MILLIS) text = str(tag) + JSONify.dumps(rec_iso8601) + JSONify.dumps(command) + str(omd) + str(key) if version == 2.0: hash_object = hashlib.sha1(text.encode()) else: hash_object = hashlib.sha256(text.encode()) return hash_object.hexdigest() # ---------------------------------------------------------------------------------------------------------------- def __init__(self, tag, attn, rec, command, omd, digest, version=None): """ Constructor """ self.__tag = tag # string self.__attn = attn # string self.__rec = rec # LocalizedDatetime self.__version = version # float self.__command = command # Command self.__omd = omd # string self.__digest = digest # string # ---------------------------------------------------------------------------------------------------------------- def is_valid(self, key): digest = self.__hash(self.tag, self.rec, self.command, self.omd, key, self.version) return digest == self.__digest # ---------------------------------------------------------------------------------------------------------------- def as_json(self): jdict = OrderedDict() jdict['tag'] = self.tag jdict['attn'] = self.attn jdict['rec'] = self.rec.as_iso8601(include_millis=Sample.INCLUDE_MILLIS) jdict['ver'] = round(self.version, 1) jdict['cmd'] = self.command jdict['omd'] = self.omd jdict['digest'] = self.__digest return jdict # ---------------------------------------------------------------------------------------------------------------- @property def tag(self): return self.__tag @property def attn(self): return self.__attn @property def rec(self): return self.__rec @property def version(self): return self.__version @property def command(self): return self.__command @property def omd(self): return self.__omd # ---------------------------------------------------------------------------------------------------------------- def __str__(self, *args, **kwargs): return "ControlReceipt:{tag:%s, attn:%s, rec:%s, version:%s, command:%s, omd:%s, digest:%s}" % \ (self.tag, self.attn, self.rec, self.version, self.command, self.omd, 
self.__digest)
PypiClean
/thrill-0.0.1.tar.gz/thrill-0.0.1/extlib/googletest/googlemock/scripts/generator/README.cppclean
Goal:
-----

CppClean attempts to find problems in C++ source that slow development in large code bases, for example various forms of unused code. Unused code ranges from unused functions, methods, data members, and types to unnecessary #include directives. Unnecessary #includes can cause considerable extra compilation, lengthening the edit-compile-run cycle.

The project home page is: http://code.google.com/p/cppclean/


Features:
---------

* Find and print C++ language constructs: classes, methods, functions, etc.
* Find classes with virtual methods, no virtual destructor, and no bases
* Find global/static data that are potential problems when using threads
* Unnecessary forward class declarations
* Unnecessary function declarations
* Undeclared function definitions
* (planned) Find unnecessary header files #included
  - No direct reference to anything in the header
  - Header is unnecessary if classes were forward declared instead
* (planned) Source files that reference headers not directly #included, i.e., files that rely on a transitive #include from another header
* (planned) Unused members (private, protected, & public) methods and data
* (planned) Store AST in a SQL database so relationships can be queried

AST is Abstract Syntax Tree, a representation of parsed source code.
http://en.wikipedia.org/wiki/Abstract_syntax_tree


System Requirements:
--------------------

* Python 2.4 or later (2.3 probably works too)
* Works on Windows (untested), Mac OS X, and Unix


How to Run:
-----------

For all examples, it is assumed that cppclean resides in a directory called /cppclean.

To print warnings for classes with virtual methods, no virtual destructor, and no base classes:

    /cppclean/run.sh nonvirtual_dtors.py file1.h file2.h file3.cc ...

To print all the functions defined in header file(s):

    /cppclean/run.sh functions.py file1.h file2.h ...

All the commands take multiple files on the command line. Other programs include: find_warnings, headers, methods, and types. Some other programs are available, but used primarily for debugging.

run.sh is a simple wrapper that sets PYTHONPATH to /cppclean and then runs the program in /cppclean/cpp/PROGRAM.py. There is currently no equivalent for Windows. Contributions for a run.bat file would be greatly appreciated.


How to Configure:
-----------------

You can add a siteheaders.py file in /cppclean/cpp to configure where to look for other headers (typically -I options passed to a compiler). Currently two values are supported: _TRANSITIVE and GetIncludeDirs.

_TRANSITIVE should be set to a boolean value (True or False) indicating whether to transitively process all header files. The default is False.

GetIncludeDirs is a function that takes a single argument and returns a sequence of directories to include. This can be a generator or return a static list.

    def GetIncludeDirs(filename):
        return ['/some/path/with/other/headers']

    # Here is a more complicated example.
    def GetIncludeDirs(filename):
        yield '/path1'
        yield os.path.join('/path2', os.path.dirname(filename))
        yield '/path3'


How to Test:
------------

For all examples, it is assumed that cppclean resides in a directory called /cppclean. The tests require:

    cd /cppclean
    make test

    # To generate expected results after a change:
    make expected


Current Status:
---------------

The parser works pretty well for header files, parsing about 99% of Google's header files. Anything which inspects the structure of C++ source files should work reasonably well. Function bodies are not transformed to an AST, but left as tokens.
Much work is still needed on finding unused header files and storing an AST in a database. Non-goals: ---------- * Parsing all valid C++ source * Handling invalid C++ source gracefully * Compiling to machine code (or anything beyond an AST) Contact: -------- If you used cppclean, I would love to hear about your experiences [email protected]. Even if you don't use cppclean, I'd like to hear from you. :-) (You can contact me directly at: [email protected])
PypiClean
/hikari_yuyo-1.19.0.tar.gz/hikari_yuyo-1.19.0/docs/usage/components.md
# Message Components Message components are the interactive buttons and select menus you'll see on some messages sent by bots. ### Making a Component Client The Component client keeps track of registered components and handles executing them. This can be created with any of the following class methods: * [ComponentClient.from_gateway_bot][yuyo.components.ComponentClient.from_gateway_bot]: Create a component client from a Hikari gateway bot (i.e. [hikari.GatewayBot][hikari.impl.gateway_bot.GatewayBot]). * [ComponentClient.from_rest_bot][yuyo.components.ComponentClient.from_rest_bot]: Create a component client from a Hikari REST bot (i.e. [hikari.RESTBot][hikari.impl.rest_bot.RESTBot] or [yuyo.asgi.AsgiBot][]). * [ComponentClient.from_tanjun][yuyo.components.ComponentClient.from_tanjun]: Create a component client from a Tanjun [Client][tanjun.abc.Client]. This method will make the component client use Tanjun's Alluka client for dependency injection, essentially mirroring the dependencies registered for Tanjun's DI while also registering [ComponentClient][yuyo.components.ComponentClient] as a type dependency. Client state can be managed through dependency injection. This is implemented using [Alluka][alluka] and more information about it can be found in Alluka's [usage guide](https://alluka.cursed.solutions/usage/). The Alluka client used for component execution can be found at [ComponentClient.alluka][yuyo.components.ComponentClient.alluka]. For the sake of simplicity, the following examples all assume the component client can be accessed through Alluka style dependency injection. ### Types of components ##### Buttons ![button colours](./images/button_colours.png) Message buttons have several different styles, as shown above. Most of these are interactive, meaning that an interaction will be sent to the bot when a user clicks on it. The only non-interactive style is link buttons which simply open the set link in a browser for the user who clicked on it. A row can have up to 5 buttons in it. ### Select Menus ![select menu example](./images/select_menu_example.png) Select menus let users select between 0 to 25 options (dependent on how the bot configured it). These selections are communicated to the bot once the user has finished selecting options via an interaction and there's several different resources they can be selecting: * Text menus: lets the bot pre-define up to 25 text options * User menus: lets the user pick up to 25 users * Role menus: lets the user pick up to 25 roles * Channel menus: lets the user pick up to 25 channels * Mentionable menus: lets the user pick up to 25 roles and users !!! note As of writing user, role, channel and mentionable menus only let you select entities from the current guild. Only text menus work properly in DM channels. Each select menu takes up a whole row. ### Declaring Components When adding sub-components to a select menu, they'll either be appended to the last row or they'll be added to a new row if the new entry wouldn't fit in the last row. A message can only have up to 5 component rows on it. 
There's several different ways to declare components using Yuyo: ### Subclassing ```py --8<-- "./docs_src/components.py:32:55" ``` When subclassing [ActionColumnExecutor][yuyo.components.ActionColumnExecutor], you can use any of the following class descriptors to add "static" sub-components (which'll be included on every instance and subclass of the column) to it: * [as_channel_menu][yuyo.components.as_channel_menu] * [as_interactive_button][yuyo.components.as_interactive_button] * [as_mentionable_menu][yuyo.components.as_mentionable_menu] * [as_role_menu][yuyo.components.as_role_menu] * [as_text_menu][yuyo.components.as_text_menu] * [as_user_menu][yuyo.components.as_user_menu] * [link_button][yuyo.components.link_button] ```py --8<-- "./docs_src/components.py:59:64" ``` Most of these descriptors decorate a callback which'll be called when that specific sub-component is used by a user, with the only exception being link buttons which open a link for the user instead of sending an interaction to the bot. !!! warning If you declare `__init__` on an [ActionColumnExecutor][yuyo.components.ActionColumnExecutor] subclass then you must make sure to first call `super().__init__()` in it. ```py --8<-- "./docs_src/components.py:69:80" ``` Alternatively, static sub-components can be added to an [ActionColumnExecutor][yuyo.components.ActionColumnExecutor] subclass using its chainable `add_static_{}` class methods. ```py --8<-- "./docs_src/components.py:85:104" ``` Or by using its `with_static_{}` decorator class methods. The only sub-component type which cannot be added through a decorator call is link buttons. !!! note [column_template][yuyo.components.column_template] just provides a shorthand for creating an [ActionColumnExecutor][yuyo.components.ActionColumnExecutor] subclass and all of these class methods also work on a normal class. ### Builder ```py --8<-- "./docs_src/components.py:109:120" ``` You can also dynamically build a [ActionColumnExecutor][yuyo.components.ActionColumnExecutor] after initialising it by using its chainable `add_{}` methods to add sub-components. ```py --8<-- "./docs_src/components.py:125:144" ``` Or by using its `with_{}` decorator methods. The only sub-component type which can't be added through a decorator call is link buttons. ### Handling Component Interactions There's two main ways to handle component interactions with Yuyo: ##### Stateful ```py --8<-- "./docs_src/components.py:148:164" ``` Subclassing [ActionColumnExecutor][yuyo.components.ActionColumnExecutor] allows you to associate state with a specific message's components through OOP. When doing this you'll usually be creating an instance of the components column per message. [ComponentClient.register_executor][yuyo.components.ComponentClient.register_executor] defaults `timeout` to a 30 second sliding timeout (meaning that the timer resets every use). ##### Stateless ```py --8<-- "./docs_src/components.py:168:184" ``` Alternatively, components can be reused by registering the component to the client on startup with `timeout=None` and sending the same component's rows per-execution. Custom IDs have some special handling which allows you to track some metadata for a specific message's components. 
They are split into two parts as `"{match}:{metadata}"`, where the "match" part is what Yuyo will use to find the executor for a message's components and the "metadata" ([ComponentContext.id_metadata][yuyo.components.ComponentContext.id_metadata]) part represents any developer-added metadata for that specific instance of the component.

The `id_metadata` init argument lets you set the metadata for the static components in an action column while initialising it by passing a dict of match IDs/descriptor callback names to the metadata for each specified component.

Custom IDs cannot be longer than 100 characters in total length and the match parts of the custom IDs in an executor have to be globally unique when registering it globally (i.e. without passing `message=`).

!!! note
    For stateless components, as described above, to work properly the match part of custom IDs needs to stay the same between bot restarts.

    The `as_` descriptors achieve this by generating a constant default ID from the path for the component's callback (which consists of the callback's name and the qualnames of the class and the relevant modules). This does, however, mean that any changes to the function's name or the name of the class/modules it's in will change this generated custom ID, leading to it no longer matching any previously declared message components.

    However, the `add_` and `with_` (class)methods generate a random default whenever called and will have to be manually supplied a constant custom ID through the optional `custom_id` argument.

    The `as_` descriptors also have a `custom_id` argument which overrides the default path generated ID.

### Responding to Components

```py
--8<-- "./docs_src/components.py:188:194"
```

[ComponentContext.respond][yuyo.components.ComponentContext.respond] is used to respond to an interaction with a new message; this has a similar signature to Hikari's message respond method but will only be guaranteed to return a [hikari.Message][hikari.messages.Message] object when `ensure_result=True` is passed.

Alternatively, [yuyo.InteractionError][yuyo.components.InteractionError] can be raised to end the execution of a component with a response message.

##### Ephemeral responses

```py
--8<-- "./docs_src/components.py:198:202"
```

Ephemeral responses mark the response message as private (so that only the author can see it) and temporary.

A response can be marked as ephemeral by passing `ephemeral=True` to either [ComponentContext.create_initial_response][yuyo.components.ComponentContext.create_initial_response] (when initially responding to the interaction with a message response) or [ComponentContext.create_followup][yuyo.components.ComponentContext.create_followup] (for followup responses).

##### Deferrals

Interactions need an initial response within 3 seconds but, if you can't give a response within 3 seconds, you can defer the first response using [ComponentContext.defer][yuyo.components.ComponentContext.defer].

A deferral should then be finished by editing in the initial response using either [ComponentContext.edit_initial_response][yuyo.components.ComponentContext.edit_initial_response] or [ComponentContext.respond][yuyo.components.ComponentContext.respond], and if you want the response to be an ephemeral message create then you'll have to pass `ephemeral=True` when deferring.

##### Updating the source message

```py
--8<-- "./docs_src/components.py:206:209"
```

You can also use the initial response to edit the message the component being used is on.
To do this you need to pass `response_type=hikari.ResponseType.MESSAGE_UPDATE` while calling [ComponentContext.create_initial_response][yuyo.components.ComponentContext.create_initial_response]. After doing this any further calls to [ComponentContext.delete_initial_response][yuyo.components.ComponentContext.delete_initial_response] and [ComponentContext.edit_initial_response][yuyo.components.ComponentContext.edit_initial_response] will target the source message as well. You cannot change the ephemeral state of the source message. You need to pass `response_type=hikari.ResponseType.DEFERRED_MESSAGE_UPDATE` When deferring with the intent to update the source message. ##### Modal responses You can also create a Modal prompt as the initial response to a component interaction. For more information on how to handle modals see the [Modals usage guide](../modals), where [ComponentContext.create_modal_response][yuyo.components.ComponentContext.create_modal_response] should be used to create the initial prompt. ### Other Executors ##### Pagination Yuyo provides a standard component paginator implementation through [components.ComponentPaginator][yuyo.components.ComponentPaginator]. ```py --8<-- "./docs_src/components.py:213:218" ``` This paginator takes iterators/generators of [yuyo.pagination.Page][]s and will only push the iterator forwards as the user interacts with the paginator. This allows for lazily generating responses. Because of this you must use [iter][] before passing a list of pre-built data to its init. ```py --8<-- "./docs_src/components.py:226:227" ``` This also supports asynchronous iterators/generators, allowing for functionality like fetching data as the user scrolls through it. ```py --8<-- "./docs_src/components.py:231:238" ``` The paginator only enables 3 buttons by default: step backwards, stop and step forwards. To enable the other 2 buttons or even just customise these buttons (i.e. set a specific custom_id or emoji/label) you should pass `triggers=[]` to [ComponentPaginator.\_\_init\_\_][yuyo.components.ComponentPaginator.__init__] to disable the default triggers then use the provided builder methods as shown above. You can also add your own buttons to this alongside the pagination buttons using the methods provided by [ActionColumnExecutor][yuyo.components.ActionColumnExecutor].
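The snippets referenced above are pulled in from the docs source at build time, so as a final illustration here is one self-contained sketch of the paginator flow described in this section. It is a hedged sketch only: `get_next_entry()`, `Page.to_kwargs()`, the `rows` attribute, and the `Page` constructor arguments are assumed names rather than something stated on this page, so check the API reference before copying it.

```py
# Hedged sketch; the calls marked as assumed may differ from the real API.
import hikari
import yuyo


async def send_paged_reply(
    ctx: yuyo.components.ComponentContext, client: yuyo.components.ComponentClient
) -> None:
    pages = iter(
        [
            yuyo.pagination.Page("Page 1"),
            yuyo.pagination.Page("Page 2", embeds=[hikari.Embed(title="Details")]),
        ]
    )
    paginator = yuyo.components.ComponentPaginator(pages)

    first_page = await paginator.get_next_entry()  # assumed: pull the first Page eagerly
    message = await ctx.respond(
        **first_page.to_kwargs(),  # assumed: convert the Page to create-message kwargs
        components=paginator.rows,
        ensure_result=True,
    )
    client.register_executor(paginator, message=message)  # 30 second sliding timeout by default
```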
PypiClean
/dionysus-2.0.9.tar.gz/dionysus-2.0.9/bindings/python/pybind11/docs/advanced/pycpp/object.rst
Python types ############ .. _wrappers: Available wrappers ================== All major Python types are available as thin C++ wrapper classes. These can also be used as function parameters -- see :ref:`python_objects_as_args`. Available types include :class:`handle`, :class:`object`, :class:`bool_`, :class:`int_`, :class:`float_`, :class:`str`, :class:`bytes`, :class:`tuple`, :class:`list`, :class:`dict`, :class:`slice`, :class:`none`, :class:`capsule`, :class:`iterable`, :class:`iterator`, :class:`function`, :class:`buffer`, :class:`array`, and :class:`array_t`. .. warning:: Be sure to review the :ref:`pytypes_gotchas` before using this heavily in your C++ API. .. _casting_back_and_forth: Casting back and forth ====================== In this kind of mixed code, it is often necessary to convert arbitrary C++ types to Python, which can be done using :func:`py::cast`: .. code-block:: cpp MyClass *cls = ..; py::object obj = py::cast(cls); The reverse direction uses the following syntax: .. code-block:: cpp py::object obj = ...; MyClass *cls = obj.cast<MyClass *>(); When conversion fails, both directions throw the exception :class:`cast_error`. .. _python_libs: Accessing Python libraries from C++ =================================== It is also possible to import objects defined in the Python standard library or available in the current Python environment (``sys.path``) and work with these in C++. This example obtains a reference to the Python ``Decimal`` class. .. code-block:: cpp // Equivalent to "from decimal import Decimal" py::object Decimal = py::module_::import("decimal").attr("Decimal"); .. code-block:: cpp // Try to import scipy py::object scipy = py::module_::import("scipy"); return scipy.attr("__version__"); .. _calling_python_functions: Calling Python functions ======================== It is also possible to call Python classes, functions and methods via ``operator()``. .. code-block:: cpp // Construct a Python object of class Decimal py::object pi = Decimal("3.14159"); .. code-block:: cpp // Use Python to make our directories py::object os = py::module_::import("os"); py::object makedirs = os.attr("makedirs"); makedirs("/tmp/path/to/somewhere"); One can convert the result obtained from Python to a pure C++ version if a ``py::class_`` or type conversion is defined. .. code-block:: cpp py::function f = <...>; py::object result_py = f(1234, "hello", some_instance); MyClass &result = result_py.cast<MyClass>(); .. _calling_python_methods: Calling Python methods ======================== To call an object's method, one can again use ``.attr`` to obtain access to the Python method. .. code-block:: cpp // Calculate e^π in decimal py::object exp_pi = pi.attr("exp")(); py::print(py::str(exp_pi)); In the example above ``pi.attr("exp")`` is a *bound method*: it will always call the method for that same instance of the class. Alternately one can create an *unbound method* via the Python class (instead of instance) and pass the ``self`` object explicitly, followed by other arguments. .. code-block:: cpp py::object decimal_exp = Decimal.attr("exp"); // Compute the e^n for n=0..4 for (int n = 0; n < 5; n++) { py::print(decimal_exp(Decimal(n)); } Keyword arguments ================= Keyword arguments are also supported. In Python, there is the usual call syntax: .. code-block:: python def f(number, say, to): ... # function code f(1234, say="hello", to=some_instance) # keyword call in Python In C++, the same call can be made using: .. 
code-block:: cpp using namespace pybind11::literals; // to bring in the `_a` literal f(1234, "say"_a="hello", "to"_a=some_instance); // keyword call in C++ Unpacking arguments =================== Unpacking of ``*args`` and ``**kwargs`` is also possible and can be mixed with other arguments: .. code-block:: cpp // * unpacking py::tuple args = py::make_tuple(1234, "hello", some_instance); f(*args); // ** unpacking py::dict kwargs = py::dict("number"_a=1234, "say"_a="hello", "to"_a=some_instance); f(**kwargs); // mixed keywords, * and ** unpacking py::tuple args = py::make_tuple(1234); py::dict kwargs = py::dict("to"_a=some_instance); f(*args, "say"_a="hello", **kwargs); Generalized unpacking according to PEP448_ is also supported: .. code-block:: cpp py::dict kwargs1 = py::dict("number"_a=1234); py::dict kwargs2 = py::dict("to"_a=some_instance); f(**kwargs1, "say"_a="hello", **kwargs2); .. seealso:: The file :file:`tests/test_pytypes.cpp` contains a complete example that demonstrates passing native Python types in more detail. The file :file:`tests/test_callbacks.cpp` presents a few examples of calling Python functions from C++, including keywords arguments and unpacking. .. _PEP448: https://www.python.org/dev/peps/pep-0448/ .. _implicit_casting: Implicit casting ================ When using the C++ interface for Python types, or calling Python functions, objects of type :class:`object` are returned. It is possible to invoke implicit conversions to subclasses like :class:`dict`. The same holds for the proxy objects returned by ``operator[]`` or ``obj.attr()``. Casting to subtypes improves code readability and allows values to be passed to C++ functions that require a specific subtype rather than a generic :class:`object`. .. code-block:: cpp #include <pybind11/numpy.h> using namespace pybind11::literals; py::module_ os = py::module_::import("os"); py::module_ path = py::module_::import("os.path"); // like 'import os.path as path' py::module_ np = py::module_::import("numpy"); // like 'import numpy as np' py::str curdir_abs = path.attr("abspath")(path.attr("curdir")); py::print(py::str("Current directory: ") + curdir_abs); py::dict environ = os.attr("environ"); py::print(environ["HOME"]); py::array_t<float> arr = np.attr("ones")(3, "dtype"_a="float32"); py::print(py::repr(arr + py::int_(1))); These implicit conversions are available for subclasses of :class:`object`; there is no need to call ``obj.cast()`` explicitly as for custom classes, see :ref:`casting_back_and_forth`. .. note:: If a trivial conversion via move constructor is not possible, both implicit and explicit casting (calling ``obj.cast()``) will attempt a "rich" conversion. For instance, ``py::list env = os.attr("environ");`` will succeed and is equivalent to the Python code ``env = list(os.environ)`` that produces a list of the dict keys. .. TODO: Adapt text once PR #2349 has landed Handling exceptions =================== Python exceptions from wrapper classes will be thrown as a ``py::error_already_set``. See :ref:`Handling exceptions from Python in C++ <handling_python_exceptions_cpp>` for more information on handling exceptions raised when calling C++ wrapper classes. .. _pytypes_gotchas: Gotchas ======= Default-Constructed Wrappers ---------------------------- When a wrapper type is default-constructed, it is **not** a valid Python object (i.e. it is not ``py::none()``). It is simply the same as ``PyObject*`` null pointer. To check for this, use ``static_cast<bool>(my_wrapper)``. 
Assigning py::none() to wrappers -------------------------------- You may be tempted to use types like ``py::str`` and ``py::dict`` in C++ signatures (either pure C++, or in bound signatures), and assign them default values of ``py::none()``. However, in the best case this will fail fast because ``None`` is not convertible to that type (e.g. ``py::dict``), and in the worst case it will silently work but corrupt the types you want to work with (e.g. ``py::str(py::none())`` will yield ``"None"`` in Python).
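A safer pattern, shown here as a hedged sketch (the ``process`` function and its parameter are hypothetical, not pybind11 API), is to accept a generic ``py::object`` defaulting to ``None`` and convert explicitly inside the body:

.. code-block:: cpp

    // Avoid `py::dict options = py::none()` in the signature; accept a generic
    // object defaulting to None, then convert explicitly where it is used.
    void process(py::object options = py::none()) {
        py::dict opts = options.is_none() ? py::dict()
                                          : options.cast<py::dict>();  // throws cast_error if not a dict
        // ... work with opts ...
    }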
PypiClean
/deriva-catalog-manage-0.9.1.tar.gz/deriva-catalog-manage-0.9.1/deriva/utils/catalog/manage/dump_catalog.py
from __future__ import print_function from urllib.parse import urlparse import ast import logging import os import re import sys import traceback import requests from requests.exceptions import HTTPError from deriva.core import format_exception from deriva.core.utils import eprint from deriva.core.base_cli import BaseCLI from yapf.yapflib.yapf_api import FormatCode from deriva.core import get_credential, AttrDict, ErmrestCatalog from deriva.core import tag as chaise_tags from deriva.utils.catalog.manage.deriva_file_templates import table_file_template, schema_file_template, \ catalog_file_template from deriva.utils.catalog.version import __version__ as VERSION from deriva.utils.catalog.manage.graph_catalog import DerivaCatalogToGraph IS_PY2 = (sys.version_info[0] == 2) IS_PY3 = (sys.version_info[0] == 3) from urllib.parse import urlparse logger = logging.getLogger(__name__) yapf_style = { 'based_on_style': 'pep8', 'allow_split_before_dict_value': False, 'split_before_first_argument': False, 'disable_ending_comma_heuristic': True, 'DEDENT_CLOSING_BRACKETS': True, 'column_limit': 100 } class DerivaDumpCatalogException (Exception): """Base exception class for DerivaDumpCatalog. """ def __init__(self, message): """Initializes the exception. """ super(DerivaDumpCatalogException, self).__init__(message) class UsageException (DerivaDumpCatalogException): """Usage exception. """ def __init__(self, message): """Initializes the exception. """ super(UsageException, self).__init__(message) class DerivaCatalogToString: def __init__(self, catalog, provide_system_columns=True, groups=None): self._model = catalog.getCatalogModel() self.host = urlparse(catalog.get_server_uri()).hostname self.catalog_id = self._model.catalog.catalog_id self._provide_system_columns = provide_system_columns # Get the currently known groups for this catalog. self._groups = groups if groups is None: try: self._groups = AttrDict( {e['Display_Name']: e['ID'] for e in self._model.catalog.getPathBuilder().public.ERMrest_Group.entities()} ) except AttributeError: logger.warning('Cannot access ERMrest_Group table. Check ACLs') self._groups = AttrDict() self._referenced_groups = {} self._variables = self._groups.copy() self._variables.update(chaise_tags) def substitute_variables(self, code): """ Factor out code and replace with a variable name. :param code: :return: new code """ for k, v in self._variables.items(): varsub = r"(['\"])+{}\1".format(v) if k in chaise_tags: repl = 'chaise_tags.{}'.format(k) elif k in self._groups: repl = 'groups[{!r}]'.format(k) if v in code: self._referenced_groups[k] = v else: repl = k code = re.sub(varsub, repl, code) return code def variable_to_str(self, name, value, substitute=True): """ Print out a variable assignment on one line if empty, otherwise pretty print. :param name: Left hand side of assigment :param value: Right hand side of assignment :param substitute: If true, replace the group and tag values with their corresponding names :return: """ s = '{} = {!r}\n'.format(name, value) if substitute: s = self.substitute_variables(s) return s def tag_variables_to_str(self, annotations): """ For each convenient annotation name in tag_map, print out a variable declaration of the form annotation = v where v is the value of the annotation the dictionary. If the tag is not in the set of annotations, do nothing. 
:param annotations: :return: """ s = [] for t, v in chaise_tags.items(): if v in annotations: s.append(self.variable_to_str(t, annotations[v])) s.append('\n') return ''.join(s) def annotations_to_str(self, annotations, var_name='annotations'): """ Print out the annotation definition in annotations, substituting the python variable for each of the tags specified in tag_map. :param annotations: :param var_name: :return: """ var_map = {v: k for k, v in self._variables.items()} if annotations == {}: s = '{} = {{}}\n'.format(var_name) else: s = '{} = {{'.format(var_name) for t, v in annotations.items(): if t in var_map: # Use variable value rather then inline annotation value. s += self.substitute_variables('{!r}:{},'.format(t, var_map[t])) else: s += "'{}' : {!r},".format(t, v) s += '}\n' return s def schema_to_str(self, schema_name): schema = self._model.schemas[schema_name] annotations = self.variable_to_str('annotations', schema.annotations) acls = self.variable_to_str('acls', schema.acls) comments = self.variable_to_str('comment', schema.comment) groups = self.variable_to_str('groups', self._referenced_groups, substitute=False) s = schema_file_template.format(host=self.host, catalog_id=self.catalog_id, schema_name=schema_name, annotations=annotations, acls=acls, comments=comments, groups=groups, table_names='table_names = [\n{}]\n'.format( str.join('', ['{!r},\n'.format(i) for i in schema.tables]))) s = FormatCode(s, style_config=yapf_style)[0] return s def catalog_to_str(self): tag_variables = self.tag_variables_to_str(self._model.annotations) annotations = self.annotations_to_str(self._model.annotations) acls = self.variable_to_str('acls', self._model.acls) groups = self.variable_to_str('groups', self._referenced_groups, substitute=False) s = catalog_file_template.format(host=self.host, catalog_id=self.catalog_id, groups=groups, tag_variables=tag_variables, annotations=annotations, acls=acls) s = FormatCode(s, style_config=yapf_style)[0] return s def table_annotations_to_str(self, table): s = ''.join([self.tag_variables_to_str(table.annotations), '\n', self.annotations_to_str(table.annotations, var_name='table_annotations'), '\n', self.variable_to_str('table_comment', table.comment), '\n', self.variable_to_str('table_acls', table.acls), '\n', self.variable_to_str('table_acl_bindings', table.acl_bindings)]) return s def column_annotations_to_str(self, table): column_annotations = {} column_acls = {} column_acl_bindings = {} column_comment = {} for i in table.column_definitions: if not (i.annotations == '' or not i.comment): column_annotations[i.name] = i.annotations if not (i.comment == '' or not i.comment): column_comment[i.name] = i.comment if i.annotations != {}: column_annotations[i.name] = i.annotations if i.acls != {}: column_acls[i.name] = i.acls if i.acl_bindings != {}: column_acl_bindings[i.name] = i.acl_bindings s = self.variable_to_str('column_annotations', column_annotations) + '\n' s += self.variable_to_str('column_comment', column_comment) + '\n' s += self.variable_to_str('column_acls', column_acls) + '\n' s += self.variable_to_str('column_acl_bindings', column_acl_bindings) + '\n' return s def foreign_key_defs_to_str(self, table): s = 'fkey_defs = [\n' for fkey in table.foreign_keys: s += """ em.ForeignKey.define({}, '{}', '{}', {}, constraint_names={},\n""".format([c.name for c in fkey.foreign_key_columns], fkey.pk_table.schema.name, fkey.pk_table.name, [c.name for c in fkey.referenced_columns], fkey.names) for i in ['annotations', 'acls', 'acl_bindings', 'on_update', 
'on_delete', 'comment']: a = getattr(fkey, i) if not (a == {} or a is None or a == 'NO ACTION' or a == ''): v = "'" + a + "'" if re.match('comment|on_update|on_delete', i) else a s += " {}={},\n".format(i, v) s += ' ),\n' s += ']' s = self.substitute_variables(s) return s def key_defs_to_str(self, table): s = 'key_defs = [\n' for key in table.keys: s += """ em.Key.define({}, constraint_names={},\n""".format([c.name for c in key.unique_columns], key.names if key.name else []) for i in ['annotations', 'comment']: a = getattr(key, i) if not (a == {} or a is None or a == ''): v = "'" + a + "'" if i == 'comment' else a s += " {} = {},\n".format(i, v) s += '),\n' s += ']' s = self.substitute_variables(s) return s def column_defs_to_str(self, table): system_columns = ['RID', 'RCB', 'RMB', 'RCT', 'RMT'] s = ['column_defs = ['] for col in table.column_definitions: if col.name in system_columns and self._provide_system_columns: continue s.append(''' em.Column.define('{}', em.builtin_types['{}'],'''. format(col.name, col.type.typename + '[]' if 'is_array' is True else col.type.typename)) if col.nullok is False: s.append("nullok=False,") if col.default and col.name not in system_columns: s.append("default={!r},".format(col.default)) for i in ['annotations', 'acls', 'acl_bindings', 'comment']: colvar = getattr(col, i) if colvar: # if we have a value for this field.... s.append("{}=column_{}['{}'],".format(i, i, col.name)) s.append('),\n') s.append(']') return ''.join(s) def table_def_to_str(self): s = """table_def = em.Table.define(table_name, column_defs=column_defs, key_defs=key_defs, fkey_defs=fkey_defs, annotations=table_annotations, acls=table_acls, acl_bindings=table_acl_bindings, comment=table_comment, provide_system = {} )""".format(self._provide_system_columns) return s def table_to_str(self, schema_name, table_name): logger.debug('%s %s %s', schema_name, table_name, [i for i in self._model.schemas]) table = self._model.schemas[schema_name].tables[table_name] column_annotations = self.column_annotations_to_str(table) column_defs = self.column_defs_to_str(table) table_annotations = self.table_annotations_to_str(table) key_defs = self.key_defs_to_str(table) fkey_defs = self.foreign_key_defs_to_str(table) table_def = self.table_def_to_str() groups = self.variable_to_str('groups', self._referenced_groups, substitute=False) s = table_file_template.format(host=self.host, catalog_id=self.catalog_id, table_name=table_name, schema_name=schema_name, groups=groups, column_annotations=column_annotations, column_defs=column_defs, table_annotations=table_annotations, key_defs=key_defs, fkey_defs=fkey_defs, table_def=table_def) s = FormatCode(s, style_config=yapf_style)[0] return s class DerivaDumpCatalogCLI (BaseCLI): def __init__(self, description, epilog): super(DerivaDumpCatalogCLI, self).__init__(description, epilog, VERSION, hostname_required=True) def python_value(s): try: val = ast.literal_eval(s) except ValueError: val = s return val self.dumpdir = '' self.host = None self.catalog_id = 1 self.graph_format = None self.catalog = None # parent arg parser parser = self.parser parser.add_argument('--catalog', '--catalog-id', metavar='CATALOG-NUMBER', default=1, help='ID number of desired catalog') parser.add_argument('--dir', default="catalog-configs", help='output directory name') group = parser.add_mutually_exclusive_group() group.add_argument('--table', default=None, help='Only dump out the spec for the specified table. 
Format is ' 'schema_name:table_name') parser.add_argument('--schemas', nargs='*', default=[], help='Only dump out the spec for the specified schemas.') parser.add_argument('--skip-schemas', nargs='*', default=[], help='List of schema so skip over') group.add_argument('--graph', action='store_true', help='Dump graph of catalog') parser.add_argument('--graph-format', choices=['pdf', 'dot', 'png', 'svg'], default='pdf', help='Format to use for graph dump') @staticmethod def _get_credential(host_name, token=None): if token: return {"cookie": "webauthn={t}".format(t=token)} else: return get_credential(host_name) def _dump_table(self, schema_name, table_name, stringer=None, dumpdir='.'): logger.info("Dumping out table def: {}:{}".format(schema_name,table_name)) if not stringer: stringer = DerivaCatalogToString(self.catalog) table_string = stringer.table_to_str(schema_name, table_name) filename= dumpdir + '/' + table_name + '.py' os.makedirs(os.path.dirname(filename), exist_ok=True) with open(filename, 'wb') as f: f.write(table_string.encode("utf-8")) def _dump_catalog(self): stringer = DerivaCatalogToString(self.catalog) catalog_string = stringer.catalog_to_str() with open('{}/{}_{}.py'.format(self.dumpdir, self.host, self.catalog_id), 'wb') as f: f.write(catalog_string.encode("utf-8")) for schema_name in self.schemas: logger.info("Dumping schema def for {}....".format(schema_name)) schema_string = stringer.schema_to_str(schema_name) with open('{}/{}.schema.py'.format(self.dumpdir, schema_name), 'wb') as f: f.write(schema_string.encode("utf-8")) for schema_name, schema in self.model.schemas.items(): if schema_name in self.schemas: for table_name in schema.tables: self._dump_table(schema_name, table_name, stringer=stringer, dumpdir='{}/{}'.format(self.dumpdir, schema_name)) def _graph_catalog(self): graph = DerivaCatalogToGraph(self.catalog) graphfile = '{}_{}'.format(self.host, self.catalog_id) graph.catalog_to_graph(schemas=[s for s in self.schemas if s not in ['_acl_admin', 'public', 'WWW']], skip_terms=True, skip_association_tables=True) graph.save(filename=graphfile, format=self.graph_format) def main(self): args = self.parse_cli() self.dumpdir = args.dir self.host = args.host self.catalog_id = args.catalog self.graph_format = args.graph_format if self.host is None: eprint('Host name must be provided') return 1 self.catalog = ErmrestCatalog('https', self.host, self.catalog_id, credentials=self._get_credential(self.host)) self.model = self.catalog.getCatalogModel() self.schemas = [s for s in (args.schemas if args.schemas else self.model.schemas) if s not in args.skip_schemas ] try: os.makedirs(self.dumpdir, exist_ok=True) except OSError as e: sys.stderr.write(str(e)) return 1 logger.info('Catalog has {} schema and {} tables'.format(len(self.model.schemas), sum([len(v.tables) for k, v in self.model.schemas.items()]))) logger.info('\n'.join([' {} has {} tables'.format(k, len(s.tables)) for k, s in self.model.schemas.items()])) try: if args.table: if ':' not in args.table: raise DerivaDumpCatalogException('Table name must be in form of schema:table') [schema_name, table_name] = args.table.split(":") self._dump_table(schema_name, table_name) elif args.graph: self._graph_catalog() else: self._dump_catalog() except DerivaDumpCatalogException as e: print(e) except HTTPError as e: if e.response.status_code == requests.codes.unauthorized: msg = 'Authentication required for {}'.format(args.server) elif e.response.status_code == requests.codes.forbidden: msg = 'Permission denied' else: msg = e 
logging.debug(format_exception(e)) eprint(msg) except RuntimeError as e: sys.stderr.write(str(e)) return 1 except: traceback.print_exc() return 1 finally: sys.stderr.write("\n\n") return def main(): DESC = "DERIVA Dump Catalog Command-Line Interface" INFO = "For more information see: https://github.com/informatics-isi-edu/deriva-catalog-manage" return DerivaDumpCatalogCLI(DESC, INFO).main() if __name__ == '__main__': sys.exit(main())
PypiClean
/fastllama_python_test-0.1.tar.gz/fastllama_python_test-0.1/scripts/convert-unversioned-ggml-to-ggml.py
import argparse import glob import os import struct import sys from sentencepiece import SentencePieceProcessor HPARAMS = keys = ["vocab_size", "dim", "multiple_of", "n_heads", "n_layers"] def parse_args(): parser = argparse.ArgumentParser(description='Upgrade old ggml model files to the current format') parser.add_argument('dir_model', help='directory containing ggml .bin files') parser.add_argument('tokenizer_model', help='path to LLaMA tokenizer.model file') return parser.parse_args() def read_header(f_in): struct_fmt = "i" * (3 + len(HPARAMS)) struct_size = struct.calcsize(struct_fmt) buf = f_in.read(struct_size) return struct.unpack(struct_fmt, buf) def write_header(f_out, header): (magic, vocab_size, dim, multiple_of, n_heads, n_layers, rot, ftype) = header if magic != 0x67676d6c: raise Exception('Invalid file magic. Must be an old style ggml file.') values = [ 0x67676d66, # magic: ggmf in hex 1, # file version vocab_size, dim, multiple_of, n_heads, n_layers, rot, ftype ] f_out.write(struct.pack("i" * len(values), *values)) def write_tokens(fout, tokenizer): for i in range(tokenizer.vocab_size()): if tokenizer.is_unknown(i): text = " \u2047 ".encode("utf-8") elif tokenizer.is_control(i): text = b"" elif tokenizer.is_byte(i): piece = tokenizer.id_to_piece(i) if len(piece) != 6: print(f"Invalid token: {piece}") sys.exit(1) byte_value = int(piece[3:-1], 16) text = struct.pack("B", byte_value) else: text = tokenizer.id_to_piece(i).replace("\u2581", " ").encode("utf-8") fout.write(struct.pack("i", len(text))) fout.write(text) fout.write(struct.pack("f", tokenizer.get_score(i))) def read_tokens(f_in, tokenizer): for i in range(tokenizer.vocab_size()): len_b = f_in.read(4) (length,) = struct.unpack("i", len_b) f_in.read(length) def copy_all_data(f_out, f_in): while True: buf = f_in.read(1024 * 1024) if not buf: break f_out.write(buf) def convert_one_file(path_in, tokenizer): path_tmp = f"{path_in}.tmp" path_orig= f"{path_in}.orig" print(f"converting {path_in}") with open(path_in, "rb") as f_in, open(path_tmp, "wb") as f_out: write_header(f_out, read_header(f_in)) read_tokens(f_in, tokenizer) write_tokens(f_out, tokenizer) copy_all_data(f_out, f_in) os.rename(path_in, path_orig) os.rename(path_tmp, path_in) def main(): args = parse_args() files = [] files.extend(glob.glob(f"{args.dir_model}/*.bin")) files.extend(glob.glob(f"{args.dir_model}/*.bin.*")) tokenizer = SentencePieceProcessor(args.tokenizer_model) for file in files: convert_one_file(file, tokenizer) if __name__ == "__main__": main()
PypiClean
/acc_lib-0.0.29-py3-none-any.whl/acc_lib/acc_lib.py
import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.patches import matplotlib.collections import scipy.constants as constants from scipy.fft import fft, fftfreq import NAFFlib # Numerical Analysis of the Fundamental Frequencies # Define plotting parameters plt.rcParams.update({'font.family':'DejaVu Sans'}) mpl.rcParams['axes.labelsize'] = 20 mpl.rcParams['xtick.labelsize'] = 14 mpl.rcParams['ytick.labelsize'] = 14 # Initiate class containing helpful functions for plotting class plot_tools: def plot_twiss(fig, twiss, twiss_from_madx=False, plot_magnets=False, also_closed_orbit=False): """ Method to plot Twiss parameters As parameter input, can either use Xtrack, or from MAD-X generated Twiss tables """ if also_closed_orbit: spbet = fig.add_subplot(3,1,1) spco = fig.add_subplot(3,1,2, sharex=spbet) spdisp = fig.add_subplot(3,1,3, sharex=spbet) else: spbet = fig.add_subplot(2,1,1) spdisp = fig.add_subplot(2,1,2, sharex=spbet) spbet.plot(twiss['s'], twiss['betx']) spbet.plot(twiss['s'], twiss['bety']) #spbet.yaxis.label.set_size(18) if also_closed_orbit: spco.plot(twiss['s'], twiss['x']) spco.plot(twiss['s'], twiss['y']) #spco.yaxis.label.set_size(18) spdisp.plot(twiss['s'], twiss['dx']) spdisp.plot(twiss['s'], twiss['dy']) spdisp.xaxis.label.set_size(18) #spdisp.yaxis.label.set_size(18) spbet.set_ylabel(r'$\beta_{x,y}$ [m]') if also_closed_orbit: spco.set_ylabel(r'(Closed orbit)$_{x,y}$ [m]') spdisp.set_ylabel(r'$D_{x,y}$ [m]') spdisp.set_xlabel('s [m]') if not twiss_from_madx: fig.suptitle( r'$q_x$ = ' f'{twiss["qx"]:.2f}' r' $q_y$ = ' f'{twiss["qy"]:.2f}' '\n' r"$Q'_x$ = " f'{twiss["dqx"]:.2f}' r" $Q'_y$ = " f'{twiss["dqy"]:.2f}' r' $\gamma_{tr}$ = ' f'{1/np.sqrt(twiss["momentum_compaction_factor"]):.2f}', fontsize=18 ) if twiss_from_madx: fig.suptitle( r'$q_x$ = ' f'{twiss.summary["q1"]:.2f}' r' $q_y$ = ' f'{twiss.summary["q2"]:.2f}' '\n' r"$Q'_x$ = " f'{twiss.summary["dq1"]:.2f}' r" $Q'_y$ = " f'{twiss.summary["dq2"]:.2f}' r' $\gamma_{tr}$ = ' f'{twiss.summary["gammatr"]:.2f}', fontsize=18 ) # Plot quadrupoles and dipole magnets if desired --> still not working... 
if plot_magnets: # Check if Twiss is dataframe if not isinstance(twiss, pd.DataFrame): twiss = twiss.dframe() for _, row in twiss.iterrows(): if row['keyword'] == 'quadrupole': _ = spbet.add_patch( mpl.patches.Rectangle( (row['s']-row['l'], 0), row['l'], np.sign(row['k1l']), facecolor='k', edgecolor='k')) elif (row['keyword'] == 'rbend' or row['keyword'] == 'sbend'): _ = spbet.add_patch( mpl.patches.Rectangle( (row['s']-row['l'], -1), row['l'], 2, facecolor='None', edgecolor='k')) # Use share x ticks and set size plt.setp(spbet.get_xticklabels(), visible=False) plt.xticks(fontsize=14) plt.yticks(fontsize=14) fig.subplots_adjust(left=.15, right=.92, hspace=.27) fig.tight_layout() def plot_phase_space_ellipse(fig, tracker=None, use_coords_from_tracker=True, x=None, px=None, y=None, py=None, axis='both'): """ Method to plot phase space ellipse from X-suite tracking """ # Use last recording from tracker, else specified x, px, y, py if use_coords_from_tracker: x = tracker.record_last_track.x px = tracker.record_last_track.px y = tracker.record_last_track.y py = tracker.record_last_track.py else: if [ele for ele in (x, px, y, py) if ele is None]: print("If tracker coordinates are not used, x, px, y, py must be given!") fig.suptitle('Phase space ellipse',fontsize=18) if axis == 'both': ax = fig.add_subplot(2, 1, 1) # create an axes object in the figure ax.plot(x, px, 'ro', markersize=0.2, alpha=0.3) ax.set_ylabel("$p_{x}$") ax.set_xlabel("$x$") ax2 = fig.add_subplot(2, 1, 2, sharex=ax) # create a second axes object in the figure ax2.plot(y, py, 'bo', markersize=0.2, alpha=0.3) ax2.set_ylabel("$p_{y}$") ax2.set_xlabel("$y$") else: ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure if axis == 'horizontal': ax.plot(x, px, 'ro', markersize=0.2, alpha=0.3) ax.set_ylabel("$p_{x}$") ax.set_xlabel("$x$") if axis == 'vertical': ax.plot(y, py, 'bo', markersize=0.2, alpha=0.3) ax.set_ylabel("$p_{y}$") ax.set_xlabel("$y$") fig.tight_layout() def plot_centroid_motion(fig, tracker=None, use_coords_from_tracker=True, x=None, y=None, axis='both'): """ Method to plot centroid from tracker, either in horizontal or vertical plane """ fig.suptitle('X-suite tracking - centroid motion',fontsize=18) # Use last recording from tracker, else specified x, px, y, py if use_coords_from_tracker: x = tracker.record_last_track.x y = tracker.record_last_track.y else: if [ele for ele in (x, y) if ele is None]: print("If tracker coordinates are not used, x, px, y, py must be given!") if axis == 'both': ax = fig.add_subplot(2, 1, 1) # create an axes object in the figure ax.plot(np.mean(x, axis=0), marker='o', color='r', markersize=5) ax.set_ylabel("Centroid $X$ [m]") ax.set_xlabel("#turns") ax2 = fig.add_subplot(2, 1, 2, sharex=ax) # create a second axes object in the figure ax2.plot(np.mean(y, axis=0), marker='o', color='b', markersize=5) ax2.set_ylabel("Centroid $Y$ [m]") ax2.set_xlabel("#turns") plt.setp(ax.get_xticklabels(), visible=False) else: ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure if axis == 'horizontal': ax.plot(np.mean(x, axis=0), marker='o', color='r', markersize=5) ax.set_ylabel("Horizontal centroid $X$ [m]") ax.set_xlabel("#turns") if axis == 'vertical': ax.plot(np.mean(y, axis=0), marker='o', color='b', markersize=5) ax.set_ylabel("Vertical centroid $Y$ [m]") ax.set_xlabel("#turns") fig.tight_layout() def simple_FFT(fig, tracker, axis='horizontal'): """ Method perform simple FFT to find tune - following Volker Ziemann's example from his book "Hands-on Accelerator Physics 
using Matlab" Chapter 3: transverse optics """ ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure num_turns = len(tracker.record_last_track.x[0]) x_fft = fftfreq(num_turns)[:num_turns//2] # integer division if axis == 'horizontal': y_fft = 2*np.abs(fft(np.mean(tracker.record_last_track.x, axis=0)))/num_turns ax.set_xlabel("Fractional horizontal tune") if axis == 'vertical': y_fft = 2*np.abs(fft(np.mean(tracker.record_last_track.y, axis=0)))/num_turns ax.set_xlabel("Fractional vertical tune") fig.suptitle('FFT spectrum of tracking',fontsize=18) ax.yaxis.label.set_size(20) ax.xaxis.label.set_size(20) plt.xticks(fontsize=14) plt.yticks(fontsize=14) ax.plot(x_fft, y_fft[:int(num_turns/2)]) # only plot positive frequencies, i.e. first half of vector ax.set_ylabel("Amplitude [m]") fig.tight_layout() def get_tune_footprint(fig, tracker, int_Q=0): """ Method to find tune of all particles using NAFF (Numerical Analysis of the Fundamental Frequencies) Can also add integer working tune. """ Q_x = np.zeros(len(tracker.record_last_track.x)) Q_y = np.zeros(len(tracker.record_last_track.y)) # Iterate over turn-by-turn data to find horizontal and vertical tune of each particle for count, particle in enumerate(tracker.record_last_track.x): Q_x[count] = NAFFlib.get_tune(particle) for count, particle in enumerate(tracker.record_last_track.y): Q_y[count] = NAFFlib.get_tune(particle) Q_x += int_Q Q_x += int_Q """ fig.suptitle('Tune footprint',fontsize=18) ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure ax.yaxis.label.set_size(20) ax.xaxis.label.set_size(20) plt.xticks(fontsize=14) plt.yticks(fontsize=14) ax.plot(Q_x, Q_y, 'go', markersize=1.5, alpha=0.3) ax.set_ylabel("$Q_{y}$") ax.set_xlabel("$Q_{x}$") #ax.set_xlim(0.15-1e-4, 0.15+1e-4) fig.tight_layout() """ return Q_x, Q_y def plot_tune_footprint_from_tracker(fig, tracker, int_Q, Q_x=None, Q_y=None): """ Method to plot tune footprint. 
If Qx and Qy are not given, the method finds tune of all particles using NAFF Also returns min and max values for the tune footprints """ # If frequency analysis has not been done before if Q_x is None and Q_y is None: Q_x = np.zeros(len(tracker.record_last_track.x)) Q_y = np.zeros(len(tracker.record_last_track.y)) # Iterate over turn-by-turn data to find horizontal and vertical tune of each particle for count, particle in enumerate(tracker.record_last_track.x): Q_x[count] = NAFFlib.get_tune(particle) for count, particle in enumerate(tracker.record_last_track.y): Q_y[count] = NAFFlib.get_tune(particle) else: Q_x = Q_x Q_y = Q_y # Add integer tune to fractional tune Q_x_full = Q_x+int_Q Q_y_full = Q_y+int_Q ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure ax.plot(Q_x_full, Q_y_full, 'go', markersize=3.5, alpha=0.3) ax.set_ylabel("$Q_{y}$") ax.set_xlabel("$Q_{x}$") Qx_min, Qx_max, Qy_min, Qy_max = np.min(Q_x), np.max(Q_x), np.min(Q_y), np.max(Q_y) return Qx_min, Qx_max, Qy_min, Qy_max, ax class resonance_lines(object): """ Class from Foteini Asvesta to plot resonance lines of chosen orders in tune diagram Provide input of ranges in Qx and Qy, the orders and the periodiciyu of the resonances """ def __init__(self, Qx_range, Qy_range, orders, periodicity): if np.std(Qx_range): self.Qx_min = np.min(Qx_range) self.Qx_max = np.max(Qx_range) else: self.Qx_min = np.floor(Qx_range)-0.05 self.Qx_max = np.floor(Qx_range)+1.05 if np.std(Qy_range): self.Qy_min = np.min(Qy_range) self.Qy_max = np.max(Qy_range) else: self.Qy_min = np.floor(Qy_range)-0.05 self.Qy_max = np.floor(Qy_range)+1.05 self.periodicity = periodicity self.orders = orders nx, ny = [], [] for order in np.nditer(np.array(orders)): t = np.array(range(-order, order+1)) nx.extend(order - np.abs(t)) ny.extend(t) nx = np.array(nx) ny = np.array(ny) cextr = np.array([nx*np.floor(self.Qx_min)+ny*np.floor(self.Qy_min), \ nx*np.ceil(self.Qx_max)+ny*np.floor(self.Qy_min), \ nx*np.floor(self.Qx_min)+ny*np.ceil(self.Qy_max), \ nx*np.ceil(self.Qx_max)+ny*np.ceil(self.Qy_max)], dtype='int') cmin = np.min(cextr, axis=0) cmax = np.max(cextr, axis=0) res_sum = [range(cmin[i], cmax[i]+1) for i in range(cextr.shape[1])] self.resonance_list = zip(nx, ny, res_sum) def plot_resonance(self, figure_object = None, interactive=True): if(interactive): plt.ion() if figure_object: fig = figure_object plt.figure(fig.number) else: fig = plt.figure() ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure Qx_min = self.Qx_min Qx_max = self.Qx_max Qy_min = self.Qy_min Qy_max = self.Qy_max ax.set_xlabel('$\mathrm{Q_x}$') ax.set_ylabel('$\mathrm{Q_y}$') ax.set_xlim(Qx_min, Qx_max) ax.set_ylim(Qy_min, Qy_max) for resonance in self.resonance_list: nx = resonance[0] ny = resonance[1] for res_sum in resonance[2]: if ny: line, = ax.plot([Qx_min, Qx_max], \ [(res_sum-nx*Qx_min)/ny, (res_sum-nx*Qx_max)/ny]) else: line, = ax.plot([np.float(res_sum)/nx, np.float(res_sum)/nx],[Qy_min, Qy_max]) if ny%2: plt.setp(line, linestyle='--', zorder=1) # for skew resonances if res_sum%self.periodicity: plt.setp(line, color='b', zorder=1) # non-systematic resonances else: plt.setp(line, color='r', zorder=1, linewidth=2.0) # systematic resonances if(interactive): plt.draw() return ax def plot_resonance_and_tune_footprint(self, tracker, Q_work_int = None, figure_object = None, Q_x=None, Q_y=None, interactive=False): """ Method to plot tune footprint and resonances in the same plot """ if(interactive): plt.ion() if figure_object: fig = figure_object 
plt.figure(fig.number) else: fig = plt.figure() ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure fig.suptitle('Tune footprint and resonances up to order {}'.format(self.orders[-1]), fontsize=18) Qx_min = self.Qx_min Qx_max = self.Qx_max Qy_min = self.Qy_min Qy_max = self.Qy_max ax.set_xlabel('$\mathrm{Q_x}$') ax.set_ylabel('$\mathrm{Q_y}$') ax.set_xlim(Qx_min, Qx_max) ax.set_ylim(Qy_min, Qy_max) for resonance in self.resonance_list: nx = resonance[0] ny = resonance[1] for res_sum in resonance[2]: if ny: line, = ax.plot([Qx_min, Qx_max], \ [(res_sum-nx*Qx_min)/ny, (res_sum-nx*Qx_max)/ny]) else: line, = ax.plot([np.float(res_sum)/nx, np.float(res_sum)/nx],[Qy_min, Qy_max]) if ny%2: plt.setp(line, linestyle='--', zorder=1) # for skew resonances if res_sum%self.periodicity: plt.setp(line, color='b', zorder=1) # non-systematic resonances else: plt.setp(line, color='r', zorder=1, linewidth=2.0) # systematic resonances if(interactive): plt.draw() # If frequency analysis has not been done before if Q_x is None and Q_y is None: # Get the fractional tunes from NAFF Q_x = np.zeros(len(tracker.record_last_track.x)) Q_y = np.zeros(len(tracker.record_last_track.y)) # Iterate over turn-by-turn data to find horizontal and vertical tune of each particle for count, particle in enumerate(tracker.record_last_track.x): Q_x[count] = NAFFlib.get_tune(particle) for count, particle in enumerate(tracker.record_last_track.y): Q_y[count] = NAFFlib.get_tune(particle) else: Q_x = Q_x Q_y = Q_y # Add integer tune to fractional tune if Q_work_int is not None: Q_x += Q_work_int Q_y += Q_work_int ax.plot(Q_x, Q_y, 'go', markersize=3.5, alpha=0.3) Qx_min, Qx_max, Qy_min, Qy_max = np.min(Q_x), np.max(Q_x), np.min(Q_y), np.max(Q_y) return Qx_min, Qx_max, Qy_min, Qy_max, ax def print_resonances(self): for resonance in self.resonance_list: for res_sum in resonance[2]: print(str(resonance[0]).rjust(3), 'Qx ', ("+", "-")[resonance[1]<0], \ str(abs(resonance[1])).rjust(2), 'Qy = ', str(res_sum).rjust(3), \ '\t', ("(non-systematic)", "(systematic)")[res_sum%self.periodicity==0]) # Descripte class of particles to find properties class particles: def print_particle(particle_object): """ Method to print particle properties """ df = particle_object.to_pandas() dash = '-' * 55 print("PARTICLES:\n\n") print('{:<27} {:>12}'.format("Property", "Value")) print(dash) for column in df: print('{:<27} {:>12}'.format(df[column].name, df[column].values[0])) print(dash) print('\n') def classical_particle_radius(particle_ref): """ Method to calculate classical particle radius from given reference particle, also ions """ m0 = particle_ref.mass0*1.782661921e-36 # electron volt - kg conversion r0 = (particle_ref.q0*constants.elementary_charge)**2/(4*np.pi*constants.epsilon_0*m0*constants.c**2) #1.5347e-18 is default for protons return r0 # Tools specifically for CPymad and MAD-X class madx_tools: def print_seq(madx, seq='SPS'): """ Function to print elements in sequence """ #Print the elements in the reduced short sequence dash = '-' * 65 print('{:<27} {:>12} {:>15} {:>8}'.format("Element", "Location", "Type", "Length")) print(dash) for ele in madx.sequence[seq].elements: print('{:<27} {:>12.6f} {:>15} {:>8.3}'.format(ele.name, ele.at, ele.base_type.name, ele.length)) return def print_madx_error(): """ If context manager has been used, print the lines of the temporary error file """ with open('tempfile', 'r') as f: lines = f.readlines() for ele in lines: if '+=+=+= fatal' in ele: print('{}'.format(ele)) def plot_envelope(fig, madx, 
twiss, seq_name='sps', axis='horizontal', nx=5, ny=5, hcolor="b"): """ Function to plot beam envelope with aperture, can choose horizontal (default) or vertical Returns axis object such that apertures can be plotted in the same plot """ ax = fig.add_subplot(1, 1, 1) # create an axes object in the figure ax.yaxis.label.set_size(20) ax.xaxis.label.set_size(20) plt.xticks(fontsize=14) plt.yticks(fontsize=14) # Extract beam parameters ex = madx.sequence[seq_name].beam.ex ey = madx.sequence[seq_name].beam.ey sige = madx.sequence[seq_name].beam.sige # Define some parameters for the beam one_sigma_x = np.sqrt(ex*twiss.betx + (sige*twiss.dx)**2) #beam size in x one_sigma_y = np.sqrt(ey*twiss.bety + (sige*twiss.dy)**2) #beam size in y # Choose whether to plot horizontal or vertical axis if axis=='horizontal': fig.suptitle('Beam envelope - horizontal',fontsize=18) ax.plot(twiss.s, twiss.x, color = hcolor) ax.set_ylabel("x [m]") ax.set_xlabel("s [m]") ax.fill_between(twiss.s, twiss.x+nx*one_sigma_x,twiss.x-nx*one_sigma_x, alpha = 0.4, color = hcolor, label='_nolegend_') elif axis=='vertical': fig.suptitle('Beam envelope - vertical',fontsize=18) ax.plot(twiss.s, twiss.y, color = "r") ax.set_ylabel("y [m]") ax.set_xlabel("s [m]") ax.fill_between(twiss.s, twiss.y+ny*one_sigma_y,twiss.y-ny*one_sigma_y, alpha = 0.4, color = "r", label='_nolegend_') else: print("Unvalid vertical parameter!") fig.tight_layout() return ax # ---------------------------------------------- METHODS RELATED TO APERTURES -------------------------------------------------------------------- def get_apertures_real(twiss, axis='horizontal'): """ Method to extract real apertures with sequence element lengths """ pos = list(twiss['s']) #Choose whether to plot horizontal or vertical axis: if axis=='horizontal': aper = list(twiss['aper_1']) elif axis=='vertical': aper = list(twiss['aper_2']) else: print("Unvalid axis parameter!") #Initiate arrays for new aperture new_aper = aper[:] new_pos = pos[:] indices = [] #Search for all non-zero aperture elements for i in range(len(aper) - 1, 0, -1): if aper[i] != 0: new_aper.insert(i, aper[i]) indices.append(i) indices = list(reversed(indices)) #Keep track of exact position in new array with counter counter = 0 for j in indices: new_pos.insert(j + counter, (pos[j] - twiss.l[j])) counter += 1 #Replace all zeros with Nan for i in range(len(new_aper)): if new_aper[i] == 0: new_aper[i] = np.nan return np.array(new_pos), np.array(new_aper) def search_next(i, list): """ Return list without empty elements """ for j in range(i, len(list)): if list[j] != 0: return list[j] def plot_apertures_real(ax, s, aper, unit="m", offset=None): """ Plot the real aperture, with arrays containing nan values - possibility to add offset """ aper_flipped = aper.copy() for i in range(len(aper_flipped)): if aper_flipped[i] != np.nan: aper_flipped[i] = -1 * aper_flipped[i] if offset is not None: #Add offset if given for i in range(len(aper)): aper[i] = aper[i] + + offset[i] aper_flipped[i] = aper_flipped[i] + offset[i] if ax is None: ax = plt.gca() if not unit == "mm": ax.fill_between(s, aper, 0.2, color="black", alpha=0.4, label='_nolegend_') #previously step="pre", but do not use if entry and exit aperture offset differs ax.fill_between(s, aper_flipped, -0.2, color="black", alpha=0.4, label='_nolegend_') #previously step="pre" ax.plot(s, aper, "k", label='_nolegend_') #previously drawstyle="steps", but do not use if entry and exit aperture offset differs ax.plot(s, aper_flipped, "k", label='_nolegend_') #drawstyle="steps" 
else: ax.fill_between(s, aper, 0.2 * 1e3, color="black", alpha=0.4, label='_nolegend_') #previously step="pre" ax.fill_between( s, aper_flipped, -0.2 * 1e3, color="black", label='_nolegend_', alpha=0.4, ) #previously step="pre" ax.plot(s, aper, "k", label='_nolegend_') #previously drawstyle="steps" ax.plot(s, aper_flipped, "k", label='_nolegend_') #previously drawstyle="steps" class footprint: """ Class to convert real to normalized phase space coordinates, and to draw footprints Example following the X-suite space charge example on GitHub: https://github.com/xsuite/xtrack/blob/main/examples/spacecharge/001_spacecharge_footprint.py """ # Return grid of x and y coordinates from range of polar coordinates def initial_xy_polar(r_min, r_max, r_N, theta_min, theta_max, theta_N): return np.array([[(r*np.cos(theta),r*np.sin(theta)) for r in np.linspace(r_min,r_max,r_N)] for theta in np.linspace(theta_min,theta_max,theta_N)]) # Return x-y grid of x and y coordinates from range of cartesian coordinates def initial_xy_cartesian(x_min, x_max, x_N, y_min, y_max, y_N): return np.array([[(x,y) for x in np.linspace(x_min,x_max,x_N)] for y in np.linspace(y_min,y_max,y_N)]) # Draw footprint from stacked xy coordinates def draw_footprint(A, axis_object=None, figure_object=None, axis=0, linewidth=4): """ Input A should be a 3-D numpy array with shape (Nx,Ny,2) representing a 2-D array of (x,y) points. This function will draw lines between adjacent points in the 2-D array. """ if len(A.shape) != 3: print('ERROR: Invalid input matrix') return None if A.shape[2] != 2: print('ERROR: Points are not defined in 2D space') return None sx = A.shape[0]-1 sy = A.shape[1]-1 p1 = A[:-1,:-1,:].reshape(sx*sy,2)[:,:] p2 = A[1:,:-1,:].reshape(sx*sy,2)[:] p3 = A[1:,1:,:].reshape(sx*sy,2)[:] p4 = A[:-1,1:,:].reshape(sx*sy,2)[:] #Stack endpoints to form polygons Polygons = np.stack((p1,p2,p3,p4)) #transpose polygons Polygons = np.transpose(Polygons,(1,0,2)) patches = list(map(matplotlib.patches.Polygon,Polygons)) #assign colors patch_colors = [(0,0,0) for a in Polygons] patch_colors[(sx-1)*sy:] = [(0,1,0)]*sy patch_colors[(sy-1)::sy] = [(0,0,1)]*sx p_collection = matplotlib.collections.PatchCollection(patches,facecolors=[],linewidth=linewidth,edgecolor=patch_colors) if axis_object is None: if figure_object: fig = figure_object else: fig = plt.figure() if len(fig.axes) == 0: plt.subplot(1,1,1) if axis >= len(fig.axes) or axis < 0: i = 0 else: i = axis ax = fig.axes[i] else: ax = axis_object fig = None ax.add_collection(p_collection) return fig
PypiClean
/servierpck-0.1.tar.gz/servierpck-0.1/Package/model2.py
import pandas as pd from feature_extractor import * from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import StratifiedKFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline import numpy as np import sys from sklearn.metrics import classification_report from joblib import dump from keras import models from keras import layers from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.pipeline import Pipeline, FeatureUnion def data_prepocessing(csv_filepath): """ Load the dataset and preprocess it into train/test splits Arguments: csv_filepath -- the path of the csv file in the workspace Returns: X_train, X_test, y_train, y_test """ df = pd.read_csv(csv_filepath) X = df['smiles'].values y = df['P1'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1000) vectorizer = CountVectorizer() vectorizer.fit(X_train) X_train = vectorizer.transform(X_train) X_test = vectorizer.transform(X_test) return X_train, X_test, y_train, y_test def build_model(X_train): """ Build and compile the Keras model Arguments: X_train -- training feature matrix, used to infer the input dimension Returns: the compiled model """ input_dim = X_train.shape[1] # Number of features model = Sequential() model.add(layers.Dense(10, input_dim=input_dim, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model def evaluate_model(model, X_train, X_test, y_train, y_test): """ Evaluate the model Returns: prints the training and testing accuracy """ # predict on test data # Y_pred = model.predict(X_test) loss, accuracy = model.evaluate(X_train, y_train, verbose=False) print("Training Accuracy: {:.4f}".format(accuracy)) loss, accuracy = model.evaluate(X_test, y_test, verbose=False) print("Testing Accuracy: {:.4f}".format(accuracy)) def save_model(model, model_filepath): """ Save the model Returns: exports the trained Keras model to the given file path """ model.save(model_filepath) def main(): if len(sys.argv) == 3: database_filepath, model_filepath = sys.argv[1:] print('Loading data...\n DATABASE: {}'.format(database_filepath)) X_train, X_test, y_train, y_test = data_prepocessing(database_filepath) # X_train, X_test, Y_train, Y_test = train_test_split( # X, Y, test_size=0.2) print('Building model...') model = build_model(X_train) print('Training model...') model.fit(X_train, y_train, epochs=10, verbose=False, validation_data=(X_test, y_test), batch_size=10) print('Evaluating model...') evaluate_model(model, X_train, X_test, y_train, y_test) print('Saving model...\n MODEL: {}'.format(model_filepath)) save_model(model, model_filepath) print('Trained model saved!') else: print('Please provide the filepath of the dataset ' 'as the first argument and the filepath of the pickle file to ' 'save the model to as the second argument. \n\nExample: python ' 'train_classifier.py ../data/dataset_single.csv classifier.pkl') if __name__ == '__main__': main()
PypiClean
/python-sources-3.10.5.tar.gz/python-sources-3.10.5/Python-3.10.5/Lib/distutils/command/build_py.py
import os import importlib.util import sys import glob from distutils.core import Command from distutils.errors import * from distutils.util import convert_path, Mixin2to3 from distutils import log class build_py (Command): description = "\"build\" pure Python modules (copy to build directory)" user_options = [ ('build-lib=', 'd', "directory to \"build\" (copy) to"), ('compile', 'c', "compile .py to .pyc"), ('no-compile', None, "don't compile .py files [default]"), ('optimize=', 'O', "also compile with optimization: -O1 for \"python -O\", " "-O2 for \"python -OO\", and -O0 to disable [default: -O0]"), ('force', 'f', "forcibly build everything (ignore file timestamps)"), ] boolean_options = ['compile', 'force'] negative_opt = {'no-compile' : 'compile'} def initialize_options(self): self.build_lib = None self.py_modules = None self.package = None self.package_data = None self.package_dir = None self.compile = 0 self.optimize = 0 self.force = None def finalize_options(self): self.set_undefined_options('build', ('build_lib', 'build_lib'), ('force', 'force')) # Get the distribution options that are aliases for build_py # options -- list of packages and list of modules. self.packages = self.distribution.packages self.py_modules = self.distribution.py_modules self.package_data = self.distribution.package_data self.package_dir = {} if self.distribution.package_dir: for name, path in self.distribution.package_dir.items(): self.package_dir[name] = convert_path(path) self.data_files = self.get_data_files() # Ick, copied straight from install_lib.py (fancy_getopt needs a # type system! Hell, *everything* needs a type system!!!) if not isinstance(self.optimize, int): try: self.optimize = int(self.optimize) assert 0 <= self.optimize <= 2 except (ValueError, AssertionError): raise DistutilsOptionError("optimize must be 0, 1, or 2") def run(self): # XXX copy_file by default preserves atime and mtime. IMHO this is # the right thing to do, but perhaps it should be an option -- in # particular, a site administrator might want installed files to # reflect the time of installation rather than the last # modification time before the installed release. # XXX copy_file by default preserves mode, which appears to be the # wrong thing to do: if a file is read-only in the working # directory, we want it to be installed read/write so that the next # installation of the same module distribution can overwrite it # without problems. (This might be a Unix-specific issue.) Thus # we turn off 'preserve_mode' when copying to the build directory, # since the build directory is supposed to be exactly what the # installation will look like (ie. we preserve mode when # installing). # Two options control which modules will be installed: 'packages' # and 'py_modules'. The former lets us work with whole packages, not # specifying individual modules at all; the latter is for # specifying modules one-at-a-time. 
if self.py_modules: self.build_modules() if self.packages: self.build_packages() self.build_package_data() self.byte_compile(self.get_outputs(include_bytecode=0)) def get_data_files(self): """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" data = [] if not self.packages: return data for package in self.packages: # Locate package source directory src_dir = self.get_package_dir(package) # Compute package build directory build_dir = os.path.join(*([self.build_lib] + package.split('.'))) # Length of path to strip from found files plen = 0 if src_dir: plen = len(src_dir)+1 # Strip directory from globbed filenames filenames = [ file[plen:] for file in self.find_data_files(package, src_dir) ] data.append((package, src_dir, build_dir, filenames)) return data def find_data_files(self, package, src_dir): """Return filenames for package's data files in 'src_dir'""" globs = (self.package_data.get('', []) + self.package_data.get(package, [])) files = [] for pattern in globs: # Each pattern has to be converted to a platform-specific path filelist = glob.glob(os.path.join(glob.escape(src_dir), convert_path(pattern))) # Files that match more than one pattern are only added once files.extend([fn for fn in filelist if fn not in files and os.path.isfile(fn)]) return files def build_package_data(self): """Copy data files into build directory""" lastdir = None for package, src_dir, build_dir, filenames in self.data_files: for filename in filenames: target = os.path.join(build_dir, filename) self.mkpath(os.path.dirname(target)) self.copy_file(os.path.join(src_dir, filename), target, preserve_mode=False) def get_package_dir(self, package): """Return the directory, relative to the top of the source distribution, where package 'package' should be found (at least according to the 'package_dir' option, if any).""" path = package.split('.') if not self.package_dir: if path: return os.path.join(*path) else: return '' else: tail = [] while path: try: pdir = self.package_dir['.'.join(path)] except KeyError: tail.insert(0, path[-1]) del path[-1] else: tail.insert(0, pdir) return os.path.join(*tail) else: # Oops, got all the way through 'path' without finding a # match in package_dir. If package_dir defines a directory # for the root (nameless) package, then fallback on it; # otherwise, we might as well have not consulted # package_dir at all, as we just use the directory implied # by 'tail' (which should be the same as the original value # of 'path' at this point). pdir = self.package_dir.get('') if pdir is not None: tail.insert(0, pdir) if tail: return os.path.join(*tail) else: return '' def check_package(self, package, package_dir): # Empty dir name means current directory, which we can probably # assume exists. Also, os.path.exists and isdir don't know about # my "empty string means current dir" convention, so we have to # circumvent them. if package_dir != "": if not os.path.exists(package_dir): raise DistutilsFileError( "package directory '%s' does not exist" % package_dir) if not os.path.isdir(package_dir): raise DistutilsFileError( "supposed package directory '%s' exists, " "but is not a directory" % package_dir) # Require __init__.py for all but the "root package" if package: init_py = os.path.join(package_dir, "__init__.py") if os.path.isfile(init_py): return init_py else: log.warn(("package init file '%s' not found " + "(or not a regular file)"), init_py) # Either not in a package at all (__init__.py not expected), or # __init__.py doesn't exist -- so don't return the filename. 
return None def check_module(self, module, module_file): if not os.path.isfile(module_file): log.warn("file %s (for module %s) not found", module_file, module) return False else: return True def find_package_modules(self, package, package_dir): self.check_package(package, package_dir) module_files = glob.glob(os.path.join(glob.escape(package_dir), "*.py")) modules = [] setup_script = os.path.abspath(self.distribution.script_name) for f in module_files: abs_f = os.path.abspath(f) if abs_f != setup_script: module = os.path.splitext(os.path.basename(f))[0] modules.append((package, module, f)) else: self.debug_print("excluding %s" % setup_script) return modules def find_modules(self): """Finds individually-specified Python modules, ie. those listed by module name in 'self.py_modules'. Returns a list of tuples (package, module_base, filename): 'package' is a tuple of the path through package-space to the module; 'module_base' is the bare (no packages, no dots) module name, and 'filename' is the path to the ".py" file (relative to the distribution root) that implements the module. """ # Map package names to tuples of useful info about the package: # (package_dir, checked) # package_dir - the directory where we'll find source files for # this package # checked - true if we have checked that the package directory # is valid (exists, contains __init__.py, ... ?) packages = {} # List of (package, module, filename) tuples to return modules = [] # We treat modules-in-packages almost the same as toplevel modules, # just the "package" for a toplevel is empty (either an empty # string or empty list, depending on context). Differences: # - don't check for __init__.py in directory for empty package for module in self.py_modules: path = module.split('.') package = '.'.join(path[0:-1]) module_base = path[-1] try: (package_dir, checked) = packages[package] except KeyError: package_dir = self.get_package_dir(package) checked = 0 if not checked: init_py = self.check_package(package, package_dir) packages[package] = (package_dir, 1) if init_py: modules.append((package, "__init__", init_py)) # XXX perhaps we should also check for just .pyc files # (so greedy closed-source bastards can distribute Python # modules too) module_file = os.path.join(package_dir, module_base + ".py") if not self.check_module(module, module_file): continue modules.append((package, module_base, module_file)) return modules def find_all_modules(self): """Compute the list of all modules that will be built, whether they are specified one-module-at-a-time ('self.py_modules') or by whole packages ('self.packages'). 
Return a list of tuples (package, module, module_file), just like 'find_modules()' and 'find_package_modules()' do.""" modules = [] if self.py_modules: modules.extend(self.find_modules()) if self.packages: for package in self.packages: package_dir = self.get_package_dir(package) m = self.find_package_modules(package, package_dir) modules.extend(m) return modules def get_source_files(self): return [module[-1] for module in self.find_all_modules()] def get_module_outfile(self, build_dir, package, module): outfile_path = [build_dir] + list(package) + [module + ".py"] return os.path.join(*outfile_path) def get_outputs(self, include_bytecode=1): modules = self.find_all_modules() outputs = [] for (package, module, module_file) in modules: package = package.split('.') filename = self.get_module_outfile(self.build_lib, package, module) outputs.append(filename) if include_bytecode: if self.compile: outputs.append(importlib.util.cache_from_source( filename, optimization='')) if self.optimize > 0: outputs.append(importlib.util.cache_from_source( filename, optimization=self.optimize)) outputs += [ os.path.join(build_dir, filename) for package, src_dir, build_dir, filenames in self.data_files for filename in filenames ] return outputs def build_module(self, module, module_file, package): if isinstance(package, str): package = package.split('.') elif not isinstance(package, (list, tuple)): raise TypeError( "'package' must be a string (dot-separated), list, or tuple") # Now put the module source file into the "build" area -- this is # easy, we just copy it somewhere under self.build_lib (the build # directory for Python source). outfile = self.get_module_outfile(self.build_lib, package, module) dir = os.path.dirname(outfile) self.mkpath(dir) return self.copy_file(module_file, outfile, preserve_mode=0) def build_modules(self): modules = self.find_modules() for (package, module, module_file) in modules: # Now "build" the module -- ie. copy the source file to # self.build_lib (the build directory for Python source). # (Actually, it gets copied to the directory for this package # under self.build_lib.) self.build_module(module, module_file, package) def build_packages(self): for package in self.packages: # Get list of (package, module, module_file) tuples based on # scanning the package directory. 'package' is only included # in the tuple so that 'find_modules()' and # 'find_package_tuples()' have a consistent interface; it's # ignored here (apart from a sanity check). Also, 'module' is # the *unqualified* module name (ie. no dots, no package -- we # already know its package!), and 'module_file' is the path to # the .py file, relative to the current directory # (ie. including 'package_dir'). package_dir = self.get_package_dir(package) modules = self.find_package_modules(package, package_dir) # Now loop over the modules we found, "building" each one (just # copy it to self.build_lib). for (package_, module, module_file) in modules: assert package == package_ self.build_module(module, module_file, package) def byte_compile(self, files): if sys.dont_write_bytecode: self.warn('byte-compiling is disabled, skipping.') return from distutils.util import byte_compile prefix = self.build_lib if prefix[-1] != os.sep: prefix = prefix + os.sep # XXX this code is essentially the same as the 'byte_compile() # method of the "install_lib" command, except for the determination # of the 'prefix' string. Hmmm. 
if self.compile: byte_compile(files, optimize=0, force=self.force, prefix=prefix, dry_run=self.dry_run) if self.optimize > 0: byte_compile(files, optimize=self.optimize, force=self.force, prefix=prefix, dry_run=self.dry_run) class build_py_2to3(build_py, Mixin2to3): def run(self): self.updated_files = [] # Base class code if self.py_modules: self.build_modules() if self.packages: self.build_packages() self.build_package_data() # 2to3 self.run_2to3(self.updated_files) # Remaining base class code self.byte_compile(self.get_outputs(include_bytecode=0)) def build_module(self, module, module_file, package): res = build_py.build_module(self, module, module_file, package) if res[1]: # file was copied self.updated_files.append(res[0]) return res
PypiClean
/pysit-0.5b3.zip/pysit-0.5b3/examples/HorizontalReflector2DFrequency.py
import time import numpy as np import matplotlib.pyplot as plt from pysit import * from pysit.gallery import horizontal_reflector if __name__ == '__main__': # Setup hybrid=True # Define Domain pmlx = PML(0.1, 100) pmlz = PML(0.1, 100) x_config = (0.1, 1.0, pmlx, pmlx) z_config = (0.1, 0.8, pmlz, pmlz) d = RectangularDomain(x_config, z_config) m = CartesianMesh(d, 90, 70) # Generate true wave speed C, C0, m, d = horizontal_reflector(m) # Set up shots zmin = d.z.lbound zmax = d.z.rbound zpos = zmin + (1./9.)*zmax shots = equispaced_acquisition(m, RickerWavelet(10.0), sources=1, source_depth=zpos, source_kwargs={}, receivers='max', receiver_depth=zpos, receiver_kwargs={}, ) # Define and configure the wave solver trange = (0.0,3.0) solver_time = ConstantDensityAcousticWave(m, formulation='scalar', model_parameters={'C': C}, spatial_accuracy_order=6, trange=trange) # Generate synthetic Seismic data print('Generating data...') base_model = solver_time.ModelParameters(m,{'C': C}) tt = time.time() generate_seismic_data(shots, solver_time, base_model) print 'Data generation: {0}s'.format(time.time()-tt) # Define and configure the objective function if hybrid: solver = ConstantDensityAcousticWave(m, formulation='scalar', model_parameters={'C': C}, spatial_accuracy_order=4, trange=trange) objective = HybridLeastSquares(solver) else: solver = ConstantDensityHelmholtz(m, model_parameters={'C': C0}, spatial_shifted_differences=True, spatial_accuracy_order=4) objective = FrequencyLeastSquares(solver) # Define the inversion algorithm invalg = LBFGS(objective) initial_value = solver.ModelParameters(m,{'C': C0}) # Execute inversion algorithm print('Running Descent...') tt = time.time() status_configuration = {'value_frequency' : 1, 'residual_frequency' : 1, 'residual_length_frequency' : 1, 'objective_frequency' : 1, 'step_frequency' : 1, 'step_length_frequency' : 1, 'gradient_frequency' : 1, 'gradient_length_frequency' : 1, 'run_time_frequency' : 1, 'alpha_frequency' : 1, } invalg.max_linesearch_iterations=40 loop_configuration=[(60,{'frequencies' : [2.0, 3.5, 5.0]}), (15,{'frequencies' : [6.5, 8.0, 9.5]})] #3 steps at one set of frequencies and 3 at another set loop_configuration=[(2,{'frequencies' : [2.0, 3.5, 5.0]})] result = invalg(shots, initial_value, loop_configuration, verbose=True, status_configuration=status_configuration) print '...run time: {0}s'.format(time.time()-tt) obj_vals = np.array([v for k,v in invalg.objective_history.items()]) plt.figure() plt.semilogy(obj_vals) plt.figure() plt.subplot(3,1,1) vis.plot(C0, m) plt.title('Initial Model') plt.subplot(3,1,2) vis.plot(C, m) plt.title('True Model') plt.subplot(3,1,3) vis.plot(result.C, m) plt.title('Reconstruction') plt.show()
PypiClean
/airflow_hdinsight-0.0.1.3-py3-none-any.whl/airflowhdi/operators/azure_hdinsight_create_cluster_operator.py
from airflow import settings, AirflowException from azure.mgmt.hdinsight.models import ClusterCreateProperties, ClusterDefinition, ComputeProfile, Role, \ LinuxOperatingSystemProfile, OsProfile, \ StorageProfile, StorageAccount, OSType, Tier, HardwareProfile from airflowhdi.hooks import AzureHDInsightHook from airflow.models import BaseOperator, Connection from airflow.utils.decorators import apply_defaults class AzureHDInsightCreateClusterOperator(BaseOperator): """ .. seealso:: See the documentation of :class:`airflowhdi.hooks.AzureHDInsightHook` for explanation on the parameters of this operator """ template_fields = ['cluster_params'] #to allow deep nested templatization by airflow on the entire cluster param spec ClusterCreateProperties.template_fields = ['cluster_definition', 'compute_profile', 'storage_profile'] ClusterDefinition.template_fields = ['configurations'] ComputeProfile.template_fields = ['roles'] Role.template_fields = ['os_profile'] OsProfile.template_fields = ['linux_operating_system_profile'] LinuxOperatingSystemProfile.template_fields = ['username', 'password'] StorageProfile.template_fields = ['storageaccounts'] StorageAccount.template_fields = ['name', 'key', 'container'] @apply_defaults def __init__(self, cluster_name, cluster_params: ClusterCreateProperties, azure_conn_id='azure_hdinsight_default', *args, **kwargs ): """ :param azure_conn_id: connection ID of the Azure HDInsight cluster. :type azure_conn_id: string :param cluster_name: Unique cluster name of the HDInsight cluster :type cluster_name: str :param cluster_params: the :class:`azure.mgmt.hdinsight.models.ClusterCreateProperties` representing the HDI cluster spec. You can explore some sample specs `here <https://github.com/Azure-Samples/hdinsight-python-sdk-samples>`_. This python object follows the same structure as the `HDInsight arm template <https://docs.microsoft.com/en-us/azure/templates/microsoft.hdinsight/2018-06-01-preview/clusters>`_. :download:`Example ClusterCreateProperties<../../examples/azure_hdi_cluster_conn.py>` :type cluster_params: ClusterCreateProperties """ super(AzureHDInsightCreateClusterOperator, self).__init__(*args, **kwargs) self.cluster_name = cluster_name self.cluster_params = cluster_params self.azure_conn_id = azure_conn_id def execute(self, context): azure_hook = AzureHDInsightHook(azure_conn_id=self.azure_conn_id) self.log.info("Executing HDInsightCreateClusterOperator ") azure_hook.create_cluster(self.cluster_params, self.cluster_name) self.log.info("Finished executing HDInsightCreateClusterOperator") class ConnectedAzureHDInsightCreateClusterOperator(AzureHDInsightCreateClusterOperator): """ An extension of the :class:`AzureHDInsightCreateClusterOperator` which allows getting credentials and other common properties for :class:`azure.mgmt.hdinsight.models.ClusterCreateProperties` from a connection """ # make sure these are imported. eval() below needs them. param_field_types = [OSType, Tier, ClusterDefinition, ComputeProfile, Role, \ HardwareProfile, LinuxOperatingSystemProfile, OsProfile, \ StorageProfile, StorageAccount] @apply_defaults def __init__(self, azure_conn_id=None, hdi_conn_id=None, *args, **kwargs ): """ :param azure_conn_id: connection ID of the Azure HDInsight cluster. 
:type azure_conn_id: string :param hdi_conn_id: connection ID of the connection that contains a :class:`azure.mgmt.hdinsight.models.ClusterCreateProperties` object in its extra field :type hdi_conn_id: str :param cluster_params: cluster creation spec :type cluster_params: ClusterCreateProperties :param cluster_name: Unique cluster name of the HDInsight cluster :type cluster_name: str """ session = settings.Session() azure_conn = session.query(Connection).filter(Connection.conn_id == azure_conn_id).first() hdi_conn = session.query(Connection).filter(Connection.conn_id == hdi_conn_id).first() cluster_params = eval(compile(hdi_conn.extra, 'file', 'eval')) if azure_conn: super(ConnectedAzureHDInsightCreateClusterOperator, self).__init__(*args, **dict( kwargs, params=azure_conn.extra_dejson, azure_conn_id=azure_conn_id, cluster_params=cluster_params)) else: raise AirflowException( f"Connection with conn_id {azure_conn_id} not found" ) def execute(self, context): return super(ConnectedAzureHDInsightCreateClusterOperator, self).execute( context=context)
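A hedged sketch of wiring this operator into a DAG. Here `my_cluster_spec` stands in for a `ClusterCreateProperties` built elsewhere (for example from the Azure samples linked in the docstring), and the DAG name, dates and task id are illustrative:

```
from datetime import datetime

from airflow import DAG
from airflowhdi.operators.azure_hdinsight_create_cluster_operator import (
    AzureHDInsightCreateClusterOperator,
)

# my_cluster_spec is assumed to be an azure.mgmt.hdinsight.models.ClusterCreateProperties
# instance defined elsewhere; building one is out of scope for this sketch.
with DAG("hdi_cluster_demo", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    create_cluster = AzureHDInsightCreateClusterOperator(
        task_id="create_hdi_cluster",
        cluster_name="my-hdi-cluster",
        cluster_params=my_cluster_spec,
        azure_conn_id="azure_hdinsight_default",
    )
```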
PypiClean
/ez_webdriver-1.1.7-py3-none-any.whl/ez_webdriver/__init__.py
from . import ez_webdriver


def chrome(version="auto", path=None, name="chromedriver", os_type=None, is_arm=False) -> str:
    """
    version: one of three values: auto (match the locally installed browser version),
             latest (the latest release), or x.x (a specific version; two levels are
             enough, overly specific versions may not be matched)
    path: either a directory (where the driver file will be placed) or a file
          (the full path of the driver file)
    name: filename of the downloaded driver (do not change it unless the mirror
          renamed the file)
    os_type: detected automatically by default; fill in system info manually,
             e.g. "win64", "linux32-arm", "mac64 arm"
    is_arm: detected automatically by default; whether the platform is ARM, to match
            the corresponding arm/aarch64 driver
    """
    return ez_webdriver.chrome(version=version, path=path, name=name, os_type=os_type, is_arm=is_arm)


def firefox(version="auto", path=None, name="geckodriver", os_type=None, is_arm=False) -> str:
    """
    version: one of three values: auto (match the locally installed browser version),
             latest (the latest release), or x.x (a specific version; two levels are
             enough, overly specific versions may not be matched)
    path: either a directory (where the driver file will be placed) or a file
          (the full path of the driver file)
    name: filename of the downloaded driver (do not change it unless the mirror
          renamed the file)
    os_type: detected automatically by default; fill in system info manually,
             e.g. "win64", "linux32-arm", "mac64 arm"
    is_arm: detected automatically by default; whether the platform is ARM, to match
            the corresponding arm/aarch64 driver
    """
    return ez_webdriver.firefox(version=version, path=path, name=name, os_type=os_type, is_arm=is_arm)


def edge(version="auto", path=None, name="edgedriver", os_type=None, is_arm=False) -> str:
    """
    version: one of three values: auto (match the locally installed browser version),
             latest (the latest release), or x.x (a specific version; two levels are
             enough, overly specific versions may not be matched)
    path: either a directory (where the driver file will be placed) or a file
          (the full path of the driver file)
    name: filename of the downloaded driver (do not change it unless the mirror
          renamed the file)
    os_type: detected automatically by default; fill in system info manually,
             e.g. "win64", "linux32-arm", "mac64 arm"
    is_arm: detected automatically by default; whether the platform is ARM, to match
            the corresponding arm/aarch64 driver
    """
    return ez_webdriver.edge(version=version, path=path, name=name, os_type=os_type, is_arm=is_arm)


def ie(version="auto", path=None, name="IEDriverServer", os_type=None, is_arm=False) -> str:
    """
    version: one of three values: auto (match the locally installed browser version),
             latest (the latest release), or x.x (a specific version; two levels are
             enough, overly specific versions may not be matched)
    path: either a directory (where the driver file will be placed) or a file
          (the full path of the driver file)
    name: filename of the downloaded driver (do not change it unless the mirror
          renamed the file)
    os_type: detected automatically by default; fill in system info manually,
             e.g. "win64", "linux32-arm", "mac64 arm"
    is_arm: detected automatically by default; whether the platform is ARM, to match
            the corresponding arm/aarch64 driver
    """
    return ez_webdriver.ie(version=version, path=path, name=name, os_type=os_type, is_arm=is_arm)


def clear(path=None) -> None:
    """
    path: directory to clear (all files under it will be deleted)

    Clears the driver cache, removing everything under the default _webdriver
    directory (useful when a driver file is corrupted; this module keeps only one
    redundant old version per browser).
    """
    return ez_webdriver.clear(path)
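A hedged usage sketch: fetch a driver matching the locally installed Chrome and hand its path to Selenium. This assumes Selenium 4.x is installed; ez_webdriver itself only returns the driver path, and the URL is illustrative:

```
import ez_webdriver
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

driver_path = ez_webdriver.chrome()          # downloads/locates a matching chromedriver
driver = webdriver.Chrome(service=Service(driver_path))
driver.get("https://example.com")
driver.quit()
```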
PypiClean
/Faker-19.3.1.tar.gz/Faker-19.3.1/faker/providers/color/bn_BD/__init__.py
from collections import OrderedDict from .. import Provider as ColorProvider localized = True class Provider(ColorProvider): """Implement color provider for ``bn_BD`` locale.""" all_colors = OrderedDict( ( ("এলিস নীল", "#F0F8FF"), ("এন্টিক সাদা", "#FAEBD7"), ("জল রং", "#00FFFF"), ("হালকা নীল সবুজ", "#7FFFD4"), ("উজ্জ্বল নীল", "#F0FFFF"), ("ফ্যাকাশে বেলে হলুদ বাদামী", "#F5F5DC"), ("বিস্কুট রং", "#FFE4C4"), ("কালো", "#000000"), ("বালু রং", "#FFEBCD"), ("নীল", "#0000FF"), ("নীলাভ রক্তবর্ণ", "#8A2BE2"), ("বাদামী", "#A52A2A"), ("কাঠ রং", "#DEB887"), ("সামরিক নীল", "#5F9EA0"), ("উজ্জ্বল হলুদাভ সবুজ", "#7FFF00"), ("চকলেট রং", "#D2691E"), ("প্রবাল রং", "#FF7F50"), ("ঝুমকা ফুলের নীল", "#6495ED"), ("সিল্ক রং", "#FFF8DC"), ("অগ্নি রং", "#DC143C"), ("সায়ান", "#00FFFF"), ("কালচে নীল", "#00008B"), ("কালচে সায়ান", "#008B8B"), ("কালচে ধাতব সোনালি", "#B8860B"), ("কালচে ধূসর", "#A9A9A9"), ("কালচে সবুজ", "#006400"), ("কালচে খাকী", "#BDB76B"), ("কালচে হালকা বেগুনী লাল", "#8B008B"), ("কালচে জলপাই সবুজ", "#556B2F"), ("কালচে কমলা", "#FF8C00"), ("কালচে অর্কিড রং", "#9932CC"), ("কালচে লাল", "#8B0000"), ("কালচে স্যামন রং", "#E9967A"), ("কালচে সামুদ্রিক সবুজ", "#8FBC8F"), ("কালচে পাথুরে নীল", "#483D8B"), ("কালচে পাথুরে ধূসর", "#2F4F4F"), ("কালচে ফিরোজা", "#00CED1"), ("কালচে বেগুনী", "#9400D3"), ("গাঢ় গোলাপি", "#FF1493"), ("গাঢ় আকাশী নীল", "#00BFFF"), ("আবছা ধূসর", "#696969"), ("ডজার নীল", "#1E90FF"), ("পোড়া ইট রং", "#B22222"), ("ফুলেল সাদা", "#FFFAF0"), ("বন্য সবুজ", "#228B22"), ("উজ্জ্বল গোলাপি বেগুনী", "#FF00FF"), ("মেটে রং", "#DCDCDC"), ("টাইটান সাদা", "#F8F8FF"), ("সোনালি", "#FFD700"), ("ধাতব সোনালি", "#DAA520"), ("ধূসর", "#808080"), ("সবুজ", "#008000"), ("সবুজাভ হলুদ", "#ADFF2F"), ("মধু রং", "#F0FFF0"), ("উষ্ণ গোলাপি", "#FF69B4"), ("ভারতীয় লাল", "#CD5C5C"), ("বেগুনী নীল", "#4B0082"), ("আইভরি", "#FFFFF0"), ("খাকী", "#F0E68C"), ("ল্যাভেণ্ডার রং", "#E6E6FA"), ("ল্যাভেন্ডার লাল", "#FFF0F5"), ("তৃণ সবুজ", "#7CFC00"), ("হালকা সিল্ক রং", "#FFFACD"), ("হালকা নীল", "#ADD8E6"), ("হালকা প্রবাল রং", "#F08080"), ("হালকা সায়ান", "#E0FFFF"), ("হালকা ধাতব সোনালি হলুদ", "#FAFAD2"), ("হালকা ধূসর", "#D3D3D3"), ("হালকা সবুজ", "#90EE90"), ("হালকা গোলাপি", "#FFB6C1"), ("হালকা স্যামন রং", "#FFA07A"), ("হালকা সামুদ্রিক সবুজ", "#20B2AA"), ("হালকা আকাশী নীল", "#87CEFA"), ("হালকা পাথুরে ধূসর", "#778899"), ("হালকা ধাতব নীল", "#B0C4DE"), ("হালকা হলুদ", "#FFFFE0"), ("লাইম রং", "#00FF00"), ("লাইম সবুজ", "#32CD32"), ("পাট রং", "#FAF0E6"), ("হালকা বেগুনী লাল", "#FF00FF"), ("মেরুন", "#800000"), ("মাঝারী নীল সবুজ", "#66CDAA"), ("মাঝারী নীল", "#0000CD"), ("মাঝারী অর্কিড রং", "#BA55D3"), ("মাঝারী বেগুনী", "#9370DB"), ("মাঝারী সামুদ্রিক সবুজ", "#3CB371"), ("মাঝারী পাথুরে নীল", "#7B68EE"), ("মাঝারী বাসন্তী সবুজ", "#00FA9A"), ("মাঝারী ফিরোজা", "#48D1CC"), ("মাঝারী বেগুনী লাল", "#C71585"), ("মিডনাইট নীল", "#191970"), ("হালকা পীত পুদিনা রং", "#F5FFFA"), ("ধোঁয়াটে গোলাপ রং", "#FFE4E1"), ("মোকাসিন", "#FFE4B5"), ("নাভাজো সাদা", "#FFDEAD"), ("নেভি ব্লু", "#000080"), ("ওল্ড লেইস রং", "#FDF5E6"), ("জলপাই রং", "#808000"), ("ম্যাটমাটে জলপাই রং", "#6B8E23"), ("কমলা", "#FFA500"), ("কমলা লাল", "#FF4500"), ("অর্কিড রং", "#DA70D6"), ("ফ্যাকাশে ধাতব সোনালি", "#EEE8AA"), ("ফ্যাকাশে সবুজ", "#98FB98"), ("ফ্যাকাশে ফিরোজা", "#AFEEEE"), ("ফ্যাকাশে বেগুনী লাল", "#DB7093"), ("পাপায়াহুপ", "#FFEFD5"), ("পীচ রং", "#FFDAB9"), ("পেরু রং", "#CD853F"), ("গোলাপি", "#FFC0CB"), ("জাম রং", "#DDA0DD"), ("গুঁড়া নীল", "#B0E0E6"), ("বেগুনী", "#800080"), ("লাল", "#FF0000"), ("গোলাপী লাল", "#BC8F8F"), ("রয়্যাল ব্লু", "#4169E1"), ("স্যাডল ব্রাউন", "#8B4513"), ("স্যামন রং", 
"#FA8072"), ("বেলে বাদামী", "#F4A460"), ("সামুদ্রিক সবুজ", "#2E8B57"), ("ঝিনুক রং", "#FFF5EE"), ("মেটে রং", "#A0522D"), ("রূপালী", "#C0C0C0"), ("আকাশী নীল", "#87CEEB"), ("পাথুরে নীল", "#6A5ACD"), ("পাথুরে ধূসর", "#708090"), ("তুষার শুভ্র রং", "#FFFAFA"), ("বাসন্তী সবুজ", "#00FF7F"), ("ধাতব নীল", "#4682B4"), ("তামাটে রং", "#D2B48C"), ("পেষ্ট রং", "#008080"), ("থিসল রং", "#D8BFD8"), ("টমেটো রং", "#FF6347"), ("ফিরোজা", "#40E0D0"), ("রক্তবেগুনী", "#EE82EE"), ("গম রং", "#F5DEB3"), ("সাদা", "#FFFFFF"), ("ধোঁয়াটে সাদা", "#F5F5F5"), ("হলুদ", "#FFFF00"), ("হলুদাভ সবুজ", "#9ACD32"), ) ) safe_colors = ( "কালো", "মেরুন", "সবুজ", "নেভি", "জলপাই রং", "বেগুনী", "পেষ্ট রং", "লাইম রং", "নীল", "রূপালী", "ধূসর", "হলুদ", "উজ্জ্বল গোলাপি বেগুনী", "জল রং", "সাদা", )
PypiClean
/ComponentDB-CLI-3.15.5.tar.gz/ComponentDB-CLI-3.15.5/cdbCli/service/cli/cdbCliCmnds/setItemLogById.py
import sys
import re
import click
import csv

from cdbApi import LogEntryEditInformation
from cdbApi import ApiException
from datetime import datetime

from cdbCli.common.cli.cliBase import CliBase


##############################################################################################
#                                                                                            #
#                         Add log to item given the item's ID                               #
#                                                                                            #
##############################################################################################
def set_item_log_by_id_helper(item_api, item_id, log_entry, effective_date=None):
    """Helper function to set a log for an item in CDB

    :param item_api: Necessary item api object
    :param item_id: item ID of the object which the log is being written for
    :param log_entry: the log entry to be written
    :param effective_date: optional date of log"""

    if effective_date:
        effective_date = datetime.strptime(effective_date, "%Y-%m-%d")
    try:
        log_entry_obj = LogEntryEditInformation(
            item_id=item_id, log_entry=log_entry, effective_date=effective_date
        )
        item_api.add_log_entry_to_item(log_entry_edit_information=log_entry_obj)
    except ApiException as e:
        p = r'"localizedMessage.*'
        matches = re.findall(p, e.body)
        if matches:
            error = "Error uploading log entry: " + matches[0][:-2]
            click.echo(error)
        else:
            click.echo("Error uploading log entry")
        exit(1)


@click.command()
@click.option(
    "--input-file",
    help="Input csv file with item_id,log_data,effective_date default is STDIN",
    type=click.File("r"),
    default=sys.stdin,
)
@click.option(
    "--effective-date", is_flag=True, help="Set if effective date is listed in input"
)
@click.pass_obj
def set_item_log_by_id(cli, input_file, effective_date=False):
    """Adds a log entry to the given item ids with optional effective date

    \b
    Example (file): set-item-log-by-id --input-file filename.csv --effective-date
    Example (pipe): cat filename.csv | set-item-log-by-id
    Example (terminal):
        set-item-log-by-id
        header
        <Insert Item ID>,<example log text>

    Input is either through a named csv file or through STDIN. Default is STDIN.
    The format of the input data is a header row (intended to be skipped) followed by
    <Item ID>,<Log Data>,<Effective Date> rows.
    """
    try:
        factory = cli.require_authenticated_api()
    except ApiException:
        click.echo("Unauthorized User/ Wrong Username or Password. Try again.")
        return

    item_api = factory.getItemApi()

    stdin_msg = "Entry per line: <item_id>,<log_text>"
    reader, stdin_tty_mode = cli.prepare_cli_input_csv_reader(input_file, stdin_msg)

    # Parse lines of csv
    for row in reader:
        if row.__len__() == 0 and stdin_tty_mode:
            break
        if not row[0]:
            continue
        item_id = row[0]
        log_entry = row[1]
        if effective_date:
            effective_date = row[2]
        else:
            effective_date = None
        set_item_log_by_id_helper(item_api, item_id, log_entry, effective_date)


if __name__ == "__main__":
    set_item_log_by_id()
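A hedged sketch of preparing an input CSV in the shape the command's docstring describes (a header row, then item id, log text and an optional date); the IDs, log text and dates are purely illustrative:

```
import csv

rows = [
    ["item_id", "log_entry", "effective_date"],       # header row, per the docstring above
    ["1042", "Replaced power supply", "2023-05-01"],  # illustrative item and log
    ["1043", "Updated firmware", "2023-05-02"],
]
with open("logs.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(rows)

# then: set-item-log-by-id --input-file logs.csv --effective-date
```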
PypiClean
/holoviews-1.17.1.tar.gz/holoviews-1.17.1/examples/gallery/demos/bokeh/dropdown_economic.ipynb
Most examples work across multiple plotting backends; this example is also available for:

* [Matplotlib - dropdown_economic](../matplotlib/dropdown_economic.ipynb)

```
import pandas as pd
import holoviews as hv
from holoviews import opts, dim
hv.extension('bokeh')
```

## Declaring data

```
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', delimiter='\t')
key_dimensions = [('year', 'Year'), ('country', 'Country')]
value_dimensions = [('unem', 'Unemployment'), ('capmob', 'Capital Mobility'),
                    ('gdp', 'GDP Growth'), ('trade', 'Trade')]
macro = hv.Table(macro_df, key_dimensions, value_dimensions)
```

## Plot

```
gdp_curves = macro.to.curve('Year', 'GDP Growth')
gdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])
annotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\
hv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\n Recession', 'v')

(gdp_curves * gdp_unem_scatter * annotations).opts(
    opts.Curve(color='k'),
    opts.Scatter(cmap='Blues', color='Unemployment', line_color='k',
                 size=dim('Unemployment')*1.5),
    opts.Text(text_font_size='13px'),
    opts.Overlay(height=400, show_frame=False, width=700))
```
PypiClean
/templite-0.2.1.tar.gz/templite-0.2.1/templite.py
import sys, os import re class Templite(object): autowrite = re.compile('(^[\'\"])|(^[a-zA-Z0-9_\[\]\'\"]+$)') delimiters = ('${', '}$') cache = {} def __init__(self, text=None, filename=None, encoding='utf-8', delimiters=None, caching=False): """Loads a template from string or file.""" if filename: filename = os.path.abspath(filename) mtime = os.path.getmtime(filename) self.file = key = filename elif text is not None: self.file = mtime = None key = hash(text) else: raise ValueError('either text or filename required') # set attributes self.encoding = encoding self.caching = caching if delimiters: start, end = delimiters if len(start) != 2 or len(end) != 2: raise ValueError('each delimiter must be two characters long') self.delimiters = delimiters # check cache cache = self.cache if caching and key in cache and cache[key][0] == mtime: self._code = cache[key][1] return # read file if filename: with open(filename) as fh: text = fh.read() self._code = self._compile(text) if caching: cache[key] = (mtime, self._code) def _compile(self, source): offset = 0 tokens = ['# -*- coding: %s -*-' % self.encoding] start, end = self.delimiters escaped = (re.escape(start), re.escape(end)) regex = re.compile('%s(.*?)%s' % escaped, re.DOTALL) for i, part in enumerate(regex.split(source)): part = part.replace('\\'.join(start), start) part = part.replace('\\'.join(end), end) if i % 2 == 0: if not part: continue part = part.replace('\\', '\\\\').replace('"', '\\"') part = '\t' * offset + 'write("""%s""")' % part else: part = part.rstrip() if not part: continue part_stripped = part.lstrip() if part_stripped.startswith(':'): if not offset: raise SyntaxError('no block statement to terminate: ${%s}$' % part) offset -= 1 part = part_stripped[1:] if not part.endswith(':'): continue elif self.autowrite.match(part_stripped): part = 'write(%s)' % part_stripped lines = part.splitlines() margin = min(len(l) - len(l.lstrip()) for l in lines if l.strip()) part = '\n'.join('\t' * offset + l[margin:] for l in lines) if part.endswith(':'): offset += 1 tokens.append(part) if offset: raise SyntaxError('%i block statement(s) not terminated' % offset) return compile('\n'.join(tokens), self.file or '<string>', 'exec') def render(self, **namespace): """Renders the template according to the given namespace.""" stack = [] namespace['__file__'] = self.file # add write method def write(*args): for value in args: if isinstance(value, unicode): value = value.encode(self.encoding) stack.append(str(value)) namespace['write'] = write # add include method def include(file): if not os.path.isabs(file): if self.file: base = os.path.dirname(self.file) else: base = os.path.dirname(sys.argv[0]) file = os.path.join(base, file) t = Templite(None, file, self.encoding, self.delimiters, self.caching) stack.append(t.render(**namespace)) namespace['include'] = include # execute template code exec self._code in namespace return ''.join(stack)
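A hedged usage sketch of the Templite class above. The module uses the Python 2 `exec ... in ...` statement and `unicode`, so this assumes a Python 2 interpreter; the template string and names are illustrative. `${ }$` delimiters wrap both expressions (auto-written) and block statements, with `${ : }$` closing a block:

```
from templite import Templite

t = Templite("Hello ${name}$! ${ for i in range(3): }$<${i}$>${ : }$")
print(t.render(name="World"))   # -> Hello World! <0><1><2>
```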
PypiClean
/webviz-subsurface-0.2.22.tar.gz/webviz-subsurface-0.2.22/webviz_subsurface/plugins/_rft_plotter/_views/_sim_vs_obs_view/_view.py
from typing import List, Union import webviz_core_components as wcc from dash import Input, Output, callback from webviz_config.utils import StrEnum, callback_typecheck from webviz_config.webviz_plugin_subclasses import ViewABC from ..._reusable_settings import FilterLayout from ..._reusable_view_element import GeneralViewElement from ..._types import ColorAndSizeByType from ..._utils import RftPlotterDataModel, filter_frame from ._settings import Ensembles, PlotType, PlotTypeSettings, SizeColorSettings from ._utils import update_crossplot, update_errorplot class SimVsObsView(ViewABC): class Ids(StrEnum): PLOT_TYPE = "plot-type" ENSEMBLES = "ensembles" FILTERS = "filters" SIZE_COLOR_SETTINGS = "size-color-settings" VIEW_ELEMENT = "view-element" def __init__(self, datamodel: RftPlotterDataModel) -> None: super().__init__("Sim vs obs") self._datamodel = datamodel self.add_settings_groups( { self.Ids.PLOT_TYPE: PlotTypeSettings(), self.Ids.ENSEMBLES: Ensembles(self._datamodel.ensembles), self.Ids.FILTERS: FilterLayout( wells=self._datamodel.well_names, zones=self._datamodel.zone_names, dates=self._datamodel.dates, ), self.Ids.SIZE_COLOR_SETTINGS: SizeColorSettings(), } ) self.add_view_element(GeneralViewElement(), self.Ids.VIEW_ELEMENT) def set_callbacks(self) -> None: @callback( Output( self.view_element(self.Ids.VIEW_ELEMENT) .component_unique_id(GeneralViewElement.Ids.CHART) .to_string(), "children", ), Input( self.settings_group(self.Ids.PLOT_TYPE) .component_unique_id(PlotTypeSettings.Ids.PLOT_TYPE) .to_string(), "value", ), Input( self.settings_group(self.Ids.ENSEMBLES) .component_unique_id(Ensembles.Ids.ENSEMBLES) .to_string(), "value", ), Input( self.settings_group(self.Ids.FILTERS) .component_unique_id(FilterLayout.Ids.FILTER_WELLS) .to_string(), "value", ), Input( self.settings_group(self.Ids.FILTERS) .component_unique_id(FilterLayout.Ids.FILTER_ZONES) .to_string(), "value", ), Input( self.settings_group(self.Ids.FILTERS) .component_unique_id(FilterLayout.Ids.FILTER_DATES) .to_string(), "value", ), Input( self.settings_group(self.Ids.SIZE_COLOR_SETTINGS) .component_unique_id(SizeColorSettings.Ids.CROSSPLOT_SIZE_BY) .to_string(), "value", ), Input( self.settings_group(self.Ids.SIZE_COLOR_SETTINGS) .component_unique_id(SizeColorSettings.Ids.CROSSPLOT_COLOR_BY) .to_string(), "value", ), ) @callback_typecheck def _update_graph( plot_type: PlotType, ensembles: List[str], wells: List[str], zones: List[str], dates: List[str], sizeby: ColorAndSizeByType, colorby: ColorAndSizeByType, ) -> Union[str, List[wcc.Graph]]: df = filter_frame( self._datamodel.ertdatadf, {"WELL": wells, "ZONE": zones, "DATE": dates, "ENSEMBLE": ensembles}, ) if df.empty: return "No data matching the given filter criterias" if plot_type == PlotType.CROSSPLOT: return update_crossplot(df, sizeby, colorby) if plot_type == PlotType.ERROR_BOXPLOT: return [update_errorplot(df, self._datamodel.enscolors)] raise ValueError(f"Plot type: {plot_type.value} not implemented")
PypiClean
/Firenado-0.9.0a2.tar.gz/Firenado-0.9.0a2/firenado/tornadoweb.py
from . import data from . import session from . import uimodules from .config import get_class_from_config from cartola import fs from cartola.config import load_yaml_file import firenado.conf import inspect import logging import os from tornado.httpclient import HTTPRequest from tornado.template import Loader import tornado.web import tornado.websocket from typing import Any logger = logging.getLogger(__name__) def get_request(url, **kwargs): """ Return a HTTPRequest to help with AsyncHTTPClient and HTTPClient execution. The HTTPRequest will use the provided url combined with path if provided. The HTTPRequest method will be GET by default and can be changed if method is informed. If form_urlencoded is defined as True a Content-Type header will be added to the request with application/x-www-form-urlencoded value. :param str url: Base url to be set to the HTTPRequest :key form_urlencoded: If the true will add the header Content-Type application/x-www-form-urlencoded to the form. Default is False. :key method: Method to be used by the HTTPRequest. Default it GET. :key path: If informed will add the path to the base url informed. Default is None. :return HTTPRequest: """ method = kwargs.get("method", "GET") path = kwargs.get("path", None) form_urlencoded = kwargs.get("form_urlencoded", False) if path is not None: if not url.endswith("/"): url = "%s/" % url url = "%s%s" % (url, path) request = HTTPRequest(url, method=method) if form_urlencoded: request.headers.add("Content-Type", "application/x-www-form-urlencoded") return request class TornadoErrorHandler: def __init__(self, host): self._host = host @property def host(self): return self._host def is_component(self): if isinstance(self.host, TornadoComponent): return True return False def is_handler(self): if isinstance(self.host, TornadoHandler): return True return False def handle_error(self, request: "TornadoHandler", status_code: int, **kwargs: Any) -> None: request.write_error(status_code, **kwargs) class TornadoApplication(tornado.web.Application, data.DataConnectedMixin, session.SessionEnginedMixin): """ Firenado basic Tornado application. 
""" def __init__(self, default_host="", transforms=None, **settings): logger.debug("Wiring application located at %s.", firenado.conf.APP_ROOT_PATH) self.components = {} settings.update(firenado.conf.app['settings']) handlers = [] ui_modules = [] data.configure_data_sources(firenado.conf.app['data']['sources'], self) self.__load_components() for key, component in self.components.items(): component_handlers = component.get_handlers() for i in range(0, len(component_handlers)): if issubclass( component_handlers[i][1], TornadoHandler ) or issubclass( component_handlers[i][1], TornadoWebSocketHandler ): if len(component_handlers[i]) < 3: component_handlers[i] = ( component_handlers[i][0], component_handlers[i][1], {'component': component} ) else: component_handlers[i][1].component = component handlers = handlers + component_handlers # Adding component ui modules to the application ui modules list ui_modules.append(uimodules) if component.get_ui_modules(): ui_modules.append(component.get_ui_modules()) if firenado.conf.app['component']: if firenado.conf.app['static_path']: if os.path.isabs(firenado.conf.app['static_path']): settings['static_path'] = firenado.conf.app['static_path'] else: settings['static_path'] = os.path.join( self.components[firenado.conf.app[ 'component']].get_component_path(), firenado.conf.app['static_path']) else: settings['static_path'] = os.path.join( self.components[ firenado.conf.app[ 'component']].get_component_path(), 'static') else: settings['static_path'] = os.path.join( os.path.dirname(__file__), "static") static_url_prefix = firenado.conf.app['static_url_prefix'] if static_url_prefix != "/": static_url_prefix = "%s/" % static_url_prefix settings['static_url_prefix'] = static_url_prefix if len(ui_modules) > 0: settings['ui_modules'] = ui_modules if firenado.conf.app['url_root_path'] is not None: from .util.url_util import rooted_path for idx, handler in enumerate(handlers): handler_list = list(handler) handler_list[0] = rooted_path( firenado.conf.app['url_root_path'], handler_list[0]) handlers[idx] = tuple(handler_list) super().__init__(handlers=handlers, default_host=default_host, transforms=transforms, **settings) logger.debug("Checking if session is enabled.") if firenado.conf.session['enabled']: logger.debug("Session is enabled. Starting session engine.") # This is forcing the session engine to be created from the get-go # an assert will be use to call it and check if is not none. assert self.session_engine is not None else: logger.debug("Session is disabled.") def get_app_component(self) -> "TornadoComponent": """ Return the component set as the application component at the app config. :return: TornadoComponent """ return self.components[firenado.conf.app['component']] def __load_components(self): """ Loads all enabled components registered from the components config section. """ for key, value in firenado.conf.components.items(): if value['enabled']: component_class = get_class_from_config(value) self.components[key] = component_class(key, self) if self.components[key].get_config_file(): filename = self.components[key].get_complete_config_file() if filename is not None: self.components[key].conf = load_yaml_file(filename) self.components[key].process_config() self.components[key].has_conf = True else: logger.debug("Failed to find the file for the " "component %s at %s. Component's " "filename returned is %s.", key, firenado.conf.APP_CONFIG_PATH, self.components[key].get_config_file()) # Initializing enabled components after the load. 
for key, value in firenado.conf.components.items(): if value['enabled']: self.components[key].initialize() class TornadoComponent: """ Firenado applications are organized in components. A component could be an application or something that can be distributed as an add-on or a plugin. """ def __init__(self, name, application): self.name = name self.application = application self.conf = {} self._has_conf = False self.plugins = dict() def after_request(self, handler): """ Add a logic to be executed after all component's handlers execution. """ pass def before_request(self, handler): """ Add a logic to be executed before all component's handler execution. """ pass def get_error_handler(self) -> TornadoErrorHandler: """Return a `TornadoErrorHandler` here to provide a different error handling than the tornado's default. If the error handler is implemented at the component, all handlers will use it as default. If a handler implements the `get_error_handler` method, it will be used instead of the one implemented at the component.""" return None def is_current_app(self): if not firenado.conf.is_multi_app: return True if firenado.conf.current_app_name == self.name: return True return False @property def has_conf(self): return self._has_conf @has_conf.setter def has_conf(self, value): self._has_conf = value def get_handlers(self): """ Returns handlers being added by the component to the application. :return: A list of handlers the component provides. """ return [] def get_ui_modules(self): """ Returns uimodules the component provides to the application. It could be just a module, a list or a dictionary of modules. :return: Uimodules the component provides. """ return None def get_component_path(self): """ Returns the component path. """ return os.path.abspath(os.path.dirname( inspect.getfile(self.__class__))) def get_config_filename(self): return None def get_config_file(self): filename = self.get_config_filename() if filename is not None: return filename return None def get_complete_config_file(self): """ Return the config file with the correct extension, if get_config_file has no extension. :return str: The config file with extension """ if fs.file_has_extension(self.get_config_file()): if os.path.isfile(self.get_config_file()): return os.path.join(firenado.conf.APP_CONFIG_PATH, self.get_config_file()) config_file_extensions = ['yml', 'yaml'] for extension in config_file_extensions: candidate_filename = "%s.%s" % (self.get_config_file(), extension) if os.path.isfile(os.path.join( firenado.conf.APP_CONFIG_PATH, candidate_filename)): return os.path.join(firenado.conf.APP_CONFIG_PATH, candidate_filename) return None def get_template_path(self): """ Returns the path that holds the component's templates. """ return os.path.join(os.path.abspath(os.path.dirname( inspect.getfile(self.__class__))), 'templates') def initialize(self): """ If you want to add logic while the component is initializing please overwrite this method. """ pass def install(self): """ Firenado handles an application installation looping thought all components and triggering the install method of them. If """ pass def process_config(self): """ To process your component configuration please overwrite this method reading the data on self.conf. """ pass def shutdown(self): """ If you have resources that will hang after the shutdown please overwrite this method and close/unload those resources. """ pass class SessionHandler: """ Set the stage for a handler with session. 
The session per-se will be managed by the ComponentHandler that extends SessionHandler. """ def __init__(self): self.session = None self.skip_auth = False def authenticated(self): """ Returns if the current user is authenticated. If the current user is set then we consider authenticated. :return: bool True is current user is set """ return self.current_user is not None class ComponentHandler(SessionHandler): """ This mixin will define a handler with components and session. ComponentHandler is the base of what a Firenado handler should be and it will be used to unify TornadoHandler and TornadoWebSocketHandler logic. Other functionalities will be implemented in TemplateHandler that assumes the handler being applied to is also a ComponentHandler. """ def __init__(self, **kwargs): super().__init__() self.component = None def initialize(self, component): self.component = component def get_data_connected(self): return self.application def write_error(self, status_code: int, **kwargs: Any) -> None: """ See: https://tinyurl.com/9t3jrend :param int status_code: :param Any kwargs: :return: """ error_handler = self.get_error_handler() if error_handler is None: error_handler = self.component.get_error_handler() if error_handler is None: super().write_error(status_code, **kwargs) else: error_handler.handle_error(self, status_code, **kwargs) @session.read def prepare(self): if hasattr(self, "authenticate"): if self.authenticate and hasattr(self.authenticate, '__call__'): self.authenticate() self.component.before_request(self) self.before_request() @session.write def on_finish(self): self.after_request() self.component.after_request(self) def after_request(self): """Called after the end of a request. Override this method to perform cleanup, logging, etc. This method is a counterpart to `prepare`. ``on_finish`` may not produce any output, as it is called after the response has been sent to the client. Use this method instead of `on_finish` to avoid the session to break and use session features. This method will be called by `on_finish` with a valid session. This method is called before it's component's after_request if defined. """ pass def get_error_handler(self) -> TornadoErrorHandler: """Return a `TornadoErrorHandler` here to provide a different error handling than the tornado's default. If the error handler is implemented at the handler, it will be used instead of the one implemented at the component.""" return None def before_request(self): """Called at the beginning of a request before `get`/`post`/etc. Override this method to perform common initialization regardless of the request method. Use this method instead of `prepare` to avoid the session to break and use session features. This method will be called by `prepare` with a valid session. This method is called after it's component's before_request if defined. Asynchronous support: Use ``async def`` or decorate this method with `.gen.coroutine` to make it asynchronous. If this method returns an ``Awaitable`` execution will not proceed until the ``Awaitable`` is done. .. versionadded:: 0.1.10 Asynchronous support. """ pass def get_rooted_path(self, path): from .util.url_util import rooted_path root = firenado.conf.app['url_root_path'] return rooted_path(root, path) class TemplateHandler: """ Deals with all aspects related to templates. This mixin will assume it was applied to a ComponentHandler. It will resolve and deal with templates defined in the same component or other components of an application. 
""" def __init__(self): self.__template_variables = dict() @property def template_variables(self): return self.__template_variables def add_variable_to_template(self, name, variable): """ Add a variable to a dict that will be added to the template during the render or render_string execution. """ self.__template_variables[name] = variable def render_string(self, template_name, **kwargs): kwargs['user_agent'] = self.user_agent if hasattr( self, 'user_agent') else None kwargs['credential'] = self.credential if hasattr( self, 'credential') else None for name, variable in self.template_variables.items(): kwargs[name] = variable if self.ui: return super(TemplateHandler, self).render_string( template_name, **kwargs) else: # TODO: After a redirect I'm still hitting here. # Need to figure out what is going on. self._finished = False return None def get_template_path(self): """Override to customize template path for each handler. By default, we use the ``template_path`` application setting. Return None to load templates relative to the calling file. """ if self.component is None: # This is the default behaviour provided by Tornado. # No components on the request no fancy template path. return super().get_template_path() else: return self.component.get_template_path() def get_firenado_template_path(self): """Override to customize the firenado template path for each handler. By default, we use the ``firenado_template_path`` application setting. Return None to load templates relative to the calling file. """ return self.application.settings.get('firenado_template_path') def create_template_loader(self, template_path): """Returns a new template loader for the given path. May be overridden by subclasses. By default returns a directory-based loader on the given path, using the ``autoescape`` application setting. If a ``template_loader`` application setting is supplied, uses that instead. """ settings = self.application.settings kwargs = {} if 'autoescape' in settings: # autoescape=None means "no escaping", so we have to be sure # to only pass this kwarg if the user asked for it. kwargs['autoescape'] = settings['autoescape'] return FirenadoComponentLoader( template_path, component=self.component, **kwargs) class TornadoHandler(ComponentHandler, TemplateHandler, tornado.web.RequestHandler): """ Base request handler to be used on a Firenado application. It provides session and handles component paths. """ def __init__(self, application, request, **kwargs): ComponentHandler.__init__(self, **kwargs) TemplateHandler.__init__(self) tornado.web.RequestHandler.__init__(self, application, request, **kwargs) class TornadoWebSocketHandler(ComponentHandler, TemplateHandler, tornado.websocket.WebSocketHandler): def __init__(self, application, request, **kwargs): ComponentHandler.__init__(self, **kwargs) TemplateHandler.__init__(self) tornado.websocket.WebSocketHandler.__init__( self, application, request, **kwargs) class FirenadoComponentLoader(Loader): """ A template loader that loads from a single root directory. """ def __init__(self, root_directory, component=None, **kwargs): # TODO: Check if we should alter/use the root_directory value # here or on the resolve_path method. self.component = component super(FirenadoComponentLoader, self).__init__(root_directory, **kwargs) def resolve_path(self, name, parent_path=None): """ When a template name comes with a ':' it means a template from another component is being referenced. 
The component template will be resolved and passed to the original Tornado resolve_path method. :param name: The template name :param parent_path: The template parent path :return: Tornado resolve_path result. """ logger.debug("Resolving template %s.", name) name_resolved = name if ':' in name: name_x = name.split(':') component_name = name_x[0] name_resolved = os.path.join( self.component.application.components[ component_name].get_template_path(), name_x[-1]) if name != name_resolved: logger.debug("Template %s resolved at %s.", name, name_resolved) return super().resolve_path(name_resolved, parent_path)
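A hedged sketch of the `get_request` helper defined at the top of this module, paired with Tornado's blocking `HTTPClient`. The URL, path and form payload are illustrative and assume a Firenado application listening on localhost:8888:

```
from tornado.httpclient import HTTPClient

from firenado.tornadoweb import get_request

request = get_request(
    "http://localhost:8888",
    path="login",
    method="POST",
    form_urlencoded=True,   # adds the application/x-www-form-urlencoded header
)
request.body = "username=alice&password=secret"   # hypothetical form payload
response = HTTPClient().fetch(request)
print(response.code)
```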
PypiClean
/sedatatools-1.1.5.tar.gz/sedatatools-1.1.5/CONTRIBUTING.rst
.. highlight:: shell ============ Contributing ============ Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. You can contribute in many ways: Types of Contributions ---------------------- Report Bugs ~~~~~~~~~~~ Report bugs at https://github.com/vedadr/sedatatools/issues. If you are reporting a bug, please include: * Your operating system name and version. * Any details about your local setup that might be helpful in troubleshooting. * Detailed steps to reproduce the bug. Fix Bugs ~~~~~~~~ Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it. Implement Features ~~~~~~~~~~~~~~~~~~ Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it. Write Documentation ~~~~~~~~~~~~~~~~~~~ sedatatools could always use more documentation, whether as part of the official sedatatools docs, in docstrings, or even on the web in blog posts, articles, and such. Submit Feedback ~~~~~~~~~~~~~~~ The best way to send feedback is to file an issue at https://github.com/vedadr/sedatatools/issues. If you are proposing a feature: * Explain in detail how it would work. * Keep the scope as narrow as possible, to make it easier to implement. * Remember that this is a volunteer-driven project, and that contributions are welcome :) Get Started! ------------ Ready to contribute? Here's how to set up `sedatatools` for local development. 1. Fork the `sedatatools` repo on GitHub. 2. Clone your fork locally:: $ git clone [email protected]:your_name_here/sedatatools.git 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: $ mkvirtualenv sedatatools $ cd sedatatools/ $ python setup.py develop 4. Create a branch for local development:: $ git checkout -b name-of-your-bugfix-or-feature Now you can make your changes locally. 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: $ flake8 sedatatools tests $ python setup.py test or py.test $ tox To get flake8 and tox, just pip install them into your virtualenv. 6. Commit your changes and push your branch to GitHub:: $ git add . $ git commit -m "Your detailed description of your changes." $ git push origin name-of-your-bugfix-or-feature 7. Submit a pull request through the GitHub website. Pull Request Guidelines ----------------------- Before you submit a pull request, check that it meets these guidelines: 1. The pull request should include tests. 2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst. 3. The pull request should work for Python 2.7, 3.4, 3.5 and 3.6, and for PyPy. Check https://travis-ci.org/vedadr/sedatatools/pull_requests and make sure that the tests pass for all supported Python versions. Tips ---- To run a subset of tests:: $ py.test tests.test_sedatatools Deploying --------- A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:: $ bumpversion patch # possible: major / minor / patch $ git push $ git push --tags Travis will then deploy to PyPI if tests pass.
PypiClean
/ressources/lib/node_modules/highcharts/js/modules/pattern-fill.src.js
'use strict'; (function (factory) { if (typeof module === 'object' && module.exports) { module.exports = factory; } else if (typeof define === 'function' && define.amd) { define(function () { return factory; }); } else { factory(Highcharts); } }(function (Highcharts) { (function (H) { /** * Module for using patterns or images as point fills. * * (c) 2010-2018 Highsoft AS * Author: Torstein Hønsi, Øystein Moseng * * License: www.highcharts.com/license */ var wrap = H.wrap, each = H.each, merge = H.merge, pick = H.pick; /** * Utility function to compute a hash value from an object. Modified Java * String.hashCode implementation in JS. Use the preSeed parameter to add an * additional seeding step. * * @param {Object} obj The javascript object to compute the hash from. * @param {Bool} [preSeed=false] Add an optional preSeed stage. * * @return {String} The computed hash. */ function hashFromObject(obj, preSeed) { var str = JSON.stringify(obj), strLen = str.length || 0, hash = 0, i = 0, char, seedStep; if (preSeed) { seedStep = Math.max(Math.floor(strLen / 500), 1); for (var a = 0; a < strLen; a += seedStep) { hash += str.charCodeAt(a); } hash = hash & hash; } for (; i < strLen; ++i) { char = str.charCodeAt(i); hash = ((hash << 5) - hash) + char; hash = hash & hash; } return hash.toString(16).replace('-', '1'); } /** * Set dimensions on pattern from point. This function will set internal * pattern._width/_height properties if width and height are not both already * set. We only do this on image patterns. The _width/_height properties are * set to the size of the bounding box of the point, optionally taking aspect * ratio into account. If only one of width or height are supplied as options, * the undefined option is calculated as above. * * @param {Object} pattern The pattern to set dimensions on. */ H.Point.prototype.calculatePatternDimensions = function (pattern) { if (pattern.width && pattern.height) { return; } var bBox = this.graphic && ( this.graphic.getBBox && this.graphic.getBBox(true) || this.graphic.element && this.graphic.element.getBBox() ) || {}, shapeArgs = this.shapeArgs; // Prefer using shapeArgs, as it is animation agnostic if (shapeArgs) { bBox.width = shapeArgs.width || bBox.width; bBox.height = shapeArgs.height || bBox.height; bBox.x = shapeArgs.x || bBox.x; bBox.y = shapeArgs.y || bBox.y; } // For images we stretch to bounding box if (pattern.image) { // If we do not have a bounding box at this point, simply add a defer // key and pick this up in the fillSetter handler, where the bounding // box should exist. if (!bBox.width || !bBox.height) { pattern._width = 'defer'; pattern._height = 'defer'; return; } // Handle aspect ratio filling if (pattern.aspectRatio) { bBox.aspectRatio = bBox.width / bBox.height; if (pattern.aspectRatio > bBox.aspectRatio) { // Height of bBox will determine width bBox.aspectWidth = bBox.height * pattern.aspectRatio; } else { // Width of bBox will determine height bBox.aspectHeight = bBox.width / pattern.aspectRatio; } } // We set the width/height on internal properties to differentiate // between the options set by a user and by this function. pattern._width = pattern.width || Math.ceil(bBox.aspectWidth || bBox.width); pattern._height = pattern.height || Math.ceil(bBox.aspectHeight || bBox.height); } // Set x/y accordingly, centering if using aspect ratio, otherwise adjusting // so bounding box corner is 0,0 of pattern. if (!pattern.width) { pattern._x = pattern.x || 0; pattern._x += bBox.x - Math.round( bBox.aspectWidth ? 
Math.abs(bBox.aspectWidth - bBox.width) / 2 : 0 ); } if (!pattern.height) { pattern._y = pattern.y || 0; pattern._y += bBox.y - Math.round( bBox.aspectHeight ? Math.abs(bBox.aspectHeight - bBox.height) / 2 : 0 ); } }; /** * @typedef {Object} PatternOptions * @property {Object} pattern Holds a pattern definition. * @property {String} pattern.image URL to an image to use as the pattern. * @property {Number} pattern.width Width of the pattern. For images this is * automatically set to the width of the element bounding box if not supplied. * For non-image patterns the default is 32px. Note that automatic resizing of * image patterns to fill a bounding box dynamically is only supported for * patterns with an automatically calculated ID. * @property {Number} pattern.height Analogous to pattern.width. * @property {Number} pattern.aspectRatio For automatically calculated width and * height on images, it is possible to set an aspect ratio. The image will be * zoomed to fill the bounding box, maintaining the aspect ratio defined. * @property {Number} pattern.x Horizontal offset of the pattern. Defaults to 0. * @property {Number} pattern.y Vertical offset of the pattern. Defaults to 0. * @property {Object|String} pattern.path Either an SVG path as string, or an * object. As an object, supply the path string in the `path.d` property. Other * supported properties are standard SVG attributes like `path.stroke` and * `path.fill`. If a path is supplied for the pattern, the `image` property is * ignored. * @property {String} pattern.color Pattern color, used as default path stroke. * @property {Number} pattern.opacity Opacity of the pattern as a float value * from 0 to 1. * @property {String} pattern.id ID to assign to the pattern. This is * automatically computed if not added, and identical patterns are reused. To * refer to an existing pattern for a Highcharts color, use * `color: "url(#pattern-id)"`. * @property {Object|Boolean} animation Animation options for the image pattern * loading. * * @example * // Pattern used as a color option * color: { * pattern: { * path: { * d: 'M 3 3 L 8 3 L 8 8 Z', * fill: '#102045' * }, * width: 12, * height: 12, * color: '#907000', * opacity: 0.5 * } * } * * @sample highcharts/series/pattern-fill-area/ * Define a custom path pattern * @sample highcharts/series/pattern-fill-pie/ * Default patterns and a custom image pattern * @sample maps/demo/pattern-fill-map/ * Custom images on map */ /** * Add a pattern to the renderer. * * @private * @param {PatternOptions} options The pattern options. * * @return {Object} The added pattern. Undefined if the pattern already exists. 
*/ H.SVGRenderer.prototype.addPattern = function (options, animation) { var pattern, animate = H.pick(animation, true), animationOptions = H.animObject(animate), path, defaultSize = 32, width = options.width || options._width || defaultSize, height = options.height || options._height || defaultSize, color = options.color || '#343434', id = options.id, ren = this, rect = function (fill) { ren.rect(0, 0, width, height) .attr({ fill: fill }) .add(pattern); }; if (!id) { this.idCounter = this.idCounter || 0; id = 'highcharts-pattern-' + this.idCounter; ++this.idCounter; } // Do nothing if ID already exists this.defIds = this.defIds || []; if (H.inArray(id, this.defIds) > -1) { return; } // Store ID in list to avoid duplicates this.defIds.push(id); // Create pattern element pattern = this.createElement('pattern').attr({ id: id, patternUnits: 'userSpaceOnUse', width: width, height: height, x: options._x || options.x || 0, y: options._y || options.y || 0 }).add(this.defs); // Set id on the SVGRenderer object pattern.id = id; // Use an SVG path for the pattern if (options.path) { path = options.path; // The background if (path.fill) { rect(path.fill); } // The pattern this.createElement('path').attr({ 'd': path.d || path, 'stroke': path.stroke || color, 'stroke-width': path.strokeWidth || 2 }).add(pattern); pattern.color = color; // Image pattern } else if (options.image) { if (animate) { this.image( options.image, 0, 0, width, height, function () { // Onload this.animate({ opacity: pick(options.opacity, 1) }, animationOptions); H.removeEvent(this.element, 'load'); } ).attr({ opacity: 0 }).add(pattern); } else { this.image(options.image, 0, 0, width, height).add(pattern); } } // For non-animated patterns, set opacity now if (!(options.image && animate) && options.opacity !== undefined) { each(pattern.element.childNodes, function (child) { child.setAttribute('opacity', options.opacity); }); } // Store for future reference this.patternElements = this.patternElements || {}; this.patternElements[id] = pattern; return pattern; }; /** * Make sure we have a series color */ wrap(H.Series.prototype, 'getColor', function (proceed) { var oldColor = this.options.color; // Temporarely remove color options to get defaults if (oldColor && oldColor.pattern && !oldColor.pattern.color) { delete this.options.color; // Get default proceed.apply(this, Array.prototype.slice.call(arguments, 1)); // Replace with old, but add default color oldColor.pattern.color = this.color; this.color = this.options.color = oldColor; } else { // We have a color, no need to do anything special proceed.apply(this, Array.prototype.slice.call(arguments, 1)); } }); /** * Calculate pattern dimensions on points that have their own pattern. */ wrap(H.Series.prototype, 'render', function (proceed) { var isResizing = this.chart.isResizing; if (this.isDirtyData || isResizing || !this.chart.hasRendered) { each(this.points || [], function (point) { var colorOptions = point.options && point.options.color; if (colorOptions && colorOptions.pattern) { // For most points we want to recalculate the dimensions on // render, where we have the shape args and bbox. But if we // are resizing and don't have the shape args, defer it, since // the bounding box is still not resized. 
if ( isResizing && !( point.shapeArgs && point.shapeArgs.width && point.shapeArgs.height ) ) { colorOptions.pattern._width = 'defer'; colorOptions.pattern._height = 'defer'; } else { point.calculatePatternDimensions(colorOptions.pattern); } } }); } return proceed.apply(this, Array.prototype.slice.call(arguments, 1)); }); /** * Merge series color options to points */ wrap(H.Point.prototype, 'applyOptions', function (proceed) { var point = proceed.apply(this, Array.prototype.slice.call(arguments, 1)), colorOptions = point.options.color; // Only do this if we have defined a specific color on this point. Otherwise // we will end up trying to re-add the series color for each point. if (colorOptions && colorOptions.pattern) { // Move path definition to object, allows for merge with series path // definition if (typeof colorOptions.pattern.path === 'string') { colorOptions.pattern.path = { d: colorOptions.pattern.path }; } // Merge with series options point.color = point.options.color = merge( point.series.options.color, colorOptions ); } return point; }); /** * Add functionality to SVG renderer to handle patterns as complex colors */ H.addEvent(H.SVGRenderer, 'complexColor', function (args) { var color = args.args[0], prop = args.args[1], element = args.args[2], pattern = color.pattern, value = '#343434', forceHashId; // Skip and call default if there is no pattern if (!pattern) { return true; } // We have a pattern. if ( pattern.image || typeof pattern.path === 'string' || pattern.path && pattern.path.d ) { // Real pattern. Add it and set the color value to be a reference. // Force Hash-based IDs for legend items, as they are drawn before // point render, meaning they are drawn before autocalculated image // width/heights. We don't want them to highjack the width/height for // this ID if it is defined by users. forceHashId = element.parentNode && element.parentNode.getAttribute('class'); forceHashId = forceHashId && forceHashId.indexOf('highcharts-legend') > -1; // If we don't have a width/height yet, handle it. Try faking a point // and running the algorithm again. if (pattern._width === 'defer' || pattern._height === 'defer') { H.Point.prototype.calculatePatternDimensions.call( { graphic: { element: element } }, pattern ); } // If we don't have an explicit ID, compute a hash from the // definition and use that as the ID. This ensures that points with // the same pattern definition reuse existing pattern elements by // default. We combine two hashes, the second with an additional // preSeed algorithm, to minimize collision probability. if (forceHashId || !pattern.id) { // Make a copy so we don't accidentally edit options when setting ID pattern = merge({}, pattern); pattern.id = 'highcharts-pattern-' + hashFromObject(pattern) + hashFromObject(pattern, true); } // Add it. This function does nothing if an element with this ID // already exists. this.addPattern(pattern, !this.forExport && H.pick( pattern.animation, this.globalAnimation, { duration: 100 } )); value = 'url(' + this.url + '#' + pattern.id + ')'; } else { // Not a full pattern definition, just add color value = pattern.color || value; } // Set the fill/stroke prop on the element element.setAttribute(prop, value); // Allow the color to be concatenated into tooltips formatters etc. color.toString = function () { return value; }; // Skip default handler return false; }); /** * When animation is used, we have to recalculate pattern dimensions after * resize, as the bounding boxes are not available until then. 
*/ H.addEvent(H.Chart, 'endResize', function () { if ( H.grep(this.renderer.defIds || [], function (id) { return id && id.indexOf && id.indexOf('highcharts-pattern-') === 0; }).length ) { // We have non-default patterns to fix. Find them by looping through // all points. each(this.series, function (series) { each(series.points, function (point) { var colorOptions = point.options && point.options.color; if (colorOptions && colorOptions.pattern) { colorOptions.pattern._width = 'defer'; colorOptions.pattern._height = 'defer'; } }); }); // Redraw without animation this.redraw(false); } }); /** * Add a garbage collector to delete old patterns with autogenerated hashes that * are no longer being referenced. */ H.addEvent(H.Chart, 'redraw', function () { var usedIds = [], renderer = this.renderer, // Get the autocomputed patterns - these are the ones we might delete patterns = H.grep(renderer.defIds || [], function (pattern) { return pattern.indexOf && pattern.indexOf('highcharts-pattern-') === 0; }); if (patterns.length) { // Look through the DOM for usage of the patterns. This can be points, // series, tooltips etc. each(this.renderTo.querySelectorAll( '[color^="url(#"], [fill^="url(#"], [stroke^="url(#"]' ), function (node) { var id = node.getAttribute('fill') || node.getAttribute('color') || node.getAttribute('stroke'); if (id) { usedIds.push(id .substring(id.indexOf('url(#') + 5) .replace(')', '') ); } }); // Loop through the patterns that exist and see if they are used each(patterns, function (id) { if (H.inArray(id, usedIds) === -1) { // Remove id from used id list H.erase(renderer.defIds, id); // Remove pattern element if (renderer.patternElements[id]) { renderer.patternElements[id].destroy(); delete renderer.patternElements[id]; } } }); } }); /** * Add the predefined patterns */ H.Chart.prototype.callbacks.push(function (chart) { var colors = H.getOptions().colors; each([ 'M 0 0 L 10 10 M 9 -1 L 11 1 M -1 9 L 1 11', 'M 0 10 L 10 0 M -1 1 L 1 -1 M 9 11 L 11 9', 'M 3 0 L 3 10 M 8 0 L 8 10', 'M 0 3 L 10 3 M 0 8 L 10 8', 'M 0 3 L 5 3 L 5 0 M 5 10 L 5 7 L 10 7', 'M 3 3 L 8 3 L 8 8 L 3 8 Z', 'M 5 5 m -4 0 a 4 4 0 1 1 8 0 a 4 4 0 1 1 -8 0', 'M 10 3 L 5 3 L 5 0 M 5 10 L 5 7 L 0 7', 'M 2 5 L 5 2 L 8 5 L 5 8 Z', 'M 0 0 L 5 10 L 10 0' ], function (pattern, i) { chart.renderer.addPattern({ id: 'highcharts-default-pattern-' + i, path: pattern, color: colors[i], width: 10, height: 10 }); }); }); }(Highcharts)); return (function () { }()); }));
PypiClean
/mdshare-0.4.2.tar.gz/mdshare-0.4.2/bin/mdshare-index-maker.py
# This file is part of the markovmodel/mdshare project. # Copyright (C) 2017-2019 Computational Molecular Biology Group, # Freie Universitaet Berlin (GER) # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. from mdshare import fetch, Repository from mdshare.utils import file_hash from argparse import ArgumentParser from yaml import load, dump import fnmatch import tarfile import os def filter_files(files, patterns): """Keep only those files which match at least on pattern""" include = set() for pattern in patterns: match = fnmatch.filter(files, pattern) include = include | set(match) return list(sorted(include)) def get_metadata(file): """Get a dict with file hash and size""" return dict( hash=file_hash(file), size=os.path.getsize(file)) def make_container(container, files): """Make a .tar.gz container from a list of files""" with tarfile.open(container, 'w:gz') as fh: for file in files: fh.add(file) def build(template_file): """Build the catalogues from the given template""" with open(template_file, 'r') as fh: template = load(fh) for key in ('url', 'include', 'containers'): if key not in template: raise RuntimeError(f'Cannot build without {key} key') db = dict( url=template['url'], index=dict(), containers=dict()) files = filter_files(os.listdir(), template['include']) for file in files: db['index'].update({file: get_metadata(file)}) for container, patterns in template['containers'].items(): make_container(container, filter_files(files, patterns)) db['containers'].update({container: get_metadata(container)}) catalogue = f'{template["name"]}.yaml' with open(catalogue, 'w') as fh: fh.write(dump(db)) checksum = f'{template["name"]}.md5' with open(checksum, 'w') as fh: fh.write(file_hash(catalogue)) print(f'catalogue written to: {catalogue}') print(f'checksum written to: {checksum}') def test(catalogue_file, checksum_file): repository = Repository(catalogue_file, checksum_file) working_directory = 'mdshare-testing-area' os.mkdir(working_directory) for file in repository.index: local_file = fetch( file, working_directory=working_directory, repository=repository) os.remove(local_file) for file in repository.containers: local_files = fetch( file, working_directory=working_directory, repository=repository) try: os.remove(local_files) except TypeError: for local_file in local_files: os.remove(local_file) os.rmdir(working_directory) if __name__ == '__main__': parser = ArgumentParser() parser.add_argument( 'mode', help='action to take [ build | test ]', metavar='MODE') parser.add_argument( 'yaml', help='yaml file with catalogue or catalogue template', metavar='FILE') parser.add_argument( 'md5', help='md5 checksum file of the catalogue', metavar='FILE', nargs='?') args = parser.parse_args() if args.mode.lower() == 'build': build(args.yaml) elif args.mode.lower() == 'test': test(args.yaml, args.md5) else: raise ValueError(f'Unsupported mode: {args.mode}')
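The build() routine above is driven entirely by a YAML template that must carry name, url, include, and containers keys. A minimal sketch of generating such a template and feeding it to the script (the dataset name, base URL, and glob patterns below are hypothetical placeholders, not values shipped with mdshare):

# Sketch only: 'my-dataset', the URL, and the patterns are illustrative.
import yaml

template = {
    'name': 'my-dataset',                    # catalogue is written to my-dataset.yaml / my-dataset.md5
    'url': 'https://example.org/mdshare/',   # base URL that fetch() downloads from
    'include': ['*.npz', '*.xtc'],           # picked up by filter_files() in the working directory
    'containers': {
        'trajectories.tar.gz': ['*.xtc'],    # each container bundles the files matching its patterns
    },
}

with open('template.yaml', 'w') as fh:
    yaml.safe_dump(template, fh)

# Then, from the directory holding the data files:
#     python mdshare-index-maker.py build template.yaml
# build() hashes every included file, writes the .tar.gz containers, and
# emits the catalogue plus its md5 checksum next to them.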
PypiClean
/xuzhao_markdown_editor-1.1.2-py3-none-any.whl/xuzhao_markdown/static/xuzhao_markdown/plugins/link-dialog/link-dialog.js
(function() { var factory = function (exports) { var pluginName = "link-dialog"; exports.fn.linkDialog = function() { var _this = this; var cm = this.cm; var editor = this.editor; var settings = this.settings; var selection = cm.getSelection(); var lang = this.lang; var linkLang = lang.dialog.link; var classPrefix = this.classPrefix; var dialogName = classPrefix + pluginName, dialog; cm.focus(); if (editor.find("." + dialogName).length > 0) { dialog = editor.find("." + dialogName); dialog.find("[data-url]").val("http://"); dialog.find("[data-title]").val(selection); this.dialogShowMask(dialog); this.dialogLockScreen(); dialog.show(); } else { var dialogHTML = "<div class=\"" + classPrefix + "form\">" + "<label>" + linkLang.url + "</label>" + "<input type=\"text\" value=\"http://\" data-url />" + "<br/>" + "<label>" + linkLang.urlTitle + "</label>" + "<input type=\"text\" value=\"" + selection + "\" data-title />" + "<br/>" + "</div>"; dialog = this.createDialog({ title : linkLang.title, width : 380, height : 211, content : dialogHTML, mask : settings.dialogShowMask, drag : settings.dialogDraggable, lockScreen : settings.dialogLockScreen, maskStyle : { opacity : settings.dialogMaskOpacity, backgroundColor : settings.dialogMaskBgColor }, buttons : { enter : [lang.buttons.enter, function() { var url = this.find("[data-url]").val(); var title = this.find("[data-title]").val(); if (url === "http://" || url === "") { alert(linkLang.urlEmpty); return false; } /*if (title === "") { alert(linkLang.titleEmpty); return false; }*/ var str = "[" + title + "](" + url + " \"" + title + "\")"; if (title == "") { str = "[" + url + "](" + url + ")"; } cm.replaceSelection(str); this.hide().lockScreen(false).hideMask(); return false; }], cancel : [lang.buttons.cancel, function() { this.hide().lockScreen(false).hideMask(); return false; }] } }); } }; }; // CommonJS/Node.js if (typeof require === "function" && typeof exports === "object" && typeof module === "object") { module.exports = factory; } else if (typeof define === "function") // AMD/CMD/Sea.js { if (define.amd) { // for Require.js define(["editormd"], function(editormd) { factory(editormd); }); } else { // for Sea.js define(function(require) { var editormd = require("./../../editormd"); factory(editormd); }); } } else { factory(window.editormd); } })();
PypiClean
/ScatPy-0.1.1.tar.gz/ScatPy-0.1.1/README.txt
************************************** ScatPy -- A Python interface to DDSCAT ************************************** ScatPy is a Python package for interfacing to the popular scattering simulator `DDSCAT <http://www.astro.princeton.edu/~draine/DDSCAT.html>`_. ScatPy provides a rich toolset to: * Create standard DDSCAT scattering targets based on physical (rather than dipole) dimensions * Construct and visualize complex custom scattering targets * Manage the job parameters found in the ddscat.par file * Organize iterative jobs requiring multiple targets or input parameters * Script job submission to cluster queue managers * Maintain profiles and defaults for deployment on platforms other than the local machine * Load, plot and manipulate DDSCAT output tables * Manage the output from multiple jobs through results collections * Work with and visualize nearfield results as multidimensional numpy arrays * Suitable for interactive or scripted use Documentation ============= Complete documentation can be found at: http://pythonhosted.org/ScatPy Download ======== The package can be downloaded for installation via easy_install at https://pypi.python.org/pypi/ScatPy Example ======= .. code:: python from ScatPy import * # Establish target geometry (in um) length = 0.100 radius = 0.020 target = targets.CYLNDRCAP(length, radius, d=0.005, material='Au_Palik.txt') # Create a job to be run in the subdirectory tmp/ job = DDscat(folder = './tmp', target=target) # Change the range of calculated wavelengths and ambient index job.settings.wavelengths = ranges.How_Range(0.300, 0.600, 15) job.settings.NAMBIENT = 1.0 # Run the job locally job.calculate() # Open the results qtable, plot Q_sca, and Q_abs, and add a legend ans = results.QTable(folder = './tmp') ax = ans.plot(['Q_sca', 'Q_abs']) ax.legend(loc=0)
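The example above runs a single job; the same objects also cover the iterative workflows listed in the feature set. A minimal sketch of sweeping the cap radius, reusing only calls that appear in the example above (the folder names and radius values are illustrative):

.. code:: python

    from ScatPy import *

    radii = [0.010, 0.020, 0.030]  # illustrative sweep values (um)

    for i, radius in enumerate(radii):
        target = targets.CYLNDRCAP(0.100, radius, d=0.005, material='Au_Palik.txt')
        job = DDscat(folder=f'./sweep_{i}', target=target)
        job.settings.wavelengths = ranges.How_Range(0.300, 0.600, 15)
        job.calculate()

    # Compare scattering efficiencies across the sweep
    for i in range(len(radii)):
        results.QTable(folder=f'./sweep_{i}').plot(['Q_sca'])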
PypiClean
/depthai_pipeline_graph-0.0.5-py3-none-any.whl/depthai_pipeline_graph/NodeGraphQt/widgets/scene.py
from Qt import QtGui, QtCore, QtWidgets from ..constants import ViewerEnum class NodeScene(QtWidgets.QGraphicsScene): def __init__(self, parent=None): super(NodeScene, self).__init__(parent) self._grid_mode = ViewerEnum.GRID_DISPLAY_LINES.value self._grid_color = ViewerEnum.GRID_COLOR.value self._bg_color = ViewerEnum.BACKGROUND_COLOR.value self.setBackgroundBrush(QtGui.QColor(*self._bg_color)) def __repr__(self): cls_name = str(self.__class__.__name__) return '<{}("{}") object at {}>'.format( cls_name, self.viewer(), hex(id(self))) # def _draw_text(self, painter, pen): # font = QtGui.QFont() # font.setPixelSize(48) # painter.setFont(font) # parent = self.viewer() # pos = QtCore.QPoint(20, parent.height() - 20) # painter.setPen(pen) # painter.drawText(parent.mapToScene(pos), 'Not Editable') def _draw_grid(self, painter, rect, pen, grid_size): """ draws the grid lines in the scene. Args: painter (QtGui.QPainter): painter object. rect (QtCore.QRectF): rect object. pen (QtGui.QPen): pen object. grid_size (int): grid size. """ left = int(rect.left()) right = int(rect.right()) top = int(rect.top()) bottom = int(rect.bottom()) first_left = left - (left % grid_size) first_top = top - (top % grid_size) lines = [] lines.extend([ QtCore.QLineF(x, top, x, bottom) for x in range(first_left, right, grid_size) ]) lines.extend([ QtCore.QLineF(left, y, right, y) for y in range(first_top, bottom, grid_size)] ) painter.setPen(pen) painter.drawLines(lines) def _draw_dots(self, painter, rect, pen, grid_size): """ draws the grid dots in the scene. Args: painter (QtGui.QPainter): painter object. rect (QtCore.QRectF): rect object. pen (QtGui.QPen): pen object. grid_size (int): grid size. """ zoom = self.viewer().get_zoom() if zoom < 0: grid_size = int(abs(zoom) / 0.3 + 1) * grid_size left = int(rect.left()) right = int(rect.right()) top = int(rect.top()) bottom = int(rect.bottom()) first_left = left - (left % grid_size) first_top = top - (top % grid_size) pen.setWidth(grid_size / 10) painter.setPen(pen) [painter.drawPoint(int(x), int(y)) for x in range(first_left, right, grid_size) for y in range(first_top, bottom, grid_size)] def drawBackground(self, painter, rect): super(NodeScene, self).drawBackground(painter, rect) painter.save() painter.setRenderHint(QtGui.QPainter.Antialiasing, False) painter.setBrush(self.backgroundBrush()) if self._grid_mode is ViewerEnum.GRID_DISPLAY_DOTS.value: pen = QtGui.QPen(QtGui.QColor(*self.grid_color), 0.65) self._draw_dots(painter, rect, pen, ViewerEnum.GRID_SIZE.value) elif self._grid_mode is ViewerEnum.GRID_DISPLAY_LINES.value: zoom = self.viewer().get_zoom() if zoom > -0.5: pen = QtGui.QPen(QtGui.QColor(*self.grid_color), 0.65) self._draw_grid( painter, rect, pen, ViewerEnum.GRID_SIZE.value ) color = QtGui.QColor(*self._bg_color).darker(150) if zoom < -0.0: color = color.darker(100 - int(zoom * 110)) pen = QtGui.QPen(color, 0.65) self._draw_grid( painter, rect, pen, ViewerEnum.GRID_SIZE.value * 8 ) painter.restore() def mousePressEvent(self, event): selected_nodes = self.viewer().selected_nodes() if self.viewer(): self.viewer().sceneMousePressEvent(event) super(NodeScene, self).mousePressEvent(event) keep_selection = any([ event.button() == QtCore.Qt.MiddleButton, event.button() == QtCore.Qt.RightButton, event.modifiers() == QtCore.Qt.AltModifier ]) if keep_selection: for node in selected_nodes: node.setSelected(True) def mouseMoveEvent(self, event): if self.viewer(): self.viewer().sceneMouseMoveEvent(event) super(NodeScene, self).mouseMoveEvent(event) def 
mouseReleaseEvent(self, event): if self.viewer(): self.viewer().sceneMouseReleaseEvent(event) super(NodeScene, self).mouseReleaseEvent(event) def viewer(self): return self.views()[0] if self.views() else None @property def grid_mode(self): return self._grid_mode @grid_mode.setter def grid_mode(self, mode=None): if mode is None: mode = ViewerEnum.GRID_DISPLAY_LINES.value self._grid_mode = mode @property def grid_color(self): return self._grid_color @grid_color.setter def grid_color(self, color=(0, 0, 0)): self._grid_color = color @property def background_color(self): return self._bg_color @background_color.setter def background_color(self, color=(0, 0, 0)): self._bg_color = color self.setBackgroundBrush(QtGui.QColor(*self._bg_color))
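NodeScene drives all of its painting from three properties (grid_mode, grid_color, background_color) plus the attached viewer's zoom level. A minimal sketch of restyling the scene; it assumes the package layout shown in the file path above, and actual painting additionally requires a NodeGraphQt viewer that implements get_zoom(), so a bare QGraphicsView is not sufficient:

# Sketch only: import paths follow the depthai_pipeline_graph layout above.
from Qt import QtWidgets  # Qt.py shim used by the module itself

from depthai_pipeline_graph.NodeGraphQt.widgets.scene import NodeScene
from depthai_pipeline_graph.NodeGraphQt.constants import ViewerEnum

app = QtWidgets.QApplication([])
scene = NodeScene()

# Switch from the default line grid to dots and restyle it.
scene.grid_mode = ViewerEnum.GRID_DISPLAY_DOTS.value
scene.grid_color = (80, 80, 80)
scene.background_color = (30, 30, 30)  # setter also refreshes the background brush

# The same properties are read back by drawBackground() on every repaint.
print(scene.grid_mode, scene.grid_color, scene.background_color)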
PypiClean
/repair_seq-1.0.3.tar.gz/repair_seq-1.0.3/repair_seq/arrayed_experiment_group.py
import gzip import itertools import time import shutil import warnings from pathlib import Path from collections import defaultdict, Counter import numpy as np import pandas as pd import pysam from ipywidgets import Layout, Select import knock_knock.explore import knock_knock.outcome from hits import utilities, sam from . import prime_editing_experiment, single_end_experiment, paired_end_experiment import repair_seq.experiment_group memoized_property = utilities.memoized_property class Batch: def __init__(self, base_dir, batch, category_groupings=None, baseline_condition=None, add_pseudocount=False, only_edited=False, progress=None, ): self.base_dir = Path(base_dir) self.batch = batch self.data_dir = self.base_dir / 'data' / batch if progress is None or getattr(progress, '_silent', False): def ignore_kwargs(x, **kwargs): return x progress = ignore_kwargs self.progress = progress self.category_groupings = category_groupings self.baseline_condition = baseline_condition self.add_pseudocount = add_pseudocount self.only_edited = only_edited self.sample_sheet_fn = self.data_dir / 'sample_sheet.csv' self.sample_sheet = pd.read_csv(self.sample_sheet_fn, index_col='sample_name') self.group_descriptions_fn = self.data_dir / 'group_descriptions.csv' self.group_descriptions = pd.read_csv(self.group_descriptions_fn, index_col='group').replace({np.nan: None}) self.condition_colors_fn = self.data_dir / 'condition_colors.csv' if self.condition_colors_fn.exists(): self.condition_colors = pd.read_csv(self.condition_colors_fn, index_col='perturbation', squeeze=True) else: self.condition_colors = None def __repr__(self): return f'Batch: {self.batch}, base_dir={self.base_dir}' @property def group_names(self): return self.sample_sheet['group'].unique() def group(self, group_name): return ArrayedExperimentGroup(self.base_dir, self.batch, group_name, category_groupings=self.category_groupings, baseline_condition=self.baseline_condition, add_pseudocount=self.add_pseudocount, only_edited=self.only_edited, progress=self.progress, ) @memoized_property def groups(self): groups = {group_name: self.group(group_name) for group_name in self.group_names} return groups def group_query(self, query_string): groups = [] for group_name, row in self.group_descriptions.query(query_string).iterrows(): groups.append(self.groups[group_name]) return groups def experiment_query(self, query_string): exps = [] for sample_name, row in self.sample_sheet.query(query_string).iterrows(): group = self.groups[row['group']] exp = group.sample_name_to_experiment(sample_name) exps.append(exp) return exps def copy_snapshot(self, new_base_dir, new_batch_name=None, groups_to_include=None, include_target_infos=True, ): if new_batch_name is None: new_batch_name = self.batch if groups_to_include is None: groups_to_include = {group_name: group_name for group_name in self.groups} new_base_dir = Path(new_base_dir) # Out of paranoia, make sure that new_base_dir is different # than this pool's base_dir since existing dirs will be deleted. if str(new_base_dir) == str(self.base_dir): raise ValueError('Attempted to copy to same base dir.') new_results_dir = new_base_dir / 'results' / new_batch_name new_data_dir = new_base_dir / 'data' / new_batch_name for new_dir in [new_results_dir, new_data_dir]: if new_dir.exists(): shutil.rmtree(new_dir) new_dir.mkdir() for new_group_name in groups_to_include.values(): (new_results_dir / new_group_name).mkdir() # Copy relevant results files. 
fns_to_copy = [ 'outcome_counts', 'total_outcome_counts', ] for old_group_name, new_group_name in groups_to_include.items(): old_group = self.groups[old_group_name] for fn_key in fns_to_copy: old_fn = old_group.fns[fn_key] new_fn = new_results_dir / new_group_name / old_fn.name shutil.copy(old_fn, new_fn) # Copy group descriptions. new_group_descriptions = self.group_descriptions.loc[sorted(groups_to_include)].copy() new_group_descriptions.index = [groups_to_include[name] for name in new_group_descriptions.index] new_group_descriptions.index.name = 'group' # Convoluted way of blanking supplmental_indices - '' will be parsed as nan, then coverted to None, # then converted to []. new_group_descriptions['supplemental_indices'] = '' new_group_descriptions.to_csv(new_data_dir / 'group_descriptions.csv') # Copy sample sheet. new_sample_sheet_fn = new_data_dir / 'sample_sheet.csv' new_sample_sheet = self.sample_sheet.query('group in @groups_to_include').copy() new_sample_sheet['group'] = new_sample_sheet['group'].replace(groups_to_include) new_sample_sheet.to_csv(new_sample_sheet_fn) ## Copy the pool sample sheet, wiping any value of supplemental_indices. #sample_sheet = copy.deepcopy(self.sample_sheet) #sample_sheet['supplemental_indices'] = [] #new_sample_sheet_fn = new_snapshot_dir / self.sample_sheet_fn.name #new_sample_sheet_fn.write_text(yaml.safe_dump(sample_sheet, default_flow_style=False)) if include_target_infos: for old_group_name in groups_to_include: old_group = self.groups[old_group_name] new_target_info_dir = new_base_dir / 'targets' / old_group.target_info.name if new_target_info_dir.exists(): shutil.rmtree(new_target_info_dir) shutil.copytree(old_group.target_info.dir, new_target_info_dir) def get_batch(base_dir, batch_name, progress=None, **kwargs): group_dir = Path(base_dir) / 'data' / batch_name group_descriptions_fn = group_dir / 'group_descriptions.csv' if group_descriptions_fn.exists(): batch = Batch(base_dir, batch_name, progress, **kwargs) else: batch = None return batch def get_all_batches(base_dir=Path.home() / 'projects' / 'repair_seq', progress=None, **kwargs): possible_batch_dirs = [p for p in (Path(base_dir) / 'data').iterdir() if p.is_dir()] batches = {} for possible_batch_dir in sorted(possible_batch_dirs): batch_name = possible_batch_dir.name batch = get_batch(base_dir, batch_name, progress=progress, **kwargs) if batch is not None: batches[batch_name] = batch return batches def get_all_experiments(base_dir=Path.home() / 'projects' / 'repair_seq', progress=None, conditions=None, **kwargs): if conditions is None: conditions = {} batches = get_all_batches(base_dir, progress, **kwargs) exps = {} for batch_name, batch in batches.items(): if 'batch' in conditions and batch_name not in conditions['batch']: continue for sample_name, row in batch.sample_sheet.iterrows(): group = batch.groups[row['group']] exps[batch_name, group.group, sample_name] = group.sample_name_to_experiment(sample_name) return exps class ArrayedExperimentGroup(repair_seq.experiment_group.ExperimentGroup): def __init__(self, base_dir, batch, group, category_groupings=None, progress=None, baseline_condition=None, add_pseudocount=None, only_edited=False, ): self.base_dir = Path(base_dir) self.batch = batch self.group = group self.category_groupings = category_groupings self.add_pseudocount = add_pseudocount self.only_edited = only_edited self.group_args = (base_dir, batch, group) super().__init__() if progress is None or getattr(progress, '_silent', False): def ignore_kwargs(x, **kwargs): return x 
progress = ignore_kwargs self.silent = True self.progress = progress self.Batch = Batch(self.base_dir, self.batch) self.batch_sample_sheet = self.Batch.sample_sheet self.sample_sheet = self.batch_sample_sheet.query('group == @self.group').copy() self.description = self.Batch.group_descriptions.loc[self.group].copy() self.condition_keys = self.description['condition_keys'].split(';') self.full_condition_keys = tuple(self.condition_keys + ['replicate']) if baseline_condition is not None: self.baseline_condition = baseline_condition else: self.baseline_condition = tuple(self.description['baseline_condition'].split(';')) self.experiment_type = self.description['experiment_type'] self.ExperimentType, self.CommonSequencesExperimentType = arrayed_specialized_experiment_factory(self.experiment_type) self.outcome_index_levels = ('category', 'subcategory', 'details') self.outcome_column_levels = self.full_condition_keys def condition_from_row(row): condition = tuple(row[key] for key in self.condition_keys) if len(condition) == 1: condition = condition[0] return condition def full_condition_from_row(row): return tuple(row[key] for key in self.full_condition_keys) self.full_conditions = [full_condition_from_row(row) for _, row in self.sample_sheet.iterrows()] conditions_are_unique = len(set(self.full_conditions)) == len(self.full_conditions) if not conditions_are_unique: print(f'{self}\nconditions are not unique:') for k, v in Counter(self.full_conditions).most_common(): print(k, v) raise ValueError self.full_condition_to_sample_name = {full_condition_from_row(row): sample_name for sample_name, row in self.sample_sheet.iterrows()} self.conditions = sorted(set(c[:-1] for c in self.full_conditions)) # Indexing breaks if it is a length 1 tuple. if len(self.condition_keys) == 1: self.baseline_condition = self.baseline_condition[0] self.conditions = [c[0] for c in self.conditions] self.sample_names = sorted(self.sample_sheet.index) self.condition_to_sample_names = defaultdict(list) for sample_name, row in self.sample_sheet.iterrows(): condition = condition_from_row(row) self.condition_to_sample_names[condition].append(sample_name) def __repr__(self): return f'ArrayedExperimentGroup: batch={self.batch}, group={self.group}, base_dir={self.base_dir}' @memoized_property def data_dir(self): return self.base_dir / 'data' / self.batch @memoized_property def results_dir(self): return self.base_dir / 'results' / self.batch / self.group def experiments(self, no_progress=False): for sample_name in self.sample_names: yield self.sample_name_to_experiment(sample_name, no_progress=no_progress) @memoized_property def first_experiment(self): return next(self.experiments()) @property def preprocessed_read_type(self): return self.first_experiment.preprocessed_read_type @property def categorizer(self): return self.first_experiment.categorizer @property def layout_mode(self): return self.first_experiment.layout_mode @property def target_info(self): return self.first_experiment.target_info @property def diagram_kwargs(self): return self.first_experiment.diagram_kwargs def common_sequence_chunk_exp_from_name(self, chunk_name): chunk_exp = self.CommonSequencesExperimentType(self.base_dir, self.batch, self.group, chunk_name, experiment_group=self, description=self.description, ) return chunk_exp @memoized_property def num_experiments(self): return len(self.sample_sheet) def condition_replicates(self, condition): sample_names = self.condition_to_sample_names[condition] return [self.sample_name_to_experiment(sample_name) for 
sample_name in sample_names] def sample_name_to_experiment(self, sample_name, no_progress=False): if no_progress: progress = None else: progress = self.progress exp = self.ExperimentType(self.base_dir, self.batch, self.group, sample_name, experiment_group=self, progress=progress) return exp @memoized_property def full_condition_to_experiment(self): return {full_condition: self.sample_name_to_experiment(sample_name) for full_condition, sample_name in self.full_condition_to_sample_name.items()} def extract_genomic_insertion_length_distributions(self): length_distributions = {} for condition, exp in self.progress(self.full_condition_to_experiment.items()): for organism in ['hg19', 'bosTau7']: key = (*condition, organism) length_distributions[key] = np.zeros(1600) for outcome in exp.outcome_iter(): if outcome.category == 'genomic insertion': organism = outcome.subcategory lti = knock_knock.outcome.LongTemplatedInsertionOutcome.from_string(outcome.details) key = (*condition, organism) length_distributions[key][lti.insertion_length()] += 1 length_distributions_df = pd.DataFrame(length_distributions).T length_distributions_df.index.names = list(self.outcome_column_levels) + ['organism'] # Normalize to number of valid reads in each sample. length_distributions_df = length_distributions_df.div(self.total_valid_reads, axis=0) length_distributions_df = length_distributions_df.reorder_levels(['organism'] + list(self.outcome_column_levels)) length_distributions_df.to_csv(self.fns['genomic_insertion_length_distributions']) @memoized_property def genomic_insertion_length_distributions(self): num_index_cols = len(self.outcome_column_levels) + 1 df = pd.read_csv(self.fns['genomic_insertion_length_distributions'], index_col=list(range(num_index_cols))) df.columns = [int(c) for c in df.columns] return df @memoized_property def outcome_counts(self): # Ignore nonspecific amplification products in denominator of any outcome fraction calculations. to_drop = ['nonspecific amplification', 'bad sequence'] # Empirically, overall editing rates can vary considerably across arrayed # experiments, presumably due to nucleofection efficiency. If self.only_edited # is true, exlcude unedited reads from outcome counting. if self.only_edited: to_drop.append('wild type') outcome_counts = self.outcome_counts_df(False).drop(to_drop, errors='ignore') # Sort columns to avoid annoying pandas PerformanceWarnings. outcome_counts = outcome_counts.sort_index(axis='columns') return outcome_counts @memoized_property def outcome_counts_with_bad(self): outcome_counts = self.outcome_counts_df(False) # Sort columns to avoid annoying pandas PerformanceWarnings. 
outcome_counts = outcome_counts.sort_index(axis='columns') return outcome_counts @memoized_property def total_valid_reads(self): return self.outcome_counts.sum() @memoized_property def outcome_fractions(self): fractions = self.outcome_counts / self.total_valid_reads order = fractions[self.baseline_condition].mean(axis='columns').sort_values(ascending=False).index fractions = fractions.loc[order] return fractions @memoized_property def outcome_fractions_with_bad(self): return self.outcome_counts / self.outcome_counts.sum() @memoized_property def outcome_fraction_condition_means(self): return self.outcome_fractions.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def outcome_fraction_baseline_means(self): return self.outcome_fraction_condition_means[self.baseline_condition] @memoized_property def outcome_fraction_condition_stds(self): return self.outcome_fractions.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def outcomes_by_baseline_frequency(self): return self.outcome_fraction_baseline_means.sort_values(ascending=False).index.values @memoized_property def outcome_fraction_differences(self): return self.outcome_fractions.sub(self.outcome_fraction_baseline_means, axis=0) @memoized_property def outcome_fraction_difference_condition_means(self): return self.outcome_fraction_differences.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def outcome_fraction_difference_condition_stds(self): return self.outcome_fraction_differences.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def log2_fold_changes(self): # Using the warnings context manager doesn't work here, maybe because of pandas multithreading? warnings.filterwarnings('ignore') fold_changes = self.outcome_fractions.div(self.outcome_fraction_baseline_means, axis=0) log2_fold_changes = np.log2(fold_changes) warnings.resetwarnings() return log2_fold_changes @memoized_property def log2_fold_change_condition_means(self): return self.log2_fold_changes.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def log2_fold_change_condition_stds(self): return self.log2_fold_changes.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def category_fractions(self): fs = self.outcome_fractions.groupby(level='category').sum() if self.category_groupings is not None: only_relevant_cats = pd.Index.difference(fs.index, self.category_groupings['not_relevant']) relevant_but_not_specific_cats = pd.Index.difference(only_relevant_cats, self.category_groupings['specific']) only_relevant = fs.loc[only_relevant_cats] only_relevant_normalized = only_relevant / only_relevant.sum() relevant_but_not_specific = only_relevant_normalized.loc[relevant_but_not_specific_cats].sum() grouped = only_relevant_normalized.loc[self.category_groupings['specific']] grouped.loc['all others'] = relevant_but_not_specific fs = grouped if self.add_pseudocount: reads_per_sample = self.outcome_counts.drop(self.category_groupings['not_relevant'], errors='ignore').sum() counts = fs * reads_per_sample counts += 1 fs = counts / counts.sum() return fs @memoized_property def category_fraction_condition_means(self): return self.category_fractions.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def category_fraction_baseline_means(self): return self.category_fraction_condition_means[self.baseline_condition] @memoized_property def category_fraction_condition_stds(self): return 
self.category_fractions.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def categories_by_baseline_frequency(self): return self.category_fraction_baseline_means.sort_values(ascending=False).index.values @memoized_property def category_fraction_differences(self): return self.category_fractions.sub(self.category_fraction_baseline_means, axis=0) @memoized_property def category_fraction_difference_condition_means(self): return self.category_fraction_differences.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def category_fraction_difference_condition_stds(self): return self.category_fraction_differences.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def category_log2_fold_changes(self): # Using the warnings context manager doesn't work here, maybe because of pandas multithreading? warnings.filterwarnings('ignore') fold_changes = self.category_fractions.div(self.category_fraction_baseline_means, axis=0) log2_fold_changes = np.log2(fold_changes) warnings.resetwarnings() return log2_fold_changes @memoized_property def category_log2_fold_change_condition_means(self): # calculate mean in linear space, not log space fold_changes = self.category_fraction_condition_means.div(self.category_fraction_baseline_means, axis=0) return np.log2(fold_changes) @memoized_property def category_log2_fold_change_condition_stds(self): # calculate effective log2 fold change of mean +/- std in linear space means = self.category_fraction_condition_means stds = self.category_fraction_condition_stds baseline_means = self.category_fraction_baseline_means return { 'lower': np.log2((means - stds).div(baseline_means, axis=0)), 'upper': np.log2((means + stds).div(baseline_means, axis=0)), } # TODO: figure out how to avoid this hideous code duplication. @memoized_property def subcategory_fractions(self): return self.outcome_fractions.groupby(level=['category', 'subcategory']).sum() @memoized_property def subcategory_fraction_condition_means(self): return self.subcategory_fractions.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def subcategory_fraction_baseline_means(self): return self.subcategory_fraction_condition_means[self.baseline_condition] @memoized_property def subcategory_fraction_condition_stds(self): return self.subcategory_fractions.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def subcategories_by_baseline_frequency(self): return self.subcategory_fraction_baseline_means.sort_values(ascending=False).index.values @memoized_property def subcategory_fraction_differences(self): return self.subcategory_fractions.sub(self.subcategory_fraction_baseline_means, axis=0) @memoized_property def subcategory_fraction_difference_condition_means(self): return self.subcategory_fraction_differences.groupby(axis='columns', level=self.condition_keys).mean() @memoized_property def subcategory_fraction_difference_condition_stds(self): return self.subcategory_fraction_differences.groupby(axis='columns', level=self.condition_keys).std() @memoized_property def subcategory_log2_fold_changes(self): # Using the warnings context manager doesn't work here, maybe because of pandas multithreading? 
warnings.filterwarnings('ignore') fold_changes = self.subcategory_fractions.div(self.subcategory_fraction_baseline_means, axis=0) log2_fold_changes = np.log2(fold_changes) warnings.resetwarnings() return log2_fold_changes @memoized_property def subcategory_log2_fold_change_condition_means(self): # calculate mean in linear space, not log space fold_changes = self.subcategory_fraction_condition_means.div(self.subcategory_fraction_baseline_means, axis=0) return np.log2(fold_changes) @memoized_property def subcategory_log2_fold_change_condition_stds(self): # calculate effective log2 fold change of mean +/- std in linear space means = self.subcategory_fraction_condition_means stds = self.subcategory_fraction_condition_stds baseline_means = self.subcategory_fraction_baseline_means return { 'lower': np.log2((means - stds).div(baseline_means, axis=0)), 'upper': np.log2((means + stds).div(baseline_means, axis=0)), } # Duplication of code in pooled_screen def donor_outcomes_containing_SNV(self, SNV_name): ti = self.target_info SNV_index = sorted(ti.donor_SNVs['target']).index(SNV_name) donor_base = ti.donor_SNVs['donor'][SNV_name]['base'] nt_fracs = self.outcome_fraction_baseline_means outcomes = [(c, s, d) for c, s, d in nt_fracs.index.values if c == 'donor' and d[SNV_index] == donor_base] return outcomes @memoized_property def conversion_fractions(self): conversion_fractions = {} SNVs = self.target_info.donor_SNVs['target'] outcome_fractions = self.outcome_fractions for SNV_name in SNVs: outcomes = self.donor_outcomes_containing_SNV(SNV_name) fractions = outcome_fractions.loc[outcomes].sum() conversion_fractions[SNV_name] = fractions conversion_fractions = pd.DataFrame.from_dict(conversion_fractions, orient='index').sort_index() return conversion_fractions def explore(self, **kwargs): explorer = ArrayedGroupExplorer(self, **kwargs) return explorer.layout class ArrayedExperiment: def __init__(self, base_dir, batch, group, sample_name, experiment_group=None): if experiment_group is None: experiment_group = ArrayedExperimentGroup(base_dir, batch, group, type(self)) self.base_dir = Path(base_dir) self.batch = batch self.group = group self.sample_name = sample_name self.experiment_group = experiment_group self.has_UMIs = False @property def default_read_type(self): # None required to trigger check for common sequence in alignment_groups return None def load_description(self): description = self.experiment_group.sample_sheet.loc[self.sample_name].to_dict() for key, value in self.experiment_group.description.items(): description[key] = value return description @memoized_property def data_dir(self): return self.experiment_group.data_dir def make_nonredundant_sequence_fastq(self): # Extract reads with sequences that weren't seen more than once across the group. 
fn = self.fns_by_read_type['fastq']['nonredundant'] with gzip.open(fn, 'wt', compresslevel=1) as fh: for read in self.reads_by_type(self.preprocessed_read_type): if read.seq not in self.experiment_group.common_sequence_to_outcome: fh.write(str(read)) @memoized_property def results_dir(self): return self.experiment_group.results_dir / self.sample_name @memoized_property def seq_to_outcome(self): seq_to_outcome = self.experiment_group.common_sequence_to_outcome for seq, outcome in seq_to_outcome.items(): outcome.special_alignment = self.experiment_group.common_name_to_special_alignment.get(outcome.query_name) return seq_to_outcome @memoized_property def seq_to_alignments(self): return self.experiment_group.common_sequence_to_alignments @memoized_property def combined_header(self): return sam.get_header(self.fns_by_read_type['bam_by_name']['nonredundant']) def alignment_groups(self, fn_key='bam_by_name', outcome=None, read_type=None): if read_type is None: nonredundant_alignment_groups = super().alignment_groups(read_type='nonredundant', outcome=outcome) reads = self.reads_by_type(self.preprocessed_read_type) if outcome is None: outcome_records = itertools.repeat(None) else: outcome_records = self.outcome_iter() for read, outcome_record in zip(reads, outcome_records): if outcome is None or outcome_record.category == outcome or (outcome_record.category, outcome_record.subcategory) == outcome: if read.seq in self.seq_to_alignments: name = read.name als = self.seq_to_alignments[read.seq] else: name, als = next(nonredundant_alignment_groups) if name != read.name: raise ValueError('iters out of sync', name, read.name) yield name, als else: yield from super().alignment_groups(fn_key=fn_key, outcome=outcome, read_type=read_type) def categorize_outcomes(self, max_reads=None): # Record how long each categorization takes. times_taken = [] if self.fns['outcomes_dir'].is_dir(): shutil.rmtree(str(self.fns['outcomes_dir'])) self.fns['outcomes_dir'].mkdir() outcome_to_qnames = defaultdict(list) bam_read_type = 'nonredundant' # iter wrap since tqdm objects are not iterators alignment_groups = iter(self.alignment_groups()) if max_reads is not None: alignment_groups = itertools.islice(alignment_groups, max_reads) special_als = defaultdict(list) with self.fns['outcome_list'].open('w') as outcome_fh: for name, als in self.progress(alignment_groups, desc='Categorizing reads'): seq = als[0].get_forward_sequence() # Special handling of empty sequence. if seq is None: seq = '' if seq in self.seq_to_outcome: layout = self.seq_to_outcome[seq] layout.query_name = name else: layout = self.categorizer(als, self.target_info, error_corrected=self.has_UMIs, mode=self.layout_mode) try: layout.categorize() except: print() print(self.sample_name, name) raise if layout.special_alignment is not None: special_als[layout.category, layout.subcategory].append(layout.special_alignment) outcome_to_qnames[layout.category, layout.subcategory].append(name) outcome = self.final_Outcome.from_layout(layout) outcome_fh.write(f'{outcome}\n') times_taken.append(time.monotonic()) # To make plotting easier, for each outcome, make a file listing all of # qnames for the outcome and a bam file (sorted by name) with all of the # alignments for these qnames. 
qname_to_outcome = {} bam_fn = self.fns_by_read_type['bam_by_name'][bam_read_type] header = sam.get_header(bam_fn) alignment_sorters = sam.multiple_AlignmentSorters(header, by_name=True) for outcome, qnames in outcome_to_qnames.items(): outcome_fns = self.outcome_fns(outcome) outcome_fns['dir'].mkdir() alignment_sorters[outcome] = outcome_fns['bam_by_name'][bam_read_type] with outcome_fns['query_names'].open('w') as fh: for qname in qnames: qname_to_outcome[qname] = outcome fh.write(qname + '\n') with alignment_sorters: saved_verbosity = pysam.set_verbosity(0) with pysam.AlignmentFile(bam_fn) as full_bam_fh: for al in self.progress(full_bam_fh, desc='Making outcome-specific bams'): if al.query_name in qname_to_outcome: outcome = qname_to_outcome[al.query_name] alignment_sorters[outcome].write(al) pysam.set_verbosity(saved_verbosity) # Make special alignments bams. for outcome, als in self.progress(special_als.items(), desc='Making special alignments bams'): outcome_fns = self.outcome_fns(outcome) bam_fn = outcome_fns['special_alignments'] sorter = sam.AlignmentSorter(bam_fn, header) with sorter: for al in als: sorter.write(al) return np.array(times_taken) def arrayed_specialized_experiment_factory(experiment_kind): experiment_kind_to_class = { 'paired_end': paired_end_experiment.PairedEndExperiment, 'single_end': single_end_experiment.SingleEndExperiment, 'prime_editing': prime_editing_experiment.PrimeEditingExperiment, 'twin_prime': prime_editing_experiment.TwinPrimeExperiment, } SpecializedExperiment = experiment_kind_to_class[experiment_kind] class ArrayedSpecializedExperiment(ArrayedExperiment, SpecializedExperiment): def __init__(self, base_dir, batch, group, sample_name, experiment_group=None, **kwargs): ArrayedExperiment.__init__(self, base_dir, batch, group, sample_name, experiment_group=experiment_group) SpecializedExperiment.__init__(self, base_dir, (batch, group), sample_name, **kwargs) def __repr__(self): return f'Arrayed{SpecializedExperiment.__repr__(self)}' class ArrayedSpecializedCommonSequencesExperiment(repair_seq.experiment_group.CommonSequencesExperiment, ArrayedExperiment, SpecializedExperiment): def __init__(self, base_dir, batch, group, sample_name, experiment_group=None, **kwargs): repair_seq.experiment_group.CommonSequencesExperiment.__init__(self) ArrayedExperiment.__init__(self, base_dir, batch, group, sample_name, experiment_group=experiment_group) SpecializedExperiment.__init__(self, base_dir, (batch, group), sample_name, **kwargs) return ArrayedSpecializedExperiment, ArrayedSpecializedCommonSequencesExperiment class ArrayedGroupExplorer(knock_knock.explore.Explorer): def __init__(self, group, initial_condition=None, by_outcome=True, **plot_kwargs, ): self.group = group if initial_condition is None: initial_condition = self.group.conditions[0] self.initial_condition = initial_condition self.experiments = {} super().__init__(by_outcome, **plot_kwargs) def populate_replicates(self, change): with self.output: condition = self.widgets['condition'].value exps = self.group.condition_replicates(condition) self.widgets['replicate'].options = [(exp.description['replicate'], exp) for exp in exps] self.widgets['replicate'].index = 0 def get_current_experiment(self): experiment = self.widgets['replicate'].value return experiment def set_up_read_selection_widgets(self): condition_options = [(', '.join(c) if isinstance(c, tuple) else c, c) for c in self.group.conditions] self.widgets.update({ 'condition': Select(options=condition_options, value=self.initial_condition, 
layout=Layout(height='200px', width='300px')), 'replicate': Select(options=[], layout=Layout(height='200px', width='150px')), }) self.populate_replicates({'name': 'initial'}) self.widgets['condition'].observe(self.populate_replicates, names='value') if self.by_outcome: self.populate_categories({'name': 'initial'}) self.populate_subcategories({'name': 'initial'}) self.widgets['replicate'].observe(self.populate_categories, names='value') self.widgets['category'].observe(self.populate_subcategories, names='value') self.widgets['subcategory'].observe(self.populate_read_ids, names='value') selection_widget_keys = ['condition', 'replicate', 'category', 'subcategory', 'read_id'] else: self.widgets['replicate'].observe(self.populate_read_ids, names='value') selection_widget_keys = ['condition', 'replicate', 'read_id'] self.populate_read_ids({'name': 'initial'}) return selection_widget_keys
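The helpers at the top of the module (get_batch, get_all_batches) and the memoized tables on ArrayedExperimentGroup are the intended entry points for analysis. A minimal sketch of loading one batch and pulling per-condition summaries; the base directory, batch name, and group name are hypothetical, with real values coming from data/<batch>/sample_sheet.csv and group_descriptions.csv:

# Sketch only: 'my_batch' and 'group_A' are placeholder names.
from pathlib import Path

from repair_seq.arrayed_experiment_group import get_batch

base_dir = Path.home() / 'projects' / 'repair_seq'

batch = get_batch(base_dir, 'my_batch')  # returns None if the batch has no group_descriptions.csv
if batch is not None:
    group = batch.groups['group_A']      # an ArrayedExperimentGroup

    # Outcome fractions per sample, indexed by (category, subcategory, details)
    # with one column per (condition..., replicate).
    fractions = group.outcome_fractions

    # Condition-level means and log2 fold changes relative to the
    # group's baseline condition.
    means = group.outcome_fraction_condition_means
    lfcs = group.log2_fold_change_condition_means

    print(fractions.shape, means.shape, lfcs.shape)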
PypiClean
/google-cloud-aiplatform-1.31.1.tar.gz/google-cloud-aiplatform-1.31.1/google/cloud/aiplatform_v1beta1/services/pipeline_service/async_client.py
from collections import OrderedDict import functools import re from typing import ( Dict, Mapping, MutableMapping, MutableSequence, Optional, Sequence, Tuple, Type, Union, ) from google.cloud.aiplatform_v1beta1 import gapic_version as package_version from google.api_core.client_options import ClientOptions from google.api_core import exceptions as core_exceptions from google.api_core import gapic_v1 from google.api_core import retry as retries from google.auth import credentials as ga_credentials # type: ignore from google.oauth2 import service_account # type: ignore try: OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault] except AttributeError: # pragma: NO COVER OptionalRetry = Union[retries.Retry, object] # type: ignore from google.api_core import operation as gac_operation # type: ignore from google.api_core import operation_async # type: ignore from google.cloud.aiplatform_v1beta1.services.pipeline_service import pagers from google.cloud.aiplatform_v1beta1.types import encryption_spec from google.cloud.aiplatform_v1beta1.types import model from google.cloud.aiplatform_v1beta1.types import operation as gca_operation from google.cloud.aiplatform_v1beta1.types import pipeline_job from google.cloud.aiplatform_v1beta1.types import pipeline_job as gca_pipeline_job from google.cloud.aiplatform_v1beta1.types import pipeline_service from google.cloud.aiplatform_v1beta1.types import pipeline_state from google.cloud.aiplatform_v1beta1.types import training_pipeline from google.cloud.aiplatform_v1beta1.types import ( training_pipeline as gca_training_pipeline, ) from google.cloud.location import locations_pb2 # type: ignore from google.iam.v1 import iam_policy_pb2 # type: ignore from google.iam.v1 import policy_pb2 # type: ignore from google.longrunning import operations_pb2 from google.protobuf import empty_pb2 # type: ignore from google.protobuf import struct_pb2 # type: ignore from google.protobuf import timestamp_pb2 # type: ignore from google.rpc import status_pb2 # type: ignore from .transports.base import PipelineServiceTransport, DEFAULT_CLIENT_INFO from .transports.grpc_asyncio import PipelineServiceGrpcAsyncIOTransport from .client import PipelineServiceClient class PipelineServiceAsyncClient: """A service for creating and managing Vertex AI's pipelines. This includes both ``TrainingPipeline`` resources (used for AutoML and custom training) and ``PipelineJob`` resources (used for Vertex AI Pipelines). 
""" _client: PipelineServiceClient DEFAULT_ENDPOINT = PipelineServiceClient.DEFAULT_ENDPOINT DEFAULT_MTLS_ENDPOINT = PipelineServiceClient.DEFAULT_MTLS_ENDPOINT artifact_path = staticmethod(PipelineServiceClient.artifact_path) parse_artifact_path = staticmethod(PipelineServiceClient.parse_artifact_path) context_path = staticmethod(PipelineServiceClient.context_path) parse_context_path = staticmethod(PipelineServiceClient.parse_context_path) custom_job_path = staticmethod(PipelineServiceClient.custom_job_path) parse_custom_job_path = staticmethod(PipelineServiceClient.parse_custom_job_path) endpoint_path = staticmethod(PipelineServiceClient.endpoint_path) parse_endpoint_path = staticmethod(PipelineServiceClient.parse_endpoint_path) execution_path = staticmethod(PipelineServiceClient.execution_path) parse_execution_path = staticmethod(PipelineServiceClient.parse_execution_path) model_path = staticmethod(PipelineServiceClient.model_path) parse_model_path = staticmethod(PipelineServiceClient.parse_model_path) network_path = staticmethod(PipelineServiceClient.network_path) parse_network_path = staticmethod(PipelineServiceClient.parse_network_path) pipeline_job_path = staticmethod(PipelineServiceClient.pipeline_job_path) parse_pipeline_job_path = staticmethod( PipelineServiceClient.parse_pipeline_job_path ) training_pipeline_path = staticmethod(PipelineServiceClient.training_pipeline_path) parse_training_pipeline_path = staticmethod( PipelineServiceClient.parse_training_pipeline_path ) common_billing_account_path = staticmethod( PipelineServiceClient.common_billing_account_path ) parse_common_billing_account_path = staticmethod( PipelineServiceClient.parse_common_billing_account_path ) common_folder_path = staticmethod(PipelineServiceClient.common_folder_path) parse_common_folder_path = staticmethod( PipelineServiceClient.parse_common_folder_path ) common_organization_path = staticmethod( PipelineServiceClient.common_organization_path ) parse_common_organization_path = staticmethod( PipelineServiceClient.parse_common_organization_path ) common_project_path = staticmethod(PipelineServiceClient.common_project_path) parse_common_project_path = staticmethod( PipelineServiceClient.parse_common_project_path ) common_location_path = staticmethod(PipelineServiceClient.common_location_path) parse_common_location_path = staticmethod( PipelineServiceClient.parse_common_location_path ) @classmethod def from_service_account_info(cls, info: dict, *args, **kwargs): """Creates an instance of this client using the provided credentials info. Args: info (dict): The service account private key info. args: Additional arguments to pass to the constructor. kwargs: Additional arguments to pass to the constructor. Returns: PipelineServiceAsyncClient: The constructed client. """ return PipelineServiceClient.from_service_account_info.__func__(PipelineServiceAsyncClient, info, *args, **kwargs) # type: ignore @classmethod def from_service_account_file(cls, filename: str, *args, **kwargs): """Creates an instance of this client using the provided credentials file. Args: filename (str): The path to the service account private key json file. args: Additional arguments to pass to the constructor. kwargs: Additional arguments to pass to the constructor. Returns: PipelineServiceAsyncClient: The constructed client. 
""" return PipelineServiceClient.from_service_account_file.__func__(PipelineServiceAsyncClient, filename, *args, **kwargs) # type: ignore from_service_account_json = from_service_account_file @classmethod def get_mtls_endpoint_and_cert_source( cls, client_options: Optional[ClientOptions] = None ): """Return the API endpoint and client cert source for mutual TLS. The client cert source is determined in the following order: (1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the client cert source is None. (2) if `client_options.client_cert_source` is provided, use the provided one; if the default client cert source exists, use the default one; otherwise the client cert source is None. The API endpoint is determined in the following order: (1) if `client_options.api_endpoint` if provided, use the provided one. (2) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is "always", use the default mTLS endpoint; if the environment variable is "never", use the default API endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise use the default API endpoint. More details can be found at https://google.aip.dev/auth/4114. Args: client_options (google.api_core.client_options.ClientOptions): Custom options for the client. Only the `api_endpoint` and `client_cert_source` properties may be used in this method. Returns: Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the client cert source to use. Raises: google.auth.exceptions.MutualTLSChannelError: If any errors happen. """ return PipelineServiceClient.get_mtls_endpoint_and_cert_source(client_options) # type: ignore @property def transport(self) -> PipelineServiceTransport: """Returns the transport used by the client instance. Returns: PipelineServiceTransport: The transport used by the client instance. """ return self._client.transport get_transport_class = functools.partial( type(PipelineServiceClient).get_transport_class, type(PipelineServiceClient) ) def __init__( self, *, credentials: Optional[ga_credentials.Credentials] = None, transport: Union[str, PipelineServiceTransport] = "grpc_asyncio", client_options: Optional[ClientOptions] = None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, ) -> None: """Instantiates the pipeline service client. Args: credentials (Optional[google.auth.credentials.Credentials]): The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. transport (Union[str, ~.PipelineServiceTransport]): The transport to use. If set to None, a transport is chosen automatically. client_options (ClientOptions): Custom options for the client. It won't take effect if a ``transport`` instance is provided. (1) The ``api_endpoint`` property can be used to override the default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT environment variable can also be used to override the endpoint: "always" (always use the default mTLS endpoint), "never" (always use the default regular endpoint) and "auto" (auto switch to the default mTLS endpoint if client certificate is present, this is the default value). However, the ``api_endpoint`` property takes precedence if provided. (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is "true", then the ``client_cert_source`` property can be used to provide client certificate for mutual TLS transport. 
If not provided, the default SSL client certificate will be used if present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not set, no client certificate will be used. Raises: google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport creation failed for any reason. """ self._client = PipelineServiceClient( credentials=credentials, transport=transport, client_options=client_options, client_info=client_info, ) async def create_training_pipeline( self, request: Optional[ Union[pipeline_service.CreateTrainingPipelineRequest, dict] ] = None, *, parent: Optional[str] = None, training_pipeline: Optional[gca_training_pipeline.TrainingPipeline] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> gca_training_pipeline.TrainingPipeline: r"""Creates a TrainingPipeline. A created TrainingPipeline right away will be attempted to be run. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_create_training_pipeline(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) training_pipeline = aiplatform_v1beta1.TrainingPipeline() training_pipeline.display_name = "display_name_value" training_pipeline.training_task_definition = "training_task_definition_value" training_pipeline.training_task_inputs.null_value = "NULL_VALUE" request = aiplatform_v1beta1.CreateTrainingPipelineRequest( parent="parent_value", training_pipeline=training_pipeline, ) # Make the request response = await client.create_training_pipeline(request=request) # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.CreateTrainingPipelineRequest, dict]]): The request object. Request message for [PipelineService.CreateTrainingPipeline][google.cloud.aiplatform.v1beta1.PipelineService.CreateTrainingPipeline]. parent (:class:`str`): Required. The resource name of the Location to create the TrainingPipeline in. Format: ``projects/{project}/locations/{location}`` This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. training_pipeline (:class:`google.cloud.aiplatform_v1beta1.types.TrainingPipeline`): Required. The TrainingPipeline to create. This corresponds to the ``training_pipeline`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.types.TrainingPipeline: The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, [upload][google.cloud.aiplatform.v1beta1.ModelService.UploadModel] the Model to Vertex AI, and evaluate the Model. """ # Create or coerce a protobuf request object. 
# Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([parent, training_pipeline]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.CreateTrainingPipelineRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if parent is not None: request.parent = parent if training_pipeline is not None: request.training_pipeline = training_pipeline # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.create_training_pipeline, default_timeout=5.0, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def get_training_pipeline( self, request: Optional[ Union[pipeline_service.GetTrainingPipelineRequest, dict] ] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> training_pipeline.TrainingPipeline: r"""Gets a TrainingPipeline. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_get_training_pipeline(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.GetTrainingPipelineRequest( name="name_value", ) # Make the request response = await client.get_training_pipeline(request=request) # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.GetTrainingPipelineRequest, dict]]): The request object. Request message for [PipelineService.GetTrainingPipeline][google.cloud.aiplatform.v1beta1.PipelineService.GetTrainingPipeline]. name (:class:`str`): Required. The name of the TrainingPipeline resource. Format: ``projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.types.TrainingPipeline: The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, [upload][google.cloud.aiplatform.v1beta1.ModelService.UploadModel] the Model to Vertex AI, and evaluate the Model. 
""" # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.GetTrainingPipelineRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.get_training_pipeline, default_timeout=5.0, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def list_training_pipelines( self, request: Optional[ Union[pipeline_service.ListTrainingPipelinesRequest, dict] ] = None, *, parent: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> pagers.ListTrainingPipelinesAsyncPager: r"""Lists TrainingPipelines in a Location. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_list_training_pipelines(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.ListTrainingPipelinesRequest( parent="parent_value", ) # Make the request page_result = client.list_training_pipelines(request=request) # Handle the response async for response in page_result: print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.ListTrainingPipelinesRequest, dict]]): The request object. Request message for [PipelineService.ListTrainingPipelines][google.cloud.aiplatform.v1beta1.PipelineService.ListTrainingPipelines]. parent (:class:`str`): Required. The resource name of the Location to list the TrainingPipelines from. Format: ``projects/{project}/locations/{location}`` This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.services.pipeline_service.pagers.ListTrainingPipelinesAsyncPager: Response message for [PipelineService.ListTrainingPipelines][google.cloud.aiplatform.v1beta1.PipelineService.ListTrainingPipelines] Iterating over this object will yield results and resolve additional pages automatically. """ # Create or coerce a protobuf request object. 
# Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([parent]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.ListTrainingPipelinesRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if parent is not None: request.parent = parent # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.list_training_pipelines, default_timeout=5.0, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # This method is paged; wrap the response in a pager, which provides # an `__aiter__` convenience method. response = pagers.ListTrainingPipelinesAsyncPager( method=rpc, request=request, response=response, metadata=metadata, ) # Done; return the response. return response async def delete_training_pipeline( self, request: Optional[ Union[pipeline_service.DeleteTrainingPipelineRequest, dict] ] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> operation_async.AsyncOperation: r"""Deletes a TrainingPipeline. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_delete_training_pipeline(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.DeleteTrainingPipelineRequest( name="name_value", ) # Make the request operation = client.delete_training_pipeline(request=request) print("Waiting for operation to complete...") response = (await operation).result() # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.DeleteTrainingPipelineRequest, dict]]): The request object. Request message for [PipelineService.DeleteTrainingPipeline][google.cloud.aiplatform.v1beta1.PipelineService.DeleteTrainingPipeline]. name (:class:`str`): Required. The name of the TrainingPipeline resource to be deleted. Format: ``projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. 
The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.DeleteTrainingPipelineRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.delete_training_pipeline, default_timeout=5.0, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Wrap the response in an operation future. response = operation_async.from_gapic( response, self._client._transport.operations_client, empty_pb2.Empty, metadata_type=gca_operation.DeleteOperationMetadata, ) # Done; return the response. return response async def cancel_training_pipeline( self, request: Optional[ Union[pipeline_service.CancelTrainingPipelineRequest, dict] ] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> None: r"""Cancels a TrainingPipeline. Starts asynchronous cancellation on the TrainingPipeline. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use [PipelineService.GetTrainingPipeline][google.cloud.aiplatform.v1beta1.PipelineService.GetTrainingPipeline] or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the TrainingPipeline is not deleted; instead it becomes a pipeline with a [TrainingPipeline.error][google.cloud.aiplatform.v1beta1.TrainingPipeline.error] value with a [google.rpc.Status.code][google.rpc.Status.code] of 1, corresponding to ``Code.CANCELLED``, and [TrainingPipeline.state][google.cloud.aiplatform.v1beta1.TrainingPipeline.state] is set to ``CANCELLED``. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. 
# - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_cancel_training_pipeline(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.CancelTrainingPipelineRequest( name="name_value", ) # Make the request await client.cancel_training_pipeline(request=request) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.CancelTrainingPipelineRequest, dict]]): The request object. Request message for [PipelineService.CancelTrainingPipeline][google.cloud.aiplatform.v1beta1.PipelineService.CancelTrainingPipeline]. name (:class:`str`): Required. The name of the TrainingPipeline to cancel. Format: ``projects/{project}/locations/{location}/trainingPipelines/{training_pipeline}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.CancelTrainingPipelineRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.cancel_training_pipeline, default_timeout=5.0, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) async def create_pipeline_job( self, request: Optional[ Union[pipeline_service.CreatePipelineJobRequest, dict] ] = None, *, parent: Optional[str] = None, pipeline_job: Optional[gca_pipeline_job.PipelineJob] = None, pipeline_job_id: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> gca_pipeline_job.PipelineJob: r"""Creates a PipelineJob. A PipelineJob will run immediately when created. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. 
# - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_create_pipeline_job(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.CreatePipelineJobRequest( parent="parent_value", ) # Make the request response = await client.create_pipeline_job(request=request) # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.CreatePipelineJobRequest, dict]]): The request object. Request message for [PipelineService.CreatePipelineJob][google.cloud.aiplatform.v1beta1.PipelineService.CreatePipelineJob]. parent (:class:`str`): Required. The resource name of the Location to create the PipelineJob in. Format: ``projects/{project}/locations/{location}`` This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. pipeline_job (:class:`google.cloud.aiplatform_v1beta1.types.PipelineJob`): Required. The PipelineJob to create. This corresponds to the ``pipeline_job`` field on the ``request`` instance; if ``request`` is provided, this should not be set. pipeline_job_id (:class:`str`): The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be automatically generated. This value should be less than 128 characters, and valid characters are /[a-z][0-9]-/. This corresponds to the ``pipeline_job_id`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.types.PipelineJob: An instance of a machine learning PipelineJob. """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([parent, pipeline_job, pipeline_job_id]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.CreatePipelineJobRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if parent is not None: request.parent = parent if pipeline_job is not None: request.pipeline_job = pipeline_job if pipeline_job_id is not None: request.pipeline_job_id = pipeline_job_id # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.create_pipeline_job, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. 
return response async def get_pipeline_job( self, request: Optional[Union[pipeline_service.GetPipelineJobRequest, dict]] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> pipeline_job.PipelineJob: r"""Gets a PipelineJob. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_get_pipeline_job(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.GetPipelineJobRequest( name="name_value", ) # Make the request response = await client.get_pipeline_job(request=request) # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.GetPipelineJobRequest, dict]]): The request object. Request message for [PipelineService.GetPipelineJob][google.cloud.aiplatform.v1beta1.PipelineService.GetPipelineJob]. name (:class:`str`): Required. The name of the PipelineJob resource. Format: ``projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.types.PipelineJob: An instance of a machine learning PipelineJob. """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.GetPipelineJobRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.get_pipeline_job, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def list_pipeline_jobs( self, request: Optional[Union[pipeline_service.ListPipelineJobsRequest, dict]] = None, *, parent: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> pagers.ListPipelineJobsAsyncPager: r"""Lists PipelineJobs in a Location. .. 
code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_list_pipeline_jobs(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.ListPipelineJobsRequest( parent="parent_value", ) # Make the request page_result = client.list_pipeline_jobs(request=request) # Handle the response async for response in page_result: print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.ListPipelineJobsRequest, dict]]): The request object. Request message for [PipelineService.ListPipelineJobs][google.cloud.aiplatform.v1beta1.PipelineService.ListPipelineJobs]. parent (:class:`str`): Required. The resource name of the Location to list the PipelineJobs from. Format: ``projects/{project}/locations/{location}`` This corresponds to the ``parent`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.cloud.aiplatform_v1beta1.services.pipeline_service.pagers.ListPipelineJobsAsyncPager: Response message for [PipelineService.ListPipelineJobs][google.cloud.aiplatform.v1beta1.PipelineService.ListPipelineJobs] Iterating over this object will yield results and resolve additional pages automatically. """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([parent]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.ListPipelineJobsRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if parent is not None: request.parent = parent # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.list_pipeline_jobs, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # This method is paged; wrap the response in a pager, which provides # an `__aiter__` convenience method. response = pagers.ListPipelineJobsAsyncPager( method=rpc, request=request, response=response, metadata=metadata, ) # Done; return the response. 
return response async def delete_pipeline_job( self, request: Optional[ Union[pipeline_service.DeletePipelineJobRequest, dict] ] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> operation_async.AsyncOperation: r"""Deletes a PipelineJob. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_delete_pipeline_job(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.DeletePipelineJobRequest( name="name_value", ) # Make the request operation = client.delete_pipeline_job(request=request) print("Waiting for operation to complete...") response = (await operation).result() # Handle the response print(response) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.DeletePipelineJobRequest, dict]]): The request object. Request message for [PipelineService.DeletePipelineJob][google.cloud.aiplatform.v1beta1.PipelineService.DeletePipelineJob]. name (:class:`str`): Required. The name of the PipelineJob resource to be deleted. Format: ``projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: google.api_core.operation_async.AsyncOperation: An object representing a long-running operation. The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.DeletePipelineJobRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.delete_pipeline_job, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. 
response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Wrap the response in an operation future. response = operation_async.from_gapic( response, self._client._transport.operations_client, empty_pb2.Empty, metadata_type=gca_operation.DeleteOperationMetadata, ) # Done; return the response. return response async def cancel_pipeline_job( self, request: Optional[ Union[pipeline_service.CancelPipelineJobRequest, dict] ] = None, *, name: Optional[str] = None, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> None: r"""Cancels a PipelineJob. Starts asynchronous cancellation on the PipelineJob. The server makes a best effort to cancel the pipeline, but success is not guaranteed. Clients can use [PipelineService.GetPipelineJob][google.cloud.aiplatform.v1beta1.PipelineService.GetPipelineJob] or other methods to check whether the cancellation succeeded or whether the pipeline completed despite cancellation. On successful cancellation, the PipelineJob is not deleted; instead it becomes a pipeline with a [PipelineJob.error][google.cloud.aiplatform.v1beta1.PipelineJob.error] value with a [google.rpc.Status.code][google.rpc.Status.code] of 1, corresponding to ``Code.CANCELLED``, and [PipelineJob.state][google.cloud.aiplatform.v1beta1.PipelineJob.state] is set to ``CANCELLED``. .. code-block:: python # This snippet has been automatically generated and should be regarded as a # code template only. # It will require modifications to work: # - It may require correct/in-range values for request initialization. # - It may require specifying regional endpoints when creating the service # client as shown in: # https://googleapis.dev/python/google-api-core/latest/client_options.html from google.cloud import aiplatform_v1beta1 async def sample_cancel_pipeline_job(): # Create a client client = aiplatform_v1beta1.PipelineServiceAsyncClient() # Initialize request argument(s) request = aiplatform_v1beta1.CancelPipelineJobRequest( name="name_value", ) # Make the request await client.cancel_pipeline_job(request=request) Args: request (Optional[Union[google.cloud.aiplatform_v1beta1.types.CancelPipelineJobRequest, dict]]): The request object. Request message for [PipelineService.CancelPipelineJob][google.cloud.aiplatform.v1beta1.PipelineService.CancelPipelineJob]. name (:class:`str`): Required. The name of the PipelineJob to cancel. Format: ``projects/{project}/locations/{location}/pipelineJobs/{pipeline_job}`` This corresponds to the ``name`` field on the ``request`` instance; if ``request`` is provided, this should not be set. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. """ # Create or coerce a protobuf request object. # Quick check: If we got a request object, we should *not* have # gotten any keyword arguments that map to the request. has_flattened_params = any([name]) if request is not None and has_flattened_params: raise ValueError( "If the `request` argument is set, then none of " "the individual field arguments should be set." ) request = pipeline_service.CancelPipelineJobRequest(request) # If we have keyword arguments corresponding to fields on the # request, apply these. 
if name is not None: request.name = name # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method_async.wrap_method( self._client._transport.cancel_pipeline_job, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) async def list_operations( self, request: Optional[operations_pb2.ListOperationsRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> operations_pb2.ListOperationsResponse: r"""Lists operations that match the specified filter in the request. Args: request (:class:`~.operations_pb2.ListOperationsRequest`): The request object. Request message for `ListOperations` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.operations_pb2.ListOperationsResponse: Response message for ``ListOperations`` method. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = operations_pb2.ListOperationsRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.list_operations, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def get_operation( self, request: Optional[operations_pb2.GetOperationRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> operations_pb2.Operation: r"""Gets the latest state of a long-running operation. Args: request (:class:`~.operations_pb2.GetOperationRequest`): The request object. Request message for `GetOperation` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.operations_pb2.Operation: An ``Operation`` object. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = operations_pb2.GetOperationRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.get_operation, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. 
metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def delete_operation( self, request: Optional[operations_pb2.DeleteOperationRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> None: r"""Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Args: request (:class:`~.operations_pb2.DeleteOperationRequest`): The request object. Request message for `DeleteOperation` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: None """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = operations_pb2.DeleteOperationRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.delete_operation, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) async def cancel_operation( self, request: Optional[operations_pb2.CancelOperationRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> None: r"""Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Args: request (:class:`~.operations_pb2.CancelOperationRequest`): The request object. Request message for `CancelOperation` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: None """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = operations_pb2.CancelOperationRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.cancel_operation, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. 
await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) async def wait_operation( self, request: Optional[operations_pb2.WaitOperationRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> operations_pb2.Operation: r"""Waits until the specified long-running operation is done or reaches at most a specified timeout, returning the latest state. If the operation is already done, the latest state is immediately returned. If the timeout specified is greater than the default HTTP/RPC timeout, the HTTP/RPC timeout is used. If the server does not support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Args: request (:class:`~.operations_pb2.WaitOperationRequest`): The request object. Request message for `WaitOperation` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.operations_pb2.Operation: An ``Operation`` object. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = operations_pb2.WaitOperationRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.wait_operation, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def set_iam_policy( self, request: Optional[iam_policy_pb2.SetIamPolicyRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> policy_pb2.Policy: r"""Sets the IAM access control policy on the specified function. Replaces any existing policy. Args: request (:class:`~.iam_policy_pb2.SetIamPolicyRequest`): The request object. Request message for `SetIamPolicy` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.policy_pb2.Policy: Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources. A ``Policy`` is a collection of ``bindings``. A ``binding`` binds one or more ``members`` to a single ``role``. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A ``role`` is a named list of permissions (defined by IAM or configured by users). A ``binding`` can optionally specify a ``condition``, which is a logic expression that further constrains the role binding based on attributes about the request and/or target resource. 
**JSON Example** :: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:[email protected]", "group:[email protected]", "domain:google.com", "serviceAccount:[email protected]" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": ["user:[email protected]"], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ] } **YAML Example** :: bindings: - members: - user:[email protected] - group:[email protected] - domain:google.com - serviceAccount:[email protected] role: roles/resourcemanager.organizationAdmin - members: - user:[email protected] role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') For a description of IAM and its features, see the `IAM developer's guide <https://cloud.google.com/iam/docs>`__. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = iam_policy_pb2.SetIamPolicyRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.set_iam_policy, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def get_iam_policy( self, request: Optional[iam_policy_pb2.GetIamPolicyRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> policy_pb2.Policy: r"""Gets the IAM access control policy for a function. Returns an empty policy if the function exists and does not have a policy set. Args: request (:class:`~.iam_policy_pb2.GetIamPolicyRequest`): The request object. Request message for `GetIamPolicy` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.policy_pb2.Policy: Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources. A ``Policy`` is a collection of ``bindings``. A ``binding`` binds one or more ``members`` to a single ``role``. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A ``role`` is a named list of permissions (defined by IAM or configured by users). A ``binding`` can optionally specify a ``condition``, which is a logic expression that further constrains the role binding based on attributes about the request and/or target resource. 
**JSON Example** :: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:[email protected]", "group:[email protected]", "domain:google.com", "serviceAccount:[email protected]" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": ["user:[email protected]"], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ] } **YAML Example** :: bindings: - members: - user:[email protected] - group:[email protected] - domain:google.com - serviceAccount:[email protected] role: roles/resourcemanager.organizationAdmin - members: - user:[email protected] role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') For a description of IAM and its features, see the `IAM developer's guide <https://cloud.google.com/iam/docs>`__. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = iam_policy_pb2.GetIamPolicyRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.get_iam_policy, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def test_iam_permissions( self, request: Optional[iam_policy_pb2.TestIamPermissionsRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> iam_policy_pb2.TestIamPermissionsResponse: r"""Tests the specified IAM permissions against the IAM access control policy for a function. If the function does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Args: request (:class:`~.iam_policy_pb2.TestIamPermissionsRequest`): The request object. Request message for `TestIamPermissions` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.iam_policy_pb2.TestIamPermissionsResponse: Response message for ``TestIamPermissions`` method. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = iam_policy_pb2.TestIamPermissionsRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.test_iam_permissions, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), ) # Send the request. 
response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def get_location( self, request: Optional[locations_pb2.GetLocationRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> locations_pb2.Location: r"""Gets information about a location. Args: request (:class:`~.location_pb2.GetLocationRequest`): The request object. Request message for `GetLocation` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.location_pb2.Location: Location object. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = locations_pb2.GetLocationRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.get_location, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def list_locations( self, request: Optional[locations_pb2.ListLocationsRequest] = None, *, retry: OptionalRetry = gapic_v1.method.DEFAULT, timeout: Union[float, object] = gapic_v1.method.DEFAULT, metadata: Sequence[Tuple[str, str]] = (), ) -> locations_pb2.ListLocationsResponse: r"""Lists information about the supported locations for this service. Args: request (:class:`~.location_pb2.ListLocationsRequest`): The request object. Request message for `ListLocations` method. retry (google.api_core.retry.Retry): Designation of what errors, if any, should be retried. timeout (float): The timeout for this request. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.location_pb2.ListLocationsResponse: Response message for ``ListLocations`` method. """ # Create or coerce a protobuf request object. # The request isn't a proto-plus wrapped type, # so it must be constructed via keyword expansion. if isinstance(request, dict): request = locations_pb2.ListLocationsRequest(**request) # Wrap the RPC method; this adds retry and timeout information, # and friendly error handling. rpc = gapic_v1.method.wrap_method( self._client._transport.list_locations, default_timeout=None, client_info=DEFAULT_CLIENT_INFO, ) # Certain fields should be provided within the metadata header; # add these here. metadata = tuple(metadata) + ( gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), ) # Send the request. response = await rpc( request, retry=retry, timeout=timeout, metadata=metadata, ) # Done; return the response. return response async def __aenter__(self) -> "PipelineServiceAsyncClient": return self async def __aexit__(self, exc_type, exc, tb): await self.transport.close() DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( gapic_version=package_version.__version__ ) __all__ = ("PipelineServiceAsyncClient",)
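# ---------------------------------------------------------------------------
# Editor's note -- hedged usage sketch, not part of the generated module.
# It mirrors the samples embedded in the docstrings above and assumes that
# Application Default Credentials are available; "my-project" and
# "us-central1" are placeholder values, not real resources.


def _example_list_pipeline_jobs():  # pragma: no cover
    """Minimal sketch: list PipelineJobs with the async client defined above."""
    import asyncio

    from google.cloud import aiplatform_v1beta1

    async def _run():
        # __aenter__/__aexit__ above let the client act as an async context
        # manager, so the underlying transport is closed on exit.
        async with aiplatform_v1beta1.PipelineServiceAsyncClient() as client:
            request = aiplatform_v1beta1.ListPipelineJobsRequest(
                parent="projects/my-project/locations/us-central1",
            )
            # list_pipeline_jobs returns an async pager; iterating it resolves
            # additional pages automatically.
            page_result = await client.list_pipeline_jobs(request=request)
            async for pipeline_job in page_result:
                print(pipeline_job.name)

    asyncio.run(_run())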
/slurm_rest-0.0.37.1-py3-none-any.whl/slurm_rest/model/dbv0037_response_qos_delete.py
import re # noqa: F401 import sys # noqa: F401 from slurm_rest.model_utils import ( # noqa: F401 ApiTypeError, ModelComposed, ModelNormal, ModelSimple, cached_property, change_keys_js_to_python, convert_js_args_to_python_args, date, datetime, file_type, none_type, validate_get_composed_info, ) from ..model_utils import OpenApiModel from slurm_rest.exceptions import ApiAttributeError def lazy_import(): from slurm_rest.model.dbv0037_error import Dbv0037Error globals()['Dbv0037Error'] = Dbv0037Error class Dbv0037ResponseQosDelete(ModelNormal): """NOTE: This class is auto generated by OpenAPI Generator. Ref: https://openapi-generator.tech Do not edit the class manually. Attributes: allowed_values (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict with a capitalized key describing the allowed value and an allowed value. These dicts store the allowed enum values. attribute_map (dict): The key is attribute name and the value is json key in definition. discriminator_value_class_map (dict): A dict to go from the discriminator variable value to the discriminator class name. validations (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict that stores validations for max_length, min_length, max_items, min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum, inclusive_minimum, and regex. additional_properties_type (tuple): A tuple of classes accepted as additional properties values. """ allowed_values = { } validations = { } @cached_property def additional_properties_type(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded """ lazy_import() return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501 _nullable = False @cached_property def openapi_types(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded Returns openapi_types (dict): The key is attribute name and the value is attribute type. """ lazy_import() return { 'errors': ([Dbv0037Error],), # noqa: E501 } @cached_property def discriminator(): return None attribute_map = { 'errors': 'errors', # noqa: E501 } read_only_vars = { } _composed_schemas = {} @classmethod @convert_js_args_to_python_args def _from_openapi_data(cls, *args, **kwargs): # noqa: E501 """Dbv0037ResponseQosDelete - a model defined in OpenAPI Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. 
For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) errors ([Dbv0037Error]): Slurm errors. [optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', False) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) self = super(OpenApiModel, cls).__new__(cls) if args: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) return self required_properties = set([ '_data_store', '_check_type', '_spec_property_naming', '_path_to_item', '_configuration', '_visited_composed_classes', ]) @convert_js_args_to_python_args def __init__(self, *args, **kwargs): # noqa: E501 """Dbv0037ResponseQosDelete - a model defined in OpenAPI Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) errors ([Dbv0037Error]): Slurm errors. 
[optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', False) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) if args: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) if var_name in self.read_only_vars: raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate " f"class with read only attributes.")
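A minimal usage sketch for the generated model above (hypothetical values; it assumes the slurm_rest package is importable and that Dbv0037Error, like other generated models with only optional properties, can be constructed with no arguments):

from slurm_rest.model.dbv0037_error import Dbv0037Error
from slurm_rest.model.dbv0037_response_qos_delete import Dbv0037ResponseQosDelete

# Build the model directly from Python values.
resp = Dbv0037ResponseQosDelete(errors=[Dbv0037Error()])

# Deserialize raw API data (serialized key names), the way the generated
# ApiClient does internally via _from_openapi_data.
raw = {"errors": []}
resp2 = Dbv0037ResponseQosDelete._from_openapi_data(**raw, _spec_property_naming=True)
print(resp2.errors)  # expected: an empty list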
PypiClean
/nb_offline_convert-0.1.1.tar.gz/nb_offline_convert-0.1.1/share/jupyter/nbconvert/templates/lab_offline/static/MathJax/extensions/a11y/mathmaps/en/symbols/greek-symbols.js
[{"locale":"en"},{"category":"Ll","mappings":{"default":{"default":"greek beta symbol","alternative":"greek small letter curled beta","short":"beta"}},"key":"03D0"},{"category":"Ll","mappings":{"default":{"default":"greek theta symbol","alternative":"greek small letter script theta","short":"theta"}},"key":"03D1"},{"category":"Ll","mappings":{"default":{"default":"greek phi symbol","alternative":"greek small letter script phi","short":"phi"}},"key":"03D5"},{"category":"Ll","mappings":{"default":{"default":"greek pi symbol","alternative":"greek small letter omega pi","short":"pi"}},"key":"03D6"},{"category":"Ll","mappings":{"default":{"default":"greek kai symbol","short":"kai"}},"key":"03D7"},{"category":"Ll","mappings":{"default":{"default":"greek kappa symbol","alternative":"greek small letter script kappa","short":"kappa"}},"key":"03F0"},{"category":"Ll","mappings":{"default":{"default":"greek rho symbol","alternative":"greek small letter tailed rho","short":"rho"}},"key":"03F1"},{"category":"Ll","mappings":{"default":{"default":"greek lunate epsilon symbol","short":"epsilon"}},"key":"03F5"},{"category":"Sm","mappings":{"default":{"default":"greek reversed lunate epsilon symbol","short":"reversed epsilon"}},"key":"03F6"},{"category":"Lu","mappings":{"default":{"default":"greek capital theta symbol","short":"cap theta"},"mathspeak":{"default":"upper Theta"}},"key":"03F4"},{"category":"Lu","mappings":{"default":{"default":"mathematical bold capital theta symbol","alternative":"bold capital theta","short":"bold cap theta"},"mathspeak":{"default":"bold upper Theta"}},"key":"1D6B9"},{"category":"Lu","mappings":{"default":{"default":"mathematical italic capital theta symbol","alternative":"italic capital theta","short":"italic cap theta"},"mathspeak":{"default":"italic upper Theta"}},"key":"1D6F3"},{"category":"Lu","mappings":{"default":{"default":"mathematical sans serif bold capital theta symbol","alternative":"sans serif bold capital theta","short":"sans serif bold cap theta"},"mathspeak":{"default":"sans serif bold upper Theta"}},"key":"1D767"},{"category":"Sm","mappings":{"default":{"default":"mathematical bold nabla","alternative":"bold nabla"}},"key":"1D6C1"},{"category":"Sm","mappings":{"default":{"default":"mathematical bold partial differential","alternative":"bold partial differential","short":"bold partial differential"}},"key":"1D6DB"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold epsilon symbol","alternative":"bold epsilon","short":"bold epsilon"}},"key":"1D6DC"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold theta symbol","alternative":"bold theta","short":"bold theta"}},"key":"1D6DD"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold kappa symbol","alternative":"bold kappa","short":"bold kappa"}},"key":"1D6DE"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold phi symbol","alternative":"bold phi","short":"bold phi"}},"key":"1D6DF"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold rho symbol","alternative":"bold rho","short":"bold rho"}},"key":"1D6E0"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold pi symbol","alternative":"bold pi","short":"bold pi"}},"key":"1D6E1"},{"category":"Sm","mappings":{"default":{"default":"mathematical italic nabla","alternative":"italic nabla","short":"italic nabla"}},"key":"1D6FB"},{"category":"Sm","mappings":{"default":{"default":"mathematical italic partial differential","alternative":"italic partial 
differential","short":"italic partial differential"}},"key":"1D715"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic epsilon symbol","alternative":"italic epsilon","short":"italic epsilon"}},"key":"1D716"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic theta symbol","alternative":"italic theta","short":"italic theta"}},"key":"1D717"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic kappa symbol","alternative":"italic kappa","short":"italic kappa"}},"key":"1D718"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic phi symbol","alternative":"italic phi","short":"italic phi"}},"key":"1D719"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic rho symbol","alternative":"italic rho","short":"italic rho"}},"key":"1D71A"},{"category":"Ll","mappings":{"default":{"default":"mathematical italic pi symbol","alternative":"italic pi","short":"italic pi"}},"key":"1D71B"},{"category":"Sm","mappings":{"default":{"default":"mathematical sans serif bold nabla","alternative":"sans serif bold nabla","short":"sans serif bold nabla"}},"key":"1D76F"},{"category":"Sm","mappings":{"default":{"default":"mathematical sans serif bold partial differential","alternative":"sans serif bold partial differential","short":"sans serif bold partial differential"}},"key":"1D789"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold epsilon symbol","alternative":"sans serif bold epsilon","short":"sans serif bold epsilon"}},"key":"1D78A"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold theta symbol","alternative":"sans serif bold theta","short":"sans serif bold theta"}},"key":"1D78B"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold kappa symbol","alternative":"sans serif bold kappa","short":"sans serif bold kappa"}},"key":"1D78C"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold phi symbol","alternative":"sans serif bold phi","short":"sans serif bold phi"}},"key":"1D78D"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold rho symbol","alternative":"sans serif bold rho","short":"sans serif bold rho"}},"key":"1D78E"},{"category":"Ll","mappings":{"default":{"default":"mathematical sans serif bold pi symbol","alternative":"sans serif bold pi","short":"sans serif bold pi"}},"key":"1D78F"},{"category":"Lu","mappings":{"default":{"default":"mathematical bold capital digamma","alternative":"bold capital digamma","short":"bold cap digamma"},"mathspeak":{"default":"bold upper Digamma"}},"key":"1D7CA"},{"category":"Ll","mappings":{"default":{"default":"mathematical bold small digamma","alternative":"bold small digamma","short":"bold digamma"}},"key":"1D7CB"}]
PypiClean
/django-dojo-0.0.1.tar.gz/django-dojo-0.0.1/dojo/static/dojo/dojox/grid/enhanced/plugins/exporter/_ExportWriter.js
define([ "dojo/_base/declare" ], function(declare){ //require Exporter here, so the implementations only need to require this file, //and the users only need to require the implementation file. return declare("dojox.grid.enhanced.plugins.exporter._ExportWriter", null, { // summary: // This is an abstract class for all kinds of writers used in the Exporter plugin. // It utilizes the strategy pattern to break the export work into several stages, // and provide interfaces for all of them. // // Implementations might choose some of the functions in this class to override, // thus providing their own functionalities. // // The Exporter will go through the grid line by line. So in every line, all the Views // will be reached, and the header line is only handled once. // // An *argObj* object is passed to most functions of this class. // It carries context arguments that make sense when they are called. /*===== argObj: { // grid: EnhancedGrid // The grid object we are now handling. grid: null, // isHeader: bool // Indicating which context we're handling, header or content. isHeader: true, // view: _View // Reference to the current _View object. view: null, // viewIdx: int // The index of the current _View object in the views array. // If the grid does not have any rowselector view, it conforms to the index // in the _ViewManager.views. viewIdx: -1, // subrow: _View.structure.cells[i] // Reference to the current subrow. // A subrow describe the innter structure of a row in a view, it's an array of cells subrow: null, // subrowIdx: int // The index of the current subrow in the subrow array: _View.structure.cells. subrowIdx: -1, // cell: dojox.grid.__CellDef // Reference to the current cell. cell: null, // cellIdx: int // The index of the current cell in the current subrow. // It's different from cell.index, which is the index in the whole line. cellIdx: -1, // row: item // The current row of data (logically), a.k.a.: current item. row: null, // rowIdx: int // The index of the current row (item). rowIdx: -1, // spCols: int[] // An array of special column indexes(flat,not regarding structure). // Special columns are typically attached to grid as a kind of UI facility // by the grid widget, instead of some real data. // For example, indirect selectors and row indexers. // Users can choose to export it or not. spCols: [], // colOffset: int // If the grid has a _RowSelector view or something else, this view will NOT be // passed to the user in argObj. So the column index (cell.index) will appear shifted // (start from 1 instead of 0). This colOffset is provided to remove this shift. // // usage: // | var correctColIndex = argObj.cell.index + argObj.colOffset; colOffset: 0 }, =====*/ constructor: function(/* object? */writerArgs){ // summary: // Writer initializations goes here. // writerArgs: object? // Any implementation of this class might accept a writerArgs object (optional), // which contains some writer-specific arguments given by the user. }, _getExportDataForCell: function(rowIndex, rowItem, cell, grid){ var data = (cell.get || grid.get).call(cell, rowIndex, rowItem); if(this.formatter){ return this.formatter(data, cell, rowIndex, rowItem); }else{ return data; } }, beforeHeader: function(/* EnhancedGrid */grid){ // summary: // We are going to start the travel in the grid. // Is there anything we should do now? // tags: // protected extension // returns: // - true: go on handling the header row and then call afterHeader. // - false: skip the header row, won't call afterHeader. 
return true; //Boolean }, afterHeader: function(){ // summary: // The header line has been handled. // tags: // protected extension // returns: // undefined }, beforeContent: function(/* Array */items){ // summary: // We are ready to go through all the contents(items). // tags: // protected extension // items: // All the items fetched from the store // returns: // - true: go on handling the contents and then call afterContent. // - false: skip all the contents, won't call afterContent. return true; //Boolean }, afterContent: function(){ // summary: // We have finished the entire grid travel. // Do some clean up work if you need to. // tags: // protected extension // returns: // undefined }, beforeContentRow: function(/* object */argObj){ // summary: // Before handling a line of data (not header). // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | row,rowIdx, // | spCols // | } // returns: // - true: go on handling the current data row and then call afterContentRow. // - false: skip the current data row, won't call afterContentRow. return true; //Boolean }, afterContentRow: function(/* object */argObj){ // summary: // After handling a line of data (not header). // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | row,rowIdx, // | spCols // | } // returns: // undefined }, beforeView: function(/* object */argObj){ // summary: // Before handling a view. // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | view,viewIdx, // | spCols(if isHeader==false) // | } // returns: // - true: go on handling the current view and then call afterView. // - false: skip the current view, won't call afterView. return true; //Boolean }, afterView: function(/* object */argObj){ // summary: // After handling a view. // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | view,viewIdx, // | spCols(if isHeader==false) // | } // tags: // protected extension // returns: // undefined }, beforeSubrow: function(/* object */argObj){ // summary: // Before handling a subrow in a line (defined in the grid structure). // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | row,rowIdx, // | view,viewIdx, // | subrow,subrowIdx, // | spCols(if isHeader==false) // | } // returns: // - true: go on handling the current subrow and then call afterSubrow. // - false: skip the current subrow, won't call afterSubrow. return true; //Boolean }, afterSubrow: function(/* object */argObj){ // summary: // Before handling a subrow in a line (defined in the grid structure). // tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | row,rowIdx, // | view,viewIdx, // | subrow,subrowIdx, // | spCols(if isHeader==false) // | } // returns: // undefined }, handleCell: function(/* object */argObj){ // summary: // Handle a header cell or data cell. 
// tags: // protected extension // argObj: // An object with at least the following context properties available: // | { // | grid,isHeader, // | row,rowIdx, // | view,viewIdx, // | subrow,subrowIdx, // | cell,cellIdx, // | spCols(if isHeader==false) // | } // returns: // undefined }, toString: function(){ // summary: // Export to a string. // tags: // protected extension // returns: // The exported result string. return ''; //String } }); });
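The writer above is a strategy-pattern hook set: the Exporter drives the grid traversal and calls the before*/after*/handleCell hooks, and concrete writers override only what they need. A compact Python analogue of that contract, purely illustrative and not part of the Dojo API:

class ExportWriter:
    # Returning False from a before* hook skips that stage entirely.
    def before_header(self, grid): return True
    def handle_cell(self, ctx): pass      # ctx mirrors argObj (cell, rowIdx, data, ...)
    def to_string(self): return ""

class CSVWriter(ExportWriter):
    def __init__(self):
        self.rows = {}
    def handle_cell(self, ctx):
        self.rows.setdefault(ctx["rowIdx"], []).append(str(ctx["data"]))
    def to_string(self):
        return "\n".join(",".join(cells) for _, cells in sorted(self.rows.items()))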
PypiClean
/openlmai-0.27.8.tar.gz/openlmai-0.27.8/openai/util.py
import logging import os import re import sys from enum import Enum from typing import Optional import openai OPENAI_LOG = os.environ.get("OPENAI_LOG") logger = logging.getLogger("openai") __all__ = [ "log_info", "log_debug", "log_warn", "logfmt", ] api_key_to_header = ( lambda api, key: {"Authorization": f"Bearer {key}"} if api in (ApiType.OPEN_AI, ApiType.AZURE_AD) else {"api-key": f"{key}"} ) class ApiType(Enum): AZURE = 1 OPEN_AI = 2 AZURE_AD = 3 @staticmethod def from_str(label): if label.lower() == "azure": return ApiType.AZURE elif label.lower() in ("azure_ad", "azuread"): return ApiType.AZURE_AD elif label.lower() in ("open_ai", "openai"): return ApiType.OPEN_AI else: raise openai.error.InvalidAPIType( "The API type provided in invalid. Please select one of the supported API types: 'azure', 'azure_ad', 'open_ai'" ) def _console_log_level(): if openai.log in ["debug", "info"]: return openai.log elif OPENAI_LOG in ["debug", "info"]: return OPENAI_LOG else: return None def log_debug(message, **params): msg = logfmt(dict(message=message, **params)) if _console_log_level() == "debug": print(msg, file=sys.stderr) logger.debug(msg) def log_info(message, **params): msg = logfmt(dict(message=message, **params)) if _console_log_level() in ["debug", "info"]: print(msg, file=sys.stderr) logger.info(msg) def log_warn(message, **params): msg = logfmt(dict(message=message, **params)) print(msg, file=sys.stderr) logger.warn(msg) def logfmt(props): def fmt(key, val): # Handle case where val is a bytes or bytesarray if hasattr(val, "decode"): val = val.decode("utf-8") # Check if val is already a string to avoid re-encoding into ascii. if not isinstance(val, str): val = str(val) if re.search(r"\s", val): val = repr(val) # key should already be a string if re.search(r"\s", key): key = repr(key) return "{key}={val}".format(key=key, val=val) return " ".join([fmt(key, val) for key, val in sorted(props.items())]) def get_object_classes(): # This is here to avoid a circular dependency from openai.object_classes import OBJECT_CLASSES return OBJECT_CLASSES def convert_to_openai_object( resp, api_key=None, api_version=None, organization=None, engine=None, plain_old_data=False, ): # If we get a OpenAIResponse, we'll want to return a OpenAIObject. response_ms: Optional[int] = None if isinstance(resp, openai.openai_response.OpenAIResponse): organization = resp.organization response_ms = resp.response_ms resp = resp.data if plain_old_data: return resp elif isinstance(resp, list): return [ convert_to_openai_object( i, api_key, api_version, organization, engine=engine ) for i in resp ] elif isinstance(resp, dict) and not isinstance( resp, openai.openai_object.OpenAIObject ): resp = resp.copy() klass_name = resp.get("object") if isinstance(klass_name, str): klass = get_object_classes().get( klass_name, openai.openai_object.OpenAIObject ) else: klass = openai.openai_object.OpenAIObject return klass.construct_from( resp, api_key=api_key, api_version=api_version, organization=organization, response_ms=response_ms, engine=engine, ) else: return resp def convert_to_dict(obj): """Converts a OpenAIObject back to a regular dict. Nested OpenAIObjects are also converted back to regular dicts. :param obj: The OpenAIObject to convert. :returns: The OpenAIObject as a dict. """ if isinstance(obj, list): return [convert_to_dict(i) for i in obj] # This works by virtue of the fact that OpenAIObjects _are_ dicts. The dict # comprehension returns a regular dict and recursively applies the # conversion to each value. 
elif isinstance(obj, dict): return {k: convert_to_dict(v) for k, v in obj.items()} else: return obj def merge_dicts(x, y): z = x.copy() z.update(y) return z def default_api_key() -> str: if openai.api_key_path: with open(openai.api_key_path, "rt") as k: api_key = k.read().strip() if not api_key.startswith("sk-"): raise ValueError(f"Malformed API key in {openai.api_key_path}.") return api_key elif openai.api_key is not None: return openai.api_key else: raise openai.error.AuthenticationError( "No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details." )
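A quick usage sketch for the helpers defined above (it assumes this fork still installs under the openai namespace, as the file path suggests; the API key value is a placeholder):

from openai.util import ApiType, api_key_to_header, logfmt

print(api_key_to_header(ApiType.OPEN_AI, "sk-placeholder"))
# expected: {'Authorization': 'Bearer sk-placeholder'}

print(logfmt({"message": "request sent", "method": "POST", "path": "/v1/completions"}))
# expected (keys sorted, whitespace-containing values quoted):
# message='request sent' method=POST path=/v1/completions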
PypiClean
/nni_upload_test-0.7.1904290925-py3-none-win_amd64.whl/nni_upload_test-0.7.1904290925.data/data/nni/node_modules/sshpk/lib/formats/x509.js
module.exports = { read: read, verify: verify, sign: sign, signAsync: signAsync, write: write }; var assert = require('assert-plus'); var asn1 = require('asn1'); var Buffer = require('safer-buffer').Buffer; var algs = require('../algs'); var utils = require('../utils'); var Key = require('../key'); var PrivateKey = require('../private-key'); var pem = require('./pem'); var Identity = require('../identity'); var Signature = require('../signature'); var Certificate = require('../certificate'); var pkcs8 = require('./pkcs8'); /* * This file is based on RFC5280 (X.509). */ /* Helper to read in a single mpint */ function readMPInt(der, nm) { assert.strictEqual(der.peek(), asn1.Ber.Integer, nm + ' is not an Integer'); return (utils.mpNormalize(der.readString(asn1.Ber.Integer, true))); } function verify(cert, key) { var sig = cert.signatures.x509; assert.object(sig, 'x509 signature'); var algParts = sig.algo.split('-'); if (algParts[0] !== key.type) return (false); var blob = sig.cache; if (blob === undefined) { var der = new asn1.BerWriter(); writeTBSCert(cert, der); blob = der.buffer; } var verifier = key.createVerify(algParts[1]); verifier.write(blob); return (verifier.verify(sig.signature)); } function Local(i) { return (asn1.Ber.Context | asn1.Ber.Constructor | i); } function Context(i) { return (asn1.Ber.Context | i); } var SIGN_ALGS = { 'rsa-md5': '1.2.840.113549.1.1.4', 'rsa-sha1': '1.2.840.113549.1.1.5', 'rsa-sha256': '1.2.840.113549.1.1.11', 'rsa-sha384': '1.2.840.113549.1.1.12', 'rsa-sha512': '1.2.840.113549.1.1.13', 'dsa-sha1': '1.2.840.10040.4.3', 'dsa-sha256': '2.16.840.1.101.3.4.3.2', 'ecdsa-sha1': '1.2.840.10045.4.1', 'ecdsa-sha256': '1.2.840.10045.4.3.2', 'ecdsa-sha384': '1.2.840.10045.4.3.3', 'ecdsa-sha512': '1.2.840.10045.4.3.4', 'ed25519-sha512': '1.3.101.112' }; Object.keys(SIGN_ALGS).forEach(function (k) { SIGN_ALGS[SIGN_ALGS[k]] = k; }); SIGN_ALGS['1.3.14.3.2.3'] = 'rsa-md5'; SIGN_ALGS['1.3.14.3.2.29'] = 'rsa-sha1'; var EXTS = { 'issuerKeyId': '2.5.29.35', 'altName': '2.5.29.17', 'basicConstraints': '2.5.29.19', 'keyUsage': '2.5.29.15', 'extKeyUsage': '2.5.29.37' }; function read(buf, options) { if (typeof (buf) === 'string') { buf = Buffer.from(buf, 'binary'); } assert.buffer(buf, 'buf'); var der = new asn1.BerReader(buf); der.readSequence(); if (Math.abs(der.length - der.remain) > 1) { throw (new Error('DER sequence does not contain whole byte ' + 'stream')); } var tbsStart = der.offset; der.readSequence(); var sigOffset = der.offset + der.length; var tbsEnd = sigOffset; if (der.peek() === Local(0)) { der.readSequence(Local(0)); var version = der.readInt(); assert.ok(version <= 3, 'only x.509 versions up to v3 supported'); } var cert = {}; cert.signatures = {}; var sig = (cert.signatures.x509 = {}); sig.extras = {}; cert.serial = readMPInt(der, 'serial'); der.readSequence(); var after = der.offset + der.length; var certAlgOid = der.readOID(); var certAlg = SIGN_ALGS[certAlgOid]; if (certAlg === undefined) throw (new Error('unknown signature algorithm ' + certAlgOid)); der._offset = after; cert.issuer = Identity.parseAsn1(der); der.readSequence(); cert.validFrom = readDate(der); cert.validUntil = readDate(der); cert.subjects = [Identity.parseAsn1(der)]; der.readSequence(); after = der.offset + der.length; cert.subjectKey = pkcs8.readPkcs8(undefined, 'public', der); der._offset = after; /* issuerUniqueID */ if (der.peek() === Local(1)) { der.readSequence(Local(1)); sig.extras.issuerUniqueID = buf.slice(der.offset, der.offset + der.length); der._offset += der.length; } /* 
subjectUniqueID */ if (der.peek() === Local(2)) { der.readSequence(Local(2)); sig.extras.subjectUniqueID = buf.slice(der.offset, der.offset + der.length); der._offset += der.length; } /* extensions */ if (der.peek() === Local(3)) { der.readSequence(Local(3)); var extEnd = der.offset + der.length; der.readSequence(); while (der.offset < extEnd) readExtension(cert, buf, der); assert.strictEqual(der.offset, extEnd); } assert.strictEqual(der.offset, sigOffset); der.readSequence(); after = der.offset + der.length; var sigAlgOid = der.readOID(); var sigAlg = SIGN_ALGS[sigAlgOid]; if (sigAlg === undefined) throw (new Error('unknown signature algorithm ' + sigAlgOid)); der._offset = after; var sigData = der.readString(asn1.Ber.BitString, true); if (sigData[0] === 0) sigData = sigData.slice(1); var algParts = sigAlg.split('-'); sig.signature = Signature.parse(sigData, algParts[0], 'asn1'); sig.signature.hashAlgorithm = algParts[1]; sig.algo = sigAlg; sig.cache = buf.slice(tbsStart, tbsEnd); return (new Certificate(cert)); } function readDate(der) { if (der.peek() === asn1.Ber.UTCTime) { return (utcTimeToDate(der.readString(asn1.Ber.UTCTime))); } else if (der.peek() === asn1.Ber.GeneralizedTime) { return (gTimeToDate(der.readString(asn1.Ber.GeneralizedTime))); } else { throw (new Error('Unsupported date format')); } } function writeDate(der, date) { if (date.getUTCFullYear() >= 2050 || date.getUTCFullYear() < 1950) { der.writeString(dateToGTime(date), asn1.Ber.GeneralizedTime); } else { der.writeString(dateToUTCTime(date), asn1.Ber.UTCTime); } } /* RFC5280, section 4.2.1.6 (GeneralName type) */ var ALTNAME = { OtherName: Local(0), RFC822Name: Context(1), DNSName: Context(2), X400Address: Local(3), DirectoryName: Local(4), EDIPartyName: Local(5), URI: Context(6), IPAddress: Context(7), OID: Context(8) }; /* RFC5280, section 4.2.1.12 (KeyPurposeId) */ var EXTPURPOSE = { 'serverAuth': '1.3.6.1.5.5.7.3.1', 'clientAuth': '1.3.6.1.5.5.7.3.2', 'codeSigning': '1.3.6.1.5.5.7.3.3', /* See https://github.com/joyent/oid-docs/blob/master/root.md */ 'joyentDocker': '1.3.6.1.4.1.38678.1.4.1', 'joyentCmon': '1.3.6.1.4.1.38678.1.4.2' }; var EXTPURPOSE_REV = {}; Object.keys(EXTPURPOSE).forEach(function (k) { EXTPURPOSE_REV[EXTPURPOSE[k]] = k; }); var KEYUSEBITS = [ 'signature', 'identity', 'keyEncryption', 'encryption', 'keyAgreement', 'ca', 'crl' ]; function readExtension(cert, buf, der) { der.readSequence(); var after = der.offset + der.length; var extId = der.readOID(); var id; var sig = cert.signatures.x509; if (!sig.extras.exts) sig.extras.exts = []; var critical; if (der.peek() === asn1.Ber.Boolean) critical = der.readBoolean(); switch (extId) { case (EXTS.basicConstraints): der.readSequence(asn1.Ber.OctetString); der.readSequence(); var bcEnd = der.offset + der.length; var ca = false; if (der.peek() === asn1.Ber.Boolean) ca = der.readBoolean(); if (cert.purposes === undefined) cert.purposes = []; if (ca === true) cert.purposes.push('ca'); var bc = { oid: extId, critical: critical }; if (der.offset < bcEnd && der.peek() === asn1.Ber.Integer) bc.pathLen = der.readInt(); sig.extras.exts.push(bc); break; case (EXTS.extKeyUsage): der.readSequence(asn1.Ber.OctetString); der.readSequence(); if (cert.purposes === undefined) cert.purposes = []; var ekEnd = der.offset + der.length; while (der.offset < ekEnd) { var oid = der.readOID(); cert.purposes.push(EXTPURPOSE_REV[oid] || oid); } /* * This is a bit of a hack: in the case where we have a cert * that's only allowed to do serverAuth or clientAuth (and not * the other), 
we want to make sure all our Subjects are of * the right type. But we already parsed our Subjects and * decided if they were hosts or users earlier (since it appears * first in the cert). * * So we go through and mutate them into the right kind here if * it doesn't match. This might not be hugely beneficial, as it * seems that single-purpose certs are not often seen in the * wild. */ if (cert.purposes.indexOf('serverAuth') !== -1 && cert.purposes.indexOf('clientAuth') === -1) { cert.subjects.forEach(function (ide) { if (ide.type !== 'host') { ide.type = 'host'; ide.hostname = ide.uid || ide.email || ide.components[0].value; } }); } else if (cert.purposes.indexOf('clientAuth') !== -1 && cert.purposes.indexOf('serverAuth') === -1) { cert.subjects.forEach(function (ide) { if (ide.type !== 'user') { ide.type = 'user'; ide.uid = ide.hostname || ide.email || ide.components[0].value; } }); } sig.extras.exts.push({ oid: extId, critical: critical }); break; case (EXTS.keyUsage): der.readSequence(asn1.Ber.OctetString); var bits = der.readString(asn1.Ber.BitString, true); var setBits = readBitField(bits, KEYUSEBITS); setBits.forEach(function (bit) { if (cert.purposes === undefined) cert.purposes = []; if (cert.purposes.indexOf(bit) === -1) cert.purposes.push(bit); }); sig.extras.exts.push({ oid: extId, critical: critical, bits: bits }); break; case (EXTS.altName): der.readSequence(asn1.Ber.OctetString); der.readSequence(); var aeEnd = der.offset + der.length; while (der.offset < aeEnd) { switch (der.peek()) { case ALTNAME.OtherName: case ALTNAME.EDIPartyName: der.readSequence(); der._offset += der.length; break; case ALTNAME.OID: der.readOID(ALTNAME.OID); break; case ALTNAME.RFC822Name: /* RFC822 specifies email addresses */ var email = der.readString(ALTNAME.RFC822Name); id = Identity.forEmail(email); if (!cert.subjects[0].equals(id)) cert.subjects.push(id); break; case ALTNAME.DirectoryName: der.readSequence(ALTNAME.DirectoryName); id = Identity.parseAsn1(der); if (!cert.subjects[0].equals(id)) cert.subjects.push(id); break; case ALTNAME.DNSName: var host = der.readString( ALTNAME.DNSName); id = Identity.forHost(host); if (!cert.subjects[0].equals(id)) cert.subjects.push(id); break; default: der.readString(der.peek()); break; } } sig.extras.exts.push({ oid: extId, critical: critical }); break; default: sig.extras.exts.push({ oid: extId, critical: critical, data: der.readString(asn1.Ber.OctetString, true) }); break; } der._offset = after; } var UTCTIME_RE = /^([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})?Z$/; function utcTimeToDate(t) { var m = t.match(UTCTIME_RE); assert.ok(m, 'timestamps must be in UTC'); var d = new Date(); var thisYear = d.getUTCFullYear(); var century = Math.floor(thisYear / 100) * 100; var year = parseInt(m[1], 10); if (thisYear % 100 < 50 && year >= 60) year += (century - 1); else year += century; d.setUTCFullYear(year, parseInt(m[2], 10) - 1, parseInt(m[3], 10)); d.setUTCHours(parseInt(m[4], 10), parseInt(m[5], 10)); if (m[6] && m[6].length > 0) d.setUTCSeconds(parseInt(m[6], 10)); return (d); } var GTIME_RE = /^([0-9]{4})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})?Z$/; function gTimeToDate(t) { var m = t.match(GTIME_RE); assert.ok(m); var d = new Date(); d.setUTCFullYear(parseInt(m[1], 10), parseInt(m[2], 10) - 1, parseInt(m[3], 10)); d.setUTCHours(parseInt(m[4], 10), parseInt(m[5], 10)); if (m[6] && m[6].length > 0) d.setUTCSeconds(parseInt(m[6], 10)); return (d); } function zeroPad(n, m) { if (m === undefined) m = 2; var s = '' + n; while (s.length < 
m) s = '0' + s; return (s); } function dateToUTCTime(d) { var s = ''; s += zeroPad(d.getUTCFullYear() % 100); s += zeroPad(d.getUTCMonth() + 1); s += zeroPad(d.getUTCDate()); s += zeroPad(d.getUTCHours()); s += zeroPad(d.getUTCMinutes()); s += zeroPad(d.getUTCSeconds()); s += 'Z'; return (s); } function dateToGTime(d) { var s = ''; s += zeroPad(d.getUTCFullYear(), 4); s += zeroPad(d.getUTCMonth() + 1); s += zeroPad(d.getUTCDate()); s += zeroPad(d.getUTCHours()); s += zeroPad(d.getUTCMinutes()); s += zeroPad(d.getUTCSeconds()); s += 'Z'; return (s); } function sign(cert, key) { if (cert.signatures.x509 === undefined) cert.signatures.x509 = {}; var sig = cert.signatures.x509; sig.algo = key.type + '-' + key.defaultHashAlgorithm(); if (SIGN_ALGS[sig.algo] === undefined) return (false); var der = new asn1.BerWriter(); writeTBSCert(cert, der); var blob = der.buffer; sig.cache = blob; var signer = key.createSign(); signer.write(blob); cert.signatures.x509.signature = signer.sign(); return (true); } function signAsync(cert, signer, done) { if (cert.signatures.x509 === undefined) cert.signatures.x509 = {}; var sig = cert.signatures.x509; var der = new asn1.BerWriter(); writeTBSCert(cert, der); var blob = der.buffer; sig.cache = blob; signer(blob, function (err, signature) { if (err) { done(err); return; } sig.algo = signature.type + '-' + signature.hashAlgorithm; if (SIGN_ALGS[sig.algo] === undefined) { done(new Error('Invalid signing algorithm "' + sig.algo + '"')); return; } sig.signature = signature; done(); }); } function write(cert, options) { var sig = cert.signatures.x509; assert.object(sig, 'x509 signature'); var der = new asn1.BerWriter(); der.startSequence(); if (sig.cache) { der._ensure(sig.cache.length); sig.cache.copy(der._buf, der._offset); der._offset += sig.cache.length; } else { writeTBSCert(cert, der); } der.startSequence(); der.writeOID(SIGN_ALGS[sig.algo]); if (sig.algo.match(/^rsa-/)) der.writeNull(); der.endSequence(); var sigData = sig.signature.toBuffer('asn1'); var data = Buffer.alloc(sigData.length + 1); data[0] = 0; sigData.copy(data, 1); der.writeBuffer(data, asn1.Ber.BitString); der.endSequence(); return (der.buffer); } function writeTBSCert(cert, der) { var sig = cert.signatures.x509; assert.object(sig, 'x509 signature'); der.startSequence(); der.startSequence(Local(0)); der.writeInt(2); der.endSequence(); der.writeBuffer(utils.mpNormalize(cert.serial), asn1.Ber.Integer); der.startSequence(); der.writeOID(SIGN_ALGS[sig.algo]); if (sig.algo.match(/^rsa-/)) der.writeNull(); der.endSequence(); cert.issuer.toAsn1(der); der.startSequence(); writeDate(der, cert.validFrom); writeDate(der, cert.validUntil); der.endSequence(); var subject = cert.subjects[0]; var altNames = cert.subjects.slice(1); subject.toAsn1(der); pkcs8.writePkcs8(der, cert.subjectKey); if (sig.extras && sig.extras.issuerUniqueID) { der.writeBuffer(sig.extras.issuerUniqueID, Local(1)); } if (sig.extras && sig.extras.subjectUniqueID) { der.writeBuffer(sig.extras.subjectUniqueID, Local(2)); } if (altNames.length > 0 || subject.type === 'host' || (cert.purposes !== undefined && cert.purposes.length > 0) || (sig.extras && sig.extras.exts)) { der.startSequence(Local(3)); der.startSequence(); var exts = []; if (cert.purposes !== undefined && cert.purposes.length > 0) { exts.push({ oid: EXTS.basicConstraints, critical: true }); exts.push({ oid: EXTS.keyUsage, critical: true }); exts.push({ oid: EXTS.extKeyUsage, critical: true }); } exts.push({ oid: EXTS.altName }); if (sig.extras && sig.extras.exts) exts = 
sig.extras.exts; for (var i = 0; i < exts.length; ++i) { der.startSequence(); der.writeOID(exts[i].oid); if (exts[i].critical !== undefined) der.writeBoolean(exts[i].critical); if (exts[i].oid === EXTS.altName) { der.startSequence(asn1.Ber.OctetString); der.startSequence(); if (subject.type === 'host') { der.writeString(subject.hostname, Context(2)); } for (var j = 0; j < altNames.length; ++j) { if (altNames[j].type === 'host') { der.writeString( altNames[j].hostname, ALTNAME.DNSName); } else if (altNames[j].type === 'email') { der.writeString( altNames[j].email, ALTNAME.RFC822Name); } else { /* * Encode anything else as a * DN style name for now. */ der.startSequence( ALTNAME.DirectoryName); altNames[j].toAsn1(der); der.endSequence(); } } der.endSequence(); der.endSequence(); } else if (exts[i].oid === EXTS.basicConstraints) { der.startSequence(asn1.Ber.OctetString); der.startSequence(); var ca = (cert.purposes.indexOf('ca') !== -1); var pathLen = exts[i].pathLen; der.writeBoolean(ca); if (pathLen !== undefined) der.writeInt(pathLen); der.endSequence(); der.endSequence(); } else if (exts[i].oid === EXTS.extKeyUsage) { der.startSequence(asn1.Ber.OctetString); der.startSequence(); cert.purposes.forEach(function (purpose) { if (purpose === 'ca') return; if (KEYUSEBITS.indexOf(purpose) !== -1) return; var oid = purpose; if (EXTPURPOSE[purpose] !== undefined) oid = EXTPURPOSE[purpose]; der.writeOID(oid); }); der.endSequence(); der.endSequence(); } else if (exts[i].oid === EXTS.keyUsage) { der.startSequence(asn1.Ber.OctetString); /* * If we parsed this certificate from a byte * stream (i.e. we didn't generate it in sshpk) * then we'll have a ".bits" property on the * ext with the original raw byte contents. * * If we have this, use it here instead of * regenerating it. This guarantees we output * the same data we parsed, so signatures still * validate. */ if (exts[i].bits !== undefined) { der.writeBuffer(exts[i].bits, asn1.Ber.BitString); } else { var bits = writeBitField(cert.purposes, KEYUSEBITS); der.writeBuffer(bits, asn1.Ber.BitString); } der.endSequence(); } else { der.writeBuffer(exts[i].data, asn1.Ber.OctetString); } der.endSequence(); } der.endSequence(); der.endSequence(); } der.endSequence(); } /* * Reads an ASN.1 BER bitfield out of the Buffer produced by doing * `BerReader#readString(asn1.Ber.BitString)`. That function gives us the raw * contents of the BitString tag, which is a count of unused bits followed by * the bits as a right-padded byte string. * * `bits` is the Buffer, `bitIndex` should contain an array of string names * for the bits in the string, ordered starting with bit #0 in the ASN.1 spec. * * Returns an array of Strings, the names of the bits that were set to 1. */ function readBitField(bits, bitIndex) { var bitLen = 8 * (bits.length - 1) - bits[0]; var setBits = {}; for (var i = 0; i < bitLen; ++i) { var byteN = 1 + Math.floor(i / 8); var bit = 7 - (i % 8); var mask = 1 << bit; var bitVal = ((bits[byteN] & mask) !== 0); var name = bitIndex[i]; if (bitVal && typeof (name) === 'string') { setBits[name] = true; } } return (Object.keys(setBits)); } /* * `setBits` is an array of strings, containing the names for each bit that * sould be set to 1. `bitIndex` is same as in `readBitField()`. * * Returns a Buffer, ready to be written out with `BerWriter#writeString()`. 
*/ function writeBitField(setBits, bitIndex) { var bitLen = bitIndex.length; var blen = Math.ceil(bitLen / 8); var unused = blen * 8 - bitLen; var bits = Buffer.alloc(1 + blen); // zero-filled bits[0] = unused; for (var i = 0; i < bitLen; ++i) { var byteN = 1 + Math.floor(i / 8); var bit = 7 - (i % 8); var mask = 1 << bit; var name = bitIndex[i]; if (name === undefined) continue; var bitVal = (setBits.indexOf(name) !== -1); if (bitVal) { bits[byteN] |= mask; } } return (bits); }
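The comments on readBitField/writeBitField above describe how key-usage flags are packed into an ASN.1 BitString: a leading count of unused padding bits, then the named bits written left to right. A small Python rendering of the same packing, for illustration only (not part of sshpk):

KEYUSEBITS = ['signature', 'identity', 'keyEncryption', 'encryption',
              'keyAgreement', 'ca', 'crl']

def write_bit_field(set_bits, bit_index=KEYUSEBITS):
    blen = -(-len(bit_index) // 8)            # bytes needed to hold the bits
    out = bytearray(1 + blen)
    out[0] = blen * 8 - len(bit_index)        # number of unused trailing bits
    for i, name in enumerate(bit_index):
        if name in set_bits:
            out[1 + i // 8] |= 1 << (7 - i % 8)
    return bytes(out)

def read_bit_field(bits, bit_index=KEYUSEBITS):
    bit_len = 8 * (len(bits) - 1) - bits[0]
    return [bit_index[i] for i in range(min(bit_len, len(bit_index)))
            if bits[1 + i // 8] & (1 << (7 - i % 8))]

packed = write_bit_field({'signature', 'keyEncryption'})
assert read_bit_field(packed) == ['signature', 'keyEncryption']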
PypiClean
/ab12phylo-0.5.21b0-py3-none-any.whl/ab12phylo-0.5.21b0.dist-info/LICENSE.md
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. 
"This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. 
For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. 
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. 
Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". 
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. 
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
PypiClean
/tensorflow_cpu-2.14.0rc1-cp311-cp311-macosx_10_15_x86_64.whl/tensorflow/python/util/protobuf/compare.py
import difflib import math from ..compat import collections_abc import six from google.protobuf import descriptor from google.protobuf import descriptor_pool from google.protobuf import message from google.protobuf import text_format # TODO(alankelly): Distinguish between signalling and quiet NaNs. def isClose(x, y, relative_tolerance): # pylint: disable=invalid-name """Returns True if x is close to y given the relative tolerance or if x and y are both inf, both -inf, or both NaNs. This function does not distinguish between signalling and non-signalling NaN. Args: x: float value to be compared y: float value to be compared relative_tolerance: float. The allowable difference between the two values being compared is determined by multiplying the relative tolerance by the maximum of the two values. If this is not provided, then all floats are compared using string comparison. """ # NaNs are considered equal. if math.isnan(x) or math.isnan(y): return math.isnan(x) == math.isnan(y) if math.isinf(x) or math.isinf(y): return x == y return abs(x - y) <= relative_tolerance * max(abs(x), abs(y)) def checkFloatEqAndReplace(self, expected, actual, relative_tolerance): # pylint: disable=invalid-name """Recursively replaces the floats in actual with those in expected iff they are approximately equal. This is done because string equality will consider values such as 5.0999999999 and 5.1 as not being equal, despite being extremely close. Args: self: googletest.TestCase expected: expected values actual: actual values relative_tolerance: float, relative tolerance. """ for expected_fields, actual_fields in zip( expected.ListFields(), actual.ListFields() ): is_repeated = True expected_desc, expected_values = expected_fields actual_values = actual_fields[1] if expected_desc.label != descriptor.FieldDescriptor.LABEL_REPEATED: is_repeated = False expected_values = [expected_values] actual_values = [actual_values] if ( expected_desc.type == descriptor.FieldDescriptor.TYPE_FLOAT or expected_desc.type == descriptor.FieldDescriptor.TYPE_DOUBLE ): for i, (x, y) in enumerate(zip(expected_values, actual_values)): # Replace the actual value with the expected value if the test passes, # otherwise leave it and let it fail in the next test so that the error # message is nicely formatted if isClose(x, y, relative_tolerance): if is_repeated: getattr(actual, actual_fields[0].name)[i] = x else: setattr(actual, actual_fields[0].name, x) if ( expected_desc.type == descriptor.FieldDescriptor.TYPE_MESSAGE or expected_desc.type == descriptor.FieldDescriptor.TYPE_GROUP ): if ( expected_desc.type == descriptor.FieldDescriptor.TYPE_MESSAGE and expected_desc.message_type.has_options and expected_desc.message_type.GetOptions().map_entry ): # This is a map, only recurse if it has type message type. if ( expected_desc.message_type.fields_by_number[2].type == descriptor.FieldDescriptor.TYPE_MESSAGE ): for e_v, a_v in zip( six.itervalues(expected_values), six.itervalues(actual_values) ): checkFloatEqAndReplace( self, expected=e_v, actual=a_v, relative_tolerance=relative_tolerance, ) else: for v, a in zip(expected_values, actual_values): # recursive step checkFloatEqAndReplace( self, expected=v, actual=a, relative_tolerance=relative_tolerance ) def assertProtoEqual( self, a, b, check_initialized=True, normalize_numbers=False, msg=None, relative_tolerance=None, ): # pylint: disable=invalid-name( """Fails with a useful error if a and b aren't equal. 
Comparison of repeated fields matches the semantics of unittest.TestCase.assertEqual(), ie order and extra duplicates fields matter. Args: self: googletest.TestCase a: proto2 PB instance, or text string representing one. b: proto2 PB instance -- message.Message or subclass thereof. check_initialized: boolean, whether to fail if either a or b isn't initialized. normalize_numbers: boolean, whether to normalize types and precision of numbers before comparison. msg: if specified, is used as the error message on failure. relative_tolerance: float, relative tolerance. If this is not provided, then all floats are compared using string comparison otherwise, floating point comparisons are done using the relative tolerance provided. """ pool = descriptor_pool.Default() if isinstance(a, six.string_types): a = text_format.Parse(a, b.__class__(), descriptor_pool=pool) for pb in a, b: if check_initialized: errors = pb.FindInitializationErrors() if errors: self.fail('Initialization errors: %s\n%s' % (errors, pb)) if normalize_numbers: NormalizeNumberFields(pb) if relative_tolerance is not None: checkFloatEqAndReplace( self, expected=b, actual=a, relative_tolerance=relative_tolerance ) a_str = text_format.MessageToString(a, descriptor_pool=pool) b_str = text_format.MessageToString(b, descriptor_pool=pool) # Some Python versions would perform regular diff instead of multi-line # diff if string is longer than 2**16. We substitute this behavior # with a call to unified_diff instead to have easier-to-read diffs. # For context, see: https://bugs.python.org/issue11763. if len(a_str) < 2**16 and len(b_str) < 2**16: self.assertMultiLineEqual(a_str, b_str, msg=msg) else: diff = ''.join( difflib.unified_diff(a_str.splitlines(True), b_str.splitlines(True))) if diff: self.fail('%s :\n%s' % (msg, diff)) def NormalizeNumberFields(pb): """Normalizes types and precisions of number fields in a protocol buffer. Due to subtleties in the python protocol buffer implementation, it is possible for values to have different types and precision depending on whether they were set and retrieved directly or deserialized from a protobuf. This function normalizes integer values to ints and longs based on width, 32-bit floats to five digits of precision to account for python always storing them as 64-bit, and ensures doubles are floating point for when they're set to integers. Modifies pb in place. Recurses into nested objects. Args: pb: proto2 message. Returns: the given pb, modified in place. """ for desc, values in pb.ListFields(): is_repeated = True if desc.label != descriptor.FieldDescriptor.LABEL_REPEATED: is_repeated = False values = [values] normalized_values = None # We force 32-bit values to int and 64-bit values to long to make # alternate implementations where the distinction is more significant # (e.g. the C++ implementation) simpler. 
if desc.type in (descriptor.FieldDescriptor.TYPE_INT64, descriptor.FieldDescriptor.TYPE_UINT64, descriptor.FieldDescriptor.TYPE_SINT64): normalized_values = [int(x) for x in values] elif desc.type in (descriptor.FieldDescriptor.TYPE_INT32, descriptor.FieldDescriptor.TYPE_UINT32, descriptor.FieldDescriptor.TYPE_SINT32, descriptor.FieldDescriptor.TYPE_ENUM): normalized_values = [int(x) for x in values] elif desc.type == descriptor.FieldDescriptor.TYPE_FLOAT: normalized_values = [round(x, 6) for x in values] elif desc.type == descriptor.FieldDescriptor.TYPE_DOUBLE: normalized_values = [round(float(x), 7) for x in values] if normalized_values is not None: if is_repeated: pb.ClearField(desc.name) getattr(pb, desc.name).extend(normalized_values) else: setattr(pb, desc.name, normalized_values[0]) if (desc.type == descriptor.FieldDescriptor.TYPE_MESSAGE or desc.type == descriptor.FieldDescriptor.TYPE_GROUP): if (desc.type == descriptor.FieldDescriptor.TYPE_MESSAGE and desc.message_type.has_options and desc.message_type.GetOptions().map_entry): # This is a map, only recurse if the values have a message type. if (desc.message_type.fields_by_number[2].type == descriptor.FieldDescriptor.TYPE_MESSAGE): for v in six.itervalues(values): NormalizeNumberFields(v) else: for v in values: # recursive step NormalizeNumberFields(v) return pb def _IsMap(value): return isinstance(value, collections_abc.Mapping) def _IsRepeatedContainer(value): if isinstance(value, six.string_types): return False try: iter(value) return True except TypeError: return False def ProtoEq(a, b): """Compares two proto2 objects for equality. Recurses into nested messages. Uses list (not set) semantics for comparing repeated fields, ie duplicates and order matter. Args: a: A proto2 message or a primitive. b: A proto2 message or a primitive. Returns: `True` if the messages are equal. """ def Format(pb): """Returns a dictionary or unchanged pb bases on its type. Specifically, this function returns a dictionary that maps tag number (for messages) or element index (for repeated fields) to value, or just pb unchanged if it's neither. Args: pb: A proto2 message or a primitive. Returns: A dict or unchanged pb. """ if isinstance(pb, message.Message): return dict((desc.number, value) for desc, value in pb.ListFields()) elif _IsMap(pb): return dict(pb.items()) elif _IsRepeatedContainer(pb): return dict(enumerate(list(pb))) else: return pb a, b = Format(a), Format(b) # Base case if not isinstance(a, dict) or not isinstance(b, dict): return a == b # This list performs double duty: it compares two messages by tag value *or* # two repeated fields by element, in order. the magic is in the format() # function, which converts them both to the same easily comparable format. for tag in sorted(set(a.keys()) | set(b.keys())): if tag not in a or tag not in b: return False else: # Recursive step if not ProtoEq(a[tag], b[tag]): return False # Didn't find any values that differed, so they're equal! return True class ProtoAssertions(object): """Mix this into a googletest.TestCase class to get proto2 assertions. Usage: class SomeTestCase(compare.ProtoAssertions, googletest.TestCase): ... def testSomething(self): ... self.assertProtoEqual(a, b) See module-level definitions for method documentation. """ # pylint: disable=invalid-name def assertProtoEqual(self, *args, **kwargs): return assertProtoEqual(self, *args, **kwargs)
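A small, hedged usage sketch of the assertions defined above, written as a googletest-style test case. The message type `my_pb2.Measurement` and its float field `value` are hypothetical placeholders; only `ProtoAssertions` and `assertProtoEqual` come from this module.

import unittest

from tensorflow.python.util.protobuf import compare
from my_project import my_pb2  # hypothetical generated protobuf module (assumption)


class MeasurementTest(compare.ProtoAssertions, unittest.TestCase):

    def test_close_floats_compare_equal(self):
        actual = my_pb2.Measurement(value=5.0999999999)
        expected = my_pb2.Measurement(value=5.1)
        # Without relative_tolerance the text diff would flag 5.0999999999 != 5.1;
        # with it, approximately equal floats are normalized before comparison.
        self.assertProtoEqual(actual, expected, relative_tolerance=1e-9)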
PypiClean
/hidefix-0.6.2.tar.gz/hidefix-0.6.2/README.md
[![Crates.io](https://img.shields.io/crates/v/hidefix.svg)](https://crates.io/crates/hidefix)
[![PyPI](https://img.shields.io/pypi/v/hidefix.svg)](https://pypi.org/project/hidefix/)
[![Documentation](https://docs.rs/hidefix/badge.svg)](https://docs.rs/hidefix/)
[![Build (rust)](https://github.com/gauteh/hidefix/workflows/Rust/badge.svg)](https://github.com/gauteh/hidefix/actions?query=branch%3Amain)
[![Build (python)](https://github.com/gauteh/hidefix/workflows/Python/badge.svg)](https://github.com/gauteh/hidefix/actions?query=branch%3Amain)
[![codecov](https://codecov.io/gh/gauteh/hidefix/branch/main/graph/badge.svg)](https://codecov.io/gh/gauteh/hidefix)
[![Rust nightly](https://img.shields.io/badge/rustc-nightly-orange)](https://rust-lang.github.io/rustup/installation/other.html)

<img src="https://raw.githubusercontent.com/gauteh/hidefix/main/idefix.png">

# HIDEFIX

This Rust and Python library provides an alternative reader for [HDF5](https://support.hdfgroup.org/HDF5/doc/H5.format.html) and [NetCDF4](https://www.unidata.ucar.edu/software/netcdf/docs/file_format_specifications.html) files (NetCDF4 uses HDF5 underneath) that supports concurrent access to data. This is achieved by building an index of the chunks, allowing threads to read the file through their own file handles. The original (native) HDF5 library is used to build the index, but once the index has been created it is no longer needed. The index can be serialized to disk so that the indexing does not have to be repeated.

In Rust:

```rust
use hidefix::prelude::*;

let idx = Index::index("tests/data/coads_climatology.nc4").unwrap();
let mut r = idx.reader("SST").unwrap();
let values = r.values::<f32>(None, None).unwrap();

println!("SST: {:?}", values);
```

or with Python using Xarray:

```python
import xarray as xr
import hidefix

ds = xr.open_dataset('file.nc', engine='hidefix')
print(ds)
```

## Motivation

The HDF5 library requires internal locks to be _thread-safe_, since it relies on internal buffers that cannot be safely accessed or written from multiple threads. This effectively forces multi-threaded applications into sequential reads while competing for the locks. The threads also appear to interfere with one another, perhaps by evicting cached chunks that other threads still need. The library can be used safely from different processes, but that carries potentially much more overhead than multi-threaded or asynchronous code.

## Some basic benchmarks

`hidefix` is intended to perform better when concurrent reads are made either to the same dataset, the same file, or to different files from a single process. For basic benchmarks the performance is on par with, or slightly better than, standard *sequential* reads through the native HDF5 library (via its [rust-bindings](https://github.com/aldanor/hdf5-rust)). Where `hidefix` shines is when _multiple threads_ in the _same process_ try to read from an HDF5 file simultaneously, in _any way_.

This simple benchmark reads a small dataset sequentially or concurrently, using the `cached` reader from `hidefix` and the native reader from HDF5. The dataset is chunked, shuffled and compressed (using gzip):

```sh
$ cargo bench --bench concurrency -- --ignored

test shuffled_compressed::cache_concurrent_reads  ... bench:  15,903,406 ns/iter (+/- 220,824)
test shuffled_compressed::cache_sequential        ... bench:  59,778,761 ns/iter (+/- 602,316)
test shuffled_compressed::native_concurrent_reads ... bench: 411,605,868 ns/iter (+/- 35,346,233)
test shuffled_compressed::native_sequential       ... bench: 103,457,237 ns/iter (+/- 7,703,936)
```

## Inspiration and other projects

This work is based in part on the [DMR++ module](https://github.com/OPENDAP/bes/tree/master/modules/dmrpp_module) of the [OPeNDAP](https://www.opendap.org/) [Hyrax server](https://www.opendap.org/software/hyrax-data-server). The [zarr](https://zarr.readthedocs.io/en/stable/) format does something similar, and the same approach has been [tested out on HDF5](https://medium.com/pangeo/cloud-performant-reading-of-netcdf4-hdf5-data-using-the-zarr-library-1a95c5c92314) as well.
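For completeness, here is a small, hedged sketch (not from the upstream project) of what the in-process concurrency described under Motivation looks like from Python. It only assumes the `hidefix` xarray engine shown above; the file path, variable name and worker count are placeholders.

```python
# Hypothetical sketch: several threads reading the same variable in one process.
from concurrent.futures import ThreadPoolExecutor
import xarray as xr
import hidefix  # registers the 'hidefix' engine

ds = xr.open_dataset('tests/data/coads_climatology.nc4', engine='hidefix')

def read_mean(_):
    # With the native HDF5 reader these reads would serialize on a global lock;
    # with hidefix they can proceed concurrently.
    return float(ds['SST'].mean())

with ThreadPoolExecutor(max_workers=4) as pool:
    print(list(pool.map(read_mean, range(4))))
```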
PypiClean
/dslogparser-1.0.3.tar.gz/dslogparser-1.0.3/dslog2csv.py
# Note: should work correctly with either Python 2 or 3 from __future__ import print_function # Parse the FRC drive station logs which are packed binary data import sys import os import os.path import csv from dslogparser import DSLogParser, DSEventParser # Python 2 CSV writer wants binary output, but Py3 want regular _USE_BINARY_OUTPUT = sys.version_info[0] == 2 OUTPUT_COLUMNS = [ 'time', 'round_trip_time', 'packet_loss', 'voltage', 'rio_cpu', 'robot_disabled', 'robot_auto', 'robot_tele', 'ds_disabled', 'ds_auto', 'ds_tele', 'watchdog', 'brownout', 'can_usage', 'wifi_db', 'bandwidth', 'pdp_id', 'pdp_0', 'pdp_1', 'pdp_2', 'pdp_3', 'pdp_4', 'pdp_5', 'pdp_6', 'pdp_7', 'pdp_8', 'pdp_9', 'pdp_10', 'pdp_11', 'pdp_12', 'pdp_13', 'pdp_14', 'pdp_15', 'pdp_total_current', # don't output these. They are not correct # 'pdp_resistance', 'pdp_voltage', 'pdp_temp' ] def find_event_file(filename): evtname = os.path.splitext(filename)[0] + '.dsevents' if os.path.exists(evtname): return evtname return None if __name__ == '__main__': import argparse parser = argparse.ArgumentParser(description='FRC DSLog to CSV file') parser.add_argument('--one-output-per-file', action='store_true', help='Output one CSV per DSLog file') parser.add_argument('--output', '-o', help='Output filename (stdout otherwise)') parser.add_argument('--event', action='store_true', help='Input files are EVENT files') parser.add_argument('--add-match-info', action='store_true', help='Look for EVENT files matching DSLOG files and pull info') parser.add_argument('--matches-only', action='store_true', help='Ignore files which have no match info. Imples add-match-info') parser.add_argument('files', nargs='+', help='Input files') args = parser.parse_args() if args.matches_only: args.add_match_info = True if sys.platform == "win32": if _USE_BINARY_OUTPUT: # csv.writer requires binary output file import msvcrt msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) # do glob expanding on Windows. Linux/Mac does this automatically. import glob newfiles = [] for a in args.files: newfiles.extend(glob.glob(a)) args.files = newfiles if args.event: dsparser = DSEventParser(args.files[0]) for rec in dsparser.read_records(): print(rec['time'], rec['message']) else: col = ['inputfile', ] if args.add_match_info: col.extend(('match_name', 'field_time')) col.extend(OUTPUT_COLUMNS) if not args.one_output_per_file: if args.output: outstrm = open(args.output, 'wb' if _USE_BINARY_OUTPUT else 'w') else: outstrm = sys.stdout outcsv = csv.DictWriter(outstrm, fieldnames=col, extrasaction='ignore') outcsv.writeheader() else: outstrm = None outcsv = None for fn in args.files: match_info = None if args.add_match_info: evtfn = find_event_file(fn) if evtfn: match_info = DSEventParser.find_match_info(evtfn) if args.matches_only and not match_info: continue if args.one_output_per_file: if outstrm: outstrm.close() outname, _ = os.path.splitext(os.path.basename(fn)) outname += '.csv' outstrm = open(outname, 'wb' if _USE_BINARY_OUTPUT else 'w') outcsv = csv.DictWriter(outstrm, fieldnames=col, extrasaction='ignore') outcsv.writeheader() dsparser = DSLogParser(fn) for rec in dsparser.read_records(): rec['inputfile'] = fn if match_info: rec.update(match_info) # unpack the PDP currents to go into columns more easily for i in range(16): rec['pdp_{}'.format(i)] = rec['pdp_currents'][i] outcsv.writerow(rec) dsparser.close() if args.output or args.one_output_per_file: outstrm.close()
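# A small, hedged sketch of using DSLogParser directly, outside this command-line
# script; the log filename is a placeholder, and the record keys mirror the
# OUTPUT_COLUMNS and the conversion loop above.

from dslogparser import DSLogParser

parser = DSLogParser('2023_03_04 10_00_00 Saturday.dslog')  # hypothetical log file
for rec in parser.read_records():
    # Each record carries the battery voltage and the 16 PDP channel currents.
    print(rec['time'], rec['voltage'], rec['pdp_currents'][0])
parser.close()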
PypiClean
/vioneta_agro_frontend-20230809.1-py3-none-any.whl/hass_frontend/frontend_es5/41826-q6ZNQz_QxCU.js
"use strict";(self.webpackChunkvioneta_agro_frontend=self.webpackChunkvioneta_agro_frontend||[]).push([[41826,4631],{12198:function(n,t,e){e.d(t,{D_:function(){return Z},NC:function(){return _},Nh:function(){return v},U8:function(){return w},WB:function(){return f},mn:function(){return s},p6:function(){return c},pU:function(){return u},yQ:function(){return h}});var r=e(93359),o=e(14516),i=e(66477),u=(e(10520),function(n,t,e){return a(t,e.time_zone).format(n)}),a=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{weekday:"long",month:"long",day:"numeric",timeZone:"server"===n.time_zone?t:void 0})})),c=function(n,t,e){return m(t,e.time_zone).format(n)},m=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{year:"numeric",month:"long",day:"numeric",timeZone:"server"===n.time_zone?t:void 0})})),f=function(n,t,e){var o,u,a,c,m,f=l(t,e.time_zone);if(t.date_format===i.t6.language||t.date_format===i.t6.system)return f.format(n);var s=f.formatToParts(n),d=null===(o=s.find((function(n){return"literal"===n.type})))||void 0===o?void 0:o.value,_=null===(u=s.find((function(n){return"day"===n.type})))||void 0===u?void 0:u.value,g=null===(a=s.find((function(n){return"month"===n.type})))||void 0===a?void 0:a.value,v=null===(c=s.find((function(n){return"year"===n.type})))||void 0===c?void 0:c.value,y=s.at(s.length-1),h="literal"===(null==y?void 0:y.type)?null==y?void 0:y.value:"";return"bg"===t.language&&t.date_format===i.t6.YMD&&(h=""),(m={},(0,r.Z)(m,i.t6.DMY,"".concat(_).concat(d).concat(g).concat(d).concat(v).concat(h)),(0,r.Z)(m,i.t6.MDY,"".concat(g).concat(d).concat(_).concat(d).concat(v).concat(h)),(0,r.Z)(m,i.t6.YMD,"".concat(v).concat(d).concat(g).concat(d).concat(_).concat(h)),m)[t.date_format]},l=(0,o.Z)((function(n,t){var e=n.date_format===i.t6.system?void 0:n.language;return n.date_format===i.t6.language||(n.date_format,i.t6.system),new Intl.DateTimeFormat(e,{year:"numeric",month:"numeric",day:"numeric",timeZone:"server"===n.time_zone?t:void 0})})),s=function(n,t,e){return d(t,e.time_zone).format(n)},d=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{day:"numeric",month:"short",timeZone:"server"===n.time_zone?t:void 0})})),_=function(n,t,e){return g(t,e.time_zone).format(n)},g=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{month:"long",year:"numeric",timeZone:"server"===n.time_zone?t:void 0})})),v=function(n,t,e){return y(t,e.time_zone).format(n)},y=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{month:"long",timeZone:"server"===n.time_zone?t:void 0})})),h=function(n,t,e){return z(t,e.time_zone).format(n)},z=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{year:"numeric",timeZone:"server"===n.time_zone?t:void 0})})),Z=function(n,t,e){return p(t,e.time_zone).format(n)},p=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{weekday:"long",timeZone:"server"===n.time_zone?t:void 0})})),w=function(n,t,e){return D(t,e.time_zone).format(n)},D=(0,o.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{weekday:"short",timeZone:"server"===n.time_zone?t:void 0})}))},44583:function(n,t,e){e.d(t,{DG:function(){return m},E8:function(){return d},NR:function(){return g},o0:function(){return a},yD:function(){return l}});var r=e(14516),o=(e(10520),e(12198)),i=e(49684),u=e(65810),a=function(n,t,e){return c(t,e.time_zone).format(n)},c=(0,r.Z)((function(n,t){return new 
Intl.DateTimeFormat(n.language,{year:"numeric",month:"long",day:"numeric",hour:(0,u.y)(n)?"numeric":"2-digit",minute:"2-digit",hourCycle:(0,u.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),m=function(n,t,e){return f(t,e.time_zone).format(n)},f=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{year:"numeric",month:"short",day:"numeric",hour:(0,u.y)(n)?"numeric":"2-digit",minute:"2-digit",hourCycle:(0,u.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),l=function(n,t,e){return s(t,e.time_zone).format(n)},s=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{month:"short",day:"numeric",hour:(0,u.y)(n)?"numeric":"2-digit",minute:"2-digit",hourCycle:(0,u.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),d=function(n,t,e){return _(t,e.time_zone).format(n)},_=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{year:"numeric",month:"long",day:"numeric",hour:(0,u.y)(n)?"numeric":"2-digit",minute:"2-digit",second:"2-digit",hourCycle:(0,u.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),g=function(n,t,e){return"".concat((0,o.WB)(n,t,e),", ").concat((0,i.mr)(n,t,e))}},49684:function(n,t,e){e.d(t,{Vu:function(){return a},Zs:function(){return l},mr:function(){return i},xO:function(){return m}});var r=e(14516),o=(e(10520),e(65810)),i=function(n,t,e){return u(t,e.time_zone).format(n)},u=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{hour:"numeric",minute:"2-digit",hourCycle:(0,o.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),a=function(n,t,e){return c(t,e.time_zone).format(n)},c=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{hour:(0,o.y)(n)?"numeric":"2-digit",minute:"2-digit",second:"2-digit",hourCycle:(0,o.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),m=function(n,t,e){return f(t,e.time_zone).format(n)},f=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat(n.language,{weekday:"long",hour:(0,o.y)(n)?"numeric":"2-digit",minute:"2-digit",hourCycle:(0,o.y)(n)?"h12":"h23",timeZone:"server"===n.time_zone?t:void 0})})),l=function(n,t,e){return s(t,e.time_zone).format(n)},s=(0,r.Z)((function(n,t){return new Intl.DateTimeFormat("en-GB",{hour:"numeric",minute:"2-digit",hour12:!1,timeZone:"server"===n.time_zone?t:void 0})}))},87367:function(n,t,e){e.d(t,{Z:function(){return o}});var r=function(n){for(var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:2,e=""+n,r=1;r<t;r++)e=parseInt(e)<Math.pow(10,r)?"0".concat(e):e;return e};function o(n){var t=Math.floor(n/1e3/3600),e=Math.floor(n/1e3%3600/60),o=Math.floor(n/1e3%3600%60),i=Math.floor(n%1e3);return t>0?"".concat(t,":").concat(r(e),":").concat(r(o)):e>0?"".concat(e,":").concat(r(o)):o>0||i>0?"".concat(o).concat(i>0?".".concat(r(i,3)):""):null}},65810:function(n,t,e){e.d(t,{y:function(){return i}});var r=e(14516),o=e(66477),i=(0,r.Z)((function(n){if(n.time_format===o.zt.language||n.time_format===o.zt.system){var t=n.time_format===o.zt.language?n.language:void 0;return new Date("January 1, 2023 22:00:00").toLocaleString(t).includes("10")}return n.time_format===o.zt.am_pm}))},41826:function(n,t,e){e.d(t,{D1:function(){return v},cG:function(){return y},v4:function(){return g}});var r=e(56007),o=e(66477),i=e(24833),u=e(87367),a={ms:1,s:1e3,min:6e4,h:36e5,d:864e5},c=e(12198),m=e(44583),f=e(49684),l=e(18457),s=e(68307),d=e(58831),_=e(40095),g=function(n,t,e,r,o,i){return y(n,e,r,o,t.entity_id,t.attributes,void 0!==i?i:t.state)},v=function(n,t,e,r,o,i){var u=null==o?void 
0:o[t.entity_id];return y(n,e,r,u,t.entity_id,t.attributes,void 0!==i?i:t.state)},y=function(n,t,e,g,v,y,h){if(h===r.lz||h===r.nZ)return n("state.default.".concat(h));if((0,l.sJ)(y)){if("duration"===y.device_class&&y.unit_of_measurement&&a[y.unit_of_measurement])try{return Z=h,p=y.unit_of_measurement,(0,u.Z)(parseFloat(Z)*a[p])||"0"}catch(I){}if("monetary"===y.device_class)try{return(0,l.uf)(h,t,Object.assign({style:"currency",currency:y.unit_of_measurement,minimumFractionDigits:2},(0,l.l4)({state:h,attributes:y},g)))}catch(I){}var z=y.unit_of_measurement?"%"===y.unit_of_measurement?(0,s.K)(t)+"%":" ".concat(y.unit_of_measurement):"";return"".concat((0,l.uf)(h,t,(0,l.l4)({state:h,attributes:y},g))).concat(z)}var Z,p,w,D=(0,d.M)(v);if("datetime"===D){var b=new Date(h);return(0,m.o0)(b,t,e)}if(["date","input_datetime","time"].includes(D))try{var T=h.split(" ");if(2===T.length)return(0,m.o0)(new Date(T.join("T")),Object.assign(Object.assign({},t),{},{time_zone:o.c_.local}),e);if(1===T.length){if(h.includes("-"))return(0,c.p6)(new Date("".concat(h,"T00:00")),Object.assign(Object.assign({},t),{},{time_zone:o.c_.local}),e);if(h.includes(":")){var F=new Date;return(0,f.mr)(new Date("".concat(F.toISOString().split("T")[0],"T").concat(h)),Object.assign(Object.assign({},t),{},{time_zone:o.c_.local}),e)}}return h}catch(k){return h}if("counter"===D||"number"===D||"input_number"===D)return(0,l.uf)(h,t,(0,l.l4)({state:h,attributes:y},g));if(["button","event","image","input_button","scene","stt","tts"].includes(D)||"sensor"===D&&"timestamp"===y.device_class)try{return(0,m.o0)(new Date(h),t,e)}catch(I){return h}return"update"===D?"on"===h?(0,i.X4)(y)?(0,_.f)(y,i.k6)&&"number"==typeof y.in_progress?n("ui.card.update.installing_with_progress",{progress:y.in_progress}):n("ui.card.update.installing"):y.latest_version:y.skipped_version===y.latest_version?null!==(w=y.latest_version)&&void 0!==w?w:n("state.default.unavailable"):n("ui.card.update.up_to_date"):(null==g?void 0:g.translation_key)&&n("component.".concat(g.platform,".entity.").concat(D,".").concat(g.translation_key,".state.").concat(h))||y.device_class&&n("component.".concat(D,".entity_component.").concat(y.device_class,".state.").concat(h))||n("component.".concat(D,".entity_component._.state.").concat(h))||h}},68307:function(n,t,e){e.d(t,{K:function(){return r}});var r=function(n){switch(null==n?void 0:n.language){case"cz":case"de":case"fi":case"fr":case"sk":case"sv":return" ";default:return""}}},56007:function(n,t,e){e.d(t,{PX:function(){return u},V_:function(){return a},lz:function(){return i},nZ:function(){return o},rk:function(){return m}});var r=e(57966),o="unavailable",i="unknown",u="off",a=[o,i],c=[o,i,u],m=(0,r.z)(a);(0,r.z)(c)},10520:function(n,t,e){e.r(t);e(7151),e(33633),e(25534),e(64827),e(23044),e(1437),e(87520),e(42661),e(78337),e(87065),e(6042),e(19440),e(50897),e(30056),e(12679)}}]); //# sourceMappingURL=41826-q6ZNQz_QxCU.js.map
PypiClean
/sample-factory-2.1.1.tar.gz/sample-factory-2.1.1/sf_examples/vizdoom/doom/doom_params.py
import os
from os.path import join

from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.utils.utils import str2bool


def add_doom_env_args(parser):
    p = parser

    p.add_argument(
        "--num_agents",
        default=-1,
        type=int,
        help="Allows to set number of agents less than number of players, to allow humans to join the match. Default value (-1) means default number defined by the environment",
    )
    p.add_argument("--num_humans", default=0, type=int, help="Meatbags want to play?")
    p.add_argument(
        "--num_bots",
        default=-1,
        type=int,
        help="Add classic (non-neural) bots to the match. If default (-1) then use number of bots specified in env cfg",
    )
    p.add_argument(
        "--start_bot_difficulty", default=None, type=int, help="Adjust bot difficulty, useful for evaluation"
    )
    p.add_argument(
        "--timelimit", default=None, type=float, help="Allows to override default match timelimit in minutes"
    )
    p.add_argument("--res_w", default=128, type=int, help="Game frame width after resize")
    p.add_argument("--res_h", default=72, type=int, help="Game frame height after resize")
    p.add_argument(
        "--wide_aspect_ratio",
        default=False,
        type=str2bool,
        help="If true render wide aspect ratio (slower but gives better FOV to the agent)",
    )


def add_doom_env_eval_args(parser):
    """Arguments used only during evaluation."""
    parser.add_argument(
        "--record_to",
        # default=join(os.getcwd(), "..", "recs"),
        default=None,
        type=str,
        help="Record episodes to this folder. This records a demo that can be replayed at full resolution. Currently, this does not work for bot environments so it is recommended to use --save_video to record episodes at lower resolution instead for such environments",
    )


def doom_override_defaults(parser):
    """RL params specific to Doom envs."""
    parser.set_defaults(
        ppo_clip_value=0.2,  # value used in all experiments in the paper
        obs_subtract_mean=0.0,
        obs_scale=255.0,
        exploration_loss="symmetric_kl",
        exploration_loss_coeff=0.001,
        normalize_returns=True,
        normalize_input=True,
        env_frameskip=4,
        eval_env_frameskip=1,  # this is for smoother rendering during evaluation
        fps=35,  # for evaluation only
        heartbeat_reporting_interval=600,
    )


def default_doom_cfg(algo="APPO", env="env", experiment="test"):
    """Useful in tests."""
    argv = [f"--algo={algo}", f"--env={env}", f"--experiment={experiment}"]
    parser, args = parse_sf_args(argv)
    add_doom_env_args(parser)
    doom_override_defaults(parser)
    args = parse_full_cfg(parser, argv)
    return args
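# A brief, hedged sketch of using the test helper above to obtain a parsed config
# with the Doom-specific defaults applied; the environment and experiment names
# are placeholders and may need to correspond to a registered VizDoom env in a
# real run.

from sf_examples.vizdoom.doom.doom_params import default_doom_cfg

cfg = default_doom_cfg(env="doom_basic", experiment="smoke_test")
# The Doom overrides are applied on top of the generic sample-factory defaults.
print(cfg.env_frameskip, cfg.ppo_clip_value)
# The Doom-specific CLI args are present too, with their default values.
print(cfg.res_w, cfg.res_h, cfg.num_bots)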
PypiClean
/llamac2py-0.1.4.tar.gz/llamac2py-0.1.4/README.md
# llamac2py

llamac2py is a Python package that provides a wrapper for running inference with the Llama-2 Transformer model. The package includes the C inference program (`run.c`) from [Karpathy's llama2.c](https://github.com/karpathy/llama2.c), which is compiled into the executable that performs the inference, and exposes a simple Python interface to it.

---

## Get Started:

Clone the repository: `git clone https://github.com/adarshxs/llamac2py`

cd into the repository: `cd llamac2py`

Download the model (support for more models is planned): `wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin`

Compile the C file: run `make`

Then, in a notebook or a Python script, run:

```
from llamac2py.wrapper import generate_short_story

# Load your Llama-2 model checkpoint (model.bin) here
checkpoint_file = 'path/to/your/model.bin'

# Generate a short story with a prompt
prompt_text = "Once upon a time, in a faraway land,"
short_story = generate_short_story(prompt_text, checkpoint_file)

print(short_story)
```
PypiClean
/odoo_addon_l10n_fr_ecotaxe-15.0.1.0.0.15-py3-none-any.whl/odoo/addons/l10n_fr_ecotaxe/models/account_ecotaxe_classification.py
from odoo import api, fields, models


class AccountEcotaxeClassification(models.Model):
    _name = "account.ecotaxe.classification"
    _description = "Account Ecotaxe Classification"

    @api.model
    def _default_company_id(self):
        return self.env.company

    name = fields.Char(required=True)
    code = fields.Char()
    ecotaxe_type = fields.Selection(
        [("fixed", "Fixed"), ("weight_based", "Weight based")],
        required=True,
        help="If ecotaxe is weight based,"
        "the ecotaxe coef must take into account\n"
        "the weight unit of measure (kg by default)",
    )
    ecotaxe_coef = fields.Float(digits="Ecotaxe")
    default_fixed_ecotaxe = fields.Float(help="Default fixed ecotaxe amount.")
    account_ecotaxe_categ_id = fields.Many2one(
        comodel_name="account.ecotaxe.category",
        string="Ecotaxe category",
    )
    active = fields.Boolean(default=True)
    company_id = fields.Many2one(
        comodel_name="res.company",
        default=_default_company_id,
        help="Specify a company"
        " if you want to define this Ecotaxe Classification only for specific"
        " company. Otherwise, this Fiscal Classification will be available"
        " for all companies.",
    )
    ecotaxe_product_status = fields.Selection(
        [("M", "Menager"), ("P", "Professionnel")],
        string="Product Status",
        required=True,
    )
    ecotaxe_supplier_status = fields.Selection(
        [
            ("FAB", "Fabricant"),
            ("REV", "Revendeur sous sa marque"),
            ("INT", "Introducteur"),
            ("IMP", "Importateur"),
            ("DIS", "Vendeur à distance"),
        ],
        string="Supplier Status",
        required=True,
        help="FAB ==> Fabricant : est établi en France et fabrique des EEE\n"
        "sous son propre nom ou sa propre marque, ou fait concevoir ou\n"
        " fabriquer des EEE et les commercialise sous\n"
        " son propre nom et sa propre marque\n"
        "REV ==> Revendeur sous sa marque : est établi en France et vend,\n"
        " sous son propre nom ou sa propre marque des EEE produits\n"
        " par d'autres fournisseurs"
        "INT ==> Introducteur : est établi en France et met sur le marché\n"
        "des EEE provenant d'un autre Etat membre"
        "IMP ==> Importateur : est établi en France et met sur marché\n"
        "des EEE provenant de pays hors Union Européenne"
        "DIS ==> Vendeur à distance : est établie dans un autre Etat\n"
        "membre ou dans un pays tiers et vend en France des EEE par\n"
        "communication à distance",
    )
    ecotaxe_deb_code = fields.Char()
    ecotaxe_scale_code = fields.Char()

    @api.onchange("ecotaxe_type")
    def _onchange_ecotaxe_type(self):
        if self.ecotaxe_type == "weight_based":
            self.default_fixed_ecotaxe = 0
        if self.ecotaxe_type == "fixed":
            self.ecotaxe_coef = 0
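# A short, hedged sketch of creating a weight-based classification through the
# ORM, for example from an Odoo shell or a test where `env` is available; the
# field values below are purely illustrative.

classification = env["account.ecotaxe.classification"].create({
    "name": "Small household appliances",
    "ecotaxe_type": "weight_based",
    "ecotaxe_coef": 0.20,  # amount per kg, per the help text above
    "ecotaxe_product_status": "M",
    "ecotaxe_supplier_status": "FAB",
})
# The onchange above zeroes default_fixed_ecotaxe for weight-based records
# when the type is changed through the UI.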
PypiClean
/certora_cli_alpha_oz_gambitlinux-20230710.11.11.234001-py3-none-macosx_10_9_universal2.whl/certora_cli/EVMVerifier/Compiler/CompilerCollectorFactory.py
import os from Shared.certoraUtils import run_solc_cmd from pathlib import Path from functools import lru_cache from typing import Tuple, Dict, Set import re import logging from EVMVerifier.Compiler.CompilerCollector import CompilerLang, CompilerCollector from EVMVerifier.Compiler.CompilerCollectorSol import CompilerCollectorSol, CompilerLangSol from EVMVerifier.Compiler.CompilerCollectorVy import CompilerCollectorVy, CompilerLangVy from Shared.certoraUtils import is_windows, match_path_to_mapping_key, remove_file, is_new_api from EVMVerifier.certoraContextClass import CertoraContext # logger for running the Solidity compiler and reporting any errors it emits solc_logger = logging.getLogger("solc") def get_relevant_solc(contract_file_path: Path, solc: str, solc_mappings: Dict[str, str]) -> str: """ @param contract_file_path: the contract that we are working on @param solc_mappings: input arg mapping contract to solc @param solc: solc we want to run in case the specified file_name is not in solc_mappings @return: the name of the solc executable we want to run on this contract (as a string, could be a path or a resolvable executable name) """ match = match_path_to_mapping_key(contract_file_path, solc_mappings) if match is not None: base = match else: base = solc if is_windows() and not base.endswith(".exe"): base = base + ".exe" solc_logger.debug(f"relevant solc is {base}") return base def get_extra_solc_args(contract_path: Path, context: CertoraContext) -> str: """ Adds all args in --solc_args, if any, and the optimization found in --solc_optimize_map, if exists. We assume that there are no conflicts between the two (the input was validated). @param contract_path: the contract that we are working on @param context: the context object @return str of solc args or optimizations found """ extra_solc_args = "" if not is_new_api() and context.solc_args is not None: extra_solc_args += ' '.join(context.solc_args) if is_new_api(): extra_solc_args += _solc_args_to_str(context) optimize_map = context.solc_optimize_map if is_new_api() else context.optimize_map if optimize_map is not None: match = match_path_to_mapping_key(contract_path, optimize_map) if match is not None: num_runs = match if int(num_runs) > 0: # If the contract has 0 in its number of runs in the map, we skip optimizing extra_solc_args += f" --optimize --optimize-runs {num_runs}" return extra_solc_args class CompilerCollectorFactory: """ Returns [CompilerCollector] instance, based on type of the file [file_name] and the file path solc_args: input args optimize and optimize_runs optimize_map: input arg mapping contract to optimized number of runs solc_mappings: input arg mapping contract to solc solc: solc we want to run in case the specified file_name is not in solc_mappings config_path: path to Certora config dir We added context as first step of making it the only parameters (the other params already appear in Context) """ def __init__(self, context: CertoraContext, solc_args: list, optimize_map: Dict[str, str], solc: str, solc_mappings: Dict[str, str], config_path: Path): self.context = context self._solc_args = solc_args self._optimize_map = optimize_map self._solc = solc self._solc_mappings = solc_mappings self._config_path = config_path self._stdout_paths_to_clean: Set[Path] = set() self._stderr_paths_to_clean: Set[Path] = set() @lru_cache(maxsize=32) def get_compiler_collector(self, path: Path) -> CompilerCollector: """ 1. Same file path will get the same compiler collector 2. 
autoFinder_X file will get the compiler collector of X file @returns [CompilerCollector] instance, based on type of the file [file_name] and the file path @param path: path of the file to create [CompilerCollector] for """ if str(path).endswith(".vy"): return CompilerCollectorVy() elif str(path).endswith(".sol"): version = self.__get_solc_version(path) return CompilerCollectorSol(version, get_extra_solc_args(path, self.context)) else: raise RuntimeError(f'expected {path} to represent a Solidity or Vyper file') def __get_solc_version(self, contract_file_path: Path) -> Tuple[int, int, int]: """ @param contract_file_path: the contract that we are working on @return: the running solc version """ solc_logger.debug(f"visiting contract file {contract_file_path}") solc_path = get_relevant_solc(contract_file_path, self._solc, self._solc_mappings) version = self.__get_solc_exe_version(solc_path) return version @lru_cache(maxsize=32) def __get_solc_exe_version(self, solc_name: str) -> Tuple[int, int, int]: """ @param solc_name: name of the solc we want to run on this contract @return: the running solc version """ out_name = f"version_check_{Path(solc_name).name}" stdout_path = self._config_path / f'{out_name}.stdout' stderr_path = self._config_path / f'{out_name}.stderr' self._stdout_paths_to_clean.add(stdout_path) self._stderr_paths_to_clean.add(stderr_path) run_solc_cmd( f"{solc_name} --version", wd=Path(os.getcwd()), output_file_name=out_name, config_path=self._config_path) with stdout_path.open() as r: version_string = r.read(-1) version_matches = [(int(m.group(1)), int(m.group(2)), int(m.group(3))) for m in [re.match(r'^(\d+)\.(\d+).(\d+)', l[len("Version: "):]) for l in version_string.splitlines() if l.startswith("Version: ")] if m is not None] if len(version_matches) != 1: msg = f"Couldn't extract Solidity version from output {version_string}, giving up" solc_logger.debug(msg) raise RuntimeError(msg) return version_matches[0] def __del__(self) -> None: for path in self._stdout_paths_to_clean: remove_file(path) for path in self._stderr_paths_to_clean: remove_file(path) # works only with the new_api def _solc_args_to_str(context: CertoraContext) -> str: args = [] if context.solc_via_ir: args.append('--via-ir') if context.solc_optimize: args.append('--optimize') runs = int(context.solc_optimize) if runs > 0: args.append(f"--optimize-runs {runs}") if context.solc_args: args.append(f"{context.solc_args}") return ' '.join(args) def get_compiler_lang(file_name: str) -> CompilerLang: """ Returns [CompilerLang] instance, based on type of the file [file_name] :param file_name: name of the file to create [CompilerLang] from """ if file_name.endswith(".vy"): return CompilerLangVy() elif file_name.endswith(".sol"): return CompilerLangSol() else: raise RuntimeError(f'expected {file_name} to represent a Solidity or Vyper file')
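# A minimal sketch of the module-level helpers defined above; the contract
# paths and solc executable names are placeholders.

from pathlib import Path

from EVMVerifier.Compiler.CompilerCollectorFactory import get_compiler_lang, get_relevant_solc

# Language dispatch is purely on the file extension:
print(type(get_compiler_lang("contracts/Token.sol")).__name__)  # CompilerLangSol
print(type(get_compiler_lang("contracts/Vault.vy")).__name__)   # CompilerLangVy

# Per-contract solc selection falls back to the default when no mapping matches:
solc = get_relevant_solc(Path("contracts/Token.sol"), "solc8.19", {"contracts/Old.sol": "solc6.12"})
print(solc)  # "solc8.19" (with ".exe" appended on Windows)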
PypiClean
/enigma-catalyst-0.5.21.tar.gz/enigma-catalyst-0.5.21/catalyst/lib/labelarray.py
from functools import partial, total_ordering from operator import eq, ne import re import numpy as np from numpy import ndarray import pandas as pd from toolz import compose from catalyst.utils.compat import unicode from catalyst.utils.functional import instance from catalyst.utils.preprocess import preprocess from catalyst.utils.sentinel import sentinel from catalyst.utils.input_validation import ( coerce, expect_kinds, expect_types, optional, ) from catalyst.utils.numpy_utils import ( bool_dtype, unsigned_int_dtype_with_size_in_bytes, is_object, ) from catalyst.utils.pandas_utils import ignore_pandas_nan_categorical_warning from ._factorize import ( factorize_strings, factorize_strings_known_categories, smallest_uint_that_can_hold, ) def compare_arrays(left, right): "Eq check with a short-circuit for identical objects." return ( left is right or ((left.shape == right.shape) and (left == right).all()) ) def _make_unsupported_method(name): def method(*args, **kwargs): raise NotImplementedError( "Method %s is not supported on LabelArrays." % name ) method.__name__ = name method.__doc__ = "Unsupported LabelArray Method: %s" % name return method class MissingValueMismatch(ValueError): """ Error raised on attempt to perform operations between LabelArrays with mismatched missing_values. """ def __init__(self, left, right): super(MissingValueMismatch, self).__init__( "LabelArray missing_values don't match:" " left={}, right={}".format(left, right) ) class CategoryMismatch(ValueError): """ Error raised on attempt to perform operations between LabelArrays with mismatched category arrays. """ def __init__(self, left, right): (mismatches,) = np.where(left != right) assert len(mismatches), "Not actually a mismatch!" super(CategoryMismatch, self).__init__( "LabelArray categories don't match:\n" "Mismatched Indices: {mismatches}\n" "Left: {left}\n" "Right: {right}".format( mismatches=mismatches, left=left[mismatches], right=right[mismatches], ) ) _NotPassed = sentinel('_NotPassed') class LabelArray(ndarray): """ An ndarray subclass for working with arrays of strings. Factorizes the input array into integers, but overloads equality on strings to check against the factor label. Parameters ---------- values : array-like Array of values that can be passed to np.asarray with dtype=object. missing_value : str Scalar value to treat as 'missing' for operations on ``self``. categories : list[str], optional List of values to use as categories. If not supplied, categories will be inferred as the unique set of entries in ``values``. sort : bool, optional Whether to sort categories. If sort is False and categories is supplied, they are left in the order provided. If sort is False and categories is None, categories will be constructed in a random order. Attributes ---------- categories : ndarray[str] An array containing the unique labels of self. reverse_categories : dict[str -> int] Reverse lookup table for ``categories``. Stores the index in ``categories`` at which each entry each unique entry is found. missing_value : str or None A sentinel missing value with NaN semantics for comparisons. Notes ----- Consumers should be cautious when passing instances of LabelArray to numpy functions. We attempt to disallow as many meaningless operations as possible, but since a LabelArray is just an ndarray of ints with some additional metadata, many numpy functions (for example, trigonometric) will happily accept a LabelArray and treat its values as though they were integers. 
In a future change, we may be able to disallow more numerical operations by creating a wrapper dtype which doesn't register an implementation for most numpy ufuncs. Until that change is made, consumers of LabelArray should assume that it is undefined behavior to pass a LabelArray to any numpy ufunc that operates on semantically-numerical data. See Also -------- http://docs.scipy.org/doc/numpy-1.10.0/user/basics.subclassing.html """ SUPPORTED_SCALAR_TYPES = (bytes, unicode, type(None)) SUPPORTED_NON_NONE_SCALAR_TYPES = (bytes, unicode) @preprocess( values=coerce(list, partial(np.asarray, dtype=object)), categories=coerce(np.ndarray, list), ) @expect_types( values=np.ndarray, missing_value=SUPPORTED_SCALAR_TYPES, categories=optional(list), ) @expect_kinds(values=("O", "S", "U")) def __new__(cls, values, missing_value, categories=None, sort=True): # Numpy's fixed-width string types aren't very efficient. Working with # object arrays is faster than bytes or unicode arrays in almost all # cases. if not is_object(values): values = values.astype(object) if categories is None: codes, categories, reverse_categories = factorize_strings( values.ravel(), missing_value=missing_value, sort=sort, ) else: codes, categories, reverse_categories = ( factorize_strings_known_categories( values.ravel(), categories=categories, missing_value=missing_value, sort=sort, ) ) categories.setflags(write=False) return cls.from_codes_and_metadata( codes=codes.reshape(values.shape), categories=categories, reverse_categories=reverse_categories, missing_value=missing_value, ) @classmethod def from_codes_and_metadata(cls, codes, categories, reverse_categories, missing_value): """ Rehydrate a LabelArray from the codes and metadata. Parameters ---------- codes : np.ndarray[integral] The codes for the label array. categories : np.ndarray[object] The unique string categories. reverse_categories : dict[str, int] The mapping from category to its code-index. missing_value : any The value used to represent missing data. """ ret = codes.view(type=cls, dtype=np.void) ret._categories = categories ret._reverse_categories = reverse_categories ret._missing_value = missing_value return ret @classmethod def from_categorical(cls, categorical, missing_value=None): """ Create a LabelArray from a pandas categorical. Parameters ---------- categorical : pd.Categorical The categorical object to convert. missing_value : bytes, unicode, or None, optional The missing value to use for this LabelArray. Returns ------- la : LabelArray The LabelArray representation of this categorical. """ return LabelArray( categorical, missing_value, categorical.categories, ) @property def categories(self): # This is a property because it should be immutable. return self._categories @property def reverse_categories(self): # This is a property because it should be immutable. return self._reverse_categories @property def missing_value(self): # This is a property because it should be immutable. return self._missing_value @property def missing_value_code(self): return self.reverse_categories[self.missing_value] def has_label(self, value): return value in self.reverse_categories def __array_finalize__(self, obj): """ Called by Numpy after array construction. There are three cases where this can happen: 1. Someone tries to directly construct a new array by doing:: >>> ndarray.__new__(LabelArray, ...) # doctest: +SKIP In this case, obj will be None. We treat this as an error case and fail. 2. 
Someone (most likely our own __new__) does:: >>> other_array.view(type=LabelArray) # doctest: +SKIP In this case, `self` will be the new LabelArray instance, and ``obj` will be the array on which ``view`` is being called. The caller of ``obj.view`` is responsible for setting category metadata on ``self`` after we exit. 3. Someone creates a new LabelArray by slicing an existing one. In this case, ``obj`` will be the original LabelArray. We're responsible for copying over the parent array's category metadata. """ if obj is None: raise TypeError( "Direct construction of LabelArrays is not supported." ) # See docstring for an explanation of when these will or will not be # set. self._categories = getattr(obj, 'categories', None) self._reverse_categories = getattr(obj, 'reverse_categories', None) self._missing_value = getattr(obj, 'missing_value', None) def as_int_array(self): """ Convert self into a regular ndarray of ints. This is an O(1) operation. It does not copy the underlying data. """ return self.view( type=ndarray, dtype=unsigned_int_dtype_with_size_in_bytes(self.itemsize), ) def as_string_array(self): """ Convert self back into an array of strings. This is an O(N) operation. """ return self.categories[self.as_int_array()] def as_categorical(self, name=None): """ Coerce self into a pandas categorical. This is only defined on 1D arrays, since that's all pandas supports. """ if len(self.shape) > 1: raise ValueError("Can't convert a 2D array to a categorical.") with ignore_pandas_nan_categorical_warning(): return pd.Categorical.from_codes( self.as_int_array(), # We need to make a copy because pandas >= 0.17 fails if this # buffer isn't writeable. self.categories.copy(), ordered=False, name=name, ) def as_categorical_frame(self, index, columns, name=None): """ Coerce self into a pandas DataFrame of Categoricals. """ if len(self.shape) != 2: raise ValueError( "Can't convert a non-2D LabelArray into a DataFrame." ) expected_shape = (len(index), len(columns)) if expected_shape != self.shape: raise ValueError( "Can't construct a DataFrame with provided indices:\n\n" "LabelArray shape is {actual}, but index and columns imply " "that shape should be {expected}.".format( actual=self.shape, expected=expected_shape, ) ) return pd.Series( index=pd.MultiIndex.from_product([index, columns]), data=self.ravel().as_categorical(name=name), ).unstack() def __setitem__(self, indexer, value): self_categories = self.categories if isinstance(value, LabelArray): value_categories = value.categories if compare_arrays(self_categories, value_categories): return super(LabelArray, self).__setitem__(indexer, value) else: raise CategoryMismatch(self_categories, value_categories) elif isinstance(value, self.SUPPORTED_SCALAR_TYPES): value_code = self.reverse_categories.get(value, -1) if value_code < 0: raise ValueError("%r is not in LabelArray categories." % value) self.as_int_array()[indexer] = value_code else: raise NotImplementedError( "Setting into a LabelArray with a value of " "type {type} is not yet supported.".format( type=type(value).__name__, ), ) def __setslice__(self, i, j, sequence): """ This method was deprecated in Python 2.0. It predates slice objects, but Python 2.7.11 still uses it if you implement it, which ndarray does. In newer Pythons, __setitem__ is always called, but we need to manuallly forward in py2. 
""" self.__setitem__(slice(i, j), sequence) def __getitem__(self, indexer): result = super(LabelArray, self).__getitem__(indexer) if result.ndim: # Result is still a LabelArray, so we can just return it. return result # Result is a scalar value, which will be an instance of np.void. # Map it back to one of our category entries. index = result.view( unsigned_int_dtype_with_size_in_bytes(self.itemsize), ) return self.categories[index] def is_missing(self): """ Like isnan, but checks for locations where we store missing values. """ return ( self.as_int_array() == self.reverse_categories[self.missing_value] ) def not_missing(self): """ Like ~isnan, but checks for locations where we store missing values. """ return ( self.as_int_array() != self.reverse_categories[self.missing_value] ) def _equality_check(op): """ Shared code for __eq__ and __ne__, parameterized on the actual comparison operator to use. """ def method(self, other): if isinstance(other, LabelArray): self_mv = self.missing_value other_mv = other.missing_value if self_mv != other_mv: raise MissingValueMismatch(self_mv, other_mv) self_categories = self.categories other_categories = other.categories if not compare_arrays(self_categories, other_categories): raise CategoryMismatch(self_categories, other_categories) return ( op(self.as_int_array(), other.as_int_array()) & self.not_missing() & other.not_missing() ) elif isinstance(other, ndarray): # Compare to ndarrays as though we were an array of strings. # This is fairly expensive, and should generally be avoided. return op(self.as_string_array(), other) & self.not_missing() elif isinstance(other, self.SUPPORTED_SCALAR_TYPES): i = self._reverse_categories.get(other, -1) return op(self.as_int_array(), i) & self.not_missing() return op(super(LabelArray, self), other) return method __eq__ = _equality_check(eq) __ne__ = _equality_check(ne) del _equality_check def view(self, dtype=_NotPassed, type=_NotPassed): if type is _NotPassed and dtype not in (_NotPassed, self.dtype): raise TypeError("Can't view LabelArray as another dtype.") # The text signature on ndarray.view makes it look like the default # values for dtype and type are `None`, but passing None explicitly has # different semantics than not passing an arg at all, so we reconstruct # the kwargs dict here to simulate the args not being passed at all. kwargs = {} if dtype is not _NotPassed: kwargs['dtype'] = dtype if type is not _NotPassed: kwargs['type'] = type return super(LabelArray, self).view(**kwargs) # In general, we support resizing, slicing, and reshaping methods, but not # numeric methods. SUPPORTED_NDARRAY_METHODS = frozenset([ 'base', 'compress', 'copy', 'data', 'diagonal', 'dtype', 'flat', 'flatten', 'item', 'itemset', 'itemsize', 'nbytes', 'ndim', 'ravel', 'repeat', 'reshape', 'resize', 'setflags', 'shape', 'size', 'squeeze', 'strides', 'swapaxes', 'take', 'trace', 'transpose', 'view' ]) PUBLIC_NDARRAY_METHODS = frozenset([ s for s in dir(ndarray) if not s.startswith('_') ]) # Generate failing wrappers for all unsupported methods. locals().update( { method: _make_unsupported_method(method) for method in PUBLIC_NDARRAY_METHODS - SUPPORTED_NDARRAY_METHODS } ) def __repr__(self): repr_lines = repr(self.as_string_array()).splitlines() repr_lines[0] = repr_lines[0].replace('array(', 'LabelArray(', 1) repr_lines[-1] = repr_lines[-1].rsplit(',', 1)[0] + ')' # The extra spaces here account for the difference in length between # 'array(' and 'LabelArray('. 
return '\n '.join(repr_lines) def empty_like(self, shape): """ Make an empty LabelArray with the same categories as ``self``, filled with ``self.missing_value``. """ return type(self).from_codes_and_metadata( codes=np.full( shape, self.reverse_categories[self.missing_value], dtype=unsigned_int_dtype_with_size_in_bytes(self.itemsize), ), categories=self.categories, reverse_categories=self.reverse_categories, missing_value=self.missing_value, ) def map_predicate(self, f): """ Map a function from str -> bool element-wise over ``self``. ``f`` will be applied exactly once to each non-missing unique value in ``self``. Missing values will always return False. """ # Functions passed to this are of type str -> bool. Don't ever call # them on None, which is the only non-str value we ever store in # categories. if self.missing_value is None: def f_to_use(x): return False if x is None else f(x) else: f_to_use = f # Call f on each unique value in our categories. results = np.vectorize(f_to_use, otypes=[bool_dtype])(self.categories) # missing_value should produce False no matter what results[self.reverse_categories[self.missing_value]] = False # unpack the results form each unique value into their corresponding # locations in our indices. return results[self.as_int_array()] def map(self, f): """ Map a function from str -> str element-wise over ``self``. ``f`` will be applied exactly once to each non-missing unique value in ``self``. Missing values will always map to ``self.missing_value``. """ # f() should only return None if None is our missing value. if self.missing_value is None: allowed_outtypes = self.SUPPORTED_SCALAR_TYPES else: allowed_outtypes = self.SUPPORTED_NON_NONE_SCALAR_TYPES def f_to_use(x, missing_value=self.missing_value, otypes=allowed_outtypes): # Don't call f on the missing value; those locations don't exist # semantically. We return _sortable_sentinel rather than None # because the np.unique call below sorts the categories array, # which raises an error on Python 3 because None and str aren't # comparable. if x == missing_value: return _sortable_sentinel ret = f(x) if not isinstance(ret, otypes): raise TypeError( "LabelArray.map expected function {f} to return a string" " or None, but got {type} instead.\n" "Value was {value}.".format( f=f.__name__, type=type(ret).__name__, value=ret, ) ) if ret == missing_value: return _sortable_sentinel return ret new_categories_with_duplicates = ( np.vectorize(f_to_use, otypes=[object])(self.categories) ) # If f() maps multiple inputs to the same output, then we can end up # with the same code duplicated multiple times. Compress the categories # by running them through np.unique, and then use the reverse lookup # table to compress codes as well. new_categories, bloated_inverse_index = np.unique( new_categories_with_duplicates, return_inverse=True ) if new_categories[0] is _sortable_sentinel: # f_to_use return _sortable_sentinel for locations that should be # missing values in our output. Since np.unique returns the uniques # in sorted order, and since _sortable_sentinel sorts before any # string, we only need to check the first array entry. new_categories[0] = self.missing_value # `reverse_index` will always be a 64 bit integer even if we can hold a # smaller array. 
reverse_index = bloated_inverse_index.astype( smallest_uint_that_can_hold(len(new_categories)) ) new_codes = np.take(reverse_index, self.as_int_array()) return self.from_codes_and_metadata( new_codes, new_categories, dict(zip(new_categories, range(len(new_categories)))), missing_value=self.missing_value, ) def startswith(self, prefix): """ Element-wise startswith. Parameters ---------- prefix : str Returns ------- matches : np.ndarray[bool] An array with the same shape as self indicating whether each element of self started with ``prefix``. """ return self.map_predicate(lambda elem: elem.startswith(prefix)) def endswith(self, suffix): """ Elementwise endswith. Parameters ---------- suffix : str Returns ------- matches : np.ndarray[bool] An array with the same shape as self indicating whether each element of self ended with ``suffix`` """ return self.map_predicate(lambda elem: elem.endswith(suffix)) def has_substring(self, substring): """ Elementwise contains. Parameters ---------- substring : str Returns ------- matches : np.ndarray[bool] An array with the same shape as self indicating whether each element of self ended with ``suffix``. """ return self.map_predicate(lambda elem: substring in elem) @preprocess(pattern=coerce(from_=(bytes, unicode), to=re.compile)) def matches(self, pattern): """ Elementwise regex match. Parameters ---------- pattern : str or compiled regex Returns ------- matches : np.ndarray[bool] An array with the same shape as self indicating whether each element of self was matched by ``pattern``. """ return self.map_predicate(compose(bool, pattern.match)) # These types all implement an O(N) __contains__, so pre-emptively # coerce to `set`. @preprocess(container=coerce((list, tuple, np.ndarray), set)) def element_of(self, container): """ Check if each element of self is an of ``container``. Parameters ---------- container : object An object implementing a __contains__ to call on each element of ``self``. Returns ------- is_contained : np.ndarray[bool] An array with the same shape as self indicating whether each element of self was an element of ``container``. """ return self.map_predicate(container.__contains__) @instance # This makes _sortable_sentinel a singleton instance. @total_ordering class _sortable_sentinel(object): """Dummy object that sorts before any other python object. """ def __eq__(self, other): return self is other def __lt__(self, other): return True
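# --- Illustrative usage sketch (not part of the original module) ----------
# A minimal, hedged example of the LabelArray defined above: it factorizes an
# object array of tickers, then exercises the overloaded comparisons and the
# predicate helpers. The sample values are made-up illustration data.
if __name__ == '__main__':
    tickers = np.array(['AAPL', 'MSFT', None, 'AAPL'], dtype=object)
    arr = LabelArray(tickers, missing_value=None)

    print(arr.categories)       # unique labels (including the missing value)
    print(arr == 'AAPL')        # elementwise equality against a single label
    print(arr.is_missing())     # True where missing_value is stored
    print(arr.startswith('A'))  # predicate applied once per unique category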
PypiClean
/pandokia-2.3.0.tar.gz/pandokia-2.3.0/stsci_regtest/configuration.py
import os import types import os.path xml_symbol = ("<", ">", "&") xml_name = ("&lt;", "&gt;", "&amp;") # CONFIGURATION: Contains functions to read and write configuration files. #============================================================================== # Version 1.1: H.Bushouse 2001/05/04: Initial configured version. Written # by B. Simon. # Version 1.2: H.Bushouse 2002/03/13: Added documentation. #============================================================================== #----------------------------------------------------------------- # Functional interface that calls configuration file classes #----------------------------------------------------------------- def regtest_read(file): """ Read a config file and pre-process it to be suitable for the regtest system. """ config = read(file) # ? if len(config) == 1: config = config[0] # Setup output file names for output in config["output"]: if output["file"].upper() == 'STDOUT': output["file"] = "STDOUT" fname = file + ".stdout" else: fname = output["file"] output["fname"] = fname return config def read(filename): # Read and parse config file reader = Config_reader(filename) return reader.data def write(filename, config): # Write configuration file writer = Config_writer(filename, config) class Config_reader: #----------------------------------------------------------------- # Create an object to parse a configuration file #----------------------------------------------------------------- def __init__(self, filename): # Open the file and initialize state fd = open(filename, "r") self.xcode = Transcoder(xml_name, xml_symbol) self.buffer = ''.join(fd.readlines()) self.length = len(self.buffer) self.pos = 0 fd.close() # Call recursive procedure to parse xml self.data = self.get_value() #----------------------------------------------------------------- # Parse a tag delimeted section of an xml file #----------------------------------------------------------------- def get_value(self): list = [] while self.pos < self.length: # Mark the start of the search for a tag start = self.pos # Get tag delimeters tag_start = self.buffer.find("<", self.pos) if tag_start == -1: return self.transmogrify(list) tag_end = self.buffer.find(">", tag_start) if tag_end == -1: tag_end = self.length self.pos = tag_end + 1 tag = self.buffer[tag_start + 1:tag_end] # Parse tag if tag[0] == "?": # Ignore starting tag continue elif tag[0] == "!": # Ignore declaration tags continue elif tag[0] == "/": # Return contents of list if ending tag found if not list: return self.xcode.convert(self.buffer[start:tag_start]) else: return self.transmogrify(list) elif tag[-1] == "/": # Treat singleton tag as empty pair list.append((tag[0:-1], "")) else: # Call recursively for starting tag list.append((tag, self.get_value())) return self.transmogrify(list) #----------------------------------------------------------------- # Convert a list of pairs into a dictionary or a list #----------------------------------------------------------------- def transmogrify(self, list): # Compute a count of the different names count = {} for pair in list: name = pair[0] count[name] = count.get(name, 0) + 1 if len(count) <= 1: # If all names are the same, convert to list output = [] for pair in list: value = pair[1] output.append(value) else: # If at least one differs, convert to dictionary output = {} for pair in list: (name, value) = pair if count[name] == 1: output[name] = value elif name in output: output[name].append(value) else: output[name] = [value] return output class Config_writer: 
#----------------------------------------------------------------- # Write a data structure to a configuration file #----------------------------------------------------------------- def __init__(self, filename, config): self.xcode = Transcoder(xml_symbol, xml_name) self.fd = open(filename, "w") self.level = -1 self.fd.write('<?xml version="1.0" standalone="yes"?>\n') self.put_value(config) self.fd.write("\n") self.fd.close() #----------------------------------------------------------------- # Write a dictionary #----------------------------------------------------------------- def put_dict(self, dict): spacer = " " * (2 * self.level) self.fd.write("\n%s" % (spacer,)) for name in list(dict.keys()): value = dict[name] self.fd.write("<%s>" % (name,)) self.put_value(value) self.fd.write("</%s>\n%s" % (name, spacer)) #----------------------------------------------------------------- # Write an array #----------------------------------------------------------------- def put_array(self, array): spacer = " " * (2 * self.level) self.fd.write("\n%s" % (spacer,)) for value in array: self.fd.write("<val>") self.put_value(value) self.fd.write("</val>\n%s" % (spacer,)) #----------------------------------------------------------------- # Write a value in a dictionary or array #----------------------------------------------------------------- def put_value(self, value): self.level = self.level + 1 if isinstance(value, dict): self.put_dict(value) elif isinstance(value, list): self.put_array(value) else: self.fd.write(self.xcode.convert(str(value))) self.level = self.level - 1 class Transcoder: #----------------------------------------------------------------- # A class to convert characters with special meaning for xml #----------------------------------------------------------------- def __init__(self, oldval, newval): if len(oldval) != len(newval): raise IndexError("In transcoder list lengths do not agree") self.oldval = oldval self.newval = newval def convert(self, s): for i in range(len(self.oldval)): if self.oldval[i].find(s) >= 0: s = s.replace(self.oldval[i], self.newval[i]) return s
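# --- Illustrative usage sketch (not part of the original module) ----------
# A hedged round-trip through the read()/write() helpers defined above. The
# file name and the configuration contents are illustration-only assumptions;
# values containing XML special characters pass through the Transcoder.
if __name__ == '__main__':
    demo_config = {
        'title': 'example regression test',
        'output': [{'file': 'STDOUT', 'reference': 'ref_stdout.txt'}],
    }
    write('demo_config.xml', demo_config)
    print(read('demo_config.xml'))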
PypiClean
/pytorch_to_tensorflow-1.0.3.tar.gz/pytorch_to_tensorflow-1.0.3/pytorch_to_tensorflow/pytorch_to_tensorflow.py
import torch
import tensorflow as tf
import numpy as np


def _keras_padding(torch_padding):
    # PyTorch stores padding as an int or a tuple of ints. Zero padding maps
    # to Keras 'valid'; anything else is approximated with 'same'.
    if isinstance(torch_padding, (tuple, list)):
        return 'valid' if all(p == 0 for p in torch_padding) else 'same'
    return 'valid' if torch_padding == 0 else 'same'


def _convert_layer(layer):
    # Map a single PyTorch layer to its closest Keras equivalent, copying
    # weights where applicable. Returns None for unsupported layer types.
    if isinstance(layer, torch.nn.Linear):
        # Keras Dense kernels are (in_features, out_features); PyTorch stores
        # (out_features, in_features), so transpose before copying.
        has_bias = layer.bias is not None
        weights = [np.transpose(layer.weight.detach().numpy())]
        if has_bias:
            weights.append(layer.bias.detach().numpy())
        return tf.keras.layers.Dense(layer.out_features,
                                     input_dim=layer.in_features,
                                     use_bias=has_bias,
                                     weights=weights)
    if isinstance(layer, torch.nn.Conv2d):
        # Keras Conv2D kernels are (kh, kw, in, out); PyTorch stores
        # (out, in, kh, kw), so transpose before copying.
        has_bias = layer.bias is not None
        weights = [np.transpose(layer.weight.detach().numpy(), (2, 3, 1, 0))]
        if has_bias:
            weights.append(layer.bias.detach().numpy())
        return tf.keras.layers.Conv2D(layer.out_channels,
                                      layer.kernel_size,
                                      strides=layer.stride,
                                      padding=_keras_padding(layer.padding),
                                      use_bias=has_bias,
                                      weights=weights)
    if isinstance(layer, torch.nn.ReLU):
        return tf.keras.layers.ReLU()
    if isinstance(layer, torch.nn.MaxPool2d):
        return tf.keras.layers.MaxPool2D(layer.kernel_size,
                                         strides=layer.stride,
                                         padding=_keras_padding(layer.padding))
    if isinstance(layer, torch.nn.BatchNorm2d):
        return tf.keras.layers.BatchNormalization()
    if isinstance(layer, torch.nn.AdaptiveAvgPool2d):
        # Only equivalent when the adaptive output size is 1x1.
        return tf.keras.layers.GlobalAveragePooling2D()
    return None


def convert_pytorch_to_tensorflow(pytorch_model: torch.nn.Module) -> tf.keras.Model:
    # Convert the PyTorch model to a TensorFlow model by walking its children
    # and translating each supported layer; nested Sequential containers are
    # converted recursively into nested tf.keras.Sequential models.
    tensorflow_model = tf.keras.Sequential()
    for layer in pytorch_model.children():
        if isinstance(layer, torch.nn.Sequential):
            tensorflow_model.add(convert_pytorch_to_tensorflow(layer))
            continue
        converted = _convert_layer(layer)
        if converted is None:
            raise NotImplementedError(
                'Unknown PyTorch layer: {}'.format(layer.__class__.__name__))
        tensorflow_model.add(converted)
    return tensorflow_model
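# --- Illustrative usage sketch (not part of the original module) ----------
# A hedged example of the converter above on a tiny, made-up PyTorch MLP.
# Exact numerical parity still depends on layer options (padding, layouts),
# so treat the comparison below as a sanity check rather than a guarantee.
if __name__ == '__main__':
    pt_model = torch.nn.Sequential(
        torch.nn.Linear(4, 8),
        torch.nn.ReLU(),
        torch.nn.Linear(8, 2),
    )
    tf_model = convert_pytorch_to_tensorflow(pt_model)

    x = np.random.rand(1, 4).astype('float32')
    print(tf_model.predict(x))
    print(pt_model(torch.from_numpy(x)).detach().numpy())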
PypiClean
/prometheus_client_gc-0.9.1.tar.gz/prometheus_client_gc-0.9.1/prometheus_client_gc/decorator.py
# Copyright (c) 2005-2016, Michele Simionato # All rights reserved. # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # Redistributions in bytecode form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. """ Decorator module, see http://pypi.python.org/pypi/decorator for the documentation. """ from __future__ import print_function import collections import inspect import itertools import operator import re import sys __version__ = '4.0.10' if sys.version_info >= (3,): from inspect import getfullargspec def get_init(cls): return cls.__init__ else: class getfullargspec(object): "A quick and dirty replacement for getfullargspec for Python 2.X" def __init__(self, f): self.args, self.varargs, self.varkw, self.defaults = \ inspect.getargspec(f) self.kwonlyargs = [] self.kwonlydefaults = None def __iter__(self): yield self.args yield self.varargs yield self.varkw yield self.defaults getargspec = inspect.getargspec def get_init(cls): return cls.__init__.__func__ # getargspec has been deprecated in Python 3.5 ArgSpec = collections.namedtuple( 'ArgSpec', 'args varargs varkw defaults') def getargspec(f): """A replacement for inspect.getargspec""" spec = getfullargspec(f) return ArgSpec(spec.args, spec.varargs, spec.varkw, spec.defaults) DEF = re.compile(r'\s*def\s*([_\w][_\w\d]*)\s*\(') # basic functionality class FunctionMaker(object): """ An object with the ability to create functions with a given signature. It has attributes name, doc, module, signature, defaults, dict and methods update and make. 
""" # Atomic get-and-increment provided by the GIL _compile_count = itertools.count() def __init__(self, func=None, name=None, signature=None, defaults=None, doc=None, module=None, funcdict=None): self.shortsignature = signature if func: # func can be a class or a callable, but not an instance method self.name = func.__name__ if self.name == '<lambda>': # small hack for lambda functions self.name = '_lambda_' self.doc = func.__doc__ self.module = func.__module__ if inspect.isfunction(func): argspec = getfullargspec(func) self.annotations = getattr(func, '__annotations__', {}) for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs', 'kwonlydefaults'): setattr(self, a, getattr(argspec, a)) for i, arg in enumerate(self.args): setattr(self, 'arg%d' % i, arg) if sys.version_info < (3,): # easy way self.shortsignature = self.signature = ( inspect.formatargspec( formatvalue=lambda val: "", *argspec)[1:-1]) else: # Python 3 way allargs = list(self.args) allshortargs = list(self.args) if self.varargs: allargs.append('*' + self.varargs) allshortargs.append('*' + self.varargs) elif self.kwonlyargs: allargs.append('*') # single star syntax for a in self.kwonlyargs: allargs.append('%s=None' % a) allshortargs.append('%s=%s' % (a, a)) if self.varkw: allargs.append('**' + self.varkw) allshortargs.append('**' + self.varkw) self.signature = ', '.join(allargs) self.shortsignature = ', '.join(allshortargs) self.dict = func.__dict__.copy() # func=None happens when decorating a caller if name: self.name = name if signature is not None: self.signature = signature if defaults: self.defaults = defaults if doc: self.doc = doc if module: self.module = module if funcdict: self.dict = funcdict # check existence required attributes assert hasattr(self, 'name') if not hasattr(self, 'signature'): raise TypeError('You are decorating a non function: %s' % func) def update(self, func, **kw): "Update the signature of func with the data in self" func.__name__ = self.name func.__doc__ = getattr(self, 'doc', None) func.__dict__ = getattr(self, 'dict', {}) func.__defaults__ = getattr(self, 'defaults', ()) func.__kwdefaults__ = getattr(self, 'kwonlydefaults', None) func.__annotations__ = getattr(self, 'annotations', None) try: frame = sys._getframe(3) except AttributeError: # for IronPython and similar implementations callermodule = '?' else: callermodule = frame.f_globals.get('__name__', '?') func.__module__ = getattr(self, 'module', callermodule) func.__dict__.update(kw) def make(self, src_templ, evaldict=None, addsource=False, **attrs): "Make a new function from a given template and update the signature" src = src_templ % vars(self) # expand name and signature evaldict = evaldict or {} mo = DEF.match(src) if mo is None: raise SyntaxError('not a valid function template\n%s' % src) name = mo.group(1) # extract the function name names = set([name] + [arg.strip(' *') for arg in self.shortsignature.split(',')]) for n in names: if n in ('_func_', '_call_'): raise NameError('%s is overridden in\n%s' % (n, src)) if not src.endswith('\n'): # add a newline for old Pythons src += '\n' # Ensure each generated function has a unique filename for profilers # (such as cProfile) that depend on the tuple of (<filename>, # <definition line>, <function name>) being unique. 
filename = '<decorator-gen-%d>' % (next(self._compile_count),) try: code = compile(src, filename, 'single') exec(code, evaldict) except: print('Error in generated code:', file=sys.stderr) print(src, file=sys.stderr) raise func = evaldict[name] if addsource: attrs['__source__'] = src self.update(func, **attrs) return func @classmethod def create(cls, obj, body, evaldict, defaults=None, doc=None, module=None, addsource=True, **attrs): """ Create a function from the strings name, signature and body. evaldict is the evaluation dictionary. If addsource is true an attribute __source__ is added to the result. The attributes attrs are added, if any. """ if isinstance(obj, str): # "name(signature)" name, rest = obj.strip().split('(', 1) signature = rest[:-1] # strip a right parens func = None else: # a function name = None signature = None func = obj self = cls(func, name, signature, defaults, doc, module) ibody = '\n'.join(' ' + line for line in body.splitlines()) return self.make('def %(name)s(%(signature)s):\n' + ibody, evaldict, addsource, **attrs) def decorate(func, caller): """ decorate(func, caller) decorates a function using a caller. """ evaldict = dict(_call_=caller, _func_=func) fun = FunctionMaker.create( func, "return _call_(_func_, %(shortsignature)s)", evaldict, __wrapped__=func) if hasattr(func, '__qualname__'): fun.__qualname__ = func.__qualname__ return fun def decorator(caller, _func=None): """decorator(caller) converts a caller function into a decorator""" if _func is not None: # return a decorated function # this is obsolete behavior; you should use decorate instead return decorate(_func, caller) # else return a decorator function if inspect.isclass(caller): name = caller.__name__.lower() doc = 'decorator(%s) converts functions/generators into ' \ 'factories of %s objects' % (caller.__name__, caller.__name__) elif inspect.isfunction(caller): if caller.__name__ == '<lambda>': name = '_lambda_' else: name = caller.__name__ doc = caller.__doc__ else: # assume caller is an object with a __call__ method name = caller.__class__.__name__.lower() doc = caller.__call__.__doc__ evaldict = dict(_call_=caller, _decorate_=decorate) return FunctionMaker.create( '%s(func)' % name, 'return _decorate_(func, _call_)', evaldict, doc=doc, module=caller.__module__, __wrapped__=caller) # ####################### contextmanager ####################### # try: # Python >= 3.2 from contextlib import _GeneratorContextManager except ImportError: # Python >= 2.5 from contextlib import GeneratorContextManager as _GeneratorContextManager class ContextManager(_GeneratorContextManager): def __call__(self, func): """Context manager decorator""" return FunctionMaker.create( func, "with _self_: return _func_(%(shortsignature)s)", dict(_self_=self, _func_=func), __wrapped__=func) init = getfullargspec(_GeneratorContextManager.__init__) n_args = len(init.args) if n_args == 2 and not init.varargs: # (self, genobj) Python 2.7 def __init__(self, g, *a, **k): return _GeneratorContextManager.__init__(self, g(*a, **k)) ContextManager.__init__ = __init__ elif n_args == 2 and init.varargs: # (self, gen, *a, **k) Python 3.4 pass elif n_args == 4: # (self, gen, args, kwds) Python 3.5 def __init__(self, g, *a, **k): return _GeneratorContextManager.__init__(self, g, a, k) ContextManager.__init__ = __init__ contextmanager = decorator(ContextManager) # ############################ dispatch_on ############################ # def append(a, vancestors): """ Append ``a`` to the list of the virtual ancestors, unless it is already 
included. """ add = True for j, va in enumerate(vancestors): if issubclass(va, a): add = False break if issubclass(a, va): vancestors[j] = a add = False if add: vancestors.append(a) # inspired from simplegeneric by P.J. Eby and functools.singledispatch def dispatch_on(*dispatch_args): """ Factory of decorators turning a function into a generic function dispatching on the given arguments. """ assert dispatch_args, 'No dispatch args passed' dispatch_str = '(%s,)' % ', '.join(dispatch_args) def check(arguments, wrong=operator.ne, msg=''): """Make sure one passes the expected number of arguments""" if wrong(len(arguments), len(dispatch_args)): raise TypeError('Expected %d arguments, got %d%s' % (len(dispatch_args), len(arguments), msg)) def gen_func_dec(func): """Decorator turning a function into a generic function""" # first check the dispatch arguments argset = set(getfullargspec(func).args) if not set(dispatch_args) <= argset: raise NameError('Unknown dispatch arguments %s' % dispatch_str) typemap = {} def vancestors(*types): """ Get a list of sets of virtual ancestors for the given types """ check(types) ras = [[] for _ in range(len(dispatch_args))] for types_ in typemap: for t, type_, ra in zip(types, types_, ras): if issubclass(t, type_) and type_ not in t.__mro__: append(type_, ra) return [set(ra) for ra in ras] def ancestors(*types): """ Get a list of virtual MROs, one for each type """ check(types) lists = [] for t, vas in zip(types, vancestors(*types)): n_vas = len(vas) if n_vas > 1: raise RuntimeError( 'Ambiguous dispatch for %s: %s' % (t, vas)) elif n_vas == 1: va, = vas mro = type('t', (t, va), {}).__mro__[1:] else: mro = t.__mro__ lists.append(mro[:-1]) # discard t and object return lists def register(*types): """ Decorator to register an implementation for the given types """ check(types) def dec(f): check(getfullargspec(f).args, operator.lt, ' in ' + f.__name__) typemap[types] = f return f return dec def dispatch_info(*types): """ An utility to introspect the dispatch algorithm """ check(types) lst = [] for anc in itertools.product(*ancestors(*types)): lst.append(tuple(a.__name__ for a in anc)) return lst def _dispatch(dispatch_args, *args, **kw): types = tuple(type(arg) for arg in dispatch_args) try: # fast path f = typemap[types] except KeyError: pass else: return f(*args, **kw) combinations = itertools.product(*ancestors(*types)) next(combinations) # the first one has been already tried for types_ in combinations: f = typemap.get(types_) if f is not None: return f(*args, **kw) # else call the default implementation return func(*args, **kw) return FunctionMaker.create( func, 'return _f_(%s, %%(shortsignature)s)' % dispatch_str, dict(_f_=_dispatch), register=register, default=func, typemap=typemap, vancestors=vancestors, ancestors=ancestors, dispatch_info=dispatch_info, __wrapped__=func) gen_func_dec.__name__ = 'dispatch_on' + dispatch_str return gen_func_dec
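# --- Illustrative usage sketch (not part of the original module) ----------
# A minimal, hedged example of the decorator() factory defined above: the
# caller below is illustration-only, and the key point is that the wrapped
# function keeps its original signature (unlike a plain *args wrapper).
if __name__ == '__main__':
    @decorator
    def trace(f, *args, **kw):
        print('calling %s with args %s, %s' % (f.__name__, args, kw))
        return f(*args, **kw)

    @trace
    def add(x, y=1):
        return x + y

    print(add(2, y=3))               # prints the trace line, then 5
    print(getfullargspec(add).args)  # ['x', 'y'] -- signature preserved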
PypiClean
/mxnet_cu100-1.9.0-py3-none-manylinux2014_x86_64.whl/mxnet/symbol/numpy/random.py
from ...context import current_context from . import _internal as _npi __all__ = ['randint', 'uniform', 'normal', 'multivariate_normal', 'logistic', 'gumbel', 'rayleigh', 'rand', 'shuffle', 'gamma', 'beta', 'chisquare', 'exponential', 'lognormal', 'weibull', 'pareto', 'power'] def randint(low, high=None, size=None, dtype=None, ctx=None, out=None): r"""Return random integers from `low` (inclusive) to `high` (exclusive). Return random integers from the "discrete uniform" distribution of the specified dtype in the "half-open" interval [`low`, `high`). If `high` is None (the default), then results are from [0, `low`). Parameters ---------- low : int Lowest (signed) integer to be drawn from the distribution (unless ``high=None``, in which case this parameter is one above the *highest* such integer). high : int, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if ``high=None``). size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. Default is None, in which case a single value is returned. dtype : dtype, optional Desired dtype of the result. All dtypes are determined by their name, i.e., 'int64', 'int', etc, so byteorder is not available and a specific precision may have different C types depending on the platform. The default value is 'np.int'. ctx : Context, optional Device context of output. Default is current context. out : _Symbol, optional The output symbol (default is `None`). Returns ------- out : _Symbol `size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided. Examples -------- >>> np.random.randint(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) >>> np.random.randint(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) Generate a 2 x 4 array of ints between 0 and 4, inclusive: >>> np.random.randint(5, size=(2, 4)) array([[4, 0, 2, 1], [3, 2, 2, 0]]) """ if dtype is None: dtype = 'int' if ctx is None: ctx = current_context() if size is None: size = () if high is None: high = low low = 0 return _npi.random_randint(low, high, shape=size, dtype=dtype, ctx=ctx, out=out) def rand(*size, **kwargs): r"""Random values in a given shape. Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1). Parameters ---------- d0, d1, ..., dn : int, optional The dimensions of the returned array, should be all positive. If no argument is given a single Python float is returned. Returns ------- out : _Symbol Random values. Examples -------- >>> np.random.rand(3,2) array([[ 0.14022471, 0.96360618], #random [ 0.37601032, 0.25528411], #random [ 0.49313049, 0.94909878]]) #random """ output_shape = () for s in size: output_shape += (s,) return uniform(0, 1, size=output_shape, **kwargs) def uniform(low=0.0, high=1.0, size=None, dtype=None, ctx=None, out=None): r"""Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval ``[low, high)`` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by `uniform`. Parameters ---------- low : float, _Symbol, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. high : float, _Symbol, optional Upper boundary of the output interval. All values generated will be less than high. The default value is 1.0. 
size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a scalar tensor containing a single value is returned if ``low`` and ``high`` are both scalars. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32' ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol Drawn samples from the parameterized uniform distribution. """ from ._symbol import _Symbol as np_symbol input_type = (isinstance(low, np_symbol), isinstance(high, np_symbol)) if dtype is None: dtype = 'float32' if ctx is None: ctx = current_context() if out is not None: size = out.shape if size == (): size = None if input_type == (True, True): return _npi.uniform(low, high, low=None, high=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (False, True): return _npi.uniform(high, low=low, high=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (True, False): return _npi.uniform(low, low=None, high=high, size=size, ctx=ctx, dtype=dtype, out=out) else: return _npi.uniform(low=low, high=high, size=size, ctx=ctx, dtype=dtype, out=out) def normal(loc=0.0, scale=1.0, size=None, dtype=None, ctx=None, out=None): r"""Draw random samples from a normal (Gaussian) distribution. Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale* (standard deviation). Parameters ---------- loc : float, optional Mean (centre) of the distribution. scale : float, optional Standard deviation (spread or "width") of the distribution. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a scalar tensor containing a single value is returned if loc and scale are both scalars. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32'. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol (symbol representing `mxnet.numpy.ndarray` in computational graphs) Drawn samples from the parameterized normal distribution. """ from ._symbol import _Symbol as np_symbol input_type = (isinstance(loc, np_symbol), isinstance(scale, np_symbol)) if dtype is None: dtype = 'float32' if ctx is None: ctx = current_context() if size == (): size = None if input_type == (True, True): return _npi.normal(loc, scale, loc=None, scale=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (False, True): return _npi.normal(scale, loc=loc, scale=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (True, False): return _npi.normal(loc, loc=None, scale=scale, size=size, ctx=ctx, dtype=dtype, out=out) else: return _npi.normal(loc=loc, scale=scale, size=size, ctx=ctx, dtype=dtype, out=out) def lognormal(mean=0.0, sigma=1.0, size=None, dtype=None, ctx=None, out=None): r"""Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from. Parameters ---------- mean : float, optional Mean value of the underlying normal distribution. Default is 0. sigma : float, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. 
size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``mean`` and ``sigma`` are both scalars. Otherwise, ``np.broadcast(mean, sigma).size`` samples are drawn. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32' ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol (symbol representing `mxnet.numpy.ndarray` in computational graphs) Drawn samples from the parameterized lognormal distribution. """ from . import _symbol as _mx_np_symbol return _mx_np_symbol.exp(normal(loc=mean, scale=sigma, size=size, dtype=dtype, ctx=ctx, out=out)) def logistic(loc=0.0, scale=1.0, size=None, ctx=None, out=None): r"""Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Parameters ---------- loc : float, optional Parameter of the distribution. Default is 0. scale : float, optional Parameter of the distribution. Must be non-negative. Default is 1. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``loc`` and ``scale`` are both scalars. Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol (symbol representing `mxnet.numpy.ndarray` in computational graphs) Drawn samples from the parameterized logistic distribution. """ from ._symbol import _Symbol as np_symbol input_type = (isinstance(loc, np_symbol), isinstance(scale, np_symbol)) if ctx is None: ctx = current_context() if size == (): size = None if input_type == (True, True): return _npi.logistic(loc, scale, loc=None, scale=None, size=size, ctx=ctx, out=out) elif input_type == (False, True): return _npi.logistic(scale, loc=loc, scale=None, size=size, ctx=ctx, out=out) elif input_type == (True, False): return _npi.logistic(loc, loc=None, scale=scale, size=size, ctx=ctx, out=out) else: return _npi.logistic(loc=loc, scale=scale, size=size, ctx=ctx, out=out) def gumbel(loc=0.0, scale=1.0, size=None, ctx=None, out=None): r"""Draw samples from a Gumbel distribution. Parameters ---------- loc : float or array_like of floats, optional The location of the mode of the distribution. Default is 0. scale : float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``loc`` and ``scale`` are both scalars. Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol (symbol representing `mxnet.numpy.ndarray` in computational graphs) Drawn samples from the parameterized gumbel distribution. 
""" from ._symbol import _Symbol as np_symbol input_type = (isinstance(loc, np_symbol), isinstance(scale, np_symbol)) if ctx is None: ctx = current_context() if size == (): size = None if input_type == (True, True): return _npi.gumbel(loc, scale, loc=None, scale=None, size=size, ctx=ctx, out=out) elif input_type == (False, True): return _npi.gumbel(scale, loc=loc, scale=None, size=size, ctx=ctx, out=out) elif input_type == (True, False): return _npi.gumbel(loc, loc=None, scale=scale, size=size, ctx=ctx, out=out) else: return _npi.gumbel(loc=loc, scale=scale, size=size, ctx=ctx, out=out) def choice(a, size=None, replace=True, p=None, ctx=None, out=None): r"""Generates a random sample from a given 1-D array Parameters ----------- a : 1-D array-like or int If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if a were np.arange(a) size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. Default is None, in which case a single value is returned. replace : boolean, optional Whether the sample is with or without replacement p : 1-D array-like, optional The probabilities associated with each entry in a. If not given the sample assumes a uniform distribution over all entries in a. ctx : Context, optional Device context of output. Default is current context. Returns -------- samples : _Symbol The generated random samples Examples --------- Generate a uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3) array([0, 3, 4]) >>> #This is equivalent to np.random.randint(0,5,3) Generate a non-uniform random sample from np.arange(5) of size 3: >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) Generate a uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False) array([3,1,0]) >>> #This is equivalent to np.random.permutation(np.arange(5))[:3] Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) """ from ._symbol import _Symbol as np_symbol if ctx is None: ctx = current_context() if size == (): size = None if isinstance(a, np_symbol): ctx = None if p is None: indices = _npi.choice(a, a=None, size=size, replace=replace, ctx=ctx, weighted=False) return _npi.take(a, indices) else: indices = _npi.choice(a, p, a=None, size=size, replace=replace, ctx=ctx, weighted=True) return _npi.take(a, indices) else: if p is None: return _npi.choice(a=a, size=size, replace=replace, ctx=ctx, weighted=False, out=out) else: return _npi.choice(p, a=a, size=size, replace=replace, ctx=ctx, weighted=True, out=out) def gamma(shape, scale=1.0, size=None, dtype=None, ctx=None, out=None): """Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, `shape` (sometimes designated "k") and `scale` (sometimes designated "theta"), where both parameters are > 0. Parameters ---------- shape : float or array_like of floats The shape of the gamma distribution. Should be greater than zero. scale : float or array_like of floats, optional The scale of the gamma distribution. Should be greater than zero. Default is equal to 1. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``shape`` and ``scale`` are both scalars. 
Otherwise, ``np.broadcast(shape, scale).size`` samples are drawn. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32'. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol Drawn samples from the parameterized gamma distribution. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. """ from ._symbol import _Symbol as np_symbol input_type = (isinstance(shape, np_symbol), isinstance(scale, np_symbol)) if dtype is None: dtype = 'float32' if ctx is None: ctx = current_context() if out is not None: size = out.shape if size == (): size = None if input_type == (True, True): return _npi.gamma(shape, scale, shape=None, scale=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (False, True): return _npi.gamma(scale, shape=shape, scale=None, size=size, ctx=ctx, dtype=dtype, out=out) elif input_type == (True, False): return _npi.gamma(shape, shape=None, scale=scale, size=size, ctx=ctx, dtype=dtype, out=out) else: return _npi.gamma(shape=shape, scale=scale, size=size, ctx=ctx, dtype=dtype, out=out) raise ValueError("Distribution parameters must be either _Symbol or numbers") def rayleigh(scale=0.0, size=None, ctx=None, out=None): r"""Draw samples from a Rayleigh distribution. The :math:`\chi` and Weibull distributions are generalizations of the Rayleigh. Parameters ---------- scale : float or _Symbol Scale, also equals the mode. Must be non-negative. Default is 1. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``scale`` is a scalar. Otherwise, ``np.array(scale).size`` samples are drawn. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol Drawn samples from the parameterized Rayleigh distribution. """ from ..numpy import _Symbol as np_symbol tensor_type_name = np_symbol if ctx is None: ctx = current_context() if size == (): size = None is_tensor = isinstance(scale, tensor_type_name) if is_tensor: return _npi.rayleigh(scale, scale=None, size=size, ctx=ctx, out=out) else: return _npi.rayleigh(scale=scale, size=size, ctx=ctx, out=out) def beta(a, b, size=None, dtype=None, ctx=None): r"""Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function .. math:: f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1}, where the normalisation, B, is the beta function, .. math:: B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt. It is often seen in Bayesian inference and order statistics. Parameters ---------- a : float or _Symbol of floats Alpha, positive (>0). b : float or _Symbol of floats Beta, positive (>0). size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``a`` and ``b`` are both scalars. Otherwise, ``np.broadcast(a, b).size`` samples are drawn. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32'. 
Dtype 'float32' or 'float64' is strongly recommended, since lower precision might lead to out of range issue. ctx : Context, optional Device context of output. Default is current context. Notes ------- To use this operator with scalars as input, please run ``npx.set_np()`` first. Returns ------- out : _Symbol Drawn samples from the parameterized beta distribution. """ if dtype is None: dtype = 'float32' if ctx is None: ctx = current_context() if size == (): size = None # use fp64 to prevent precision loss X = gamma(a, 1, size=size, dtype='float64', ctx=ctx) Y = gamma(b, 1, size=size, dtype='float64', ctx=ctx) out = X/(X + Y) return out.astype(dtype) def chisquare(df, size=None, dtype=None, ctx=None): r""" chisquare(df, size=None, dtype=None, ctx=None) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Parameters ---------- df : float or _Symbol of floats Number of degrees of freedom, must be > 0. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``df`` is a scalar. Otherwise, ``np.array(df).size`` samples are drawn. dtype : {'float16', 'float32', 'float64'}, optional Data type of output samples. Default is 'float32'. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol Drawn samples from the parameterized chi-square distribution. Raises ------ ValueError When `df` <= 0 or when an inappropriate `size` is given. Notes ----- The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: .. math:: Q = \sum_{i=0}^{\mathtt{df}} X^2_i is chi-square distributed, denoted .. math:: Q \sim \chi^2_k. The probability density function of the chi-squared distribution is .. math:: p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2}, where :math:`\Gamma` is the gamma function, .. math:: \Gamma(x) = \int_0^{-\infty} t^{x - 1} e^{-t} dt. References ---------- .. [1] NIST "Engineering Statistics Handbook" https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm """ if dtype is None: dtype = 'float32' if ctx is None: ctx = current_context() if size == (): size = None return gamma(df/2, 1/2, size=size, dtype=dtype, ctx=ctx) def exponential(scale=1.0, size=None, ctx=None, out=None): r"""Draw samples from an exponential distribution. Parameters ---------- scale : float or array_like of floats The scale parameter, :math:`\beta = 1/\lambda`. Must be non-negative. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``scale`` is a scalar. Otherwise, ``np.array(scale).size`` samples are drawn. ctx : Context, optional Device context of output. Default is current context. Returns ------- out : _Symbol (symbol representing `mxnet.numpy.ndarray` in computational graphs) Drawn samples from the parameterized exponential distribution. 
""" from ..numpy import _Symbol as np_symbol tensor_type_name = np_symbol if ctx is None: ctx = current_context() if size == (): size = None is_tensor = isinstance(scale, tensor_type_name) if is_tensor: return _npi.exponential(scale, scale=None, size=size, ctx=ctx, out=out) else: return _npi.exponential(scale=scale, size=size, ctx=ctx, out=out) def weibull(a, size=None, ctx=None, out=None): r"""Draw samples from a 1-parameter Weibull distribution with given parameter a via inversion. Parameters ---------- a : float or array_like of floats Shape of the distribution. Must be non-negative. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``a`` is a scalar. Otherwise, ``np.array(a).size`` samples are drawn. Returns ------- out : _Symbol Drawn samples from the 1-parameter Weibull distribution. Examples -------- >>> np.random.weibull(a=5) array(0.9553641) >>> np.random.weibull(a=5, size=[2,3]) array([[1.0466299 , 1.1320982 , 0.98415005], [1.1430776 , 0.9532727 , 1.1344457 ]]) >>> np.random.weibull(a=np.array([2,3]) array([0.98843634, 1.0125613 ]) The Weibull distribution is one of a class of Generalized Extreme Value (GEV) distributions. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is f(x) = \frac{a}{\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a}, where a is the shape and \lambda the scale. The generated 1-parameter Weibull sample has the scale parameter \lambda = 1. The Weibull distribution is commonly used in reliability engineering to model time to failure, in modeling particle sizes, in information retrieval to model dwell time on pages, in quantitative finance to model risk etc. """ from ..numpy import _Symbol as np_symbol tensor_type_name = np_symbol if ctx is None: ctx = current_context() if size == (): size = None is_tensor = isinstance(a, tensor_type_name) if is_tensor: return _npi.weibull(a, a=None, size=size, ctx=ctx, out=out) else: return _npi.weibull(a=a, size=size, ctx=ctx, out=out) def pareto(a, size=None, ctx=None, out=None): r"""Draw samples from a Pareto II or Lomax distribution with specified shape a. Parameters ---------- a : float or array_like of floats Shape of the distribution. Must be > 0. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``a`` is a scalar. Otherwise, ``np.array(a).size`` samples are drawn. Returns ------- out : _Symbol Drawn samples from the Pareto distribution. Examples -------- >>> np.random.pareto(a=5) array(0.12749612) >>> mx.numpy.random.pareto(a=5, size=[2,3]) array([[0.06933999, 0.0344373 , 0.10654891], [0.0311172 , 0.12911797, 0.03370714]]) >>> np.random.pareto(a=np.array([2,3]) array([0.26636696, 0.15685666]) The probability density for the Pareto distribution is f(x) = \frac{am^a}{x^{a+1}} where a is the shape and m the scale. Here m is assumed 1. The Pareto distribution is a power law distribution. Pareto created it to describe the wealth in the economy. 
""" from ..numpy import _Symbol as np_symbol tensor_type_name = np_symbol if ctx is None: ctx = current_context() if size == (): size = None is_tensor = isinstance(a, tensor_type_name) if is_tensor: return _npi.pareto(a, a=None, size=size, ctx=ctx, out=out) else: return _npi.pareto(a=a, size=size, ctx=ctx, out=out) def power(a, size=None): r"""Draw samples in [0, 1] from a power distribution with given parameter a. Parameters ---------- a : float or array_like of floats Shape of the distribution. Must be > 0. size : int or tuple of ints, optional Output shape. If the given shape is, e.g., ``(m, n, k)``, then ``m * n * k`` samples are drawn. If size is ``None`` (default), a single value is returned if ``a`` is a scalar. Otherwise, ``np.array(a).size`` samples are drawn. Returns ------- out : _Symbol Drawn samples from the power distribution. Examples -------- >>> np.random.power(a=5) array(0.8602478) >>> np.random.power(a=5, size=[2,3]) array([[0.988391 , 0.5153122 , 0.9383134 ], [0.9078098 , 0.87819266, 0.730635]]) >>> np.random.power(a=np.array([2,3]) array([0.7499419 , 0.88894516]) The probability density function is f(x; a) = ax^{a-1}, 0 \le x \le 1, a>0. The power distribution is just the inverse of the Pareto distribution and a special case of the Beta distribution. """ from ..numpy import _Symbol as np_symbol tensor_type_name = np_symbol if size == (): size = None is_tensor = isinstance(a, tensor_type_name) if is_tensor: return _npi.powerd(a, a=None, size=size) else: return _npi.powerd(a=a, size=size) def multivariate_normal(mean, cov, size=None, check_valid=None, tol=None): """ multivariate_normal(mean, cov, size=None, check_valid=None, tol=None) Draw random samples from a multivariate normal distribution. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or "center") and variance (standard deviation, or "width," squared) of the one-dimensional normal distribution. This operator is a little different from the one in official NumPy. The official NumPy operator only accepts 1-D ndarray as mean and 2-D ndarray as cov, whereas the operator in DeepNumPy supports batch operation and auto-broadcasting. Both `mean` and `cov` may have any number of leading dimensions, which correspond to a batch shape. They are not necessarily assumed to have the same batch shape, just ones which can be broadcasted. Parameters ---------- mean : K-D _Symbol, of shape (..., N) Mean of the N-dimensional distribution. cov : (K+1)-D _Symbol, of shape (..., N, N) Covariance matrix of the distribution. The last two dimensions must be symmetric and positive-semidefinite for proper sampling. size : int or tuple of ints, optional Given a shape of, for example, ``(m,n,k)``, ``m*n*k`` identically distributed batchs of samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. If no shape is specified, a batch of (`N`-D) sample is returned. check_valid : { 'warn', 'raise', 'ignore' }, optional Behavior when the covariance matrix is not positive semidefinite. (Not supported) tol : float, optional Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check. (Not supported) Returns ------- out : _Symbol The input shape of `mean` and `cov` should satisfy the requirements of broadcasting. 
def multivariate_normal(mean, cov, size=None, check_valid=None, tol=None):
    """
    multivariate_normal(mean, cov, size=None, check_valid=None, tol=None)

    Draw random samples from a multivariate normal distribution.

    The multivariate normal, multinormal or Gaussian distribution is a
    generalization of the one-dimensional normal distribution to higher
    dimensions. Such a distribution is specified by its mean and covariance
    matrix. These parameters are analogous to the mean (average or "center")
    and variance (standard deviation, or "width," squared) of the
    one-dimensional normal distribution.

    This operator is a little different from the one in official NumPy.
    The official NumPy operator only accepts a 1-D ndarray as mean and a 2-D
    ndarray as cov, whereas the operator in DeepNumPy supports batch
    operation and auto-broadcasting.

    Both `mean` and `cov` may have any number of leading dimensions, which
    correspond to a batch shape. The batch shapes do not need to be
    identical, only broadcastable against each other.

    Parameters
    ----------
    mean : K-D _Symbol, of shape (..., N)
        Mean of the N-dimensional distribution.
    cov : (K+1)-D _Symbol, of shape (..., N, N)
        Covariance matrix of the distribution. The last two dimensions must
        be symmetric and positive-semidefinite for proper sampling.
    size : int or tuple of ints, optional
        Given a shape of, for example, ``(m, n, k)``, ``m * n * k``
        identically distributed batches of samples are generated, and packed
        in an `m`-by-`n`-by-`k` arrangement. If no shape is specified, a
        single batch of (`N`-D) samples is returned.
    check_valid : {'warn', 'raise', 'ignore'}, optional
        Behavior when the covariance matrix is not positive semidefinite.
        (Not supported)
    tol : float, optional
        Tolerance when checking the singular values in covariance matrix.
        cov is cast to double before the check.
        (Not supported)

    Returns
    -------
    out : _Symbol
        Drawn samples from the multivariate normal distribution. The shapes
        of `mean` and `cov` must satisfy the requirements of broadcasting.
        If the parameter `size` is not provided, the output shape is
        ``np.broadcast(mean.shape, cov.shape[:-1])``. Otherwise, the output
        shape is ``size + np.broadcast(mean.shape, cov.shape[:-1])``.

    Examples
    --------
    >>> mean = np.array([1, 2])
    >>> cov = np.array([[1, 0], [0, 1]])
    >>> x = np.random.multivariate_normal(mean, cov, (3, 3))
    >>> x.shape
    (3, 3, 2)

    The following is probably true, given that each component has unit
    standard deviation:

    >>> list((x[0, 0, :] - mean) < 0.6)
    [True, True] # random

    Auto-broadcasting is performed when the batch shapes of `mean` and `cov`
    are different but compatible:

    >>> mean = np.zeros((3,2)) # shape (3, 2)
    >>> cov = np.array([[1, 0], [0, 100]]) # shape (2, 2)
    >>> x = np.random.multivariate_normal(mean, cov)
    >>> x
    array([[-1.6115597 , -8.726251  ],
           [ 2.2425299 ,  2.8104177 ],
           [ 0.36229908, -8.386591  ]])
    """
    if check_valid is not None:
        raise NotImplementedError('Parameter `check_valid` is not supported')
    if tol is not None:
        raise NotImplementedError('Parameter `tol` is not supported')
    return _npi.mvn_fallback(mean, cov, size=size)


def shuffle(x):
    """
    Modify a sequence in-place by shuffling its contents.

    This function only shuffles the array along the first axis of a
    multi-dimensional array. The order of sub-arrays is changed but their
    contents remain the same.

    Parameters
    ----------
    x : _Symbol
        The array or list to be shuffled.

    Returns
    -------
    None

    Examples
    --------
    >>> arr = np.arange(10)
    >>> np.random.shuffle(arr)
    >>> arr
    array([5., 1., 0., 6., 7., 3., 9., 8., 4., 2.]) # random

    Multi-dimensional arrays are only shuffled along the first axis:

    >>> arr = np.arange(9).reshape((3, 3))
    >>> np.random.shuffle(arr)
    >>> arr
    array([[6., 7., 8.], # random
           [3., 4., 5.],
           [0., 1., 2.]])
    """
    _npi.shuffle(x, out=x)
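# Illustrative sketch (not part of the module's API; the helper name below is
# hypothetical): `shuffle` above permutes an array in place along its first
# axis only. A plain-NumPy reference using the Fisher-Yates algorithm is
# given below for a NumPy ndarray input; the backend op `_npi.shuffle` is the
# actual implementation and may use a different method.
def _shuffle_first_axis_reference(arr):
    import numpy as onp  # plain NumPy, used only for this reference sketch
    for i in range(arr.shape[0] - 1, 0, -1):
        j = int(onp.random.randint(0, i + 1))  # index in [0, i] inclusive
        # Fancy indexing on the right copies the two sub-arrays first, so the
        # assignment swaps rows i and j without clobbering either of them.
        arr[[i, j]] = arr[[j, i]]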