/baseapp-cloudflare-stream-field-0.5.tar.gz/baseapp-cloudflare-stream-field-0.5/baseapp_cloudflare_stream_field/tasks.py
from celery import shared_task

from baseapp_cloudflare_stream_field.stream import StreamClient

stream_client = StreamClient()


@shared_task
def refresh_from_cloudflare(content_type_pk, object_pk, attname, retries=1):
    from django.contrib.contenttypes.models import ContentType

    content_type = ContentType.objects.get(pk=content_type_pk)
    obj = content_type.get_object_for_this_type(pk=object_pk)
    cloudflare_video = getattr(obj, attname)

    if cloudflare_video["status"]["state"] != "ready":
        new_value = stream_client.get_video_data(cloudflare_video["uid"])
        if new_value["status"]["state"] == "ready":
            setattr(obj, attname, new_value)
            obj.save(update_fields=[attname])
        elif retries < 1000:
            # Still processing on Cloudflare's side: retry with a linearly
            # growing delay (20s, 40s, 60s, ...).
            refresh_from_cloudflare.apply_async(
                kwargs={
                    "content_type_pk": content_type_pk,
                    "object_pk": object_pk,
                    "attname": attname,
                    "retries": retries + 1,
                },
                countdown=20 * retries,
            )


@shared_task
def generate_download_url(content_type_pk, object_pk, attname, retries=1):
    from django.contrib.contenttypes.models import ContentType

    content_type = ContentType.objects.get(pk=content_type_pk)
    obj = content_type.get_object_for_this_type(pk=object_pk)
    cloudflare_video = getattr(obj, attname)

    if cloudflare_video["status"]["state"] != "ready" and retries < 1000:
        # Video not ready yet: re-queue this task and bail out.
        generate_download_url.apply_async(
            kwargs={
                "content_type_pk": content_type_pk,
                "object_pk": object_pk,
                "attname": attname,
                "retries": retries + 1,
            },
            countdown=20 * retries,
        )
        return None

    if (
        cloudflare_video["status"]["state"] == "ready"
        and "download_url" not in cloudflare_video["meta"]
    ):
        response = stream_client.download_video(cloudflare_video["uid"])
        download_url = response["result"]["default"]["url"]
        cloudflare_video["meta"]["download_url"] = download_url
        stream_client.update_video_data(cloudflare_video["uid"], cloudflare_video["meta"])
        setattr(obj, attname, cloudflare_video)
        obj.save(update_fields=[attname])
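# Usage sketch (illustrative, not part of the original module): both tasks key
# off a ContentType pk, an object pk, and the attribute name of the stream
# field, so a caller only needs those three values. The model `Post` and its
# `video` field below are hypothetical.
#
# from django.contrib.contenttypes.models import ContentType
# from myapp.models import Post
#
# post = Post.objects.get(pk=1)
# content_type = ContentType.objects.get_for_model(Post)
# refresh_from_cloudflare.delay(
#     content_type_pk=content_type.pk,
#     object_pk=post.pk,
#     attname="video",
# )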
/figure_second-0.2.0.tar.gz/figure_second-0.2.0/julia/figure_second/docs/src/index.md
# Welcome
`figure_second` is a layout-first approach to plotting based on the ideas of the python
library [figurefirst](https://flyranch.github.io/figurefirst/). The general workflow of the library
is to define a layout of graphs in inkscape, label each object with an XML `id`, and then plot
*into* these objects using common julia plotting libraries (`Makie.jl`, `Plots.jl`, etc).
```@contents
Depth = 3
```
## Installing
You can install the package from git (for now):
```
using Pkg
Pkg.add(url="https://github.com/Fluid-Dynamics-Group/figure_second", subdir="julia/figure_second")
```
Since the julia library currently wraps the python library, you must also have the python package:
```
pip install figure-second
```
and update the package as you would any other:
```
using Pkg
Pkg.update("figure_second")
```
and also update the python package:
```
pip install -U figure-second
```
Note that the `pip` package builds from a high-performance rust library. Therefore, you will need
a recent rust compiler, which can be installed [here](https://www.rust-lang.org/tools/install).
# Example
First, open inkscape and draw some rectangles with the `rectangle` tool (R):
*(screenshot: rectangles drawn in inkscape)*
Right click on each rectangle and click `Object Properties`
*(screenshot: the `Object Properties` dialog)*
Then, change the label to something memorable and click `Set`; this will be how you reference
each shape from julia. The rectangles in this example are labeled `A`, `B`, `C`, and `D` as such:
*(screenshot: rectangles labeled `A`, `B`, `C`, and `D`)*
Now open julia and import `figure_second` and either a `Makie` library or `Plots.jl`:
```
using figure_second
using CairoMakie
```
then, create an `Updater` object that holds information on where the inkscape file is on disk. If
you are ok with mutating the inkscape file in place, you can do
```
inkscape = updater("./path/to/file.svg")
```
then, we can find all the ids of the rectangles we just created:
```
ids(inkscape)
# outputs: ["A" "B" "C" "D"]
```
now, let's create a general plotting function that we can reuse:
```
function my_plot(x, y, inkscape::Updater, inkscape_id::String)
    # manually set a resolution
    res = (600, 400)
    fig = Figure(resolution = res, dpi = 200)
    ax = Axis(fig[1,1], title = inkscape_id)
    lines!(ax, x, y, linewidth=4)
    return fig
end
```
and then place some data in the `A` rectangle for our figure:
```
x = range(0, 2pi, 100)
A = my_plot(x, sin.(x), inkscape, "A")
# a dictionary of keys (name of inkscape ID) and values
# (figure objects)
mapping = Dict(
"A" => A
)
# write all these figures into the inkscape svg
plot_figures(inkscape, mapping)
```
opening inkscape and going to `File > Revert` forces inkscape to reload
any changes that have happened in the file. Now the file looks like this:
*(screenshot: the svg with a plot placed in rectangle `A`)*
Let's apply the same process to id `C`:
```
x = range(0, 2pi, 100)
A = my_plot(x, sin.(x), inkscape, "A")
# new figure!
C = my_plot(x, sin.(x), inkscape, "C")
# mapping of inkscape ids to figure objects
mapping = Dict(
"A" => A,
"C" => C,
)
# write all these figures into the inkscape svg
plot_figures(inkscape, mapping)
```
*(screenshot: the svg with plots in rectangles `A` and `C`)*
it seems that `figure_second` is not respecting the aspect ratios of the inkscape objects, which
in turn causes the plots to fill the allocated space poorly. To fix this, we can use the `relative_dimensions`
function to calculate a figure resolution that respects the inkscape aspect ratio. Updating our `my_plot`
function:
```
function my_plot(x, y, inkscape::Updater, inkscape_id::String)
    # every figure will have a height of 500, but the width will
    # change to respect the aspect ratio of the output
    desired_height = 500.
    local res = relative_dimensions(inkscape, inkscape_id, desired_height)
    fig = Figure(resolution = res, dpi = 200)
    ax = Axis(fig[1,1], title = inkscape_id)
    lines!(ax, x, y, linewidth=4)
    return fig
end
```
re-running the code and reloading the inkscape figure, we have the following:
*(screenshot: plots resized to match the rectangles' aspect ratios)*
then we can adjust our plotting commands for the other boxes:
```
x = range(0, 2pi, 100)
A = my_plot(x, sin.(x), inkscape, "A")
C = my_plot(x, sin.(x), inkscape, "C")
B = my_plot(x, tan.(x), inkscape, "B")
D = my_plot(x, abs.(cos.(x)), inkscape, "D")
mapping = Dict(
    "A" => A,
    "C" => C,
    "B" => B,
    "D" => D,
)
# write all these figures into the inkscape svg
plot_figures(inkscape, mapping)
```
*(screenshot: plots placed in all four rectangles)*
### The beauty of `figure_second`
Let's say we want to change all the line plots to scatter plots, and make all the background colors different:
```
function my_plot(x, y, inkscape::Updater, inkscape_id::String)
    desired_height = 500.
    local res = relative_dimensions(inkscape, inkscape_id, desired_height)
    fig = Figure(resolution = res, dpi = 200)
    # now a black background
    ax = Axis(fig[1,1], title = inkscape_id, backgroundcolor=:black)
    # now a scatter plot
    scatter!(ax, x, y, linewidth=4)
    return fig
end
```
our figure now looks like:
*(screenshot: scatter plots with black backgrounds)*
Or, what if we moved all the rectangles in our figure:
*(screenshot: the rectangles rearranged in inkscape)*
rerendering in julia:
*(screenshot: the plots re-rendered into the new layout)*
/testmsm-3.8.5.tar.gz/testmsm-3.8.5/examples/tICA-vs-PCA.ipynb
# tICA vs. PCA
This example uses OpenMM to generate example data to compare two methods for dimensionality reduction:
tICA and PCA.
### Define dynamics
First, let's use OpenMM to run some dynamics on the 3D potential energy function
$$E(x,y,z) = 5 \cdot (x-1)^2 \cdot (x+1)^2 + y^2 + z^2$$
From looking at this equation, we can see that along the `x` dimension,
the potential is a double-well, whereas along the `y` and `z` dimensions,
we've just got a harmonic potential. So, we should expect that `x` is the slow
degree of freedom, whereas the system should equilibrate rapidly along `y` and `z`.
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
xx, yy = np.meshgrid(np.linspace(-2,2), np.linspace(-3,3))
zz = 0 # We can only visualize so many dimensions
ww = 5 * (xx-1)**2 * (xx+1)**2 + yy**2 + zz**2
c = plt.contourf(xx, yy, ww, np.linspace(-1, 15, 20), cmap='viridis_r')
plt.contour(xx, yy, ww, np.linspace(-1, 15, 20), cmap='Greys')
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.colorbar(c, label='$E(x, y, z=0)$')
plt.tight_layout()
import simtk.openmm as mm
def propagate(n_steps=10000):
    system = mm.System()
    system.addParticle(1)
    force = mm.CustomExternalForce('5*(x-1)^2*(x+1)^2 + y^2 + z^2')
    force.addParticle(0, [])
    system.addForce(force)
    integrator = mm.LangevinIntegrator(500, 1, 0.02)
    context = mm.Context(system, integrator)
    context.setPositions([[0, 0, 0]])
    context.setVelocitiesToTemperature(500)
    x = np.zeros((n_steps, 3))
    for i in range(n_steps):
        x[i] = (context.getState(getPositions=True)
                .getPositions(asNumpy=True)
                ._value)
        integrator.step(1)
    return x
```
### Run Dynamics
Okay, let's run the dynamics. The first plot below shows the `x`, `y` and `z` coordinate vs. time for the trajectory, and
the second plot shows each of the 1D and 2D marginal distributions.
```
trajectory = propagate(10000)
ylabels = ['x', 'y', 'z']
for i in range(3):
    plt.subplot(3, 1, i+1)
    plt.plot(trajectory[:, i])
    plt.ylabel(ylabels[i])

plt.xlabel('Simulation time')
plt.tight_layout()
```
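The 1D marginal distributions mentioned above can be sketched with a quick histogram of each coordinate (a minimal sketch; the original example may have produced the 1D and 2D marginals with a different plotting helper):
```
plt.figure()
for i in range(3):
    plt.subplot(1, 3, i+1)
    # normalized histogram of the i-th coordinate over the whole trajectory
    plt.hist(trajectory[:, i], bins=50, density=True)
    plt.xlabel(ylabels[i])
plt.tight_layout()
```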
Note that the variance of `x` is much lower than the variance in `y` or `z`, despite its bi-modal distribution.
### Fit tICA and PCA models
```
from msmbuilder.decomposition import tICA, PCA
tica = tICA(n_components=1, lag_time=100)
pca = PCA(n_components=1)
tica.fit([trajectory])
pca.fit([trajectory])
```
### See what they find
```
plt.subplot(1,2,1)
plt.title('1st tIC')
plt.bar([1,2,3], tica.components_[0], color='b')
plt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])
plt.subplot(1,2,2)
plt.title('1st PC')
plt.bar([1,2,3], pca.components_[0], color='r')
plt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])
plt.show()
print('1st tIC', tica.components_ / np.linalg.norm(tica.components_))
print('1st PC ', pca.components_ / np.linalg.norm(pca.components_))
```
Note that the first tIC "finds" a projection that just resolves the `x` coordinate, whereas PCA doesn't.
```
c = plt.contourf(xx, yy, ww, np.linspace(-1, 15, 20), cmap='viridis_r')
plt.contour(xx, yy, ww, np.linspace(-1, 15, 20), cmap='Greys')
plt.plot([0, tica.components_[0, 0]],
[0, tica.components_[0, 1]],
lw=5, color='b', label='tICA')
plt.plot([0, pca.components_[0, 0]],
[0, pca.components_[0, 1]],
lw=5, color='r', label='PCA')
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.legend(loc='best')
plt.tight_layout()
```
/GTW-1.2.6.tar.gz/GTW-1.2.6/_RST/_TOP/_MOM/Admin_Restricted.py
from __future__ import absolute_import, division, print_function, unicode_literals

from _GTW._RST._TOP._MOM.Admin import *
from _TFL.I18N import _, _T, _Tn

Admin = GTW.RST.TOP.MOM.Admin

class E_Type_R (Admin.E_Type) :
    """Directory displaying the restricted instances of one E_Type."""

    _real_name = "E_Type"
    et_map_name = None
    skip_etag = True

    restriction_desc = _ ("created by")

    @property
    @getattr_safe
    def head_line (self) :
        result = self.__super.head_line
        user = self.user_restriction
        if user :
            u = user.FO
            result = "%s: %s %s" % (result, _T (self.restriction_desc), u)
        return result
    # end def head_line

    @property
    @getattr_safe
    def query_filters_d (self) :
        result = self.query_filters_restricted ()
        if result is None :
            result = (Q.pid == 0) ### don't show any entries
        return (result, ) + self.__super.query_filters_d
    # end def query_filters_d

    @property
    @getattr_safe
    def user_restriction (self) :
        return self.top.user
    # end def user_restriction

    @property
    @getattr_safe
    def _change_info_key (self) :
        user = self.top.user
        pid = user.pid if user else None
        return self.__super._change_info_key, pid
    # end def _change_info_key

    def query_filters_restricted (self) :
        """Query filter restricting the entities available to resource"""
        user = self.user_restriction
        if user is not None :
            return Q.created_by == user
    # end def query_filters_restricted

    @property
    @getattr_safe
    def _objects (self) :
        return self.top._objects_cache.get (self._change_info_key)
    # end def _objects

    @_objects.setter
    def _objects (self, value) :
        self.top._objects_cache [self._change_info_key] = value
    # end def _objects

E_Type = E_Type_R # end class

if __name__ != "__main__" :
    GTW.RST.TOP.MOM._Export_Module ()
### __END__ GTW.RST.TOP.MOM.Admin_Restricted
/criscostack_brik-1.0.5.tar.gz/criscostack_brik-1.0.5/brik/config/common_site_config.py
import getpass
import json
import os

default_config = {
    "restart_supervisor_on_update": False,
    "restart_systemd_on_update": False,
    "serve_default_site": True,
    "rebase_on_pull": False,
    "criscostack_user": getpass.getuser(),
    "shallow_clone": True,
    "background_workers": 1,
    "use_redis_auth": False,
    "live_reload": True,
}

DEFAULT_MAX_REQUESTS = 5000


def setup_config(brik_path):
    make_pid_folder(brik_path)
    brik_config = get_config(brik_path)
    brik_config.update(default_config)
    brik_config.update(get_gunicorn_workers())
    update_config_for_criscostack(brik_config, brik_path)
    put_config(brik_config, brik_path)


def get_config(brik_path):
    return get_common_site_config(brik_path)


def get_common_site_config(brik_path):
    config_path = get_config_path(brik_path)
    if not os.path.exists(config_path):
        return {}
    with open(config_path) as f:
        return json.load(f)


def put_config(config, brik_path="."):
    config_path = get_config_path(brik_path)
    with open(config_path, "w") as f:
        return json.dump(config, f, indent=1, sort_keys=True)


def update_config(new_config, brik_path="."):
    config = get_config(brik_path=brik_path)
    config.update(new_config)
    put_config(config, brik_path=brik_path)


def get_config_path(brik_path):
    return os.path.join(brik_path, "sites", "common_site_config.json")


def get_gunicorn_workers():
    """Return the maximum number of workers that can be started, based on the
    number of CPUs present on the machine."""
    import multiprocessing

    return {"gunicorn_workers": multiprocessing.cpu_count() * 2 + 1}


def compute_max_requests_jitter(max_requests: int) -> int:
    return int(max_requests * 0.1)


def get_default_max_requests(worker_count: int):
    """Get max requests and jitter config based on number of available workers."""
    if worker_count <= 1:
        # If there's only one worker then random restart can cause spikes in response times and
        # can be annoying. Hence not enabled by default.
        return 0
    return DEFAULT_MAX_REQUESTS
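# Illustrative sketch (not part of this module): the two helpers above are
# presumably meant to feed gunicorn's `max_requests` / `max_requests_jitter`
# settings, which recycle workers after a bounded number of requests. The
# variable names below are hypothetical.
#
# workers = get_gunicorn_workers()["gunicorn_workers"]
# max_requests = get_default_max_requests(workers)
# max_requests_jitter = compute_max_requests_jitter(max_requests)
# # e.g. gunicorn --workers <workers> --max-requests <max_requests> \
# #      --max-requests-jitter <max_requests_jitter> app:application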
def update_config_for_criscostack(config, brik_path):
    ports = make_ports(brik_path)

    for key in ("redis_cache", "redis_queue", "redis_socketio"):
        if key not in config:
            config[key] = f"redis://localhost:{ports[key]}"

    for key in ("webserver_port", "socketio_port", "file_watcher_port"):
        if key not in config:
            config[key] = ports[key]


def make_ports(brik_path):
    from urllib.parse import urlparse

    brikes_path = os.path.dirname(os.path.abspath(brik_path))

    default_ports = {
        "webserver_port": 8000,
        "socketio_port": 9000,
        "file_watcher_port": 6787,
        "redis_queue": 11000,
        "redis_socketio": 13000,
        "redis_cache": 13000,
    }

    # collect all existing ports
    existing_ports = {}
    for folder in os.listdir(brikes_path):
        brik_path = os.path.join(brikes_path, folder)
        if os.path.isdir(brik_path):
            brik_config = get_config(brik_path)
            for key in list(default_ports.keys()):
                value = brik_config.get(key)

                # extract port from redis url
                if value and (key in ("redis_cache", "redis_queue", "redis_socketio")):
                    value = urlparse(value).port

                if value:
                    existing_ports.setdefault(key, []).append(value)

    # new port value = max of existing port value + 1
    ports = {}
    for key, value in list(default_ports.items()):
        existing_value = existing_ports.get(key, [])
        if existing_value:
            value = max(existing_value) + 1
        ports[key] = value

    return ports


def make_pid_folder(brik_path):
    pids_path = os.path.join(brik_path, "config", "pids")
    if not os.path.exists(pids_path):
        os.makedirs(pids_path)
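# Illustrative sketch (not part of this module): updating a single key in
# sites/common_site_config.json via the helpers defined above. The key and
# value shown are hypothetical.
#
# update_config({"background_workers": 4}, brik_path=".")
# print(get_config("."))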
/petsc-3.19.5.tar.gz/petsc-3.19.5/config/BuildSystem/config/packages/metis.py
import config.package

class Configure(config.package.CMakePackage):
  def __init__(self, framework):
    config.package.CMakePackage.__init__(self, framework)
    self.versionname = 'METIS_VER_MAJOR.METIS_VER_MINOR.METIS_VER_SUBMINOR'
    self.gitcommit = 'v5.1.0-p11'
    self.download = ['git://https://bitbucket.org/petsc/pkg-metis.git','https://bitbucket.org/petsc/pkg-metis/get/'+self.gitcommit+'.tar.gz']
    self.downloaddirnames = ['petsc-pkg-metis']
    self.functions = ['METIS_PartGraphKway']
    self.includes = ['metis.h']
    self.liblist = [['libmetis.a'],['libmetis.a','libexecinfo.a']]
    self.hastests = 1
    self.useddirectly = 0
    self.downloadonWindows = 1
    return

  def setupHelp(self, help):
    config.package.CMakePackage.setupHelp(self,help)
    import nargs
    help.addArgument('METIS', '-download-metis-use-doubleprecision=<bool>', nargs.ArgBool(None, 0, 'enable METIS_USE_DOUBLEPRECISION'))
    return

  def setupDependencies(self, framework):
    config.package.CMakePackage.setupDependencies(self, framework)
    self.compilerFlags = framework.require('config.compilerFlags', self)
    self.mathlib = framework.require('config.packages.mathlib', self)
    self.deps = [self.mathlib]
    return

  def formCMakeConfigureArgs(self):
    args = config.package.CMakePackage.formCMakeConfigureArgs(self)
    args.append('-DGKLIB_PATH=../GKlib')
    # force metis/parmetis to use a portable random number generator that will produce the same partitioning results on all systems
    args.append('-DGKRAND=1')
    if not config.setCompilers.Configure.isWindows(self.setCompilers.CC, self.log) and self.checkSharedLibrariesEnabled():
      args.append('-DSHARED=1')
    if self.compilerFlags.debugging:
      args.append('-DDEBUG=1')
    if self.getDefaultIndexSize() == 64:
      args.append('-DMETIS_USE_LONGINDEX=1')
    if config.setCompilers.Configure.isWindows(self.setCompilers.CC, self.log):
      args.append('-DMSVC=1')
    if self.framework.argDB['download-metis-use-doubleprecision']:
      args.append('-DMETIS_USE_DOUBLEPRECISION=1')
    args.append('-DMATH_LIB="'+self.libraries.toStringNoDupes(self.mathlib.lib)+'"')
    return args

  def configureLibrary(self):
    config.package.Package.configureLibrary(self)
    oldFlags = self.compilers.CPPFLAGS
    self.compilers.CPPFLAGS += ' '+self.headers.toString(self.include)
    if not self.checkCompile('#include "metis.h"', '#if (IDXTYPEWIDTH != '+ str(self.getDefaultIndexSize())+')\n#error incompatible IDXTYPEWIDTH\n#endif\n'):
      if self.defaultIndexSize == 64:
        msg = '--with-64-bit-indices option requires a metis build with IDXTYPEWIDTH=64.\n'
      else:
        msg = 'IDXTYPEWIDTH=64 metis build appears to be specified for a default 32-bit-indices build of PETSc.\n'
      raise RuntimeError('Metis specified is incompatible!\n'+msg+'Suggest using --download-metis for a compatible metis')
    self.compilers.CPPFLAGS = oldFlags
    return
/pulumi_gcp_native-0.0.2a1617829075.tar.gz/pulumi_gcp_native-0.0.2a1617829075/pulumi_gcp_native/storagetransfer/v1/_inputs.py
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
__all__ = [
'AwsAccessKeyArgs',
'AwsS3DataArgs',
'AzureBlobStorageDataArgs',
'AzureCredentialsArgs',
'DateArgs',
'GcsDataArgs',
'HttpDataArgs',
'NotificationConfigArgs',
'ObjectConditionsArgs',
'ScheduleArgs',
'TimeOfDayArgs',
'TransferOptionsArgs',
'TransferSpecArgs',
]
@pulumi.input_type
class AwsAccessKeyArgs:
def __init__(__self__, *,
access_key_id: Optional[pulumi.Input[str]] = None,
secret_access_key: Optional[pulumi.Input[str]] = None):
"""
AWS access key (see [AWS Security Credentials](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)). For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
:param pulumi.Input[str] access_key_id: Required. AWS access key ID.
:param pulumi.Input[str] secret_access_key: Required. AWS secret access key. This field is not returned in RPC responses.
"""
if access_key_id is not None:
pulumi.set(__self__, "access_key_id", access_key_id)
if secret_access_key is not None:
pulumi.set(__self__, "secret_access_key", secret_access_key)
@property
@pulumi.getter(name="accessKeyId")
def access_key_id(self) -> Optional[pulumi.Input[str]]:
"""
Required. AWS access key ID.
"""
return pulumi.get(self, "access_key_id")
@access_key_id.setter
def access_key_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access_key_id", value)
@property
@pulumi.getter(name="secretAccessKey")
def secret_access_key(self) -> Optional[pulumi.Input[str]]:
"""
Required. AWS secret access key. This field is not returned in RPC responses.
"""
return pulumi.get(self, "secret_access_key")
@secret_access_key.setter
def secret_access_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secret_access_key", value)
@pulumi.input_type
class AwsS3DataArgs:
def __init__(__self__, *,
aws_access_key: Optional[pulumi.Input['AwsAccessKeyArgs']] = None,
bucket_name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
An AwsS3Data resource can be a data source, but not a data sink. In an AwsS3Data resource, an object's name is the S3 object's key name.
:param pulumi.Input['AwsAccessKeyArgs'] aws_access_key: Required. Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
:param pulumi.Input[str] bucket_name: Required. S3 Bucket name (see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/create-bucket-get-location-example.html)).
:param pulumi.Input[str] path: Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
"""
if aws_access_key is not None:
pulumi.set(__self__, "aws_access_key", aws_access_key)
if bucket_name is not None:
pulumi.set(__self__, "bucket_name", bucket_name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter(name="awsAccessKey")
def aws_access_key(self) -> Optional[pulumi.Input['AwsAccessKeyArgs']]:
"""
Required. Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
"""
return pulumi.get(self, "aws_access_key")
@aws_access_key.setter
def aws_access_key(self, value: Optional[pulumi.Input['AwsAccessKeyArgs']]):
pulumi.set(self, "aws_access_key", value)
@property
@pulumi.getter(name="bucketName")
def bucket_name(self) -> Optional[pulumi.Input[str]]:
"""
Required. S3 Bucket name (see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/create-bucket-get-location-example.html)).
"""
return pulumi.get(self, "bucket_name")
@bucket_name.setter
def bucket_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "bucket_name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class AzureBlobStorageDataArgs:
def __init__(__self__, *,
azure_credentials: Optional[pulumi.Input['AzureCredentialsArgs']] = None,
container: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None,
storage_account: Optional[pulumi.Input[str]] = None):
"""
An AzureBlobStorageData resource can be a data source, but not a data sink. An AzureBlobStorageData resource represents one Azure container. The storage account determines the [Azure endpoint](https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account#storage-account-endpoints). In an AzureBlobStorageData resource, a blob's name is the [Azure Blob Storage blob's key name](https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#blob-names).
:param pulumi.Input['AzureCredentialsArgs'] azure_credentials: Required. Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
:param pulumi.Input[str] container: Required. The container to transfer from the Azure Storage account.
:param pulumi.Input[str] path: Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
:param pulumi.Input[str] storage_account: Required. The name of the Azure Storage account.
"""
if azure_credentials is not None:
pulumi.set(__self__, "azure_credentials", azure_credentials)
if container is not None:
pulumi.set(__self__, "container", container)
if path is not None:
pulumi.set(__self__, "path", path)
if storage_account is not None:
pulumi.set(__self__, "storage_account", storage_account)
@property
@pulumi.getter(name="azureCredentials")
def azure_credentials(self) -> Optional[pulumi.Input['AzureCredentialsArgs']]:
"""
Required. Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
"""
return pulumi.get(self, "azure_credentials")
@azure_credentials.setter
def azure_credentials(self, value: Optional[pulumi.Input['AzureCredentialsArgs']]):
pulumi.set(self, "azure_credentials", value)
@property
@pulumi.getter
def container(self) -> Optional[pulumi.Input[str]]:
"""
Required. The container to transfer from the Azure Storage account.
"""
return pulumi.get(self, "container")
@container.setter
def container(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter(name="storageAccount")
def storage_account(self) -> Optional[pulumi.Input[str]]:
"""
Required. The name of the Azure Storage account.
"""
return pulumi.get(self, "storage_account")
@storage_account.setter
def storage_account(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "storage_account", value)
@pulumi.input_type
class AzureCredentialsArgs:
def __init__(__self__, *,
sas_token: Optional[pulumi.Input[str]] = None):
"""
Azure credentials For information on our data retention policy for user credentials, see [User credentials](/storage-transfer/docs/data-retention#user-credentials).
:param pulumi.Input[str] sas_token: Required. Azure shared access signature. (see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)).
"""
if sas_token is not None:
pulumi.set(__self__, "sas_token", sas_token)
@property
@pulumi.getter(name="sasToken")
def sas_token(self) -> Optional[pulumi.Input[str]]:
"""
Required. Azure shared access signature. (see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)).
"""
return pulumi.get(self, "sas_token")
@sas_token.setter
def sas_token(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sas_token", value)
@pulumi.input_type
class DateArgs:
def __init__(__self__, *,
day: Optional[pulumi.Input[int]] = None,
month: Optional[pulumi.Input[int]] = None,
year: Optional[pulumi.Input[int]] = None):
"""
Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values * A month and day value, with a zero year, such as an anniversary * A year on its own, with zero month and day values * A year and month value, with a zero day, such as a credit card expiration date Related types are google.type.TimeOfDay and `google.protobuf.Timestamp`.
:param pulumi.Input[int] day: Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
:param pulumi.Input[int] month: Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
:param pulumi.Input[int] year: Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
"""
if day is not None:
pulumi.set(__self__, "day", day)
if month is not None:
pulumi.set(__self__, "month", month)
if year is not None:
pulumi.set(__self__, "year", year)
@property
@pulumi.getter
def day(self) -> Optional[pulumi.Input[int]]:
"""
Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
"""
return pulumi.get(self, "day")
@day.setter
def day(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "day", value)
@property
@pulumi.getter
def month(self) -> Optional[pulumi.Input[int]]:
"""
Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
"""
return pulumi.get(self, "month")
@month.setter
def month(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "month", value)
@property
@pulumi.getter
def year(self) -> Optional[pulumi.Input[int]]:
"""
Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
"""
return pulumi.get(self, "year")
@year.setter
def year(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "year", value)
@pulumi.input_type
class GcsDataArgs:
def __init__(__self__, *,
bucket_name: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None):
"""
In a GcsData resource, an object's name is the Cloud Storage object's name and its "last modification time" refers to the object's `updated` property of Cloud Storage objects, which changes when the content or the metadata of the object is updated.
:param pulumi.Input[str] bucket_name: Required. Cloud Storage bucket name (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/naming#requirements)).
:param pulumi.Input[str] path: Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. (must meet Object Name Requirements](https://cloud.google.com/storage/docs/naming#objectnames)).
"""
if bucket_name is not None:
pulumi.set(__self__, "bucket_name", bucket_name)
if path is not None:
pulumi.set(__self__, "path", path)
@property
@pulumi.getter(name="bucketName")
def bucket_name(self) -> Optional[pulumi.Input[str]]:
"""
Required. Cloud Storage bucket name (see [Bucket Name Requirements](https://cloud.google.com/storage/docs/naming#requirements)).
"""
return pulumi.get(self, "bucket_name")
@bucket_name.setter
def bucket_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "bucket_name", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. (must meet Object Name Requirements](https://cloud.google.com/storage/docs/naming#objectnames)).
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@pulumi.input_type
class HttpDataArgs:
def __init__(__self__, *,
list_url: Optional[pulumi.Input[str]] = None):
"""
An HttpData resource specifies a list of objects on the web to be transferred over HTTP. The information of the objects to be transferred is contained in a file referenced by a URL. The first line in the file must be `"TsvHttpData-1.0"`, which specifies the format of the file. Subsequent lines specify the information of the list of objects, one object per list entry. Each entry has the following tab-delimited fields: * **HTTP URL** — The location of the object. * **Length** — The size of the object in bytes. * **MD5** — The base64-encoded MD5 hash of the object. For an example of a valid TSV file, see [Transferring data from URLs](https://cloud.google.com/storage-transfer/docs/create-url-list). When transferring data based on a URL list, keep the following in mind: * When an object located at `http(s)://hostname:port/` is transferred to a data sink, the name of the object at the data sink is `/`. * If the specified size of an object does not match the actual size of the object fetched, the object will not be transferred. * If the specified MD5 does not match the MD5 computed from the transferred bytes, the object transfer will fail. * Ensure that each URL you specify is publicly accessible. For example, in Cloud Storage you can [share an object publicly] (https://cloud.google.com/storage/docs/cloud-console#_sharingdata) and get a link to it. * Storage Transfer Service obeys `robots.txt` rules and requires the source HTTP server to support `Range` requests and to return a `Content-Length` header in each response. * ObjectConditions have no effect when filtering objects to transfer.
:param pulumi.Input[str] list_url: Required. The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
"""
if list_url is not None:
pulumi.set(__self__, "list_url", list_url)
@property
@pulumi.getter(name="listUrl")
def list_url(self) -> Optional[pulumi.Input[str]]:
"""
Required. The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
"""
return pulumi.get(self, "list_url")
@list_url.setter
def list_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "list_url", value)
@pulumi.input_type
class NotificationConfigArgs:
def __init__(__self__, *,
event_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
payload_format: Optional[pulumi.Input[str]] = None,
pubsub_topic: Optional[pulumi.Input[str]] = None):
"""
Specification to configure notifications published to Cloud Pub/Sub. Notifications will be published to the customer-provided topic using the following `PubsubMessage.attributes`: * `"eventType"`: one of the EventType values * `"payloadFormat"`: one of the PayloadFormat values * `"projectId"`: the project_id of the `TransferOperation` * `"transferJobName"`: the transfer_job_name of the `TransferOperation` * `"transferOperationName"`: the name of the `TransferOperation` The `PubsubMessage.data` will contain a TransferOperation resource formatted according to the specified `PayloadFormat`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] event_types: Event types for which a notification is desired. If empty, send notifications for all event types.
:param pulumi.Input[str] payload_format: Required. The desired format of the notification message payloads.
:param pulumi.Input[str] pubsub_topic: Required. The `Topic.name` of the Cloud Pub/Sub topic to which to publish notifications. Must be of the format: `projects/{project}/topics/{topic}`. Not matching this format will result in an INVALID_ARGUMENT error.
"""
if event_types is not None:
pulumi.set(__self__, "event_types", event_types)
if payload_format is not None:
pulumi.set(__self__, "payload_format", payload_format)
if pubsub_topic is not None:
pulumi.set(__self__, "pubsub_topic", pubsub_topic)
@property
@pulumi.getter(name="eventTypes")
def event_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Event types for which a notification is desired. If empty, send notifications for all event types.
"""
return pulumi.get(self, "event_types")
@event_types.setter
def event_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "event_types", value)
@property
@pulumi.getter(name="payloadFormat")
def payload_format(self) -> Optional[pulumi.Input[str]]:
"""
Required. The desired format of the notification message payloads.
"""
return pulumi.get(self, "payload_format")
@payload_format.setter
def payload_format(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "payload_format", value)
@property
@pulumi.getter(name="pubsubTopic")
def pubsub_topic(self) -> Optional[pulumi.Input[str]]:
"""
Required. The `Topic.name` of the Cloud Pub/Sub topic to which to publish notifications. Must be of the format: `projects/{project}/topics/{topic}`. Not matching this format will result in an INVALID_ARGUMENT error.
"""
return pulumi.get(self, "pubsub_topic")
@pubsub_topic.setter
def pubsub_topic(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pubsub_topic", value)
@pulumi.input_type
class ObjectConditionsArgs:
def __init__(__self__, *,
exclude_prefixes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
include_prefixes: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
last_modified_before: Optional[pulumi.Input[str]] = None,
last_modified_since: Optional[pulumi.Input[str]] = None,
max_time_elapsed_since_last_modification: Optional[pulumi.Input[str]] = None,
min_time_elapsed_since_last_modification: Optional[pulumi.Input[str]] = None):
"""
Conditions that determine which objects will be transferred. Applies only to Cloud Data Sources such as S3, Azure, and Cloud Storage. The "last modification time" refers to the time of the last change to the object's content or metadata — specifically, this is the `updated` property of Cloud Storage objects, the `LastModified` field of S3 objects, and the `Last-Modified` header of Azure blobs.
:param pulumi.Input[Sequence[pulumi.Input[str]]] exclude_prefixes: If you specify `exclude_prefixes`, Storage Transfer Service uses the items in the `exclude_prefixes` array to determine which objects to exclude from a transfer. Objects must not start with one of the matching `exclude_prefixes` for inclusion in a transfer. The following are requirements of `exclude_prefixes`: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object `s3://my-aws-bucket/logs/y=2015/requests.gz`, specify the exclude-prefix as `logs/y=2015/requests.gz`. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by `include_prefixes`. The max size of `exclude_prefixes` is 1000. For more information, see [Filtering objects from transfers](/storage-transfer/docs/filtering-objects-from-transfers).
:param pulumi.Input[Sequence[pulumi.Input[str]]] include_prefixes: If you specify `include_prefixes`, Storage Transfer Service uses the items in the `include_prefixes` array to determine which objects to include in a transfer. Objects must start with one of the matching `include_prefixes` for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the `exclude_prefixes` specified for inclusion in the transfer. The following are requirements of `include_prefixes`: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object `s3://my-aws-bucket/logs/y=2015/requests.gz`, specify the include-prefix as `logs/y=2015/requests.gz`. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of `include_prefixes` is 1000. For more information, see [Filtering objects from transfers](/storage-transfer/docs/filtering-objects-from-transfers).
:param pulumi.Input[str] last_modified_before: If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" will be transferred.
:param pulumi.Input[str] last_modified_since: If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The `last_modified_since` and `last_modified_before` fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * `last_modified_since` to the start of the day * `last_modified_before` to the end of the day
:param pulumi.Input[str] max_time_elapsed_since_last_modification: If specified, only objects with a "last modification time" on or after `NOW` - `max_time_elapsed_since_last_modification` and objects that don't have a "last modification time" are transferred. For each TransferOperation started by this TransferJob, `NOW` refers to the start_time of the `TransferOperation`.
:param pulumi.Input[str] min_time_elapsed_since_last_modification: If specified, only objects with a "last modification time" before `NOW` - `min_time_elapsed_since_last_modification` and objects that don't have a "last modification time" are transferred. For each TransferOperation started by this TransferJob, `NOW` refers to the start_time of the `TransferOperation`.
"""
if exclude_prefixes is not None:
pulumi.set(__self__, "exclude_prefixes", exclude_prefixes)
if include_prefixes is not None:
pulumi.set(__self__, "include_prefixes", include_prefixes)
if last_modified_before is not None:
pulumi.set(__self__, "last_modified_before", last_modified_before)
if last_modified_since is not None:
pulumi.set(__self__, "last_modified_since", last_modified_since)
if max_time_elapsed_since_last_modification is not None:
pulumi.set(__self__, "max_time_elapsed_since_last_modification", max_time_elapsed_since_last_modification)
if min_time_elapsed_since_last_modification is not None:
pulumi.set(__self__, "min_time_elapsed_since_last_modification", min_time_elapsed_since_last_modification)
@property
@pulumi.getter(name="excludePrefixes")
def exclude_prefixes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
If you specify `exclude_prefixes`, Storage Transfer Service uses the items in the `exclude_prefixes` array to determine which objects to exclude from a transfer. Objects must not start with one of the matching `exclude_prefixes` for inclusion in a transfer. The following are requirements of `exclude_prefixes`: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object `s3://my-aws-bucket/logs/y=2015/requests.gz`, specify the exclude-prefix as `logs/y=2015/requests.gz`. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by `include_prefixes`. The max size of `exclude_prefixes` is 1000. For more information, see [Filtering objects from transfers](/storage-transfer/docs/filtering-objects-from-transfers).
"""
return pulumi.get(self, "exclude_prefixes")
@exclude_prefixes.setter
def exclude_prefixes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "exclude_prefixes", value)
@property
@pulumi.getter(name="includePrefixes")
def include_prefixes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
If you specify `include_prefixes`, Storage Transfer Service uses the items in the `include_prefixes` array to determine which objects to include in a transfer. Objects must start with one of the matching `include_prefixes` for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the `exclude_prefixes` specified for inclusion in the transfer. The following are requirements of `include_prefixes`: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object `s3://my-aws-bucket/logs/y=2015/requests.gz`, specify the include-prefix as `logs/y=2015/requests.gz`. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of `include_prefixes` is 1000. For more information, see [Filtering objects from transfers](/storage-transfer/docs/filtering-objects-from-transfers).
"""
return pulumi.get(self, "include_prefixes")
@include_prefixes.setter
def include_prefixes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "include_prefixes", value)
@property
@pulumi.getter(name="lastModifiedBefore")
def last_modified_before(self) -> Optional[pulumi.Input[str]]:
"""
If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" will be transferred.
"""
return pulumi.get(self, "last_modified_before")
@last_modified_before.setter
def last_modified_before(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "last_modified_before", value)
@property
@pulumi.getter(name="lastModifiedSince")
def last_modified_since(self) -> Optional[pulumi.Input[str]]:
"""
If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The `last_modified_since` and `last_modified_before` fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * `last_modified_since` to the start of the day * `last_modified_before` to the end of the day
"""
return pulumi.get(self, "last_modified_since")
@last_modified_since.setter
def last_modified_since(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "last_modified_since", value)
@property
@pulumi.getter(name="maxTimeElapsedSinceLastModification")
def max_time_elapsed_since_last_modification(self) -> Optional[pulumi.Input[str]]:
"""
If specified, only objects with a "last modification time" on or after `NOW` - `max_time_elapsed_since_last_modification` and objects that don't have a "last modification time" are transferred. For each TransferOperation started by this TransferJob, `NOW` refers to the start_time of the `TransferOperation`.
"""
return pulumi.get(self, "max_time_elapsed_since_last_modification")
@max_time_elapsed_since_last_modification.setter
def max_time_elapsed_since_last_modification(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "max_time_elapsed_since_last_modification", value)
@property
@pulumi.getter(name="minTimeElapsedSinceLastModification")
def min_time_elapsed_since_last_modification(self) -> Optional[pulumi.Input[str]]:
"""
If specified, only objects with a "last modification time" before `NOW` - `min_time_elapsed_since_last_modification` and objects that don't have a "last modification time" are transferred. For each TransferOperation started by this TransferJob, `NOW` refers to the start_time of the `TransferOperation`.
"""
return pulumi.get(self, "min_time_elapsed_since_last_modification")
@min_time_elapsed_since_last_modification.setter
def min_time_elapsed_since_last_modification(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "min_time_elapsed_since_last_modification", value)
@pulumi.input_type
class ScheduleArgs:
def __init__(__self__, *,
end_time_of_day: Optional[pulumi.Input['TimeOfDayArgs']] = None,
repeat_interval: Optional[pulumi.Input[str]] = None,
schedule_end_date: Optional[pulumi.Input['DateArgs']] = None,
schedule_start_date: Optional[pulumi.Input['DateArgs']] = None,
start_time_of_day: Optional[pulumi.Input['TimeOfDayArgs']] = None):
"""
Transfers can be scheduled to recur or to run just once.
:param pulumi.Input['TimeOfDayArgs'] end_time_of_day: The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, `end_time_of_day` specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If `end_time_of_day` is not set and `schedule_end_date` is set, then a default value of `23:59:59` is used for `end_time_of_day`. * If `end_time_of_day` is set and `schedule_end_date` is not set, then INVALID_ARGUMENT is returned.
:param pulumi.Input[str] repeat_interval: Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
:param pulumi.Input['DateArgs'] schedule_end_date: The last day a transfer runs. Date boundaries are determined relative to UTC time. A job will run once per 24 hours within the following guidelines: * If `schedule_end_date` and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If `schedule_end_date` is later than `schedule_start_date` and `schedule_end_date` is in the future relative to UTC, the job will run each day at start_time_of_day through `schedule_end_date`.
:param pulumi.Input['DateArgs'] schedule_start_date: Required. The start date of a transfer. Date boundaries are determined relative to UTC time. If `schedule_start_date` and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. **Note:** When starting jobs at or near midnight UTC it is possible that a job will start later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it will create a TransferJob with `schedule_start_date` set to June 2 and a `start_time_of_day` set to midnight UTC. The first scheduled TransferOperation will take place on June 3 at midnight UTC.
:param pulumi.Input['TimeOfDayArgs'] start_time_of_day: The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If `start_time_of_day` is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If `start_time_of_day` is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through `schedule_end_date`.
"""
if end_time_of_day is not None:
pulumi.set(__self__, "end_time_of_day", end_time_of_day)
if repeat_interval is not None:
pulumi.set(__self__, "repeat_interval", repeat_interval)
if schedule_end_date is not None:
pulumi.set(__self__, "schedule_end_date", schedule_end_date)
if schedule_start_date is not None:
pulumi.set(__self__, "schedule_start_date", schedule_start_date)
if start_time_of_day is not None:
pulumi.set(__self__, "start_time_of_day", start_time_of_day)
@property
@pulumi.getter(name="endTimeOfDay")
def end_time_of_day(self) -> Optional[pulumi.Input['TimeOfDayArgs']]:
"""
The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date, `end_time_of_day` specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If `end_time_of_day` is not set and `schedule_end_date` is set, then a default value of `23:59:59` is used for `end_time_of_day`. * If `end_time_of_day` is set and `schedule_end_date` is not set, then INVALID_ARGUMENT is returned.
"""
return pulumi.get(self, "end_time_of_day")
@end_time_of_day.setter
def end_time_of_day(self, value: Optional[pulumi.Input['TimeOfDayArgs']]):
pulumi.set(self, "end_time_of_day", value)
@property
@pulumi.getter(name="repeatInterval")
def repeat_interval(self) -> Optional[pulumi.Input[str]]:
"""
Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
"""
return pulumi.get(self, "repeat_interval")
@repeat_interval.setter
def repeat_interval(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "repeat_interval", value)
@property
@pulumi.getter(name="scheduleEndDate")
def schedule_end_date(self) -> Optional[pulumi.Input['DateArgs']]:
"""
The last day a transfer runs. Date boundaries are determined relative to UTC time. A job will run once per 24 hours within the following guidelines: * If `schedule_end_date` and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If `schedule_end_date` is later than `schedule_start_date` and `schedule_end_date` is in the future relative to UTC, the job will run each day at start_time_of_day through `schedule_end_date`.
"""
return pulumi.get(self, "schedule_end_date")
@schedule_end_date.setter
def schedule_end_date(self, value: Optional[pulumi.Input['DateArgs']]):
pulumi.set(self, "schedule_end_date", value)
@property
@pulumi.getter(name="scheduleStartDate")
def schedule_start_date(self) -> Optional[pulumi.Input['DateArgs']]:
"""
Required. The start date of a transfer. Date boundaries are determined relative to UTC time. If `schedule_start_date` and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. **Note:** When starting jobs at or near midnight UTC it is possible that a job will start later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it will create a TransferJob with `schedule_start_date` set to June 2 and a `start_time_of_day` set to midnight UTC. The first scheduled TransferOperation will take place on June 3 at midnight UTC.
"""
return pulumi.get(self, "schedule_start_date")
@schedule_start_date.setter
def schedule_start_date(self, value: Optional[pulumi.Input['DateArgs']]):
pulumi.set(self, "schedule_start_date", value)
@property
@pulumi.getter(name="startTimeOfDay")
def start_time_of_day(self) -> Optional[pulumi.Input['TimeOfDayArgs']]:
"""
The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If `start_time_of_day` is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If `start_time_of_day` is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through `schedule_end_date`.
"""
return pulumi.get(self, "start_time_of_day")
@start_time_of_day.setter
def start_time_of_day(self, value: Optional[pulumi.Input['TimeOfDayArgs']]):
pulumi.set(self, "start_time_of_day", value)
@pulumi.input_type
class TimeOfDayArgs:
def __init__(__self__, *,
hours: Optional[pulumi.Input[int]] = None,
minutes: Optional[pulumi.Input[int]] = None,
nanos: Optional[pulumi.Input[int]] = None,
seconds: Optional[pulumi.Input[int]] = None):
"""
Represents a time of day. The date and time zone are either not significant or are specified elsewhere. An API may choose to allow leap seconds. Related types are google.type.Date and `google.protobuf.Timestamp`.
:param pulumi.Input[int] hours: Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
:param pulumi.Input[int] minutes: Minutes of hour of day. Must be from 0 to 59.
:param pulumi.Input[int] nanos: Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
:param pulumi.Input[int] seconds: Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
"""
if hours is not None:
pulumi.set(__self__, "hours", hours)
if minutes is not None:
pulumi.set(__self__, "minutes", minutes)
if nanos is not None:
pulumi.set(__self__, "nanos", nanos)
if seconds is not None:
pulumi.set(__self__, "seconds", seconds)
@property
@pulumi.getter
def hours(self) -> Optional[pulumi.Input[int]]:
"""
Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
"""
return pulumi.get(self, "hours")
@hours.setter
def hours(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "hours", value)
@property
@pulumi.getter
def minutes(self) -> Optional[pulumi.Input[int]]:
"""
Minutes of hour of day. Must be from 0 to 59.
"""
return pulumi.get(self, "minutes")
@minutes.setter
def minutes(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "minutes", value)
@property
@pulumi.getter
def nanos(self) -> Optional[pulumi.Input[int]]:
"""
Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
"""
return pulumi.get(self, "nanos")
@nanos.setter
def nanos(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "nanos", value)
@property
@pulumi.getter
def seconds(self) -> Optional[pulumi.Input[int]]:
"""
Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
"""
return pulumi.get(self, "seconds")
@seconds.setter
def seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "seconds", value)
@pulumi.input_type
class TransferOptionsArgs:
def __init__(__self__, *,
delete_objects_from_source_after_transfer: Optional[pulumi.Input[bool]] = None,
delete_objects_unique_in_sink: Optional[pulumi.Input[bool]] = None,
overwrite_objects_already_existing_in_sink: Optional[pulumi.Input[bool]] = None):
"""
TransferOptions define the actions to be performed on objects in a transfer.
:param pulumi.Input[bool] delete_objects_from_source_after_transfer: Whether objects should be deleted from the source after they are transferred to the sink. **Note:** This option and delete_objects_unique_in_sink are mutually exclusive.
:param pulumi.Input[bool] delete_objects_unique_in_sink: Whether objects that exist only in the sink should be deleted. **Note:** This option and delete_objects_from_source_after_transfer are mutually exclusive.
:param pulumi.Input[bool] overwrite_objects_already_existing_in_sink: When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source will be overwritten with the source object.
"""
if delete_objects_from_source_after_transfer is not None:
pulumi.set(__self__, "delete_objects_from_source_after_transfer", delete_objects_from_source_after_transfer)
if delete_objects_unique_in_sink is not None:
pulumi.set(__self__, "delete_objects_unique_in_sink", delete_objects_unique_in_sink)
if overwrite_objects_already_existing_in_sink is not None:
pulumi.set(__self__, "overwrite_objects_already_existing_in_sink", overwrite_objects_already_existing_in_sink)
@property
@pulumi.getter(name="deleteObjectsFromSourceAfterTransfer")
def delete_objects_from_source_after_transfer(self) -> Optional[pulumi.Input[bool]]:
"""
Whether objects should be deleted from the source after they are transferred to the sink. **Note:** This option and delete_objects_unique_in_sink are mutually exclusive.
"""
return pulumi.get(self, "delete_objects_from_source_after_transfer")
@delete_objects_from_source_after_transfer.setter
def delete_objects_from_source_after_transfer(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "delete_objects_from_source_after_transfer", value)
@property
@pulumi.getter(name="deleteObjectsUniqueInSink")
def delete_objects_unique_in_sink(self) -> Optional[pulumi.Input[bool]]:
"""
Whether objects that exist only in the sink should be deleted. **Note:** This option and delete_objects_from_source_after_transfer are mutually exclusive.
"""
return pulumi.get(self, "delete_objects_unique_in_sink")
@delete_objects_unique_in_sink.setter
def delete_objects_unique_in_sink(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "delete_objects_unique_in_sink", value)
@property
@pulumi.getter(name="overwriteObjectsAlreadyExistingInSink")
def overwrite_objects_already_existing_in_sink(self) -> Optional[pulumi.Input[bool]]:
"""
When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source will be overwritten with the source object.
"""
return pulumi.get(self, "overwrite_objects_already_existing_in_sink")
@overwrite_objects_already_existing_in_sink.setter
def overwrite_objects_already_existing_in_sink(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "overwrite_objects_already_existing_in_sink", value)
@pulumi.input_type
class TransferSpecArgs:
def __init__(__self__, *,
aws_s3_data_source: Optional[pulumi.Input['AwsS3DataArgs']] = None,
azure_blob_storage_data_source: Optional[pulumi.Input['AzureBlobStorageDataArgs']] = None,
gcs_data_sink: Optional[pulumi.Input['GcsDataArgs']] = None,
gcs_data_source: Optional[pulumi.Input['GcsDataArgs']] = None,
http_data_source: Optional[pulumi.Input['HttpDataArgs']] = None,
object_conditions: Optional[pulumi.Input['ObjectConditionsArgs']] = None,
transfer_options: Optional[pulumi.Input['TransferOptionsArgs']] = None):
"""
Configuration for running a transfer.
:param pulumi.Input['AwsS3DataArgs'] aws_s3_data_source: An AWS S3 data source.
:param pulumi.Input['AzureBlobStorageDataArgs'] azure_blob_storage_data_source: An Azure Blob Storage data source.
:param pulumi.Input['GcsDataArgs'] gcs_data_sink: A Cloud Storage data sink.
:param pulumi.Input['GcsDataArgs'] gcs_data_source: A Cloud Storage data source.
:param pulumi.Input['HttpDataArgs'] http_data_source: An HTTP URL data source.
:param pulumi.Input['ObjectConditionsArgs'] object_conditions: Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
:param pulumi.Input['TransferOptionsArgs'] transfer_options: If the option delete_objects_unique_in_sink is `true` and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
"""
if aws_s3_data_source is not None:
pulumi.set(__self__, "aws_s3_data_source", aws_s3_data_source)
if azure_blob_storage_data_source is not None:
pulumi.set(__self__, "azure_blob_storage_data_source", azure_blob_storage_data_source)
if gcs_data_sink is not None:
pulumi.set(__self__, "gcs_data_sink", gcs_data_sink)
if gcs_data_source is not None:
pulumi.set(__self__, "gcs_data_source", gcs_data_source)
if http_data_source is not None:
pulumi.set(__self__, "http_data_source", http_data_source)
if object_conditions is not None:
pulumi.set(__self__, "object_conditions", object_conditions)
if transfer_options is not None:
pulumi.set(__self__, "transfer_options", transfer_options)
@property
@pulumi.getter(name="awsS3DataSource")
def aws_s3_data_source(self) -> Optional[pulumi.Input['AwsS3DataArgs']]:
"""
An AWS S3 data source.
"""
return pulumi.get(self, "aws_s3_data_source")
@aws_s3_data_source.setter
def aws_s3_data_source(self, value: Optional[pulumi.Input['AwsS3DataArgs']]):
pulumi.set(self, "aws_s3_data_source", value)
@property
@pulumi.getter(name="azureBlobStorageDataSource")
def azure_blob_storage_data_source(self) -> Optional[pulumi.Input['AzureBlobStorageDataArgs']]:
"""
An Azure Blob Storage data source.
"""
return pulumi.get(self, "azure_blob_storage_data_source")
@azure_blob_storage_data_source.setter
def azure_blob_storage_data_source(self, value: Optional[pulumi.Input['AzureBlobStorageDataArgs']]):
pulumi.set(self, "azure_blob_storage_data_source", value)
@property
@pulumi.getter(name="gcsDataSink")
def gcs_data_sink(self) -> Optional[pulumi.Input['GcsDataArgs']]:
"""
A Cloud Storage data sink.
"""
return pulumi.get(self, "gcs_data_sink")
@gcs_data_sink.setter
def gcs_data_sink(self, value: Optional[pulumi.Input['GcsDataArgs']]):
pulumi.set(self, "gcs_data_sink", value)
@property
@pulumi.getter(name="gcsDataSource")
def gcs_data_source(self) -> Optional[pulumi.Input['GcsDataArgs']]:
"""
A Cloud Storage data source.
"""
return pulumi.get(self, "gcs_data_source")
@gcs_data_source.setter
def gcs_data_source(self, value: Optional[pulumi.Input['GcsDataArgs']]):
pulumi.set(self, "gcs_data_source", value)
@property
@pulumi.getter(name="httpDataSource")
def http_data_source(self) -> Optional[pulumi.Input['HttpDataArgs']]:
"""
An HTTP URL data source.
"""
return pulumi.get(self, "http_data_source")
@http_data_source.setter
def http_data_source(self, value: Optional[pulumi.Input['HttpDataArgs']]):
pulumi.set(self, "http_data_source", value)
@property
@pulumi.getter(name="objectConditions")
def object_conditions(self) -> Optional[pulumi.Input['ObjectConditionsArgs']]:
"""
Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
"""
return pulumi.get(self, "object_conditions")
@object_conditions.setter
def object_conditions(self, value: Optional[pulumi.Input['ObjectConditionsArgs']]):
pulumi.set(self, "object_conditions", value)
@property
@pulumi.getter(name="transferOptions")
def transfer_options(self) -> Optional[pulumi.Input['TransferOptionsArgs']]:
"""
If the option delete_objects_unique_in_sink is `true` and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
"""
return pulumi.get(self, "transfer_options")
@transfer_options.setter
def transfer_options(self, value: Optional[pulumi.Input['TransferOptionsArgs']]):
pulumi.set(self, "transfer_options", value)
|
PypiClean
|
/np-session-0.6.38.tar.gz/np-session-0.6.38/src/np_session/jobs/lims_upload.py
|
import pathlib
import np_logging
from np_session.session import Session
from np_session.components.paths import INCOMING_ROOT as DEFAULT_INCOMING_ROOT
logger = np_logging.getLogger(__name__)
def write_trigger_file(
session: Session,
incoming_dir: pathlib.Path = DEFAULT_INCOMING_ROOT,
trigger_dir: pathlib.Path = DEFAULT_INCOMING_ROOT / 'trigger',
) -> None:
"""Write a trigger file to initiate ecephys session data upload to lims.
- designated "incoming" folders have a `trigger` dir which is scanned periodically for trigger files
- a trigger file provides:
- a lims session ID
- a path to an "incoming" folder where new session data is located, ready for
upload
- this path is typically the parent of the trigger dir, where lims has
read/write access for deleting session data after upload, but it can be
anywhere on //allen
- once the trigger file is detected, lims searches for a file in the incoming
dir named '*platform*.json', which should contain a `files` dict
"""
if not incoming_dir.exists():
logger.warning(
"Incoming dir doesn't exist or isn't accessible - lims upload job will fail when triggered: %s",
incoming_dir,
)
elif not incoming_dir.match(f'*{session.id}*platform*.json'):
logger.warning(
'No platform json found for %s in incoming dir - lims upload job will fail when triggered: %s',
session.id,
incoming_dir,
)
trigger_file = pathlib.Path(trigger_dir / f'{session.id}.ecp')
trigger_file.touch()
# don't mkdir for trigger_dir or parents
# - doesn't make sense to create, since it's a dir lims needs to know about and
# be set up to monitor
# - if it doesn't exist or is badly specified, the file
# operation should raise the appropriate error
contents = (
f'sessionid: {session.id}\n' f"location: '{incoming_dir.as_posix()}'"
)
trigger_file.write_text(contents)
logger.info(
'Trigger file written for %s in %s:\n%s',
session,
trigger_file.parent,
trigger_file.read_text(),
)
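def _example_usage() -> None:
    # Illustrative sketch, not part of this module: trigger a lims upload for
    # an existing session using the default incoming/trigger dirs defined
    # above. The session folder name below is hypothetical.
    session = Session('1234567890_366122_20230101')  # hypothetical session id
    write_trigger_file(session)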
|
PypiClean
|
/cada_dash_custom_radioitems-0.1.1.tar.gz/cada_dash_custom_radioitems-0.1.1/cada_dash_custom_radioitems/cada_dash_custom_radioitems.min.js
|
window.cada_dash_custom_radioitems=function(e){var t={};function n(r){if(t[r])return t[r].exports;var o=t[r]={i:r,l:!1,exports:{}};return e[r].call(o.exports,o,o.exports,n),o.l=!0,o.exports}return n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var o in e)n.d(r,o,function(t){return e[t]}.bind(null,o));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=2)}([function(e,t){e.exports=window.PropTypes},function(e,t){e.exports=window.React},function(e,t,n){"use strict";n.r(t);var r=n(0),o=n.n(r),a=n(1),i=n.n(a);function l(e){return(l="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e})(e)}function s(){return(s=Object.assign||function(e){for(var t=1;t<arguments.length;t++){var n=arguments[t];for(var r in n)Object.prototype.hasOwnProperty.call(n,r)&&(e[r]=n[r])}return e}).apply(this,arguments)}function u(e,t){for(var n=0;n<t.length;n++){var r=t[n];r.enumerable=r.enumerable||!1,r.configurable=!0,"value"in r&&(r.writable=!0),Object.defineProperty(e,r.key,r)}}function c(e,t){return!t||"object"!==l(t)&&"function"!=typeof t?function(e){if(void 0===e)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return e}(e):t}function p(e){return(p=Object.setPrototypeOf?Object.getPrototypeOf:function(e){return e.__proto__||Object.getPrototypeOf(e)})(e)}function f(e,t){return(f=Object.setPrototypeOf||function(e,t){return e.__proto__=t,e})(e,t)}var y=function(e){function t(){return function(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}(this,t),c(this,p(t).apply(this,arguments))}var n,r,o;return function(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function");e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,writable:!0,configurable:!0}}),t&&f(e,t)}(t,e),n=t,(r=[{key:"render",value:function(){var e=this.props,t=e.id,n=e.className,r=e.style,o=e.inputClassName,a=e.inputStyle,l=e.labelClassName,u=e.labelStyle,c=e.options,p=e.imgStyle,f=e.setProps,y=e.loading_state,b=e.value,m={};return t&&(m={id:t,key:t}),i.a.createElement("div",s({"data-dash-is-loading":y&&y.is_loading||void 0},m,{className:n,style:r}),c.map((function(e){return 
i.a.createElement("label",{style:u,className:l,key:e.value},i.a.createElement("input",{checked:e.value===b,className:o,disabled:Boolean(e.disabled),style:a,type:"radio",onChange:function(){f({value:e.value})}}),i.a.createElement("img",{src:e.imgSrc,style:p,className:e.imgClassName}),e.label)})))}}])&&u(n.prototype,r),o&&u(n,o),t}(a.Component);y.propTypes={id:o.a.string,options:o.a.arrayOf(o.a.exact({label:o.a.oneOfType([o.a.string,o.a.number]).isRequired,value:o.a.oneOfType([o.a.string,o.a.number]).isRequired,imgSrc:o.a.string.isRequired,imgClassName:o.a.string,disabled:o.a.bool})),value:o.a.oneOfType([o.a.string,o.a.number]),style:o.a.object,className:o.a.string,inputStyle:o.a.object,inputClassName:o.a.string,labelStyle:o.a.object,labelClassName:o.a.string,setProps:o.a.func,imgStyle:o.a.object,loading_state:o.a.shape({is_loading:o.a.bool,prop_name:o.a.string,component_name:o.a.string}),persistence:o.a.oneOfType([o.a.bool,o.a.string,o.a.number]),persisted_props:o.a.arrayOf(o.a.oneOf(["value"])),persistence_type:o.a.oneOf(["local","session","memory"])},y.defaultProps={inputStyle:{},inputClassName:"",labelStyle:{},labelClassName:"",imgStyle:"",options:[],persisted_props:["value"],persistence_type:"local"},n.d(t,"CadaDashCustomRadioitems",(function(){return y}))}]);
//# sourceMappingURL=cada_dash_custom_radioitems.min.js.map
|
PypiClean
|
/accelbyte_py_sdk-0.48.0.tar.gz/accelbyte_py_sdk-0.48.0/accelbyte_py_sdk/api/eventlog/operations/user_information/get_user_activities_handler.py
|
# template file: ags_py_codegen
# pylint: disable=duplicate-code
# pylint: disable=line-too-long
# pylint: disable=missing-function-docstring
# pylint: disable=missing-module-docstring
# pylint: disable=too-many-arguments
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
# pylint: disable=too-many-public-methods
# pylint: disable=too-many-return-statements
# pylint: disable=too-many-statements
# pylint: disable=unused-import
# AccelByte Gaming Services Event Log Service (2.1.0)
from __future__ import annotations
from typing import Any, Dict, List, Optional, Tuple, Union
from .....core import Operation
from .....core import HeaderStr
from .....core import HttpResponse
from .....core import deprecated
from ...models import ModelsEventResponse
class GetUserActivitiesHandler(Operation):
"""Get all user's activities (GetUserActivitiesHandler)
Required permission `NAMESPACE:{namespace}:EVENT [UPDATE]` and scope `analytics`
Required Permission(s):
- NAMESPACE:{namespace}:EVENT [UPDATE]
Required Scope(s):
- analytics
Properties:
url: /event/namespaces/{namespace}/users/{userId}/activities
method: GET
tags: ["User Information"]
consumes: []
produces: ["application/json"]
securities: [BEARER_AUTH]
namespace: (namespace) REQUIRED str in path
user_id: (userId) REQUIRED str in path
offset: (offset) OPTIONAL int in query
page_size: (pageSize) REQUIRED int in query
Responses:
200: OK - ModelsEventResponse (OK)
400: Bad Request - (Bad Request)
401: Unauthorized - (Unauthorized)
403: Forbidden - (Forbidden)
404: Not Found - (Not Found)
500: Internal Server Error - (Internal Server Error)
"""
# region fields
_url: str = "/event/namespaces/{namespace}/users/{userId}/activities"
_method: str = "GET"
_consumes: List[str] = []
_produces: List[str] = ["application/json"]
_securities: List[List[str]] = [["BEARER_AUTH"]]
_location_query: str = None
namespace: str # REQUIRED in [path]
user_id: str # REQUIRED in [path]
offset: int # OPTIONAL in [query]
page_size: int # REQUIRED in [query]
# endregion fields
# region properties
@property
def url(self) -> str:
return self._url
@property
def method(self) -> str:
return self._method
@property
def consumes(self) -> List[str]:
return self._consumes
@property
def produces(self) -> List[str]:
return self._produces
@property
def securities(self) -> List[List[str]]:
return self._securities
@property
def location_query(self) -> str:
return self._location_query
# endregion properties
# region get methods
# endregion get methods
# region get_x_params methods
def get_all_params(self) -> dict:
return {
"path": self.get_path_params(),
"query": self.get_query_params(),
}
def get_path_params(self) -> dict:
result = {}
if hasattr(self, "namespace"):
result["namespace"] = self.namespace
if hasattr(self, "user_id"):
result["userId"] = self.user_id
return result
def get_query_params(self) -> dict:
result = {}
if hasattr(self, "offset"):
result["offset"] = self.offset
if hasattr(self, "page_size"):
result["pageSize"] = self.page_size
return result
# endregion get_x_params methods
# region is/has methods
# endregion is/has methods
# region with_x methods
def with_namespace(self, value: str) -> GetUserActivitiesHandler:
self.namespace = value
return self
def with_user_id(self, value: str) -> GetUserActivitiesHandler:
self.user_id = value
return self
def with_offset(self, value: int) -> GetUserActivitiesHandler:
self.offset = value
return self
def with_page_size(self, value: int) -> GetUserActivitiesHandler:
self.page_size = value
return self
# endregion with_x methods
# region to methods
def to_dict(self, include_empty: bool = False) -> dict:
result: dict = {}
if hasattr(self, "namespace") and self.namespace:
result["namespace"] = str(self.namespace)
elif include_empty:
result["namespace"] = ""
if hasattr(self, "user_id") and self.user_id:
result["userId"] = str(self.user_id)
elif include_empty:
result["userId"] = ""
if hasattr(self, "offset") and self.offset:
result["offset"] = int(self.offset)
elif include_empty:
result["offset"] = 0
if hasattr(self, "page_size") and self.page_size:
result["pageSize"] = int(self.page_size)
elif include_empty:
result["pageSize"] = 0
return result
# endregion to methods
# region response methods
# noinspection PyMethodMayBeStatic
def parse_response(
self, code: int, content_type: str, content: Any
) -> Tuple[Union[None, ModelsEventResponse], Union[None, HttpResponse]]:
"""Parse the given response.
200: OK - ModelsEventResponse (OK)
400: Bad Request - (Bad Request)
401: Unauthorized - (Unauthorized)
403: Forbidden - (Forbidden)
404: Not Found - (Not Found)
500: Internal Server Error - (Internal Server Error)
---: HttpResponse (Undocumented Response)
---: HttpResponse (Unexpected Content-Type Error)
---: HttpResponse (Unhandled Error)
"""
pre_processed_response, error = self.pre_process_response(
code=code, content_type=content_type, content=content
)
if error is not None:
return None, None if error.is_no_content() else error
code, content_type, content = pre_processed_response
if code == 200:
return ModelsEventResponse.create_from_dict(content), None
if code == 400:
return None, HttpResponse.create(code, "Bad Request")
if code == 401:
return None, HttpResponse.create(code, "Unauthorized")
if code == 403:
return None, HttpResponse.create(code, "Forbidden")
if code == 404:
return None, HttpResponse.create(code, "Not Found")
if code == 500:
return None, HttpResponse.create(code, "Internal Server Error")
return self.handle_undocumented_response(
code=code, content_type=content_type, content=content
)
# endregion response methods
# region static methods
@classmethod
def create(
cls,
namespace: str,
user_id: str,
page_size: int,
offset: Optional[int] = None,
**kwargs,
) -> GetUserActivitiesHandler:
instance = cls()
instance.namespace = namespace
instance.user_id = user_id
instance.page_size = page_size
if offset is not None:
instance.offset = offset
return instance
@classmethod
def create_from_dict(
cls, dict_: dict, include_empty: bool = False
) -> GetUserActivitiesHandler:
instance = cls()
if "namespace" in dict_ and dict_["namespace"] is not None:
instance.namespace = str(dict_["namespace"])
elif include_empty:
instance.namespace = ""
if "userId" in dict_ and dict_["userId"] is not None:
instance.user_id = str(dict_["userId"])
elif include_empty:
instance.user_id = ""
if "offset" in dict_ and dict_["offset"] is not None:
instance.offset = int(dict_["offset"])
elif include_empty:
instance.offset = 0
if "pageSize" in dict_ and dict_["pageSize"] is not None:
instance.page_size = int(dict_["pageSize"])
elif include_empty:
instance.page_size = 0
return instance
@staticmethod
def get_field_info() -> Dict[str, str]:
return {
"namespace": "namespace",
"userId": "user_id",
"offset": "offset",
"pageSize": "page_size",
}
@staticmethod
def get_required_map() -> Dict[str, bool]:
return {
"namespace": True,
"userId": True,
"offset": False,
"pageSize": True,
}
# endregion static methods
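def _example_usage() -> GetUserActivitiesHandler:
    # Illustrative sketch, not part of the generated SDK: build the operation
    # with the classmethod defined above. Executing it requires the SDK's
    # request runner (assumed to be accelbyte_py_sdk.core.run_request), which
    # is not shown here.
    op = GetUserActivitiesHandler.create(
        namespace="my-namespace",  # hypothetical namespace
        user_id="my-user-id",      # hypothetical user id
        page_size=20,
        offset=0,
    )
    # op.get_all_params() splits the request into path and query parts:
    # {"path": {"namespace": ..., "userId": ...}, "query": {"offset": 0, "pageSize": 20}}
    return op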
|
PypiClean
|
/etianen-cms-2.6.5.tar.gz/etianen-cms-2.6.5/src/cms/admin.py
|
from django.contrib import admin
from cms import externals
from cms.models.base import SearchMetaBaseSearchAdapter, PageBaseSearchAdapter
class PublishedBaseAdmin(admin.ModelAdmin):
"""Base admin class for published models."""
change_form_template = "admin/cms/publishedmodel/change_form.html"
class OnlineBaseAdmin(PublishedBaseAdmin):
"""Base admin class for OnlineModelBase instances."""
actions = ("publish_selected", "unpublish_selected",)
list_display = ("__unicode__", "is_online",)
list_filter = ("is_online",)
PUBLICATION_FIELDS = ("Publication", {
"fields": ("is_online",),
"classes": ("collapse",),
})
# Custom admin actions.
def publish_selected(self, request, queryset):
"""Publishes the selected models."""
queryset.update(is_online=True)
publish_selected.short_description = "Place selected %(verbose_name_plural)s online"
def unpublish_selected(self, request, queryset):
"""Unpublishes the selected models."""
queryset.update(is_online=False)
unpublish_selected.short_description = "Take selected %(verbose_name_plural)s offline"
class SearchMetaBaseAdmin(OnlineBaseAdmin):
"""Base admin class for SearchMetaBase models."""
adapter_cls = SearchMetaBaseSearchAdapter
list_display = ("__unicode__", "is_online",)
SEO_FIELDS = ("Search engine optimization", {
"fields": ("browser_title", "meta_keywords", "meta_description", "sitemap_priority", "sitemap_changefreq", "robots_index", "robots_follow", "robots_archive",),
"classes": ("collapse",),
})
if externals.reversion:
class SearchMetaBaseAdmin(SearchMetaBaseAdmin, externals.reversion["admin.VersionMetaAdmin"]):
list_display = SearchMetaBaseAdmin.list_display + ("get_date_modified",)
if externals.watson:
class SearchMetaBaseAdmin(SearchMetaBaseAdmin, externals.watson["admin.SearchAdmin"]):
pass
class PageBaseAdmin(SearchMetaBaseAdmin):
"""Base admin class for PageBase models."""
prepopulated_fields = {"url_title": ("title",),}
search_fields = ("title", "short_title", "meta_keywords", "meta_description",)
adapter_cls = PageBaseSearchAdapter
TITLE_FIELDS = (None, {
"fields": ("title", "url_title",),
})
NAVIGATION_FIELDS = ("Navigation", {
"fields": ("short_title",),
"classes": ("collapse",),
})
fieldsets = (
TITLE_FIELDS,
OnlineBaseAdmin.PUBLICATION_FIELDS,
NAVIGATION_FIELDS,
SearchMetaBaseAdmin.SEO_FIELDS,
)
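# Illustrative sketch, not part of this module: a concrete admin registration
# built on PageBaseAdmin. ``Article`` and its ``content`` field are
# hypothetical and the import path is an assumption, so the example is kept
# as a comment:
#
#     from myapp.models import Article  # hypothetical PageBase subclass
#
#     class ArticleAdmin(PageBaseAdmin):
#         fieldsets = PageBaseAdmin.fieldsets + (
#             ("Content", {"fields": ("content",)}),
#         )
#
#     admin.site.register(Article, ArticleAdmin)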
|
PypiClean
|
/horovod-0.28.1.tar.gz/horovod-0.28.1/third_party/boost/assert/README.md
|
# Boost.Assert
The Boost.Assert library, part of the collection of [Boost C++ Libraries](http://github.com/boostorg),
provides several configurable diagnostic macros similar in behavior and purpose to the standard macro
`assert` from `<cassert>`.
## Documentation
See the documentation of [BOOST_ASSERT](doc/assert.adoc) and
[BOOST_CURRENT_FUNCTION](doc/current_function.adoc) for more information.
## License
Distributed under the [Boost Software License, Version 1.0](http://boost.org/LICENSE_1_0.txt).
|
PypiClean
|
/django-countries-geoextent-0.0.1.tar.gz/django-countries-geoextent-0.0.1/README.md
|
# Django Countries GeoExtent
This package adds a `geo_extent` attribute to
the `django-countries` [Country](https://github.com/SmileyChris/django-countries#the-country-object) object.
The `geo_extent` attribute represents the geographic extent of a country, as extracted
from [GADM 4.1](https://gadm.org/download_world.html) boundaries data.
## Installation
```
pip install django-countries-geoextent
```
## Usage
Once installed, use the [Django Country](https://github.com/SmileyChris/django-countries#the-country-object) as
described in the [docs](https://github.com/SmileyChris/django-countries).
A new attribute named `geo_extent` will be added to each Country object instance, representing the geographic extent of
the country, as obtained from [GADM 4.1](https://gadm.org/download_world.html) boundaries data.
If a country is not found, the `geo_extent` attribute will be `None`.
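For example, a minimal sketch of reading the attribute (the exact shape of the returned extent is not documented here and may be a bounding-box value):
```
from django_countries.fields import Country

country = Country("FR")
print(country.geo_extent)  # geographic extent from GADM 4.1, or None if unknown
```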
|
PypiClean
|
/accelerator_toolbox-0.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl/at/plot/generic.py
|
from __future__ import annotations
from itertools import chain, repeat
# noinspection PyPackageRequirements
import matplotlib.pyplot as plt
from typing import Callable
from .synopt import plot_synopt
from ..lattice import Lattice
SLICES = 400
__all__ = ['baseplot']
def baseplot(ring: Lattice, plot_function: Callable, *args, **kwargs):
"""Generic lattice plot
:py:func:`baseplot` divides the region of interest of ring into small
elements, calls the specified function to get the plot data and calls
matplotlib functions to generate the plot.
By default, it creates a new figure for the plot, but if provided with
:py:class:`~matplotlib.axes.Axes` objects it can be used as part of a GUI
Parameters:
ring: Lattice description.
plot_function: Specific data generating function to be called. ``plot_function`` is called as:
:code:`title, left, right = plot_function(ring, refpts, *args, **kwargs)`
and should return 2 or 3 outputs:
``title``: plot title or :py:obj:`None`
``left``: tuple returning the data for the main (left) axis
left[0] - y-axis label
left[1] - xdata: (N,) array (s coordinate)
left[2] - ydata: iterable of (N,) or (N,M) arrays. Lines from a (N, M) array share the same style and label
left[3] - labels: (optional) iterable of strings as long as ydata
``right``: tuple returning the data for the secondary (right) axis
*args: All other positional parameters are sent to the plotting function
Keyword Args:
s_range: Lattice range of interest, default: unchanged,
initially set to the full cell.
axes (tuple[Axes, Optional[Axes]]): :py:class:`~matplotlib.axes.Axes`
for plotting as (primary_axes, secondary_axes).
Default: create new axes
slices (int): Number of slices. Default: 400
legend (bool): Show a legend on the plot
block (bool): If :py:obj:`True`, block until the figure is closed.
Default: :py:obj:`False`
dipole (dict): Dictionary of properties overloading the default
properties of dipole representation. See :py:func:`.plot_synopt`
for details
quadrupole (dict): Same definition as for dipole
sextupole (dict): Same definition as for dipole
multipole (dict): Same definition as for dipole
monitor (dict): Same definition as for dipole
**kwargs: All other keywords are sent to the plotting function
Returns:
left_axes (Axes): Main (left) axes
right_axes (Axes): Secondary (right) axes or :py:obj:`None`
synopt_axes (Axes): Synoptic axes
"""
def plot1(ax, yaxis_label, x, y, labels=()):
lines = []
for y1, prop, label in zip(y, props, chain(labels, repeat(None))):
ll = ax.plot(x, y1, **prop)
if label is not None:
ll[0].set_label(label)
lines += ll
ax.set_ylabel(yaxis_label)
return lines
def labeled(line):
return not line.properties()['label'].startswith('_')
# extract baseplot arguments
slices = kwargs.pop('slices', SLICES)
axes = kwargs.pop('axes', None)
legend = kwargs.pop('legend', True)
block = kwargs.pop('block', False)
if 's_range' in kwargs:
ring.s_range = kwargs.pop('s_range')
# extract synopt arguments
synkeys = ['dipole', 'quadrupole', 'sextupole', 'multipole', 'monitor']
kwkeys = list(kwargs.keys())
synargs = dict((k, kwargs.pop(k)) for k in kwkeys if k in synkeys)
# get color cycle
cycle_props = plt.rcParams['axes.prop_cycle']
# slice the ring
rg = ring.slice(slices=slices)
# get the data for the plot
pout = plot_function(rg, rg.i_range, *args, **kwargs)
title = pout[0]
plots = pout[1:]
# prepare the axes
if axes is None:
# Create new axes
nplots = len(plots)
fig = plt.figure()
axleft = fig.add_subplot(111, xlim=rg.s_range, xlabel='s [m]',
facecolor=[1.0, 1.0, 1.0, 0.0],
title=title)
axright = axleft.twinx() if (nplots >= 2) else None
axleft.set_title(ring.name, fontdict={'fontsize': 'medium'},
loc='left')
axsyn = plot_synopt(ring, axes=axleft, **synargs)
else:
# Use existing axes
axleft, axright = axes
axsyn = None
nplots = 1 if axright is None else len(plots)
props = iter(cycle_props())
# left plot
lines1 = plot1(axleft, *plots[0])
# right plot
lines2 = [] if (nplots < 2) else plot1(axright, *plots[1])
if legend:
if nplots < 2:
axleft.legend(handles=[li for li in lines1 if labeled(li)])
elif axleft.get_shared_x_axes().joined(axleft, axright):
axleft.legend(
handles=[li for li in lines1 + lines2 if labeled(li)])
else:
axleft.legend(handles=[li for li in lines1 if labeled(li)])
axright.legend(handles=[li for li in lines2 if labeled(li)])
plt.show(block=block)
return axleft, axright, axsyn
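def _example_plot_function(ring, refpts):
    # Illustrative sketch, not part of this module: a minimal data generating
    # function following the protocol described in the baseplot docstring.
    # It assumes ``ring.get_s_pos(refpts)`` is available to get the s
    # coordinate of each slice; the plotted quantity here is just a dummy.
    import numpy as np
    s_pos = ring.get_s_pos(refpts)
    dummy = np.zeros(len(s_pos))
    left = ('dummy value', s_pos, [dummy], ['zero line'])
    return 'Example plot', left
# Usage sketch: baseplot(ring, _example_plot_function) would create a figure
# with a single labeled line at y = 0 along the ring.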
|
PypiClean
|
/wield.iirrational-2.9.5.tar.gz/wield.iirrational-2.9.5/src/wield/iirrational/external/tabulate/tabulate.py
|
from __future__ import print_function
from __future__ import unicode_literals
from collections import namedtuple
import sys
import re
import math
if sys.version_info >= (3, 3):
from collections.abc import Iterable
else:
from collections import Iterable
if sys.version_info[0] < 3:
from itertools import izip_longest
from functools import partial
_none_type = type(None)
_bool_type = bool
_int_type = int
_long_type = long # noqa
_float_type = float
_text_type = unicode # noqa
_binary_type = str
def _is_file(f):
return hasattr(f, "read")
else:
from itertools import zip_longest as izip_longest
from functools import reduce, partial
_none_type = type(None)
_bool_type = bool
_int_type = int
_long_type = int
_float_type = float
_text_type = str
_binary_type = bytes
basestring = str
import io
def _is_file(f):
return isinstance(f, io.IOBase)
try:
import wcwidth # optional wide-character (CJK) support
except ImportError:
wcwidth = None
try:
from html import escape as htmlescape
except ImportError:
from cgi import escape as htmlescape
__all__ = ["tabulate", "tabulate_formats", "simple_separated_format"]
__version__ = "0.8.9"
# minimum extra space in headers
MIN_PADDING = 2
# Whether or not to preserve leading/trailing whitespace in data.
PRESERVE_WHITESPACE = False
_DEFAULT_FLOATFMT = "g"
_DEFAULT_MISSINGVAL = ""
# default align will be overwritten by "left", "center" or "decimal"
# depending on the formatter
_DEFAULT_ALIGN = "default"
# if True, enable wide-character (CJK) support
WIDE_CHARS_MODE = wcwidth is not None
Line = namedtuple("Line", ["begin", "hline", "sep", "end"])
DataRow = namedtuple("DataRow", ["begin", "sep", "end"])
# A table structure is supposed to be:
#
# --- lineabove ---------
# headerrow
# --- linebelowheader ---
# datarow
# --- linebetweenrows ---
# ... (more datarows) ...
# --- linebetweenrows ---
# last datarow
# --- linebelow ---------
#
# TableFormat's line* elements can be
#
# - either None, if the element is not used,
# - or a Line tuple,
# - or a function: [col_widths], [col_alignments] -> string.
#
# TableFormat's *row elements can be
#
# - either None, if the element is not used,
# - or a DataRow tuple,
# - or a function: [cell_values], [col_widths], [col_alignments] -> string.
#
# padding (an integer) is the amount of white space around data values.
#
# with_header_hide:
#
# - either None, to display all table elements unconditionally,
# - or a list of elements not to be displayed if the table has column headers.
#
TableFormat = namedtuple(
"TableFormat",
[
"lineabove",
"linebelowheader",
"linebetweenrows",
"linebelow",
"headerrow",
"datarow",
"padding",
"with_header_hide",
],
)
def _pipe_segment_with_colons(align, colwidth):
"""Return a segment of a horizontal line with optional colons which
indicate column's alignment (as in `pipe` output format)."""
w = colwidth
if align in ["right", "decimal"]:
return ("-" * (w - 1)) + ":"
elif align == "center":
return ":" + ("-" * (w - 2)) + ":"
elif align == "left":
return ":" + ("-" * (w - 1))
else:
return "-" * w
def _pipe_line_with_colons(colwidths, colaligns):
"""Return a horizontal line with optional colons to indicate column's
alignment (as in `pipe` output format)."""
if not colaligns: # e.g. printing an empty data frame (github issue #15)
colaligns = [""] * len(colwidths)
segments = [_pipe_segment_with_colons(a, w) for a, w in zip(colaligns, colwidths)]
return "|" + "|".join(segments) + "|"
def _mediawiki_row_with_attrs(separator, cell_values, colwidths, colaligns):
alignment = {
"left": "",
"right": 'align="right"| ',
"center": 'align="center"| ',
"decimal": 'align="right"| ',
}
# hard-coded padding _around_ align attribute and value together
# rather than padding parameter which affects only the value
values_with_attrs = [
" " + alignment.get(a, "") + c + " " for c, a in zip(cell_values, colaligns)
]
colsep = separator * 2
return (separator + colsep.join(values_with_attrs)).rstrip()
def _textile_row_with_attrs(cell_values, colwidths, colaligns):
cell_values[0] += " "
alignment = {"left": "<.", "right": ">.", "center": "=.", "decimal": ">."}
values = (alignment.get(a, "") + v for a, v in zip(colaligns, cell_values))
return "|" + "|".join(values) + "|"
def _html_begin_table_without_header(colwidths_ignore, colaligns_ignore):
# this table header will be suppressed if there is a header row
return "<table>\n<tbody>"
def _html_row_with_attrs(celltag, unsafe, cell_values, colwidths, colaligns):
alignment = {
"left": "",
"right": ' style="text-align: right;"',
"center": ' style="text-align: center;"',
"decimal": ' style="text-align: right;"',
}
if unsafe:
values_with_attrs = [
"<{0}{1}>{2}</{0}>".format(celltag, alignment.get(a, ""), c)
for c, a in zip(cell_values, colaligns)
]
else:
values_with_attrs = [
"<{0}{1}>{2}</{0}>".format(celltag, alignment.get(a, ""), htmlescape(c))
for c, a in zip(cell_values, colaligns)
]
rowhtml = "<tr>{}</tr>".format("".join(values_with_attrs).rstrip())
if celltag == "th": # it's a header row, create a new table header
rowhtml = "<table>\n<thead>\n{}\n</thead>\n<tbody>".format(rowhtml)
return rowhtml
def _moin_row_with_attrs(celltag, cell_values, colwidths, colaligns, header=""):
alignment = {
"left": "",
"right": '<style="text-align: right;">',
"center": '<style="text-align: center;">',
"decimal": '<style="text-align: right;">',
}
values_with_attrs = [
"{0}{1} {2} ".format(celltag, alignment.get(a, ""), header + c + header)
for c, a in zip(cell_values, colaligns)
]
return "".join(values_with_attrs) + "||"
def _latex_line_begin_tabular(colwidths, colaligns, booktabs=False, longtable=False):
alignment = {"left": "l", "right": "r", "center": "c", "decimal": "r"}
tabular_columns_fmt = "".join([alignment.get(a, "l") for a in colaligns])
return "\n".join(
[
("\\begin{tabular}{" if not longtable else "\\begin{longtable}{")
+ tabular_columns_fmt
+ "}",
"\\toprule" if booktabs else "\\hline",
]
)
LATEX_ESCAPE_RULES = {
r"&": r"\&",
r"%": r"\%",
r"$": r"\$",
r"#": r"\#",
r"_": r"\_",
r"^": r"\^{}",
r"{": r"\{",
r"}": r"\}",
r"~": r"\textasciitilde{}",
"\\": r"\textbackslash{}",
r"<": r"\ensuremath{<}",
r">": r"\ensuremath{>}",
}
def _latex_row(cell_values, colwidths, colaligns, escrules=LATEX_ESCAPE_RULES):
def escape_char(c):
return escrules.get(c, c)
escaped_values = ["".join(map(escape_char, cell)) for cell in cell_values]
rowfmt = DataRow("", "&", "\\\\")
return _build_simple_row(escaped_values, rowfmt)
def _rst_escape_first_column(rows, headers):
def escape_empty(val):
if isinstance(val, (_text_type, _binary_type)) and not val.strip():
return ".."
else:
return val
new_headers = list(headers)
new_rows = []
if headers:
new_headers[0] = escape_empty(headers[0])
for row in rows:
new_row = list(row)
if new_row:
new_row[0] = escape_empty(row[0])
new_rows.append(new_row)
return new_rows, new_headers
_table_formats = {
"simple": TableFormat(
lineabove=Line("", "-", " ", ""),
linebelowheader=Line("", "-", " ", ""),
linebetweenrows=None,
linebelow=Line("", "-", " ", ""),
headerrow=DataRow("", " ", ""),
datarow=DataRow("", " ", ""),
padding=0,
with_header_hide=["lineabove", "linebelow"],
),
"plain": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("", " ", ""),
datarow=DataRow("", " ", ""),
padding=0,
with_header_hide=None,
),
"grid": TableFormat(
lineabove=Line("+", "-", "+", "+"),
linebelowheader=Line("+", "=", "+", "+"),
linebetweenrows=Line("+", "-", "+", "+"),
linebelow=Line("+", "-", "+", "+"),
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
),
"fancy_grid": TableFormat(
lineabove=Line("╒", "═", "╤", "╕"),
linebelowheader=Line("╞", "═", "╪", "╡"),
linebetweenrows=Line("├", "─", "┼", "┤"),
linebelow=Line("╘", "═", "╧", "╛"),
headerrow=DataRow("│", "│", "│"),
datarow=DataRow("│", "│", "│"),
padding=1,
with_header_hide=None,
),
"fancy_outline": TableFormat(
lineabove=Line("╒", "═", "╤", "╕"),
linebelowheader=Line("╞", "═", "╪", "╡"),
linebetweenrows=None,
linebelow=Line("╘", "═", "╧", "╛"),
headerrow=DataRow("│", "│", "│"),
datarow=DataRow("│", "│", "│"),
padding=1,
with_header_hide=None,
),
"github": TableFormat(
lineabove=Line("|", "-", "|", "|"),
linebelowheader=Line("|", "-", "|", "|"),
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=["lineabove"],
),
"pipe": TableFormat(
lineabove=_pipe_line_with_colons,
linebelowheader=_pipe_line_with_colons,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=["lineabove"],
),
"orgtbl": TableFormat(
lineabove=None,
linebelowheader=Line("|", "-", "+", "|"),
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
),
"jira": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("||", "||", "||"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
),
"presto": TableFormat(
lineabove=None,
linebelowheader=Line("", "-", "+", ""),
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("", "|", ""),
datarow=DataRow("", "|", ""),
padding=1,
with_header_hide=None,
),
"pretty": TableFormat(
lineabove=Line("+", "-", "+", "+"),
linebelowheader=Line("+", "-", "+", "+"),
linebetweenrows=None,
linebelow=Line("+", "-", "+", "+"),
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
),
"psql": TableFormat(
lineabove=Line("+", "-", "+", "+"),
linebelowheader=Line("|", "-", "+", "|"),
linebetweenrows=None,
linebelow=Line("+", "-", "+", "+"),
headerrow=DataRow("|", "|", "|"),
datarow=DataRow("|", "|", "|"),
padding=1,
with_header_hide=None,
),
"rst": TableFormat(
lineabove=Line("", "=", " ", ""),
linebelowheader=Line("", "=", " ", ""),
linebetweenrows=None,
linebelow=Line("", "=", " ", ""),
headerrow=DataRow("", " ", ""),
datarow=DataRow("", " ", ""),
padding=0,
with_header_hide=None,
),
"mediawiki": TableFormat(
lineabove=Line(
'{| class="wikitable" style="text-align: left;"',
"",
"",
"\n|+ <!-- caption -->\n|-",
),
linebelowheader=Line("|-", "", "", ""),
linebetweenrows=Line("|-", "", "", ""),
linebelow=Line("|}", "", "", ""),
headerrow=partial(_mediawiki_row_with_attrs, "!"),
datarow=partial(_mediawiki_row_with_attrs, "|"),
padding=0,
with_header_hide=None,
),
"moinmoin": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=partial(_moin_row_with_attrs, "||", header="'''"),
datarow=partial(_moin_row_with_attrs, "||"),
padding=1,
with_header_hide=None,
),
"youtrack": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("|| ", " || ", " || "),
datarow=DataRow("| ", " | ", " |"),
padding=1,
with_header_hide=None,
),
"html": TableFormat(
lineabove=_html_begin_table_without_header,
linebelowheader="",
linebetweenrows=None,
linebelow=Line("</tbody>\n</table>", "", "", ""),
headerrow=partial(_html_row_with_attrs, "th", False),
datarow=partial(_html_row_with_attrs, "td", False),
padding=0,
with_header_hide=["lineabove"],
),
"unsafehtml": TableFormat(
lineabove=_html_begin_table_without_header,
linebelowheader="",
linebetweenrows=None,
linebelow=Line("</tbody>\n</table>", "", "", ""),
headerrow=partial(_html_row_with_attrs, "th", True),
datarow=partial(_html_row_with_attrs, "td", True),
padding=0,
with_header_hide=["lineabove"],
),
"latex": TableFormat(
lineabove=_latex_line_begin_tabular,
linebelowheader=Line("\\hline", "", "", ""),
linebetweenrows=None,
linebelow=Line("\\hline\n\\end{tabular}", "", "", ""),
headerrow=_latex_row,
datarow=_latex_row,
padding=1,
with_header_hide=None,
),
"latex_raw": TableFormat(
lineabove=_latex_line_begin_tabular,
linebelowheader=Line("\\hline", "", "", ""),
linebetweenrows=None,
linebelow=Line("\\hline\n\\end{tabular}", "", "", ""),
headerrow=partial(_latex_row, escrules={}),
datarow=partial(_latex_row, escrules={}),
padding=1,
with_header_hide=None,
),
"latex_booktabs": TableFormat(
lineabove=partial(_latex_line_begin_tabular, booktabs=True),
linebelowheader=Line("\\midrule", "", "", ""),
linebetweenrows=None,
linebelow=Line("\\bottomrule\n\\end{tabular}", "", "", ""),
headerrow=_latex_row,
datarow=_latex_row,
padding=1,
with_header_hide=None,
),
"latex_longtable": TableFormat(
lineabove=partial(_latex_line_begin_tabular, longtable=True),
linebelowheader=Line("\\hline\n\\endhead", "", "", ""),
linebetweenrows=None,
linebelow=Line("\\hline\n\\end{longtable}", "", "", ""),
headerrow=_latex_row,
datarow=_latex_row,
padding=1,
with_header_hide=None,
),
"tsv": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("", "\t", ""),
datarow=DataRow("", "\t", ""),
padding=0,
with_header_hide=None,
),
"textile": TableFormat(
lineabove=None,
linebelowheader=None,
linebetweenrows=None,
linebelow=None,
headerrow=DataRow("|_. ", "|_.", "|"),
datarow=_textile_row_with_attrs,
padding=1,
with_header_hide=None,
),
}
tabulate_formats = list(sorted(_table_formats.keys()))
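# Illustrative sketch (not part of the original module): a format name from
# ``tabulate_formats`` above is passed as ``tablefmt`` at the call site.
# ``tabulate`` itself is defined further down in this file, so the example is
# kept as a comment:
#
#     tabulate([["spam", 42], ["eggs", 451]],
#              headers=["item", "qty"], tablefmt="github")
#
# which renders a GitHub-flavored Markdown pipe table.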
# The table formats for which multiline cells will be folded into subsequent
# table rows. The key is the original format specified at the API. The value is
# the format that will be used to represent the original format.
multiline_formats = {
"plain": "plain",
"simple": "simple",
"grid": "grid",
"fancy_grid": "fancy_grid",
"pipe": "pipe",
"orgtbl": "orgtbl",
"jira": "jira",
"presto": "presto",
"pretty": "pretty",
"psql": "psql",
"rst": "rst",
}
# TODO: Add multiline support for the remaining table formats:
# - mediawiki: Replace \n with <br>
# - moinmoin: TBD
# - youtrack: TBD
# - html: Replace \n with <br>
# - latex*: Use "makecell" package: In header, replace X\nY with
# \thead{X\\Y} and in data row, replace X\nY with \makecell{X\\Y}
# - tsv: TBD
# - textile: Replace \n with <br/> (must be well-formed XML)
_multiline_codes = re.compile(r"\r|\n|\r\n")
_multiline_codes_bytes = re.compile(b"\r|\n|\r\n")
_invisible_codes = re.compile(
r"\x1b\[\d+[;\d]*m|\x1b\[\d*\;\d*\;\d*m|\x1b\]8;;(.*?)\x1b\\"
) # ANSI color codes
_invisible_codes_bytes = re.compile(
b"\x1b\\[\\d+\\[;\\d]*m|\x1b\\[\\d*;\\d*;\\d*m|\\x1b\\]8;;(.*?)\\x1b\\\\"
) # ANSI color codes
_invisible_codes_link = re.compile(
r"\x1B]8;[a-zA-Z0-9:]*;[^\x1B]+\x1B\\([^\x1b]+)\x1B]8;;\x1B\\"
) # Terminal hyperlinks
def simple_separated_format(separator):
"""Construct a simple TableFormat with columns separated by a separator.
>>> tsv = simple_separated_format("\\t") ; \
tabulate([["foo", 1], ["spam", 23]], tablefmt=tsv) == 'foo \\t 1\\nspam\\t23'
True
"""
return TableFormat(
None,
None,
None,
None,
headerrow=DataRow("", separator, ""),
datarow=DataRow("", separator, ""),
padding=0,
with_header_hide=None,
)
def _isconvertible(conv, string):
try:
conv(string)
return True
except (ValueError, TypeError):
return False
def _isnumber(string):
"""
>>> _isnumber("123.45")
True
>>> _isnumber("123")
True
>>> _isnumber("spam")
False
>>> _isnumber("123e45678")
False
>>> _isnumber("inf")
True
"""
if not _isconvertible(float, string):
return False
elif isinstance(string, (_text_type, _binary_type)) and (
math.isinf(float(string)) or math.isnan(float(string))
):
return string.lower() in ["inf", "-inf", "nan"]
return True
def _isint(string, inttype=int):
"""
>>> _isint("123")
True
>>> _isint("123.45")
False
"""
return (
type(string) is inttype
or (isinstance(string, _binary_type) or isinstance(string, _text_type))
and _isconvertible(inttype, string)
)
def _isbool(string):
"""
>>> _isbool(True)
True
>>> _isbool("False")
True
>>> _isbool(1)
False
"""
return type(string) is _bool_type or (
isinstance(string, (_binary_type, _text_type)) and string in ("True", "False")
)
def _type(string, has_invisible=True, numparse=True):
"""The least generic type (type(None), int, float, str, unicode).
>>> _type(None) is type(None)
True
>>> _type("foo") is type("")
True
>>> _type("1") is type(1)
True
>>> _type('\x1b[31m42\x1b[0m') is type(42)
True
>>> _type('\x1b[31m42\x1b[0m') is type(42)
True
"""
if has_invisible and (
isinstance(string, _text_type) or isinstance(string, _binary_type)
):
string = _strip_invisible(string)
if string is None:
return _none_type
elif hasattr(string, "isoformat"): # datetime.datetime, date, and time
return _text_type
elif _isbool(string):
return _bool_type
elif _isint(string) and numparse:
return int
elif _isint(string, _long_type) and numparse:
return int
elif _isnumber(string) and numparse:
return float
elif isinstance(string, _binary_type):
return _binary_type
else:
return _text_type
def _afterpoint(string):
"""Symbols after a decimal point, -1 if the string lacks the decimal point.
>>> _afterpoint("123.45")
2
>>> _afterpoint("1001")
-1
>>> _afterpoint("eggs")
-1
>>> _afterpoint("123e45")
2
"""
if _isnumber(string):
if _isint(string):
return -1
else:
pos = string.rfind(".")
pos = string.lower().rfind("e") if pos < 0 else pos
if pos >= 0:
return len(string) - pos - 1
else:
return -1 # no point
else:
return -1 # not a number
def _padleft(width, s):
"""Flush right.
>>> _padleft(6, '\u044f\u0439\u0446\u0430') == ' \u044f\u0439\u0446\u0430'
True
"""
fmt = "{0:>%ds}" % width
return fmt.format(s)
def _padright(width, s):
"""Flush left.
>>> _padright(6, '\u044f\u0439\u0446\u0430') == '\u044f\u0439\u0446\u0430 '
True
"""
fmt = "{0:<%ds}" % width
return fmt.format(s)
def _padboth(width, s):
"""Center string.
>>> _padboth(6, '\u044f\u0439\u0446\u0430') == ' \u044f\u0439\u0446\u0430 '
True
"""
fmt = "{0:^%ds}" % width
return fmt.format(s)
def _padnone(ignore_width, s):
return s
def _strip_invisible(s):
r"""Remove invisible ANSI color codes.
>>> str(_strip_invisible('\x1B]8;;https://example.com\x1B\\This is a link\x1B]8;;\x1B\\'))
'This is a link'
"""
if isinstance(s, _text_type):
links_removed = re.sub(_invisible_codes_link, "\\1", s)
return re.sub(_invisible_codes, "", links_removed)
else: # a bytestring
return re.sub(_invisible_codes_bytes, "", s)
def _visible_width(s):
"""Visible width of a printed string. ANSI color codes are removed.
>>> _visible_width('\x1b[31mhello\x1b[0m'), _visible_width("world")
(5, 5)
"""
# optional wide-character support
if wcwidth is not None and WIDE_CHARS_MODE:
len_fn = wcwidth.wcswidth
else:
len_fn = len
if isinstance(s, _text_type) or isinstance(s, _binary_type):
return len_fn(_strip_invisible(s))
else:
return len_fn(_text_type(s))
def _is_multiline(s):
if isinstance(s, _text_type):
return bool(re.search(_multiline_codes, s))
else: # a bytestring
return bool(re.search(_multiline_codes_bytes, s))
def _multiline_width(multiline_s, line_width_fn=len):
"""Visible width of a potentially multiline content."""
return max(map(line_width_fn, re.split("[\r\n]", multiline_s)))
def _choose_width_fn(has_invisible, enable_widechars, is_multiline):
"""Return a function to calculate visible cell width."""
if has_invisible:
line_width_fn = _visible_width
elif enable_widechars: # optional wide-character support if available
line_width_fn = wcwidth.wcswidth
else:
line_width_fn = len
if is_multiline:
width_fn = lambda s: _multiline_width(s, line_width_fn) # noqa
else:
width_fn = line_width_fn
return width_fn
def _align_column_choose_padfn(strings, alignment, has_invisible):
if alignment == "right":
if not PRESERVE_WHITESPACE:
strings = [s.strip() for s in strings]
padfn = _padleft
elif alignment == "center":
if not PRESERVE_WHITESPACE:
strings = [s.strip() for s in strings]
padfn = _padboth
elif alignment == "decimal":
if has_invisible:
decimals = [_afterpoint(_strip_invisible(s)) for s in strings]
else:
decimals = [_afterpoint(s) for s in strings]
maxdecimals = max(decimals)
strings = [s + (maxdecimals - decs) * " " for s, decs in zip(strings, decimals)]
padfn = _padleft
elif not alignment:
padfn = _padnone
else:
if not PRESERVE_WHITESPACE:
strings = [s.strip() for s in strings]
padfn = _padright
return strings, padfn
def _align_column_choose_width_fn(has_invisible, enable_widechars, is_multiline):
if has_invisible:
line_width_fn = _visible_width
elif enable_widechars: # optional wide-character support if available
line_width_fn = wcwidth.wcswidth
else:
line_width_fn = len
if is_multiline:
width_fn = lambda s: _align_column_multiline_width(s, line_width_fn) # noqa
else:
width_fn = line_width_fn
return width_fn
def _align_column_multiline_width(multiline_s, line_width_fn=len):
"""Visible width of a potentially multiline content."""
return list(map(line_width_fn, re.split("[\r\n]", multiline_s)))
def _flat_list(nested_list):
ret = []
for item in nested_list:
if isinstance(item, list):
for subitem in item:
ret.append(subitem)
else:
ret.append(item)
return ret
def _align_column(
strings,
alignment,
minwidth=0,
has_invisible=True,
enable_widechars=False,
is_multiline=False,
):
"""[string] -> [padded_string]"""
strings, padfn = _align_column_choose_padfn(strings, alignment, has_invisible)
width_fn = _align_column_choose_width_fn(
has_invisible, enable_widechars, is_multiline
)
s_widths = list(map(width_fn, strings))
maxwidth = max(max(_flat_list(s_widths)), minwidth)
# TODO: refactor column alignment in single-line and multiline modes
if is_multiline:
if not enable_widechars and not has_invisible:
padded_strings = [
"\n".join([padfn(maxwidth, s) for s in ms.splitlines()])
for ms in strings
]
else:
# enable wide-character width corrections
s_lens = [[len(s) for s in re.split("[\r\n]", ms)] for ms in strings]
visible_widths = [
[maxwidth - (w - l) for w, l in zip(mw, ml)]
for mw, ml in zip(s_widths, s_lens)
]
# wcswidth and _visible_width don't count invisible characters;
# padfn doesn't need to apply another correction
padded_strings = [
"\n".join([padfn(w, s) for s, w in zip((ms.splitlines() or ms), mw)])
for ms, mw in zip(strings, visible_widths)
]
else: # single-line cell values
if not enable_widechars and not has_invisible:
padded_strings = [padfn(maxwidth, s) for s in strings]
else:
# enable wide-character width corrections
s_lens = list(map(len, strings))
visible_widths = [maxwidth - (w - l) for w, l in zip(s_widths, s_lens)]
# wcswidth and _visible_width don't count invisible characters;
# padfn doesn't need to apply another correction
padded_strings = [padfn(w, s) for s, w in zip(strings, visible_widths)]
return padded_strings
def _more_generic(type1, type2):
types = {
_none_type: 0,
_bool_type: 1,
int: 2,
float: 3,
_binary_type: 4,
_text_type: 5,
}
invtypes = {
5: _text_type,
4: _binary_type,
3: float,
2: int,
1: _bool_type,
0: _none_type,
}
moregeneric = max(types.get(type1, 5), types.get(type2, 5))
return invtypes[moregeneric]
def _column_type(strings, has_invisible=True, numparse=True):
"""The least generic type all column values are convertible to.
>>> _column_type([True, False]) is _bool_type
True
>>> _column_type(["1", "2"]) is _int_type
True
>>> _column_type(["1", "2.3"]) is _float_type
True
>>> _column_type(["1", "2.3", "four"]) is _text_type
True
>>> _column_type(["four", '\u043f\u044f\u0442\u044c']) is _text_type
True
>>> _column_type([None, "brux"]) is _text_type
True
>>> _column_type([1, 2, None]) is _int_type
True
>>> import datetime as dt
>>> _column_type([dt.datetime(1991,2,19), dt.time(17,35)]) is _text_type
True
"""
types = [_type(s, has_invisible, numparse) for s in strings]
return reduce(_more_generic, types, _bool_type)
def _format(val, valtype, floatfmt, missingval="", has_invisible=True):
"""Format a value according to its type.
Unicode is supported:
>>> hrow = ['\u0431\u0443\u043a\u0432\u0430', '\u0446\u0438\u0444\u0440\u0430'] ; \
tbl = [['\u0430\u0437', 2], ['\u0431\u0443\u043a\u0438', 4]] ; \
good_result = '\\u0431\\u0443\\u043a\\u0432\\u0430 \\u0446\\u0438\\u0444\\u0440\\u0430\\n------- -------\\n\\u0430\\u0437 2\\n\\u0431\\u0443\\u043a\\u0438 4' ; \
tabulate(tbl, headers=hrow) == good_result
True
""" # noqa
if val is None:
return missingval
if valtype in [int, _text_type]:
return "{0}".format(val)
elif valtype is _binary_type:
try:
return _text_type(val, "ascii")
except TypeError:
return _text_type(val)
elif valtype is float:
is_a_colored_number = has_invisible and isinstance(
val, (_text_type, _binary_type)
)
if is_a_colored_number:
raw_val = _strip_invisible(val)
formatted_val = format(float(raw_val), floatfmt)
return val.replace(raw_val, formatted_val)
else:
return format(float(val), floatfmt)
else:
return "{0}".format(val)
def _align_header(
header, alignment, width, visible_width, is_multiline=False, width_fn=None
):
"Pad string header to width chars given known visible_width of the header."
if is_multiline:
header_lines = re.split(_multiline_codes, header)
padded_lines = [
_align_header(h, alignment, width, width_fn(h)) for h in header_lines
]
return "\n".join(padded_lines)
# else: not multiline
ninvisible = len(header) - visible_width
width += ninvisible
if alignment == "left":
return _padright(width, header)
elif alignment == "center":
return _padboth(width, header)
elif not alignment:
return "{0}".format(header)
else:
return _padleft(width, header)
def _prepend_row_index(rows, index):
"""Add a left-most index column."""
if index is None or index is False:
return rows
if len(index) != len(rows):
print("index=", index)
print("rows=", rows)
raise ValueError("index must be as long as the number of data rows")
rows = [[v] + list(row) for v, row in zip(index, rows)]
return rows
def _bool(val):
"A wrapper around standard bool() which doesn't throw on NumPy arrays"
try:
return bool(val)
except ValueError: # val is likely to be a numpy array with many elements
return False
def _normalize_tabular_data(tabular_data, headers, showindex="default"):
"""Transform a supported data type to a list of lists, and a list of headers.
Supported tabular data types:
* list-of-lists or another iterable of iterables
* list of named tuples (usually used with headers="keys")
* list of dicts (usually used with headers="keys")
* list of OrderedDicts (usually used with headers="keys")
* 2D NumPy arrays
* NumPy record arrays (usually used with headers="keys")
* dict of iterables (usually used with headers="keys")
* pandas.DataFrame (usually used with headers="keys")
The first row can be used as headers if headers="firstrow",
column indices can be used as headers if headers="keys".
If showindex="default", show row indices of the pandas.DataFrame.
If showindex="always", show row indices for all types of data.
If showindex="never", don't show row indices for all types of data.
If showindex is an iterable, show its values as row indices.
"""
try:
bool(headers)
is_headers2bool_broken = False # noqa
except ValueError: # numpy.ndarray, pandas.core.index.Index, ...
is_headers2bool_broken = True # noqa
headers = list(headers)
index = None
if hasattr(tabular_data, "keys") and hasattr(tabular_data, "values"):
# dict-like and pandas.DataFrame?
if hasattr(tabular_data.values, "__call__"):
# likely a conventional dict
keys = tabular_data.keys()
rows = list(
izip_longest(*tabular_data.values())
) # columns have to be transposed
elif hasattr(tabular_data, "index"):
# values is a property, has .index => it's likely a pandas.DataFrame (pandas 0.11.0)
keys = list(tabular_data)
if (
showindex in ["default", "always", True]
and tabular_data.index.name is not None
):
if isinstance(tabular_data.index.name, list):
keys[:0] = tabular_data.index.name
else:
keys[:0] = [tabular_data.index.name]
vals = tabular_data.values # values matrix doesn't need to be transposed
# for DataFrames add an index per default
index = list(tabular_data.index)
rows = [list(row) for row in vals]
else:
raise ValueError("tabular data doesn't appear to be a dict or a DataFrame")
if headers == "keys":
headers = list(map(_text_type, keys)) # headers should be strings
else: # it's a usual iterable of iterables, or a NumPy array
rows = list(tabular_data)
if headers == "keys" and not rows:
# an empty table (issue #81)
headers = []
elif (
headers == "keys"
and hasattr(tabular_data, "dtype")
and getattr(tabular_data.dtype, "names")
):
# numpy record array
headers = tabular_data.dtype.names
elif (
headers == "keys"
and len(rows) > 0
and isinstance(rows[0], tuple)
and hasattr(rows[0], "_fields")
):
# namedtuple
headers = list(map(_text_type, rows[0]._fields))
elif len(rows) > 0 and hasattr(rows[0], "keys") and hasattr(rows[0], "values"):
# dict-like object
uniq_keys = set() # implements hashed lookup
keys = [] # storage for set
if headers == "firstrow":
firstdict = rows[0] if len(rows) > 0 else {}
keys.extend(firstdict.keys())
uniq_keys.update(keys)
rows = rows[1:]
for row in rows:
for k in row.keys():
# Save unique items in input order
if k not in uniq_keys:
keys.append(k)
uniq_keys.add(k)
if headers == "keys":
headers = keys
elif isinstance(headers, dict):
# a dict of headers for a list of dicts
headers = [headers.get(k, k) for k in keys]
headers = list(map(_text_type, headers))
elif headers == "firstrow":
if len(rows) > 0:
headers = [firstdict.get(k, k) for k in keys]
headers = list(map(_text_type, headers))
else:
headers = []
elif headers:
raise ValueError(
"headers for a list of dicts is not a dict or a keyword"
)
rows = [[row.get(k) for k in keys] for row in rows]
elif (
headers == "keys"
and hasattr(tabular_data, "description")
and hasattr(tabular_data, "fetchone")
and hasattr(tabular_data, "rowcount")
):
# Python Database API cursor object (PEP 0249)
# print tabulate(cursor, headers='keys')
headers = [column[0] for column in tabular_data.description]
elif headers == "keys" and len(rows) > 0:
# keys are column indices
headers = list(map(_text_type, range(len(rows[0]))))
# take headers from the first row if necessary
if headers == "firstrow" and len(rows) > 0:
if index is not None:
headers = [index[0]] + list(rows[0])
index = index[1:]
else:
headers = rows[0]
headers = list(map(_text_type, headers)) # headers should be strings
rows = rows[1:]
headers = list(map(_text_type, headers))
rows = list(map(list, rows))
# add or remove an index column
showindex_is_a_str = type(showindex) in [_text_type, _binary_type]
if showindex == "default" and index is not None:
rows = _prepend_row_index(rows, index)
elif isinstance(showindex, Iterable) and not showindex_is_a_str:
rows = _prepend_row_index(rows, list(showindex))
elif showindex == "always" or (_bool(showindex) and not showindex_is_a_str):
if index is None:
index = list(range(len(rows)))
rows = _prepend_row_index(rows, index)
elif showindex == "never" or (not _bool(showindex) and not showindex_is_a_str):
pass
# pad with empty headers for initial columns if necessary
if headers and len(rows) > 0:
nhs = len(headers)
ncols = len(rows[0])
if nhs < ncols:
headers = [""] * (ncols - nhs) + headers
return rows, headers
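# Illustrative sketch (hypothetical data, not part of the original module):
# a dict of iterables is transposed into rows, and headers="keys" picks up the
# dictionary keys (assuming insertion-ordered dicts, i.e. Python 3.7+), e.g.
#     rows, hdrs = _normalize_tabular_data(
#         {"name": ["Alice", "Bob"], "age": [24, 19]}, headers="keys")
#     # rows -> [['Alice', 24], ['Bob', 19]], hdrs -> ['name', 'age']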
def tabulate(
tabular_data,
headers=(),
tablefmt="simple",
floatfmt=_DEFAULT_FLOATFMT,
numalign=_DEFAULT_ALIGN,
stralign=_DEFAULT_ALIGN,
missingval=_DEFAULT_MISSINGVAL,
showindex="default",
disable_numparse=False,
colalign=None,
):
"""Format a fixed width table for pretty printing.
>>> print(tabulate([[1, 2.34], [-56, "8.999"], ["2", "10001"]]))
--- ---------
1 2.34
-56 8.999
2 10001
--- ---------
The first required argument (`tabular_data`) can be a
list-of-lists (or another iterable of iterables), a list of named
tuples, a dictionary of iterables, an iterable of dictionaries,
a two-dimensional NumPy array, NumPy record array, or a Pandas'
dataframe.
Table headers
-------------
To print nice column headers, supply the second argument (`headers`):
- `headers` can be an explicit list of column headers
- if `headers="firstrow"`, then the first row of data is used
- if `headers="keys"`, then dictionary keys or column indices are used
Otherwise a headerless table is produced.
If the number of headers is less than the number of columns, they
are supposed to be names of the last columns. This is consistent
with the plain-text format of R and Pandas' dataframes.
>>> print(tabulate([["sex","age"],["Alice","F",24],["Bob","M",19]],
... headers="firstrow"))
sex age
----- ----- -----
Alice F 24
Bob M 19
By default, pandas.DataFrame data have an additional column called
row index. To add a similar column to all other types of data,
use `showindex="always"` or `showindex=True`. To suppress row indices
for all types of data, pass `showindex="never"` or `showindex=False`.
To add a custom row index column, pass `showindex=some_iterable`.
>>> print(tabulate([["F",24],["M",19]], showindex="always"))
- - --
0 F 24
1 M 19
- - --
Column alignment
----------------
`tabulate` tries to detect column types automatically, and aligns
the values properly. By default it aligns decimal points of the
numbers (or flushes integer numbers to the right), and flushes
everything else to the left. Possible column alignments
(`numalign`, `stralign`) are: "right", "center", "left", "decimal"
(only for `numalign`), and None (to disable alignment).
Table formats
-------------
`floatfmt` is a format specification used for columns which
contain numeric data with a decimal point. This can also be
a list or tuple of format strings, one per column.
`None` values are replaced with a `missingval` string (like
`floatfmt`, this can also be a list of values for different
columns):
>>> print(tabulate([["spam", 1, None],
... ["eggs", 42, 3.14],
... ["other", None, 2.7]], missingval="?"))
----- -- ----
spam 1 ?
eggs 42 3.14
other ? 2.7
----- -- ----
Various plain-text table formats (`tablefmt`) are supported:
'plain', 'simple', 'grid', 'pipe', 'orgtbl', 'rst', 'mediawiki',
'latex', 'latex_raw', 'latex_booktabs', 'latex_longtable' and tsv.
Variable `tabulate_formats` contains the list of currently supported formats.
"plain" format doesn't use any pseudographics to draw tables,
it separates columns with a double space:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "plain"))
strings numbers
spam 41.9999
eggs 451
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="plain"))
spam 41.9999
eggs 451
"simple" format is like Pandoc simple_tables:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "simple"))
strings numbers
--------- ---------
spam 41.9999
eggs 451
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="simple"))
---- --------
spam 41.9999
eggs 451
---- --------
"grid" is similar to tables produced by Emacs table.el package or
Pandoc grid_tables:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "grid"))
+-----------+-----------+
| strings | numbers |
+===========+===========+
| spam | 41.9999 |
+-----------+-----------+
| eggs | 451 |
+-----------+-----------+
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="grid"))
+------+----------+
| spam | 41.9999 |
+------+----------+
| eggs | 451 |
+------+----------+
"fancy_grid" draws a grid using box-drawing characters:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "fancy_grid"))
╒═══════════╤═══════════╕
│ strings │ numbers │
╞═══════════╪═══════════╡
│ spam │ 41.9999 │
├───────────┼───────────┤
│ eggs │ 451 │
╘═══════════╧═══════════╛
"pipe" is like tables in PHP Markdown Extra extension or Pandoc
pipe_tables:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "pipe"))
| strings | numbers |
|:----------|----------:|
| spam | 41.9999 |
| eggs | 451 |
"presto" is like tables produce by the Presto CLI:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "presto"))
strings | numbers
-----------+-----------
spam | 41.9999
eggs | 451
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="pipe"))
|:-----|---------:|
| spam | 41.9999 |
| eggs | 451 |
"orgtbl" is like tables in Emacs org-mode and orgtbl-mode. They
are slightly different from "pipe" format by not using colons to
define column alignment, and using a "+" sign to indicate line
intersections:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "orgtbl"))
| strings | numbers |
|-----------+-----------|
| spam | 41.9999 |
| eggs | 451 |
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="orgtbl"))
| spam | 41.9999 |
| eggs | 451 |
"rst" is like a simple table format from reStructuredText; please
note that reStructuredText accepts also "grid" tables:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]],
... ["strings", "numbers"], "rst"))
========= =========
strings numbers
========= =========
spam 41.9999
eggs 451
========= =========
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="rst"))
==== ========
spam 41.9999
eggs 451
==== ========
"mediawiki" produces a table markup used in Wikipedia and on other
MediaWiki-based sites:
>>> print(tabulate([["strings", "numbers"], ["spam", 41.9999], ["eggs", "451.0"]],
... headers="firstrow", tablefmt="mediawiki"))
{| class="wikitable" style="text-align: left;"
|+ <!-- caption -->
|-
! strings !! align="right"| numbers
|-
| spam || align="right"| 41.9999
|-
| eggs || align="right"| 451
|}
"html" produces HTML markup as an html.escape'd str
with a ._repr_html_ method so that Jupyter Lab and Notebook display the HTML
and a .str property so that the raw HTML remains accessible.
The unsafehtml table format can be used if an unescaped HTML format is required:
>>> print(tabulate([["strings", "numbers"], ["spam", 41.9999], ["eggs", "451.0"]],
... headers="firstrow", tablefmt="html"))
<table>
<thead>
<tr><th>strings </th><th style="text-align: right;"> numbers</th></tr>
</thead>
<tbody>
<tr><td>spam </td><td style="text-align: right;"> 41.9999</td></tr>
<tr><td>eggs </td><td style="text-align: right;"> 451 </td></tr>
</tbody>
</table>
"latex" produces a tabular environment of LaTeX document markup:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="latex"))
\\begin{tabular}{lr}
\\hline
spam & 41.9999 \\\\
eggs & 451 \\\\
\\hline
\\end{tabular}
"latex_raw" is similar to "latex", but doesn't escape special characters,
such as backslash and underscore, so LaTeX commands may be embedded into
cells' values:
>>> print(tabulate([["spam$_9$", 41.9999], ["\\\\emph{eggs}", "451.0"]], tablefmt="latex_raw"))
\\begin{tabular}{lr}
\\hline
spam$_9$ & 41.9999 \\\\
\\emph{eggs} & 451 \\\\
\\hline
\\end{tabular}
"latex_booktabs" produces a tabular environment of LaTeX document markup
using the booktabs.sty package:
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="latex_booktabs"))
\\begin{tabular}{lr}
\\toprule
spam & 41.9999 \\\\
eggs & 451 \\\\
\\bottomrule
\\end{tabular}
"latex_longtable" produces a tabular environment that can stretch along
multiple pages, using the longtable package for LaTeX.
>>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="latex_longtable"))
\\begin{longtable}{lr}
\\hline
spam & 41.9999 \\\\
eggs & 451 \\\\
\\hline
\\end{longtable}
Number parsing
--------------
By default, anything which can be parsed as a number is a number.
This ensures numbers represented as strings are aligned properly.
This can lead to weird results for particular strings such as
specific git SHAs e.g. "42992e1" will be parsed into the number
429920 and aligned as such.
To completely disable number parsing (and alignment), use
`disable_numparse=True`. For more fine-grained control, a list of column
indices can be passed to disable number parsing only on those columns
e.g. `disable_numparse=[0, 2]` would disable number parsing only on the
first and third columns.
"""
if tabular_data is None:
tabular_data = []
list_of_lists, headers = _normalize_tabular_data(
tabular_data, headers, showindex=showindex
)
# empty values in the first column of RST tables should be escaped (issue #82)
# "" should be escaped as "\\ " or ".."
if tablefmt == "rst":
list_of_lists, headers = _rst_escape_first_column(list_of_lists, headers)
# PrettyTable formatting does not use any extra padding.
# Numbers are not parsed and are treated the same as strings for alignment.
# Check if pretty is the format being used and override the defaults so it
# does not impact other formats.
min_padding = MIN_PADDING
if tablefmt == "pretty":
min_padding = 0
disable_numparse = True
numalign = "center" if numalign == _DEFAULT_ALIGN else numalign
stralign = "center" if stralign == _DEFAULT_ALIGN else stralign
else:
numalign = "decimal" if numalign == _DEFAULT_ALIGN else numalign
stralign = "left" if stralign == _DEFAULT_ALIGN else stralign
# optimization: look for ANSI control codes once,
# enable smart width functions only if a control code is found
plain_text = "\t".join(
["\t".join(map(_text_type, headers))]
+ ["\t".join(map(_text_type, row)) for row in list_of_lists]
)
has_invisible = re.search(_invisible_codes, plain_text)
if not has_invisible:
has_invisible = re.search(_invisible_codes_link, plain_text)
enable_widechars = wcwidth is not None and WIDE_CHARS_MODE
if (
not isinstance(tablefmt, TableFormat)
and tablefmt in multiline_formats
and _is_multiline(plain_text)
):
tablefmt = multiline_formats.get(tablefmt, tablefmt)
is_multiline = True
else:
is_multiline = False
width_fn = _choose_width_fn(has_invisible, enable_widechars, is_multiline)
# format rows and columns, convert numeric values to strings
cols = list(izip_longest(*list_of_lists))
numparses = _expand_numparse(disable_numparse, len(cols))
coltypes = [_column_type(col, numparse=np) for col, np in zip(cols, numparses)]
if isinstance(floatfmt, basestring): # old version
float_formats = len(cols) * [
floatfmt
] # just duplicate the string to use in each column
else: # if floatfmt is list, tuple etc we have one per column
float_formats = list(floatfmt)
if len(float_formats) < len(cols):
float_formats.extend((len(cols) - len(float_formats)) * [_DEFAULT_FLOATFMT])
if isinstance(missingval, basestring):
missing_vals = len(cols) * [missingval]
else:
missing_vals = list(missingval)
if len(missing_vals) < len(cols):
missing_vals.extend((len(cols) - len(missing_vals)) * [_DEFAULT_MISSINGVAL])
cols = [
[_format(v, ct, fl_fmt, miss_v, has_invisible) for v in c]
for c, ct, fl_fmt, miss_v in zip(cols, coltypes, float_formats, missing_vals)
]
# align columns
aligns = [numalign if ct in [int, float] else stralign for ct in coltypes]
if colalign is not None:
assert isinstance(colalign, Iterable)
for idx, align in enumerate(colalign):
aligns[idx] = align
minwidths = (
[width_fn(h) + min_padding for h in headers] if headers else [0] * len(cols)
)
cols = [
_align_column(c, a, minw, has_invisible, enable_widechars, is_multiline)
for c, a, minw in zip(cols, aligns, minwidths)
]
if headers:
# align headers and add headers
t_cols = cols or [[""]] * len(headers)
t_aligns = aligns or [stralign] * len(headers)
minwidths = [
max(minw, max(width_fn(cl) for cl in c))
for minw, c in zip(minwidths, t_cols)
]
headers = [
_align_header(h, a, minw, width_fn(h), is_multiline, width_fn)
for h, a, minw in zip(headers, t_aligns, minwidths)
]
rows = list(zip(*cols))
else:
minwidths = [max(width_fn(cl) for cl in c) for c in cols]
rows = list(zip(*cols))
if not isinstance(tablefmt, TableFormat):
tablefmt = _table_formats.get(tablefmt, _table_formats["simple"])
return _format_table(tablefmt, headers, rows, minwidths, aligns, is_multiline)
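# Illustrative sketch (hypothetical data, not part of the original module):
# per-column float formats and explicit column alignment, as mentioned in the
# docstring above, can be combined like this:
#     print(tabulate([["pi", 3.141593], ["e", 2.718282]],
#                    headers=["name", "value"],
#                    floatfmt=("", ".3f"),
#                    colalign=("right", "decimal")))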
def _expand_numparse(disable_numparse, column_count):
"""
Return a list of bools of length `column_count` which indicates whether
number parsing should be used on each column.
If `disable_numparse` is a list of indices, the returned list is False at
each of those indices and True everywhere else.
If `disable_numparse` is a bool, every entry of the returned list is its negation.
"""
if isinstance(disable_numparse, Iterable):
numparses = [True] * column_count
for index in disable_numparse:
numparses[index] = False
return numparses
else:
return [not disable_numparse] * column_count
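# Illustrative sketch (not part of the original module):
#     _expand_numparse(False, 3)   # -> [True, True, True]
#     _expand_numparse(True, 3)    # -> [False, False, False]
#     _expand_numparse([0, 2], 4)  # -> [False, True, False, True]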
def _pad_row(cells, padding):
if cells:
pad = " " * padding
padded_cells = [pad + cell + pad for cell in cells]
return padded_cells
else:
return cells
def _build_simple_row(padded_cells, rowfmt):
"Format row according to DataRow format without padding."
begin, sep, end = rowfmt
return (begin + sep.join(padded_cells) + end).rstrip()
def _build_row(padded_cells, colwidths, colaligns, rowfmt):
"Return a string which represents a row of data cells."
if not rowfmt:
return None
if hasattr(rowfmt, "__call__"):
return rowfmt(padded_cells, colwidths, colaligns)
else:
return _build_simple_row(padded_cells, rowfmt)
def _append_basic_row(lines, padded_cells, colwidths, colaligns, rowfmt):
lines.append(_build_row(padded_cells, colwidths, colaligns, rowfmt))
return lines
def _append_multiline_row(
lines, padded_multiline_cells, padded_widths, colaligns, rowfmt, pad
):
colwidths = [w - 2 * pad for w in padded_widths]
cells_lines = [c.splitlines() for c in padded_multiline_cells]
nlines = max(map(len, cells_lines)) # number of lines in the row
# vertically pad cells where some lines are missing
cells_lines = [
(cl + [" " * w] * (nlines - len(cl))) for cl, w in zip(cells_lines, colwidths)
]
lines_cells = [[cl[i] for cl in cells_lines] for i in range(nlines)]
for ln in lines_cells:
padded_ln = _pad_row(ln, pad)
_append_basic_row(lines, padded_ln, colwidths, colaligns, rowfmt)
return lines
def _build_line(colwidths, colaligns, linefmt):
"Return a string which represents a horizontal line."
if not linefmt:
return None
if hasattr(linefmt, "__call__"):
return linefmt(colwidths, colaligns)
else:
begin, fill, sep, end = linefmt
cells = [fill * w for w in colwidths]
return _build_simple_row(cells, (begin, sep, end))
def _append_line(lines, colwidths, colaligns, linefmt):
lines.append(_build_line(colwidths, colaligns, linefmt))
return lines
class JupyterHTMLStr(str):
"""Wrap the string with a _repr_html_ method so that Jupyter
displays the HTML table"""
def _repr_html_(self):
return self
@property
def str(self):
"""add a .str property so that the raw string is still accessible"""
return self
def _format_table(fmt, headers, rows, colwidths, colaligns, is_multiline):
"""Produce a plain-text representation of the table."""
lines = []
hidden = fmt.with_header_hide if (headers and fmt.with_header_hide) else []
pad = fmt.padding
headerrow = fmt.headerrow
padded_widths = [(w + 2 * pad) for w in colwidths]
if is_multiline:
pad_row = lambda row, _: row # noqa do it later, in _append_multiline_row
append_row = partial(_append_multiline_row, pad=pad)
else:
pad_row = _pad_row
append_row = _append_basic_row
padded_headers = pad_row(headers, pad)
padded_rows = [pad_row(row, pad) for row in rows]
if fmt.lineabove and "lineabove" not in hidden:
_append_line(lines, padded_widths, colaligns, fmt.lineabove)
if padded_headers:
append_row(lines, padded_headers, padded_widths, colaligns, headerrow)
if fmt.linebelowheader and "linebelowheader" not in hidden:
_append_line(lines, padded_widths, colaligns, fmt.linebelowheader)
if padded_rows and fmt.linebetweenrows and "linebetweenrows" not in hidden:
# initial rows with a line below
for row in padded_rows[:-1]:
append_row(lines, row, padded_widths, colaligns, fmt.datarow)
_append_line(lines, padded_widths, colaligns, fmt.linebetweenrows)
# the last row without a line below
append_row(lines, padded_rows[-1], padded_widths, colaligns, fmt.datarow)
else:
for row in padded_rows:
append_row(lines, row, padded_widths, colaligns, fmt.datarow)
if fmt.linebelow and "linebelow" not in hidden:
_append_line(lines, padded_widths, colaligns, fmt.linebelow)
if headers or rows:
output = "\n".join(lines)
if fmt.lineabove == _html_begin_table_without_header:
return JupyterHTMLStr(output)
else:
return output
else: # a completely empty table
return ""
def _main():
"""\
Usage: tabulate [options] [FILE ...]
Pretty-print tabular data.
See also https://github.com/astanin/python-tabulate
FILE a filename of the file with tabular data;
if "-" or missing, read data from stdin.
Options:
-h, --help show this message
-1, --header use the first row of data as a table header
-o FILE, --output FILE print table to FILE (default: stdout)
-s REGEXP, --sep REGEXP use a custom column separator (default: whitespace)
-F FPFMT, --float FPFMT floating point number format (default: g)
-f FMT, --format FMT set output table format; supported formats:
plain, simple, grid, fancy_grid, pipe, orgtbl,
rst, mediawiki, html, latex, latex_raw,
latex_booktabs, latex_longtable, tsv
(default: simple)
"""
import getopt
import sys
import textwrap
usage = textwrap.dedent(_main.__doc__)
try:
opts, args = getopt.getopt(
sys.argv[1:],
"h1o:s:F:A:f:",
["help", "header", "output", "sep=", "float=", "align=", "format="],
)
except getopt.GetoptError as e:
print(e)
print(usage)
sys.exit(2)
headers = []
floatfmt = _DEFAULT_FLOATFMT
colalign = None
tablefmt = "simple"
sep = r"\s+"
outfile = "-"
for opt, value in opts:
if opt in ["-1", "--header"]:
headers = "firstrow"
elif opt in ["-o", "--output"]:
outfile = value
elif opt in ["-F", "--float"]:
floatfmt = value
elif opt in ["-C", "--colalign"]:
colalign = value.split()
elif opt in ["-f", "--format"]:
if value not in tabulate_formats:
print("%s is not a supported table format" % value)
print(usage)
sys.exit(3)
tablefmt = value
elif opt in ["-s", "--sep"]:
sep = value
elif opt in ["-h", "--help"]:
print(usage)
sys.exit(0)
files = [sys.stdin] if not args else args
with (sys.stdout if outfile == "-" else open(outfile, "w")) as out:
for f in files:
if f == "-":
f = sys.stdin
if _is_file(f):
_pprint_file(
f,
headers=headers,
tablefmt=tablefmt,
sep=sep,
floatfmt=floatfmt,
file=out,
colalign=colalign,
)
else:
with open(f) as fobj:
_pprint_file(
fobj,
headers=headers,
tablefmt=tablefmt,
sep=sep,
floatfmt=floatfmt,
file=out,
colalign=colalign,
)
def _pprint_file(fobject, headers, tablefmt, sep, floatfmt, file, colalign):
rows = fobject.readlines()
table = [re.split(sep, r.rstrip()) for r in rows if r.strip()]
print(
tabulate(table, headers, tablefmt, floatfmt=floatfmt, colalign=colalign),
file=file,
)
if __name__ == "__main__":
_main()
|
PypiClean
|
/tocka-django-cms-3.1.2a0.tar.gz/tocka-django-cms-3.1.2a0/cms/middleware/toolbar.py
|
from django.contrib.admin.models import LogEntry, ADDITION, CHANGE
from django.core.urlresolvers import resolve
from django.http import HttpResponse
from django.template.loader import render_to_string
from cms.toolbar.toolbar import CMSToolbar
from cms.utils.conf import get_cms_setting
from cms.utils.i18n import force_language
from cms.utils.placeholder import get_toolbar_plugin_struct
from menus.menu_pool import menu_pool
def toolbar_plugin_processor(instance, placeholder, rendered_content, original_context):
from cms.plugin_pool import plugin_pool
original_context.push()
child_plugin_classes = []
plugin_class = instance.get_plugin_class()
if plugin_class.allow_children:
inst, plugin = instance.get_plugin_instance()
page = original_context['request'].current_page
plugin.cms_plugin_instance = inst
children = [plugin_pool.get_plugin(cls) for cls in plugin.get_child_classes(placeholder, page)]
# Builds the list of dictionaries containing module, name and value for the plugin dropdowns
child_plugin_classes = get_toolbar_plugin_struct(children, placeholder.slot, placeholder.page,
parent=plugin_class)
instance.placeholder = placeholder
request = original_context['request']
with force_language(request.toolbar.toolbar_language):
data = {
'instance': instance,
'rendered_content': rendered_content,
'child_plugin_classes': child_plugin_classes,
'edit_url': placeholder.get_edit_url(instance.pk),
'add_url': placeholder.get_add_url(),
'delete_url': placeholder.get_delete_url(instance.pk),
'move_url': placeholder.get_move_url(),
}
original_context.update(data)
plugin_class = instance.get_plugin_class()
template = plugin_class.frontend_edit_template
output = render_to_string(template, original_context).strip()
original_context.pop()
return output
class ToolbarMiddleware(object):
"""
Middleware to set up CMS Toolbar.
"""
def is_cms_request(self, request):
cms_app_name = get_cms_setting('APP_NAME')
toolbar_hide = get_cms_setting('TOOLBAR_HIDE')
if not toolbar_hide or not cms_app_name:
return True
try:
match = resolve(request.path_info)
except:
return False
return match.app_name == cms_app_name
def process_request(self, request):
"""
If we should show the toolbar for this request, put it on
request.toolbar. Then call the request_hook on the toolbar.
"""
if not self.is_cms_request(request):
return
edit_on = get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')
edit_off = get_cms_setting('CMS_TOOLBAR_URL__EDIT_OFF')
build = get_cms_setting('CMS_TOOLBAR_URL__BUILD')
disable = get_cms_setting('CMS_TOOLBAR_URL__DISABLE')
anonymous_on = get_cms_setting('TOOLBAR_ANONYMOUS_ON')
if disable in request.GET:
request.session['cms_toolbar_disabled'] = True
if edit_on in request.GET: # If we actively enter edit mode, we should show the toolbar in any case
request.session['cms_toolbar_disabled'] = False
if request.user.is_staff or (anonymous_on and request.user.is_anonymous()):
if edit_on in request.GET and not request.session.get('cms_edit', False):
if not request.session.get('cms_edit', False):
menu_pool.clear()
request.session['cms_edit'] = True
if request.session.get('cms_build', False):
request.session['cms_build'] = False
if edit_off in request.GET and request.session.get('cms_edit', True):
if request.session.get('cms_edit', True):
menu_pool.clear()
request.session['cms_edit'] = False
if request.session.get('cms_build', False):
request.session['cms_build'] = False
if build in request.GET and not request.session.get('cms_build', False):
request.session['cms_build'] = True
else:
request.session['cms_build'] = False
request.session['cms_edit'] = False
if request.user.is_staff:
try:
request.cms_latest_entry = LogEntry.objects.filter(
user=request.user,
action_flag__in=(ADDITION, CHANGE)
).only('pk').order_by('-pk')[0].pk
except IndexError:
request.cms_latest_entry = -1
request.toolbar = CMSToolbar(request)
def process_view(self, request, view_func, view_args, view_kwarg):
if not self.is_cms_request(request):
return
response = request.toolbar.request_hook()
if isinstance(response, HttpResponse):
return response
def process_response(self, request, response):
if not self.is_cms_request(request):
return response
from django.utils.cache import add_never_cache_headers
if ((hasattr(request, 'toolbar') and request.toolbar.edit_mode) or
not all(ph.cache_placeholder
for ph in getattr(request, 'placeholders', ()))):
add_never_cache_headers(response)
if hasattr(request, 'user') and request.user.is_staff and response.status_code != 500:
try:
pk = LogEntry.objects.filter(
user=request.user,
action_flag__in=(ADDITION, CHANGE)
).only('pk').order_by('-pk')[0].pk
if hasattr(request, 'cms_latest_entry') and request.cms_latest_entry != pk:
log = LogEntry.objects.filter(user=request.user, action_flag__in=(ADDITION, CHANGE))[0]
request.session['cms_log_latest'] = log.pk
# If there were no LogEntries, just don't touch the session.
# Note that in the case of a user logging-in as another user,
# request may have a cms_latest_entry attribute, but there are no
# LogEntries for request.user.
except IndexError:
pass
return response
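# Hedged usage note (not part of the original module): for django CMS 3.1 this
# middleware is typically enabled by adding it to the (pre-Django-1.10 style)
# middleware list in settings.py, e.g.
#     MIDDLEWARE_CLASSES = [
#         # ...
#         'cms.middleware.toolbar.ToolbarMiddleware',
#     ]
# Edit mode is then toggled through the query parameters configured by the
# CMS_TOOLBAR_URL__* settings read in process_request above.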
|
PypiClean
|
/pytorch-search-0.6.8.tar.gz/pytorch-search-0.6.8/jina/docker/hubapi.py
|
import json
import requests
from typing import Dict
from pkg_resources import resource_stream
from .helper import credentials_file
from ..helper import yaml, colored
def _list(logger, name: str = None, kind: str = None, type_: str = None, keywords: tuple = ('numeric',)):
""" Hub API Invocation to run `hub list` """
# TODO: Shouldn't pass a default argument for keywords. Need to handle after lambda function gets fixed
with resource_stream('jina', '/'.join(('resources', 'hubapi.yml'))) as fp:
hubapi_yml = yaml.load(fp)
hubapi_url = hubapi_yml['hubapi']['url']
hubapi_list = hubapi_yml['hubapi']['list']
params = {}
if name:
params['name'] = name
if kind:
params['kind'] = kind
if type_:
params['type'] = type_
if keywords:
# Because of the way the lambda function handles params, we need to pass them comma-separated rather than as an iterable
params['keywords'] = ','.join(keywords) if len(keywords) > 1 else keywords
if params:
response = requests.get(url=f'{hubapi_url}{hubapi_list}',
params=params)
if response.status_code == requests.codes.bad_request and response.text == 'No docs found':
print(f'\n{colored("✗ Could not find any executors. Please change the arguments and retry!", "red")}\n')
return response
if response.status_code == requests.codes.internal_server_error:
logger.warning(f'Got the following server error: {response.text}')
print(f'\n{colored("✗ Could not find any executors. Something wrong with the server!", "red")}\n')
return response
manifests = response.json()['manifest']
for index, manifest in enumerate(manifests):
print(f'\n{colored("☟ Executor #" + str(index+1), "cyan", attrs=["bold"])}')
if 'name' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Name", "grey", attrs=["bold"]):<30s}: '
f'{manifest["name"]}')
if 'version' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Version", "grey", attrs=["bold"]):<30s}: '
f'{manifest["version"]}')
if 'description' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Description", "grey", attrs=["bold"]):<30s}: '
f'{manifest["description"]}')
if 'author' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Author", "grey", attrs=["bold"]):<30s}: '
f'{manifest["author"]}')
if 'kind' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Kind", "grey", attrs=["bold"]):<30s}: '
f'{manifest["kind"]}')
if 'type' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Type", "grey", attrs=["bold"]):<30s}: '
f'{manifest["type"]}')
if 'keywords' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Keywords", "grey", attrs=["bold"]):<30s}: '
f'{manifest["keywords"]}')
if 'documentation' in manifest:
print(f'{colored("☞", "green")} '
f'{colored("Documentation", "grey", attrs=["bold"]):<30s}: '
f'{manifest["documentation"]}')
return response
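# Hedged usage sketch (not part of the original module): _list is an internal
# helper normally reached through the `jina hub list` CLI; called directly it
# only needs a logger plus the optional filters (hypothetical filter values):
#     _list(logger, name='dummy-executor', kind='encoder', keywords=('numeric',))
# which issues a GET against the hubapi URL configured in resources/hubapi.yml.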
def _push(logger, summary: Dict = None):
""" Hub API Invocation to run `hub push` """
if not summary:
logger.error('summary is empty. Nothing to do.')
return
with resource_stream('jina', '/'.join(('resources', 'hubapi.yml'))) as fp:
hubapi_yml = yaml.load(fp)
hubapi_url = hubapi_yml['hubapi']['url']
hubapi_push = hubapi_yml['hubapi']['push']
if not credentials_file().is_file():
logger.error(f'user has not logged in. Please login using command: {colored("jina hub login", attrs=["bold"])}')
return
with open(credentials_file(), 'r') as cf:
cred_yml = yaml.load(cf)
access_token = cred_yml['access_token']
if not access_token:
logger.error(f'user has not logged in. Please login using command: {colored("jina hub login", attrs=["bold"])}')
return
headers = {
'Accept': 'application/json',
'authorizationToken': access_token
}
try:
response = requests.post(url=f'{hubapi_url}{hubapi_push}',
headers=headers,
data=json.dumps(summary))
if response.status_code == requests.codes.ok:
logger.info(response.text)
elif response.status_code == requests.codes.unauthorized:
logger.error(f'user is unauthorized to perform push operation. '
f'please login using command: {colored("jina hub login", attrs=["bold"])}')
elif response.status_code == requests.codes.internal_server_error:
if 'auth' in response.text.lower():
logger.error(f'authentication issues! '
f'please login using command: {colored("jina hub login", attrs=["bold"])}')
logger.error(f'got an error from the API: {response.text}')
except Exception as exp:
logger.error(f'got an exception while invoking hubapi for push {repr(exp)}')
return
|
PypiClean
|
/python-xlib-0.33.tar.gz/python-xlib-0.33/Xlib/ext/res.py
|
from Xlib.protocol import rq
RES_MAJOR_VERSION = 1
RES_MINOR_VERSION = 2
extname = "X-Resource"
# v1.0
ResQueryVersion = 0
ResQueryClients = 1
ResQueryClientResources = 2
ResQueryClientPixmapBytes = 3
# v1.2
ResQueryClientIds = 4
ResQueryResourceBytes = 5
class QueryVersion(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryVersion),
rq.RequestLength(),
rq.Card8("client_major"),
rq.Card8("client_minor"),
rq.Pad(2))
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.Card16("server_major"),
rq.Card16("server_minor"),
rq.Pad(20))
def query_version(self, client_major=RES_MAJOR_VERSION,
client_minor=RES_MINOR_VERSION):
""" Query the protocol version supported by the X server.
The client sends the highest supported version to the server and the
server sends the highest version it supports, but no higher than the
requested version."""
return QueryVersion(
display=self.display,
opcode=self.display.get_extension_major(extname),
client_major=client_major,
client_minor=client_minor)
Client = rq.Struct(
rq.Card32("resource_base"),
rq.Card32("resource_mask"))
class QueryClients(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryClients),
rq.RequestLength())
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.LengthOf("clients", 4),
rq.Pad(20),
rq.List("clients", Client))
def query_clients(self):
"""Request the list of all currently connected clients."""
return QueryClients(
display=self.display,
opcode=self.display.get_extension_major(extname))
Type = rq.Struct(
rq.Card32("resource_type"),
rq.Card32("count"))
class QueryClientResources(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryClientResources),
rq.RequestLength(),
rq.Card32("client"))
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.LengthOf("types", 4),
rq.Pad(20),
rq.List("types", Type))
def query_client_resources(self, client):
"""Request the number of resources owned by a client.
The server will return the counts of each type of resource.
"""
return QueryClientResources(
display=self.display,
opcode=self.display.get_extension_major(extname),
client=client)
class QueryClientPixmapBytes(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryClientPixmapBytes),
rq.RequestLength(),
rq.Card32("client"))
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.Card32("bytes"),
rq.Card32("bytes_overflow"),
rq.Pad(16))
def query_client_pixmap_bytes(self, client):
"""Query the pixmap usage of some client.
The returned number is a sum of memory usage of each pixmap that can be
attributed to the given client.
"""
return QueryClientPixmapBytes(
display=self.display,
opcode=self.display.get_extension_major(extname),
client=client)
class SizeOf(rq.LengthOf):
"""A SizeOf stores the size in bytes of some other Field whose size
may vary, e.g. List
"""
def __init__(self, name, size, item_size):
rq.LengthOf.__init__(self, name, size)
self.item_size = item_size
def parse_value(self, length, display):
return length // self.item_size
ClientXIDMask = 1 << 0
LocalClientPIDMask = 1 << 1
ClientIdSpec = rq.Struct(
rq.Card32("client"),
rq.Card32("mask"))
ClientIdValue = rq.Struct(
rq.Object("spec", ClientIdSpec),
SizeOf("value", 4, 4),
rq.List("value", rq.Card32Obj))
class QueryClientIds(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryClientIds),
rq.RequestLength(),
rq.LengthOf("specs", 4),
rq.List("specs", ClientIdSpec))
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.LengthOf("ids", 4),
rq.Pad(20),
rq.List("ids", ClientIdValue))
def query_client_ids(self, specs):
"""Request to identify a given set of clients with some identification method.
The request sends a list of specifiers that select clients and
identification methods to the server. The server then tries to identify the
chosen clients using the identification methods specified for each client.
The server returns IDs for those clients that were successfully identified.
"""
return QueryClientIds(
display=self.display,
opcode=self.display.get_extension_major(extname),
specs=specs)
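# Hedged sketch (not part of the original module): each spec is a ClientIdSpec
# value, i.e. a mapping with "client" and "mask" fields. Asking the server for
# the local PID of the client owning a window `w` might look like this,
# assuming the extension has been initialised on `display`:
#     display.res_query_client_ids([{"client": w.id, "mask": LocalClientPIDMask}])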
ResourceIdSpec = rq.Struct(
rq.Card32("resource"),
rq.Card32("type"))
ResourceSizeSpec = rq.Struct(
# inline struct ResourceIdSpec to work around
# a parser bug with nested objects
rq.Card32("resource"),
rq.Card32("type"),
rq.Card32("bytes"),
rq.Card32("ref_count"),
rq.Card32("use_count"))
ResourceSizeValue = rq.Struct(
rq.Object("size", ResourceSizeSpec),
rq.LengthOf("cross_references", 4),
rq.List("cross_references", ResourceSizeSpec))
class QueryResourceBytes(rq.ReplyRequest):
_request = rq.Struct(
rq.Card8("opcode"),
rq.Opcode(ResQueryResourceBytes),
rq.RequestLength(),
rq.Card32("client"),
rq.LengthOf("specs", 4),
rq.List("specs", ResourceIdSpec))
_reply = rq.Struct(
rq.ReplyCode(),
rq.Pad(1),
rq.Card16("sequence_number"),
rq.ReplyLength(),
rq.LengthOf("sizes", 4),
rq.Pad(20),
rq.List("sizes", ResourceSizeValue))
def query_resource_bytes(self, client, specs):
"""Query the sizes of resources from X server.
The request sends a list of specifiers that select resources for size
calculation. The server tries to calculate the sizes of the chosen resources
and returns an estimate for a resource only if its size could be determined.
"""
return QueryResourceBytes(
display=self.display,
opcode=self.display.get_extension_major(extname),
client=client,
specs=specs)
def init(disp, info):
disp.extension_add_method("display", "res_query_version", query_version)
disp.extension_add_method("display", "res_query_clients", query_clients)
disp.extension_add_method("display", "res_query_client_resources",
query_client_resources)
disp.extension_add_method("display", "res_query_client_pixmap_bytes",
query_client_pixmap_bytes)
disp.extension_add_method("display", "res_query_client_ids",
query_client_ids)
disp.extension_add_method("display", "res_query_resource_bytes",
query_resource_bytes)
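# Hedged usage sketch (not part of the original module): python-xlib loads this
# extension automatically when the X server advertises "X-Resource", after
# which the methods registered in init() hang off the Display object, e.g.
#     from Xlib import display
#     d = display.Display()
#     print(d.res_query_version())
#     print(d.res_query_clients())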
|
PypiClean
|
/dash-google-charts-0.0.3rc1.tar.gz/dash-google-charts-0.0.3rc1/dash_google_charts/_components/ColumnChart.py
|
from dash.development.base_component import Component, _explicitize_args
class ColumnChart(Component):
"""A ColumnChart component.
Keyword arguments:
- id (string; optional): The ID of this component, used to identify dash components
in callbacks. The ID needs to be unique across all of the
components in an app.
- style (dict; optional): Defines CSS styles which will override styles previously set.
- className (string; optional): Often used with CSS to style elements with common properties.
- height (string | number; optional): The height of the chart.
- width (string | number; optional): The width of the chart.
- options (dict; optional): A dictionary of options for the chart
- data (list of dicts | dict; optional): The data for the chart
- diffdata (dict; optional): Some charts support passing `diffdata` for visualising a change over time
- mapsApiKey (string; optional): Google maps api key for use with GeoChart
- spreadSheetUrl (string; optional): URL to google sheet for pulling data
- spreadSheetQueryParameters (dict; optional): Query parameters for external spreadsheet
- formatters (list of dicts | dict; optional): Data formatting options.
- legend_toggle (boolean; optional): Allow legend to toggle inclusion of data in chart
- selection (dict; optional): Data associated to user selection for use in callbacks
- dataTable (dict; optional): DataTable object, can be combined with selection data for use in callbacks"""
@_explicitize_args
def __init__(self, id=Component.UNDEFINED, style=Component.UNDEFINED, className=Component.UNDEFINED, height=Component.UNDEFINED, width=Component.UNDEFINED, options=Component.UNDEFINED, data=Component.UNDEFINED, diffdata=Component.UNDEFINED, mapsApiKey=Component.UNDEFINED, spreadSheetUrl=Component.UNDEFINED, spreadSheetQueryParameters=Component.UNDEFINED, formatters=Component.UNDEFINED, legend_toggle=Component.UNDEFINED, selection=Component.UNDEFINED, dataTable=Component.UNDEFINED, **kwargs):
self._prop_names = ['id', 'style', 'className', 'height', 'width', 'options', 'data', 'diffdata', 'mapsApiKey', 'spreadSheetUrl', 'spreadSheetQueryParameters', 'formatters', 'legend_toggle', 'selection', 'dataTable']
self._type = 'ColumnChart'
self._namespace = 'dash_google_charts/_components'
self._valid_wildcard_attributes = []
self.available_properties = ['id', 'style', 'className', 'height', 'width', 'options', 'data', 'diffdata', 'mapsApiKey', 'spreadSheetUrl', 'spreadSheetQueryParameters', 'formatters', 'legend_toggle', 'selection', 'dataTable']
self.available_wildcard_properties = []
_explicit_args = kwargs.pop('_explicit_args')
_locals = locals()
_locals.update(kwargs) # For wildcard attrs
args = {k: _locals[k] for k in _explicit_args if k != 'children'}
for k in []:
if k not in args:
raise TypeError(
'Required argument `' + k + '` was not specified.')
super(ColumnChart, self).__init__(**args)
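# Hedged usage sketch (not part of the original package): like any Dash
# component, ColumnChart is placed in a layout and driven through the keyword
# arguments documented above. Hypothetical ids, data and options; the data
# shape follows react-google-charts conventions:
#     import dash
#     import dash_html_components as html
#     app = dash.Dash(__name__)
#     app.layout = html.Div([
#         ColumnChart(id="sales-chart",
#                     data=[["Year", "Sales"], ["2019", 1000], ["2020", 1170]],
#                     options={"title": "Company performance"})
#     ])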
|
PypiClean
|
/af_process_image_renderer-3.0.3-py3-none-any.whl/af_process_image_renderer/api_client.py
|
from __future__ import absolute_import
import datetime
import json
import mimetypes
from multiprocessing.pool import ThreadPool
import os
import re
import tempfile
# python 2 and python 3 compatibility library
import six
from six.moves.urllib.parse import quote
from af_process_image_renderer.configuration import Configuration
import af_process_image_renderer.models
from af_process_image_renderer import rest
class ApiClient(object):
"""Generic API client for Swagger client library builds.
Swagger generic API client. This client handles the client-
server communication, and is invariant across implementations. Specifics of
the methods and models for each application are generated from the Swagger
templates.
NOTE: This class is auto generated by the swagger code generator program.
Ref: https://github.com/swagger-api/swagger-codegen
Do not edit the class manually.
:param configuration: .Configuration object for this client
:param header_name: a header to pass when making calls to the API.
:param header_value: a header value to pass when making calls to
the API.
:param cookie: a cookie to include in the header when making calls
to the API
"""
PRIMITIVE_TYPES = (float, bool, bytes, six.text_type) + six.integer_types
NATIVE_TYPES_MAPPING = {
'int': int,
'long': int if six.PY3 else long, # noqa: F821
'float': float,
'str': str,
'bool': bool,
'date': datetime.date,
'datetime': datetime.datetime,
'object': object,
}
def __init__(self, configuration=None, header_name=None, header_value=None,
cookie=None):
if configuration is None:
configuration = Configuration()
self.configuration = configuration
self.pool = ThreadPool()
self.rest_client = rest.RESTClientObject(configuration)
self.default_headers = {}
if header_name is not None:
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'Swagger-Codegen/3.0.3/python'
def __del__(self):
self.pool.close()
self.pool.join()
@property
def user_agent(self):
"""User agent for this API client"""
return self.default_headers['User-Agent']
@user_agent.setter
def user_agent(self, value):
self.default_headers['User-Agent'] = value
def set_default_header(self, header_name, header_value):
self.default_headers[header_name] = header_value
def __call_api(
self, resource_path, method, path_params=None,
query_params=None, header_params=None, body=None, post_params=None,
files=None, response_type=None, auth_settings=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
config = self.configuration
# header parameters
header_params = header_params or {}
header_params.update(self.default_headers)
if self.cookie:
header_params['Cookie'] = self.cookie
if header_params:
header_params = self.sanitize_for_serialization(header_params)
header_params = dict(self.parameters_to_tuples(header_params,
collection_formats))
# path parameters
if path_params:
path_params = self.sanitize_for_serialization(path_params)
path_params = self.parameters_to_tuples(path_params,
collection_formats)
for k, v in path_params:
# specified safe chars, encode everything
resource_path = resource_path.replace(
'{%s}' % k,
quote(str(v), safe=config.safe_chars_for_path_param)
)
# query parameters
if query_params:
query_params = self.sanitize_for_serialization(query_params)
query_params = self.parameters_to_tuples(query_params,
collection_formats)
# post parameters
if post_params or files:
post_params = self.prepare_post_parameters(post_params, files)
post_params = self.sanitize_for_serialization(post_params)
post_params = self.parameters_to_tuples(post_params,
collection_formats)
# auth setting
self.update_params_for_auth(header_params, query_params, auth_settings)
# body
if body:
body = self.sanitize_for_serialization(body)
# request url
url = self.configuration.host + resource_path
# perform request and return response
response_data = self.request(
method, url, query_params=query_params, headers=header_params,
post_params=post_params, body=body,
_preload_content=_preload_content,
_request_timeout=_request_timeout)
self.last_response = response_data
return_data = response_data
if _preload_content:
# deserialize response data
if response_type:
return_data = self.deserialize(response_data, response_type)
else:
return_data = None
if _return_http_data_only:
return (return_data)
else:
return (return_data, response_data.status,
response_data.getheaders())
def sanitize_for_serialization(self, obj):
"""Builds a JSON POST object.
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is swagger model, return the properties dict.
:param obj: The data to serialize.
:return: The serialized form of data.
"""
if obj is None:
return None
elif isinstance(obj, self.PRIMITIVE_TYPES):
return obj
elif isinstance(obj, list):
return [self.sanitize_for_serialization(sub_obj)
for sub_obj in obj]
elif isinstance(obj, tuple):
return tuple(self.sanitize_for_serialization(sub_obj)
for sub_obj in obj)
elif isinstance(obj, (datetime.datetime, datetime.date)):
return obj.isoformat()
if isinstance(obj, dict):
obj_dict = obj
else:
# Convert a model obj to a dict, skipping the class-level
# `swagger_types` and `attribute_map` attributes and any
# attribute whose value is None.
# Attribute names are converted to the JSON keys defined in
# the model's attribute_map for the request.
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
for attr, _ in six.iteritems(obj.swagger_types)
if getattr(obj, attr) is not None}
return {key: self.sanitize_for_serialization(val)
for key, val in six.iteritems(obj_dict)}
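# Illustrative sketch (hypothetical values and a hypothetical `client`
# ApiClient instance, not part of the generated client):
# sanitize_for_serialization is applied recursively before a request body is
# sent, e.g.
#     client.sanitize_for_serialization(
#         {"when": datetime.date(2020, 1, 2), "tags": ("a", "b"), "skip": None})
#     # -> {'when': '2020-01-02', 'tags': ('a', 'b'), 'skip': None}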
def deserialize(self, response, response_type):
"""Deserializes response into an object.
:param response: RESTResponse object to be deserialized.
:param response_type: class literal for
deserialized object, or string of class name.
:return: deserialized object.
"""
if response.status == 204:
return None
# handle file downloading
# save response body into a tmp file and return the instance
if response_type == "file":
return self.__deserialize_file(response)
# fetch data from response object
try:
data = json.loads(response.data)
except ValueError:
data = response.data
return self.__deserialize(data, response_type)
def __deserialize(self, data, klass):
"""Deserializes dict, list, str into an object.
:param data: dict, list or str.
:param klass: class literal, or string of class name.
:return: object.
"""
if data is None:
return None
if type(klass) == str:
if klass.startswith('list['):
sub_kls = re.match(r'list\[(.*)\]', klass).group(1)
return [self.__deserialize(sub_data, sub_kls)
for sub_data in data]
if klass.startswith('dict('):
sub_kls = re.match(r'dict\(([^,]*), (.*)\)', klass).group(2)
return {k: self.__deserialize(v, sub_kls)
for k, v in six.iteritems(data)}
# convert str to class
if klass in self.NATIVE_TYPES_MAPPING:
klass = self.NATIVE_TYPES_MAPPING[klass]
else:
klass = getattr(af_process_image_renderer.models, klass)
if klass in self.PRIMITIVE_TYPES:
return self.__deserialize_primitive(data, klass)
elif klass == object:
return self.__deserialize_object(data)
elif klass == datetime.date:
return self.__deserialize_date(data)
elif klass == datetime.datetime:
return self.__deserialize_datatime(data)
else:
return self.__deserialize_model(data, klass)
def call_api(self, resource_path, method,
path_params=None, query_params=None, header_params=None,
body=None, post_params=None, files=None,
response_type=None, auth_settings=None, async_req=None,
_return_http_data_only=None, collection_formats=None,
_preload_content=True, _request_timeout=None):
"""Makes the HTTP request (synchronous) and returns deserialized data.
To make an async request, set the async_req parameter.
:param resource_path: Path to method endpoint.
:param method: Method to call.
:param path_params: Path parameters in the url.
:param query_params: Query parameters in the url.
:param header_params: Header parameters to be
placed in the request header.
:param body: Request body.
:param post_params dict: Request post form parameters,
for `application/x-www-form-urlencoded`, `multipart/form-data`.
:param auth_settings list: Auth Settings names for the request.
:param response_type: Response data type.
:param files dict: key -> filename, value -> filepath,
for `multipart/form-data`.
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param collection_formats: dict of collection formats for path, query,
header, and post parameters.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return:
If async_req parameter is True,
the request will be called asynchronously.
The method will return the request thread.
If parameter async_req is False or missing,
then the method will return the response directly.
"""
if not async_req:
return self.__call_api(resource_path, method,
path_params, query_params, header_params,
body, post_params, files,
response_type, auth_settings,
_return_http_data_only, collection_formats,
_preload_content, _request_timeout)
else:
thread = self.pool.apply_async(self.__call_api, (resource_path,
method, path_params, query_params,
header_params, body,
post_params, files,
response_type, auth_settings,
_return_http_data_only,
collection_formats,
_preload_content, _request_timeout))
return thread
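# Hedged usage sketch (not part of the generated client): the per-endpoint API
# classes generated by swagger-codegen funnel into call_api; with
# async_req=True the same call returns an AsyncResult-style handle whose .get()
# yields the response, e.g. (hypothetical endpoint method):
#     result = some_api.some_endpoint(async_req=True)
#     data = result.get()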
def request(self, method, url, query_params=None, headers=None,
post_params=None, body=None, _preload_content=True,
_request_timeout=None):
"""Makes the HTTP request using RESTClient."""
if method == "GET":
return self.rest_client.GET(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "HEAD":
return self.rest_client.HEAD(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "OPTIONS":
return self.rest_client.OPTIONS(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "POST":
return self.rest_client.POST(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PUT":
return self.rest_client.PUT(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PATCH":
return self.rest_client.PATCH(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "DELETE":
return self.rest_client.DELETE(url,
query_params=query_params,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
else:
raise ValueError(
"http method must be `GET`, `HEAD`, `OPTIONS`,"
" `POST`, `PATCH`, `PUT` or `DELETE`."
)
def parameters_to_tuples(self, params, collection_formats):
"""Get parameters as list of tuples, formatting collections.
:param params: Parameters as dict or list of two-tuples
:param dict collection_formats: Parameter collection formats
:return: Parameters as list of tuples, collections formatted
"""
new_params = []
if collection_formats is None:
collection_formats = {}
for k, v in six.iteritems(params) if isinstance(params, dict) else params: # noqa: E501
if k in collection_formats:
collection_format = collection_formats[k]
if collection_format == 'multi':
new_params.extend((k, value) for value in v)
else:
if collection_format == 'ssv':
delimiter = ' '
elif collection_format == 'tsv':
delimiter = '\t'
elif collection_format == 'pipes':
delimiter = '|'
else: # csv is the default
delimiter = ','
new_params.append(
(k, delimiter.join(str(value) for value in v)))
else:
new_params.append((k, v))
return new_params
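# Illustrative sketch (hypothetical parameters and a hypothetical `client`
# ApiClient instance, not part of the generated client):
#     client.parameters_to_tuples({"ids": [1, 2, 3]}, {"ids": "csv"})
#     # -> [('ids', '1,2,3')]
#     client.parameters_to_tuples({"ids": [1, 2, 3]}, {"ids": "multi"})
#     # -> [('ids', 1), ('ids', 2), ('ids', 3)]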
def prepare_post_parameters(self, post_params=None, files=None):
"""Builds form parameters.
:param post_params: Normal form parameters.
:param files: File parameters.
:return: Form parameters with files.
"""
params = []
if post_params:
params = post_params
if files:
for k, v in six.iteritems(files):
if not v:
continue
file_names = v if type(v) is list else [v]
for n in file_names:
with open(n, 'rb') as f:
filename = os.path.basename(f.name)
filedata = f.read()
mimetype = (mimetypes.guess_type(filename)[0] or
'application/octet-stream')
params.append(
tuple([k, tuple([filename, filedata, mimetype])]))
return params
def select_header_accept(self, accepts):
"""Returns `Accept` based on an array of accepts provided.
:param accepts: List of headers.
:return: Accept (e.g. application/json).
"""
if not accepts:
return
accepts = [x.lower() for x in accepts]
if 'application/json' in accepts:
return 'application/json'
else:
return ', '.join(accepts)
def select_header_content_type(self, content_types):
"""Returns `Content-Type` based on an array of content_types provided.
:param content_types: List of content-types.
:return: Content-Type (e.g. application/json).
"""
if not content_types:
return 'application/json'
content_types = [x.lower() for x in content_types]
if 'application/json' in content_types or '*/*' in content_types:
return 'application/json'
else:
return content_types[0]
def update_params_for_auth(self, headers, querys, auth_settings):
"""Updates header and query params based on authentication setting.
:param headers: Header parameters dict to be updated.
:param querys: Query parameters tuple list to be updated.
:param auth_settings: Authentication setting identifiers list.
"""
if not auth_settings:
return
for auth in auth_settings:
auth_setting = self.configuration.auth_settings().get(auth)
if auth_setting:
if not auth_setting['value']:
continue
elif auth_setting['in'] == 'header':
headers[auth_setting['key']] = auth_setting['value']
elif auth_setting['in'] == 'query':
querys.append((auth_setting['key'], auth_setting['value']))
else:
raise ValueError(
'Authentication token must be in `query` or `header`'
)
def __deserialize_file(self, response):
"""Deserializes body to file
Saves response body into a file in a temporary folder,
using the filename from the `Content-Disposition` header if provided.
:param response: RESTResponse.
:return: file path.
"""
fd, path = tempfile.mkstemp(dir=self.configuration.temp_folder_path)
os.close(fd)
os.remove(path)
content_disposition = response.getheader("Content-Disposition")
if content_disposition:
filename = re.search(r'filename=[\'"]?([^\'"\s]+)[\'"]?',
content_disposition).group(1)
path = os.path.join(os.path.dirname(path), filename)
with open(path, "wb") as f:
f.write(response.data)
return path
def __deserialize_primitive(self, data, klass):
"""Deserializes string to primitive type.
:param data: str.
:param klass: class literal.
:return: int, long, float, str, bool.
"""
try:
return klass(data)
except UnicodeEncodeError:
return six.text_type(data)
except TypeError:
return data
def __deserialize_object(self, value):
"""Return a original value.
:return: object.
"""
return value
def __deserialize_date(self, string):
"""Deserializes string to date.
:param string: str.
:return: date.
"""
try:
from dateutil.parser import parse
return parse(string).date()
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason="Failed to parse `{0}` as date object".format(string)
)
def __deserialize_datatime(self, string):
"""Deserializes string to datetime.
The string should be in iso8601 datetime format.
:param string: str.
:return: datetime.
"""
try:
from dateutil.parser import parse
return parse(string)
except ImportError:
return string
except ValueError:
raise rest.ApiException(
status=0,
reason=(
"Failed to parse `{0}` as datetime object"
.format(string)
)
)
def __hasattr(self, object, name):
return name in object.__class__.__dict__
def __deserialize_model(self, data, klass):
"""Deserializes list or dict to model.
:param data: dict, list.
:param klass: class literal.
:return: model object.
"""
if not klass.swagger_types and not self.__hasattr(klass, 'get_real_child_model'):
return data
kwargs = {}
if klass.swagger_types is not None:
for attr, attr_type in six.iteritems(klass.swagger_types):
if (data is not None and
klass.attribute_map[attr] in data and
isinstance(data, (list, dict))):
value = data[klass.attribute_map[attr]]
kwargs[attr] = self.__deserialize(value, attr_type)
instance = klass(**kwargs)
if (isinstance(instance, dict) and
klass.swagger_types is not None and
isinstance(data, dict)):
for key, value in data.items():
if key not in klass.swagger_types:
instance[key] = value
if self.__hasattr(instance, 'get_real_child_model'):
klass_name = instance.get_real_child_model(data)
if klass_name:
instance = self.__deserialize(data, klass_name)
return instance
|
PypiClean
|
/frodocs-material-4.6.4.tar.gz/frodocs-material-4.6.4/material/assets/javascripts/lunr/lunr.da.js
|
* based on
* Snowball JavaScript Library v0.3
* http://code.google.com/p/urim/
* http://snowball.tartarus.org/
*
* Copyright 2010, Oleg Mazko
* http://www.mozilla.org/MPL/
*/
!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r,m,i;e.da=function(){this.pipeline.reset(),this.pipeline.add(e.da.trimmer,e.da.stopWordFilter,e.da.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.da.stemmer))},e.da.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA-Za-z",e.da.trimmer=e.trimmerSupport.generateTrimmer(e.da.wordCharacters),e.Pipeline.registerFunction(e.da.trimmer,"trimmer-da"),e.da.stemmer=(r=e.stemmerSupport.Among,m=e.stemmerSupport.SnowballProgram,i=new function(){var i,t,n,s=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],o=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],a=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],u=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],c=new m;function l(){var e,r=c.limit-c.cursor;c.cursor>=t&&(e=c.limit_backward,c.limit_backward=t,c.ket=c.cursor,c.find_among_b(o,4)?(c.bra=c.cursor,c.limit_backward=e,c.cursor=c.limit-r,c.cursor>c.limit_backward&&(c.cursor--,c.bra=c.cursor,c.slice_del())):c.limit_backward=e)}this.setCurrent=function(e){c.setCurrent(e)},this.getCurrent=function(){return c.getCurrent()},this.stem=function(){var e,r=c.cursor;return function(){var e,r=c.cursor+3;if(t=c.limit,0<=r&&r<=c.limit){for(i=r;;){if(e=c.cursor,c.in_grouping(d,97,248)){c.cursor=e;break}if((c.cursor=e)>=c.limit)return;c.cursor++}for(;!c.out_grouping(d,97,248);){if(c.cursor>=c.limit)return;c.cursor++}(t=c.cursor)<i&&(t=i)}}(),c.limit_backward=r,c.cursor=c.limit,function(){var e,r;if(c.cursor>=t&&(r=c.limit_backward,c.limit_backward=t,c.ket=c.cursor,e=c.find_among_b(s,32),c.limit_backward=r,e))switch(c.bra=c.cursor,e){case 1:c.slice_del();break;case 2:c.in_grouping_b(u,97,229)&&c.slice_del()}}(),c.cursor=c.limit,l(),c.cursor=c.limit,function(){var e,r,i,n=c.limit-c.cursor;if(c.ket=c.cursor,c.eq_s_b(2,"st")&&(c.bra=c.cursor,c.eq_s_b(2,"ig")&&c.slice_del()),c.cursor=c.limit-n,c.cursor>=t&&(r=c.limit_backward,c.limit_backward=t,c.ket=c.cursor,e=c.find_among_b(a,5),c.limit_backward=r,e))switch(c.bra=c.cursor,e){case 1:c.slice_del(),i=c.limit-c.cursor,l(),c.cursor=c.limit-i;break;case 2:c.slice_from("løs")}}(),c.cursor=c.limit,c.cursor>=t&&(e=c.limit_backward,c.limit_backward=t,c.ket=c.cursor,c.out_grouping_b(d,97,248)?(c.bra=c.cursor,n=c.slice_to(n),c.limit_backward=e,c.eq_v_b(n)&&c.slice_del()):c.limit_backward=e),!0}},function(e){return"function"==typeof e.update?e.update(function(e){return 
i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}});
|
PypiClean
|
/bonobo-trans-0.0.4.tar.gz/bonobo-trans-0.0.4/bonobo_trans/sequencer.py
|
import logging
from bonobo.config import Configurable, ContextProcessor, Option, Service
from bonobo.config import use_context
from bonobo.errors import UnrecoverableError
from sqlalchemy import Column, Integer, MetaData, String, Table, func, select
from sqlalchemy.engine import create_engine
from sqlalchemy.exc import OperationalError
from bonobo_trans.logging import logger
@use_context
class Sequencer(Configurable):
"""The Sequencer transformation is a number generator.
.. admonition:: Configuration options
*Optional:*
- **name** *(str, length max. 30)*
- **sequence_key** *(str)* Default: SEQ
- **initial** *(int)* Default: 1
- **increment** *(int)* Default: 1
- **max** *(int)*
- **cycle** *(bool)* Default: False
- **generator** *(bool)* Default: False
- **generate** *(int)*
- **source_value_col** *(str)*
- **source_value_tbl** *(str)*
- **persist_type** *(int)* Default: SEQ_PERSIST_DISABLED
- **persist_table** *(str)*
- **persist_file** *(str)*
**Option descriptions:**
name
Name of the transformation. Required when using persistence.
sequence_key
Name of the sequence key in the outgoing row. Default is 'SEQ'.
initial
Starting value. Will start at 1 if not specified.
increment
Value to add in every increment. Will increment by 1 if not specified.
max
Maximum allowed value. When reached the sequencer will stop generating
new numbers. If the 'cycle' option is True, the sequencer will restart
at the initial value.
cycle
When set to True, the sequencer will restart at the initial value after
reaching the max value.
source_value_tbl, source_value_col
Use to retrieve an initial value from an existing table. See notes
below.
.. note::
**Row generation**
generator, generate
Use to generate rows instead of appending. See notes below.
persist_type, persist_file, persist_table
Persist sequence values. See notes below.
source_value_tbl, source_value_col
It's possible to start with an initial value based on an existing value
in a database table. Provide the table and column name using the
``source_value_tbl`` and ``source_value_col``-options.
generator, generate
Instead of appending a row with a sequence number it is possible to
generate a set of rows instead. To do so, set the 'generator' option to
True and the 'generate' option to the number of rows you want to
generate.
The generator mode is essentially an "extract" transformation, and as
such, no rows can be passed onto it.
By default the generator mode is not enabled.
.. note::
**Persistence**
Persistence enables the sequencer to continue the sequence after
restarting. The current value will need to be stored in a database or
in a file.
By default persistence is not enabled.
There is no mechanism to remove unused files, tables or table entries.
You will need to clean these up manually.
persist_type
==================== ======================
``persist_type`` Description
-------------------- ----------------------
SEQ_PERSIST_DISABLED No persistence.
SEQ_PERSIST_DB Persist to a DB table.
SEQ_PERSIST_FILE Persist to a flatfile.
==================== ======================
persist_file
When using SEQ_PERSIST_FILE, the ``persist_file`` option will need to
hold the fully qualifed path and file name to which to save the
sequence value.
persist_table, persist_allow_creation
When using SEQ_PERSIST_DB, the ``persist_table`` option will need to
hold the table name to which to write the sequence value. If the
table does not exist and 'persist_allow_creation' is True, the
table will be created automatically. When creating the table in
advance, you must include the following fields:
- sequence_name string(30)
- sequence_nr numeric
Args:
* **d_row_in** *(dict)*
Returns:
* **d_row_out** *(dict)*
d_row_out contains all the keys of the incoming dictionary plus the
sequencer key (set using the 'sequence_key'-option). If there
already is a key with that name it will be overwritten.
"""
SEQ_PERSIST_DISABLED = 0
SEQ_PERSIST_DB = 1
SEQ_PERSIST_FILE = 2
engine = Service('sqlalchemy.engine')
name = Option(required=False, type=str, default="untitled")
sequence_key = Option(required=False, type=str, default='SEQ')
source_value_tbl = Option(required=False, type=str)
source_value_col = Option(required=False, type=str)
initial = Option(required=False, type=int, default=1)
increment = Option(required=False, type=int, default=1)
max = Option(required=False, type=int, default=None)
cycle = Option(required=False, type=bool,default=False)
persist_type = Option(required=False, type=int, default=SEQ_PERSIST_DISABLED)
persist_table = Option(required=False, type=str, default='SEQUENCES')
persist_file = Option(required=False, type=str)
persist_allow_creation = Option(required=False, type=bool, default=True)
generator = Option(required=False, type=bool,default=False)
generate = Option(required=False, type=int, default=1)
def _get_persisted_from_file(self):
"""Read persisted value from file."""
with open(self.persist_file, 'r') as f:
return int(f.readline())
def _get_persisted_from_db(self, engine, connection):
"""Read persisted value from database table."""
metadata = MetaData()
if not engine.dialect.has_table(engine, self.persist_table):
if self.persist_allow_creation:
tbl_persist = Table(self.persist_table, metadata,
Column('sequence_name', String(30), primary_key=True),
Column('sequence_nr', Integer)
)
metadata.create_all(engine)
sql_insert_seq = tbl_persist.insert().values([self.name, None])
connection.execute(sql_insert_seq)
return None
else:
metadata.reflect(bind=engine, only=[self.persist_table])
tbl_persist = metadata.tables[self.persist_table]
seq_exist = connection.execute(select([func.count()]).select_from(tbl_persist).where(tbl_persist.c.sequence_name==self.name)).scalar()
if seq_exist == 0:
sql_insert_seq = tbl_persist.insert().values([self.name, None])
connection.execute(sql_insert_seq)
return None
else:
return connection.execute(select([tbl_persist.c.sequence_nr]).select_from(tbl_persist).where(tbl_persist.c.sequence_name==self.name)).scalar()
def _get_sequence_from_db(self, engine, connection):
"""Read starting value from a table."""
if not engine.dialect.has_table(engine, self.source_value_tbl):
raise UnrecoverableError("[SEQ_{0}] ERROR: 'source_value_tbl' Table doesn't exist.".format(self.name))
else:
metadata = MetaData()
metadata.reflect(bind=engine, only=[self.source_value_tbl])
tbl_source = metadata.tables[self.source_value_tbl]
high_val = connection.execute(select([func.max(tbl_source.c[self.source_value_col])]).select_from(tbl_source)).scalar()
if high_val is None:
return 0
else:
return int(high_val)
def _set_persisted_to_file(self):
"""Write persisted value to file."""
with open(self.persist_file, 'w') as f:
f.write(str(self.sequence_number))
def _set_persisted_to_db(self, engine, connection):
"""Write persisted value to database."""
metadata = MetaData()
metadata.reflect(bind=engine, only=[self.persist_table])
tbl_persist = metadata.tables[self.persist_table]
sql_update_seq = tbl_persist.update().where(tbl_persist.c.sequence_name==self.name).values(sequence_nr=self.sequence_number)
connection = engine.connect()
with connection:
connection.execute(sql_update_seq)
@ContextProcessor
def setup(self, context, *, engine):
"""Setup the transformation.
Connects to database if required. Upon successful connection it will
pass the connection object to the next ContextProcessor.
This is a ContextProcessor, it will be executed once at construction of
the class. All @ContextProcessor functions will get executed in the
order in which they are defined.
Args:
engine (service)
Returns:
connection
"""
self.sequence_number = None
self.sequence_persisted = None
connection = {}
# validate options
if self.persist_type != self.SEQ_PERSIST_DISABLED and self.source_value_tbl is not None:
raise UnrecoverableError("[SEQ_{0}] ERROR: 'persist_type' and 'source_value_tbl' cannot be used together.".format(self.name))
if self.persist_type == self.SEQ_PERSIST_FILE and self.persist_file is None:
raise UnrecoverableError("[SEQ_{0}] ERROR: 'persist_type' set to SEQ_PERSIST_FILE, but 'persist_file' not provided.".format(self.name))
if self.persist_type == self.SEQ_PERSIST_DB and (self.name is None or self.name == 'untitled'):
raise UnrecoverableError("[SEQ_{0}] ERROR: 'persist_type' set to SEQ_PERSIST_DB, but 'name' not provided.".format(self.name))
if self.source_value_tbl is not None and self.source_value_col is None:
raise UnrecoverableError("[SEQ_{0}] ERROR: 'source_value_tbl' specified, but 'source_value_col' is empty.".format(self.name))
if self.source_value_tbl is None and self.source_value_col is not None:
raise UnrecoverableError("[SEQ_{0}] ERROR: 'source_value_col' specified, but 'source_value_tbl' is empty.".format(self.name))
# retrieve high value from source
if self.source_value_tbl is not None:
self.sequence_persisted = self._get_sequence_from_db(engine, connection)
# retrieve persisted values (non-database)
if self.persist_type == self.SEQ_PERSIST_FILE:
self.sequence_persisted = self._get_persisted_from_file()
# setup connection
if self.persist_type == self.SEQ_PERSIST_DB or self.source_value_tbl is not None:
try:
connection = engine.connect()
except OperationalError as exc:
raise UnrecoverableError("[LKP_{0}] ERROR: Could not create SQLAlchemy connection: {1}.".format(self.name, str(exc).replace('\n', ''))) from exc
# retrieve persisted values (database)
if self.persist_type == self.SEQ_PERSIST_DB:
self.sequence_persisted = self._get_persisted_from_db(engine, connection)
# return connection and enter transformation
if connection:
with connection:
yield connection
else:
yield {}
# teardown: persist values
if self.persist_type != self.SEQ_PERSIST_DISABLED and self.sequence_number is not None:
if self.persist_type == self.SEQ_PERSIST_DB:
self._set_persisted_to_db(engine, connection)
elif self.persist_type == self.SEQ_PERSIST_FILE:
self._set_persisted_to_file()
else:
raise UnrecoverableError("[SEQ_{0}] ERROR: 'persist_type': invalid value.".format(self.name))
def __call__(self, connection, context, d_row_in=None, *, engine):
"""Row processor."""
# initialize persisted
if self.sequence_number is None and self.sequence_persisted is not None:
self.sequence_number = self.sequence_persisted
if d_row_in is None and self.generator:
"""
Generator mode
"""
for i in range(self.generate):
# initialize / cycle / increment
if self.sequence_number is None:
self.sequence_number = self.initial
elif self.cycle and self.max is not None and self.sequence_number + self.increment > self.max:
self.sequence_number = self.initial
else:
self.sequence_number = self.sequence_number + self.increment
# send out row
yield {self.sequence_key: self.sequence_number}
else:
"""
Transformation mode
"""
# increment
if self.sequence_number is not None:
self.sequence_number += self.increment
else:
self.sequence_number = self.initial
# cycle
if self.cycle and self.max is not None and self.sequence_number > self.max:
self.sequence_number = self.initial
# send out row
yield {**d_row_in, self.sequence_key: self.sequence_number}
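# ---------------------------------------------------------------------------
# Usage sketch (added for illustration; not part of the original module).
# It only shows how the options documented above are passed to the
# transformation; wiring it into a bonobo graph and providing the
# 'sqlalchemy.engine' service is sketched in the comments and is an
# assumption about the surrounding pipeline, not something this file defines.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    # Generator mode: emits five rows {'SEQ': 100}, {'SEQ': 105}, ... {'SEQ': 120}.
    demo_seq = Sequencer(name="DEMO_SEQ", generator=True, generate=5, initial=100, increment=5)
    print(demo_seq.generator, demo_seq.generate, demo_seq.initial, demo_seq.increment)
    # In a pipeline this node would typically sit at the head of a bonobo graph,
    # e.g. bonobo.Graph(demo_seq, some_sink), executed with
    # bonobo.run(graph, services={"sqlalchemy.engine": create_engine(...)}).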
|
PypiClean
|
/paramiko-3.3.1.tar.gz/paramiko-3.3.1/demos/demo_simple.py
|
# Copyright (C) 2003-2007 Robey Pointer <[email protected]>
#
# This file is part of paramiko.
#
# Paramiko is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import base64
import getpass
import os
import socket
import sys
import traceback
from paramiko.py3compat import input
import paramiko
try:
import interactive
except ImportError:
from . import interactive
# setup logging
paramiko.util.log_to_file("demo_simple.log")
# Paramiko client configuration
UseGSSAPI = (
paramiko.GSS_AUTH_AVAILABLE
) # enable "gssapi-with-mic" authentication, if supported by your python installation
DoGSSAPIKeyExchange = (
paramiko.GSS_AUTH_AVAILABLE
) # enable "gssapi-kex" key exchange, if supported by your python installation
# UseGSSAPI = False
# DoGSSAPIKeyExchange = False
port = 22
# get hostname
username = ""
if len(sys.argv) > 1:
hostname = sys.argv[1]
if hostname.find("@") >= 0:
username, hostname = hostname.split("@")
else:
hostname = input("Hostname: ")
if len(hostname) == 0:
print("*** Hostname required.")
sys.exit(1)
if hostname.find(":") >= 0:
hostname, portstr = hostname.split(":")
port = int(portstr)
# get username
if username == "":
default_username = getpass.getuser()
username = input("Username [%s]: " % default_username)
if len(username) == 0:
username = default_username
if not UseGSSAPI and not DoGSSAPIKeyExchange:
password = getpass.getpass("Password for %s@%s: " % (username, hostname))
# now, connect and use paramiko Client to negotiate SSH2 across the connection
try:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.WarningPolicy())
print("*** Connecting...")
if not UseGSSAPI and not DoGSSAPIKeyExchange:
client.connect(hostname, port, username, password)
else:
try:
client.connect(
hostname,
port,
username,
gss_auth=UseGSSAPI,
gss_kex=DoGSSAPIKeyExchange,
)
except Exception:
# traceback.print_exc()
password = getpass.getpass(
"Password for %s@%s: " % (username, hostname)
)
client.connect(hostname, port, username, password)
chan = client.invoke_shell()
print(repr(client.get_transport()))
print("*** Here we go!\n")
interactive.interactive_shell(chan)
chan.close()
client.close()
except Exception as e:
print("*** Caught exception: %s: %s" % (e.__class__, e))
traceback.print_exc()
try:
client.close()
except:
pass
sys.exit(1)
|
PypiClean
|
/ais_dom-2023.7.2-py3-none-any.whl/homeassistant/components/nibe_heatpump/climate.py
|
from __future__ import annotations
from typing import Any
from nibe.coil import Coil
from nibe.coil_groups import (
CLIMATE_COILGROUPS,
UNIT_COILGROUPS,
ClimateCoilGroup,
UnitCoilGroup,
)
from nibe.exceptions import CoilNotFoundException
from homeassistant.components.climate import (
ATTR_HVAC_MODE,
ATTR_TARGET_TEMP_HIGH,
ATTR_TARGET_TEMP_LOW,
ATTR_TEMPERATURE,
ClimateEntity,
ClimateEntityFeature,
HVACAction,
HVACMode,
)
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from . import Coordinator
from .const import (
DOMAIN,
LOGGER,
VALUES_COOL_WITH_ROOM_SENSOR_OFF,
VALUES_MIXING_VALVE_CLOSED_STATE,
VALUES_PRIORITY_COOLING,
VALUES_PRIORITY_HEATING,
)
async def async_setup_entry(
hass: HomeAssistant,
config_entry: ConfigEntry,
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up platform."""
coordinator: Coordinator = hass.data[DOMAIN][config_entry.entry_id]
main_unit = UNIT_COILGROUPS.get(coordinator.series, {}).get("main")
if not main_unit:
LOGGER.debug("Skipping climates - no main unit found")
return
def climate_systems():
for key, group in CLIMATE_COILGROUPS.get(coordinator.series, ()).items():
try:
yield NibeClimateEntity(coordinator, key, main_unit, group)
except CoilNotFoundException as exception:
LOGGER.debug("Skipping climate: %s due to %s", key, exception)
async_add_entities(climate_systems())
class NibeClimateEntity(CoordinatorEntity[Coordinator], ClimateEntity):
"""Climate entity."""
_attr_entity_category = None
_attr_supported_features = (
ClimateEntityFeature.TARGET_TEMPERATURE_RANGE
| ClimateEntityFeature.TARGET_TEMPERATURE
)
_attr_hvac_modes = [HVACMode.HEAT_COOL, HVACMode.OFF, HVACMode.HEAT]
_attr_target_temperature_step = 0.5
_attr_max_temp = 35.0
_attr_min_temp = 5.0
def __init__(
self,
coordinator: Coordinator,
key: str,
unit: UnitCoilGroup,
climate: ClimateCoilGroup,
) -> None:
"""Initialize entity."""
super().__init__(
coordinator,
{
unit.prio,
unit.cooling_with_room_sensor,
climate.current,
climate.setpoint_heat,
climate.setpoint_cool,
climate.mixing_valve_state,
climate.active_accessory,
climate.use_room_sensor,
},
)
self._attr_available = False
self._attr_name = climate.name
self._attr_unique_id = f"{coordinator.unique_id}-{key}"
self._attr_device_info = coordinator.device_info
self._attr_hvac_action = HVACAction.IDLE
self._attr_hvac_mode = HVACMode.OFF
self._attr_target_temperature_high = None
self._attr_target_temperature_low = None
self._attr_target_temperature = None
self._attr_entity_registry_enabled_default = climate.active_accessory is None
def _get(address: int) -> Coil:
return coordinator.heatpump.get_coil_by_address(address)
self._coil_current = _get(climate.current)
self._coil_setpoint_heat = _get(climate.setpoint_heat)
self._coil_setpoint_cool = _get(climate.setpoint_cool)
self._coil_prio = _get(unit.prio)
self._coil_mixing_valve_state = _get(climate.mixing_valve_state)
if climate.active_accessory is None:
self._coil_active_accessory = None
else:
self._coil_active_accessory = _get(climate.active_accessory)
self._coil_use_room_sensor = _get(climate.use_room_sensor)
self._coil_cooling_with_room_sensor = _get(unit.cooling_with_room_sensor)
if self._coil_current:
self._attr_temperature_unit = self._coil_current.unit
@callback
def _handle_coordinator_update(self) -> None:
if not self.coordinator.data:
return
def _get_value(coil: Coil) -> int | str | float | None:
return self.coordinator.get_coil_value(coil)
def _get_float(coil: Coil) -> float | None:
return self.coordinator.get_coil_float(coil)
self._attr_current_temperature = _get_float(self._coil_current)
mode = HVACMode.OFF
if _get_value(self._coil_use_room_sensor) == "ON":
if (
_get_value(self._coil_cooling_with_room_sensor)
in VALUES_COOL_WITH_ROOM_SENSOR_OFF
):
mode = HVACMode.HEAT
else:
mode = HVACMode.HEAT_COOL
self._attr_hvac_mode = mode
setpoint_heat = _get_float(self._coil_setpoint_heat)
setpoint_cool = _get_float(self._coil_setpoint_cool)
if mode == HVACMode.HEAT_COOL:
self._attr_target_temperature = None
self._attr_target_temperature_low = setpoint_heat
self._attr_target_temperature_high = setpoint_cool
elif mode == HVACMode.HEAT:
self._attr_target_temperature = setpoint_heat
self._attr_target_temperature_low = None
self._attr_target_temperature_high = None
else:
self._attr_target_temperature = None
self._attr_target_temperature_low = None
self._attr_target_temperature_high = None
if prio := _get_value(self._coil_prio):
if (
_get_value(self._coil_mixing_valve_state)
in VALUES_MIXING_VALVE_CLOSED_STATE
):
self._attr_hvac_action = HVACAction.IDLE
elif prio in VALUES_PRIORITY_HEATING:
self._attr_hvac_action = HVACAction.HEATING
elif prio in VALUES_PRIORITY_COOLING:
self._attr_hvac_action = HVACAction.COOLING
else:
self._attr_hvac_action = HVACAction.IDLE
else:
self._attr_hvac_action = None
self.async_write_ha_state()
@property
def available(self) -> bool:
"""Return if entity is available."""
coordinator = self.coordinator
active = self._coil_active_accessory
if not coordinator.last_update_success:
return False
if not active:
return True
if active_accessory := coordinator.get_coil_value(active):
return active_accessory == "ON"
return False
async def async_set_temperature(self, **kwargs: Any) -> None:
"""Set target temperatures."""
coordinator = self.coordinator
hvac_mode = kwargs.get(ATTR_HVAC_MODE, self._attr_hvac_mode)
if (temperature := kwargs.get(ATTR_TEMPERATURE)) is not None:
if hvac_mode == HVACMode.HEAT:
await coordinator.async_write_coil(
self._coil_setpoint_heat, temperature
)
elif hvac_mode == HVACMode.COOL:
await coordinator.async_write_coil(
self._coil_setpoint_cool, temperature
)
else:
raise ValueError(
"'set_temperature' requires 'hvac_mode' when passing"
" 'temperature' and 'hvac_mode' is not already set to"
" 'heat' or 'cool'"
)
if (temperature := kwargs.get(ATTR_TARGET_TEMP_LOW)) is not None:
await coordinator.async_write_coil(self._coil_setpoint_heat, temperature)
if (temperature := kwargs.get(ATTR_TARGET_TEMP_HIGH)) is not None:
await coordinator.async_write_coil(self._coil_setpoint_cool, temperature)
|
PypiClean
|
/spectrify-3.1.0.tar.gz/spectrify-3.1.0/CONTRIBUTING.rst
|
.. highlight:: shell
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
----------------------
Report Bugs
~~~~~~~~~~~
Report bugs at https://github.com/hellonarrativ/spectrify/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Fix Bugs
~~~~~~~~
Look through the GitHub issues for bugs. Anything tagged with "bug"
and "help wanted" is open to whoever wants to implement it.
Implement Features
~~~~~~~~~~~~~~~~~~
Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.
Write Documentation
~~~~~~~~~~~~~~~~~~~
Spectrify could always use more documentation, whether as part of the
official Spectrify docs, in docstrings, or even on the web in blog posts,
articles, and such.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/hellonarrativ/spectrify/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `spectrify` for local development.
1. Fork the `spectrify` repo on GitHub.
2. Clone your fork locally::
$ git clone [email protected]:your_name_here/spectrify.git
3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::
$ mkvirtualenv spectrify -p `which python3` # or python2, if you prefer
$ cd spectrify/
$ pip install -e .
4. Create a branch for local development::
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox::
$ flake8 spectrify tests
$ python setup.py test or py.test
$ tox
To get flake8 and tox, just pip install them into your virtualenv.
6. Commit your changes and push your branch to GitHub::
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 2.7, 3.5, and 3.6. Check
https://travis-ci.org/hellonarrativ/spectrify/pull_requests
and make sure that the tests pass for all supported Python versions.
Tips
----
To run a subset of tests::
$ py.test tests.test_spectrify
|
PypiClean
|
/pulumi_oci-1.9.0a1693465256.tar.gz/pulumi_oci-1.9.0a1693465256/pulumi_oci/identity/domain_replication_to_region.py
|
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['DomainReplicationToRegionArgs', 'DomainReplicationToRegion']
@pulumi.input_type
class DomainReplicationToRegionArgs:
def __init__(__self__, *,
domain_id: pulumi.Input[str],
replica_region: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a DomainReplicationToRegion resource.
:param pulumi.Input[str] domain_id: The OCID of the domain
:param pulumi.Input[str] replica_region: A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
pulumi.set(__self__, "domain_id", domain_id)
if replica_region is not None:
pulumi.set(__self__, "replica_region", replica_region)
@property
@pulumi.getter(name="domainId")
def domain_id(self) -> pulumi.Input[str]:
"""
The OCID of the domain
"""
return pulumi.get(self, "domain_id")
@domain_id.setter
def domain_id(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_id", value)
@property
@pulumi.getter(name="replicaRegion")
def replica_region(self) -> Optional[pulumi.Input[str]]:
"""
A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
return pulumi.get(self, "replica_region")
@replica_region.setter
def replica_region(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replica_region", value)
@pulumi.input_type
class _DomainReplicationToRegionState:
def __init__(__self__, *,
domain_id: Optional[pulumi.Input[str]] = None,
replica_region: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering DomainReplicationToRegion resources.
:param pulumi.Input[str] domain_id: The OCID of the domain
:param pulumi.Input[str] replica_region: A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
if domain_id is not None:
pulumi.set(__self__, "domain_id", domain_id)
if replica_region is not None:
pulumi.set(__self__, "replica_region", replica_region)
@property
@pulumi.getter(name="domainId")
def domain_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the domain
"""
return pulumi.get(self, "domain_id")
@domain_id.setter
def domain_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "domain_id", value)
@property
@pulumi.getter(name="replicaRegion")
def replica_region(self) -> Optional[pulumi.Input[str]]:
"""
A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
return pulumi.get(self, "replica_region")
@replica_region.setter
def replica_region(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replica_region", value)
class DomainReplicationToRegion(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_id: Optional[pulumi.Input[str]] = None,
replica_region: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
This resource provides the Domain Replication To Region resource in Oracle Cloud Infrastructure Identity service.
Replicate domain to a new region. This is an asynchronous call - where, at start,
{@code state} of this domain in replica region is set to ENABLING_REPLICATION.
On domain replication completion the {@code state} will be set to REPLICATION_ENABLED.
To track progress, HTTP GET on /iamWorkRequests/{iamWorkRequestsId} endpoint will provide
the async operation's status.
If the replica region's {@code state} is already ENABLING_REPLICATION or REPLICATION_ENABLED,
returns 409 CONFLICT.
- If the domain doesn't exist, returns 404 NOT FOUND.
- If the home region is the same as the replication region, returns 400 BAD REQUEST.
- If the domain is not active or being updated, returns 400 BAD REQUEST.
- If any internal error occurs, returns 500 INTERNAL SERVER ERROR.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_domain_replication_to_region = oci.identity.DomainReplicationToRegion("testDomainReplicationToRegion",
domain_id=oci_identity_domain["test_domain"]["id"],
replica_region=var["domain_replication_to_region_replica_region"])
```
## Import
Import is not supported for this resource.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] domain_id: The OCID of the domain
:param pulumi.Input[str] replica_region: A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: DomainReplicationToRegionArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Domain Replication To Region resource in Oracle Cloud Infrastructure Identity service.
Replicate domain to a new region. This is an asynchronous call - where, at start,
{@code state} of this domain in replica region is set to ENABLING_REPLICATION.
On domain replication completion the {@code state} will be set to REPLICATION_ENABLED.
To track progress, HTTP GET on /iamWorkRequests/{iamWorkRequestsId} endpoint will provide
the async operation's status.
If the replica region's {@code state} is already ENABLING_REPLICATION or REPLICATION_ENABLED,
returns 409 CONFLICT.
- If the domain doesn't exist, returns 404 NOT FOUND.
- If the home region is the same as the replication region, returns 400 BAD REQUEST.
- If the domain is not active or being updated, returns 400 BAD REQUEST.
- If any internal error occurs, returns 500 INTERNAL SERVER ERROR.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_domain_replication_to_region = oci.identity.DomainReplicationToRegion("testDomainReplicationToRegion",
domain_id=oci_identity_domain["test_domain"]["id"],
replica_region=var["domain_replication_to_region_replica_region"])
```
## Import
Import is not supported for this resource.
:param str resource_name: The name of the resource.
:param DomainReplicationToRegionArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(DomainReplicationToRegionArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_id: Optional[pulumi.Input[str]] = None,
replica_region: Optional[pulumi.Input[str]] = None,
__props__=None):
opts = pulumi.ResourceOptions.merge(_utilities.get_resource_opts_defaults(), opts)
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = DomainReplicationToRegionArgs.__new__(DomainReplicationToRegionArgs)
if domain_id is None and not opts.urn:
raise TypeError("Missing required property 'domain_id'")
__props__.__dict__["domain_id"] = domain_id
__props__.__dict__["replica_region"] = replica_region
super(DomainReplicationToRegion, __self__).__init__(
'oci:Identity/domainReplicationToRegion:DomainReplicationToRegion',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
domain_id: Optional[pulumi.Input[str]] = None,
replica_region: Optional[pulumi.Input[str]] = None) -> 'DomainReplicationToRegion':
"""
Get an existing DomainReplicationToRegion resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] domain_id: The OCID of the domain
:param pulumi.Input[str] replica_region: A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _DomainReplicationToRegionState.__new__(_DomainReplicationToRegionState)
__props__.__dict__["domain_id"] = domain_id
__props__.__dict__["replica_region"] = replica_region
return DomainReplicationToRegion(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="domainId")
def domain_id(self) -> pulumi.Output[str]:
"""
The OCID of the domain
"""
return pulumi.get(self, "domain_id")
@property
@pulumi.getter(name="replicaRegion")
def replica_region(self) -> pulumi.Output[str]:
"""
A region for which domain replication is requested. See [Regions and Availability Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm) for the full list of supported region names. Example: `us-phoenix-1`
** IMPORTANT **
Any change to a property that does not support update will force the destruction and recreation of the resource with the new property values
"""
return pulumi.get(self, "replica_region")
|
PypiClean
|
/Pyrogram-2.0.34.tar.gz/Pyrogram-2.0.34/pyrogram/handlers/raw_update_handler.py
|
from typing import Callable
from .handler import Handler
class RawUpdateHandler(Handler):
"""The Raw Update handler class. Used to handle raw updates. It is intended to be used with
:meth:`~pyrogram.Client.add_handler`
For a nicer way to register this handler, have a look at the
:meth:`~pyrogram.Client.on_raw_update` decorator.
Parameters:
callback (``Callable``):
A function that will be called when a new update is received from the server. It takes
*(client, update, users, chats)* as positional arguments (look at the section below for
a detailed description).
Other Parameters:
client (:obj:`~pyrogram.Client`):
The Client itself, useful when you want to call other API methods inside the update handler.
update (``Update``):
The received update, which can be one of the many single Updates listed in the
:obj:`~pyrogram.raw.base.Update` base type.
users (``dict``):
Dictionary of all :obj:`~pyrogram.types.User` mentioned in the update.
You can access extra info about the user (such as *first_name*, *last_name*, etc...) by using
the IDs you find in the *update* argument (e.g.: *users[1768841572]*).
chats (``dict``):
Dictionary of all :obj:`~pyrogram.types.Chat` and
:obj:`~pyrogram.raw.types.Channel` mentioned in the update.
You can access extra info about the chat (such as *title*, *participants_count*, etc...)
by using the IDs you find in the *update* argument (e.g.: *chats[1701277281]*).
Note:
The following Empty or Forbidden types may exist inside the *users* and *chats* dictionaries.
They mean you have been blocked by the user or banned from the group/channel.
- :obj:`~pyrogram.raw.types.UserEmpty`
- :obj:`~pyrogram.raw.types.ChatEmpty`
- :obj:`~pyrogram.raw.types.ChatForbidden`
- :obj:`~pyrogram.raw.types.ChannelForbidden`
"""
def __init__(self, callback: Callable):
super().__init__(callback)
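# ---------------------------------------------------------------------------
# Usage sketch (added for illustration; not part of the original module).
# It registers this handler on a client as described in the docstring above.
# The session name "my_account" is a placeholder.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    from pyrogram import Client

    async def my_raw_handler(client, update, users, chats):
        # `update` is a raw pyrogram.raw.base.Update; `users` and `chats` map
        # the IDs mentioned in the update to their full objects.
        print(type(update).__name__)

    app = Client("my_account")
    app.add_handler(RawUpdateHandler(my_raw_handler))
    app.run()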
|
PypiClean
|
/memsource-cli-0.3.8.tar.gz/memsource-cli-0.3.8/docs/ProjectTransMemoryDto.md
|
# ProjectTransMemoryDto
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**target_locale** | **str** | | [optional]
**workflow_step** | [**WorkflowStepReference**](WorkflowStepReference.md) | | [optional]
**read_mode** | **bool** | | [optional]
**write_mode** | **bool** | | [optional]
**trans_memory** | [**TransMemoryDto**](TransMemoryDto.md) | | [optional]
**penalty** | **float** | | [optional]
**apply_penalty_to101_only** | **bool** | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
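A minimal construction sketch (the import paths below follow the usual layout of this generated client and may differ in your installed version; all values are illustrative):

```python
from memsource_cli.models.project_trans_memory_dto import ProjectTransMemoryDto
from memsource_cli.models.trans_memory_dto import TransMemoryDto

# All properties are optional; set only the ones you need.
project_tm = ProjectTransMemoryDto(
    target_locale="de_de",
    read_mode=True,
    write_mode=False,
    penalty=1.0,
    apply_penalty_to101_only=False,
    trans_memory=TransMemoryDto(),
)
print(project_tm.target_locale)
```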
|
PypiClean
|
/BoxKit-2023.6.7.tar.gz/BoxKit-2023.6.7/bin/cmd.py
|
import os
import sys
import subprocess
from setuptools.command.install import install
from setuptools.command.develop import develop
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
# Import cbox_build from cbox
# module located in the 'bin' folder of the
# package directory
from cbox import cbox_build # pylint: disable=wrong-import-position
from boost import boost_install # pylint: disable=wrong-import-position
# custom command
class CustomCmd:
"""Custom command."""
user_options = [
("with-cbox", None, "With C++ backend"),
("with-pyarrow", None, "With pyarrow data backend"),
("with-zarr", None, "With zarr data backend"),
("with-dask", None, "With dask data/parallel backend"),
("with-server", None, "With remote server utilitiy"),
("enable-testing", None, "Enable testing mode"),
]
def initialize_options(self):
"""
Initialize options
"""
self.with_cbox = None # pylint: disable=attribute-defined-outside-init
self.with_pyarrow = None # pylint: disable=attribute-defined-outside-init
self.with_zarr = None # pylint: disable=attribute-defined-outside-init
self.with_dask = None # pylint: disable=attribute-defined-outside-init
self.with_server = None # pylint: disable=attribute-defined-outside-init
self.enable_testing = None # pylint: disable=attribute-defined-outside-init
def finalize_options(self):
"""
Finalize options
"""
for option in [
"with_cbox",
"with_pyarrow",
"with_zarr",
"with_dask",
"with_server",
"enable_testing",
]:
if getattr(self, option) not in [None, 1]:
raise ValueError(f"{option} is a flag")
def run(self, user):
"""
Run command
"""
if user:
with_user = "--user"
else:
with_user = ""
if self.with_cbox:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/cbox.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
if self.with_pyarrow:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/pyarrow.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
if self.with_zarr:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/zarr.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
if self.with_dask:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/dask.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
if self.with_server:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/server.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
if self.enable_testing:
subprocess.run(
f"{sys.executable} -m pip install -r requirements/testing.txt {with_user}",
shell=True,
check=True,
executable="/bin/bash",
)
with open("boxkit/options.py", "w", encoding="ascii") as optfile:
optfile.write(f"CBOX={self.with_cbox}\n")
optfile.write(f"PYARROW={self.with_pyarrow}\n")
optfile.write(f"ZARR={self.with_zarr}\n")
optfile.write(f"DASK={self.with_dask}\n")
optfile.write(f"SERVER={self.with_server}\n")
optfile.write(f"TESTING={self.enable_testing}\n")
# replaces the default build command for setup.py
class InstallCmd(install, CustomCmd):
"""Custom build command."""
user_options = install.user_options + CustomCmd.user_options
def initialize_options(self):
install.initialize_options(self)
CustomCmd.initialize_options(self)
def finalize_options(self):
install.finalize_options(self)
CustomCmd.finalize_options(self)
def run(self):
CustomCmd.run(self, self.user)
if self.with_cbox:
cbox_build()
boost_install()
install.run(self)
# replaces custom develop command for setup.py
class DevelopCmd(develop, CustomCmd):
"""Custom develop command."""
user_options = develop.user_options + CustomCmd.user_options
def initialize_options(self):
develop.initialize_options(self)
CustomCmd.initialize_options(self)
def finalize_options(self):
develop.finalize_options(self)
CustomCmd.finalize_options(self)
def run(self):
develop.run(self)
CustomCmd.run(self, self.user)
if self.with_cbox:
cbox_build()
|
PypiClean
|
/kolibri_ml-1.1.88-py3-none-any.whl/kolibri/tokenizers/multi_word_tokenizer.py
|
from kolibri.tokenizers.regex_tokenizer import RegexpTokenizer
from kolibri.tokenizers.tokenizer import Tokenizer
from kdmt.dict import update
class MultiWordTokenizer(Tokenizer):
defaults = {
"fixed": {
'whitespace': False,
'regex': None,
'split': " "
},
"tunable": {
}
}
def __init__(self, parameters={}):
"""
:param config:
"""
super().__init__(parameters)
tknzr = RegexpTokenizer(parameters=parameters)
self._tokenize=tknzr.tokenize
self.do_lower_case=self.get_parameter('do-lower-case')
self.split=self.get_parameter("split")
if self.get_parameter('whitespace'):
self._tokenize=self.whitespace_tokenize
if self.get_parameter('regex') is not None:
toknizr=RegexpTokenizer({'pattern':self.get_parameter('regex')})
self._tokenize=toknizr.tokenize
def update_default_hyper_parameters(self):
self.defaults=update(self.defaults, MultiWordTokenizer.defaults)
super().update_default_hyper_parameters()
def fit(self, training_data, target):
return self
def tokenize(self, text):
"""Tokenizes a piece of text."""
text=super(MultiWordTokenizer, self).tokenize(text)
orig_tokens = self._tokenize(text)
split_tokens = []
for token in orig_tokens:
if self.remove_stopwords and token.lower() in self.stopwords:
continue
if self.get_parameter("remove-numbers") and token.isnumeric():
continue
split_tokens.append(token)
return split_tokens
def whitespace_tokenize(self, text):
"""Converts a text to a sequence of words (or tokens).
# Arguments
text: Input text (string).
# Returns
A list of words (or tokens).
"""
translate_dict = {c: self.split for c in self.filters}
translate_map = str.maketrans(translate_dict)
text = text.translate(translate_map)
seq = text.split(self.split)
return [i for i in seq if i]
def transform(self, texts, **kwargs):
texts=self._check_values(texts)
return [self.tokenize(d) for d in texts]
def get_info(self):
return "multi_word_tokenizer"
|
PypiClean
|
/bmcs-0.0.2a28-py3-none-any.whl/stats/spirrid_bak/ui/view/spirrid_view.py
|
from etsproxy.traits.api import HasTraits, Str, Instance, Float, \
Event, Int, Bool, Button
from etsproxy.traits.ui.api import View, Item, \
HGroup, Tabbed, VGroup, ModelView, Group, HSplit
from stats.spirrid_bak.old.spirrid import SPIRRID
from math import pi
# -------------------------------
# SPIRRID VIEW
# -------------------------------
class NoOfFibers( HasTraits ):
Lx = Float( 16., auto_set = False, enter_set = True, desc = 'x-Length of the specimen in [cm]' )
Ly = Float( 4., auto_set = False, enter_set = True, desc = 'y-Length of the specimen in [cm]' )
Lz = Float( 4., auto_set = False, enter_set = True, desc = 'z-Length of the specimen in [cm]' )
Fiber_volume_fraction = Float( 3.0 , auto_set = False, enter_set = True, desc = 'Fiber volume fraction in [%]' )
Fiber_Length = Float( 17. , auto_set = False, enter_set = True, desc = 'Fiber length in [cm] ' )
Fiber_diameter = Float( 0.15 , auto_set = False, enter_set = True, desc = 'Fiber diameter in [mm]' )
class SPIRRIDModelView( ModelView ):
title = Str( 'spirrid exec ctrl' )
model = Instance( SPIRRID )
ins = Instance( NoOfFibers )
def _ins_default( self ):
return NoOfFibers()
eval = Button
def _eval_fired( self ):
Specimen_Volume = self.ins.Lx * self.ins.Ly * self.ins.Lz
self.no_of_fibers_in_specimen = ( Specimen_Volume * self.ins.Fiber_volume_fraction / 100 ) / ( pi * ( self.ins.Fiber_diameter / 20 ) ** 2 * self.ins.Fiber_Length / 10 )
prob_crackbridging_fiber = ( self.ins.Fiber_Length / ( 10 * 2 ) ) / self.ins.Lx
self.mean_parallel_links = prob_crackbridging_fiber * self.no_of_fibers_in_specimen
self.stdev_parallel_links = ( prob_crackbridging_fiber * self.no_of_fibers_in_specimen * ( 1 - prob_crackbridging_fiber ) ) ** 0.5
run = Button( desc = 'Run the computation' )
def _run_fired( self ):
self.evaluate()
run_legend = Str( 'mean response',
desc = 'Legend to be added to the plot of the results' )
min_eps = Float( 0.0,
desc = 'minimum value of the control variable' )
max_eps = Float( 1.0,
desc = 'maximum value of the control variable' )
n_eps = Int( 100,
desc = 'resolution of the control variable' )
plot_title = Str( 'response',
desc = 'diagram title' )
label_x = Str( 'epsilon',
desc = 'label of the horizontal axis' )
label_y = Str( 'sigma',
desc = 'label of the vertical axis' )
stdev = Bool( True )
mean_parallel_links = Float( 1.,
desc = 'mean number of parallel links (fibers)' )
stdev_parallel_links = Float( 0.,
desc = 'stdev of number of parallel links (fibers)' )
no_of_fibers_in_specimen = Float( 0., desc = 'Number of Fibers in the specimen', )
data_changed = Event( True )
def evaluate( self ):
self.model.set(
min_eps = 0.00, max_eps = self.max_eps, n_eps = self.n_eps,
)
# evaluate the mean curve
self.model.mean_curve
# evaluate the variance if the stdev bool is True
if self.stdev:
self.model.var_curve
self.data_changed = True
traits_view = View( VGroup(
HGroup(
Item( 'run_legend', resizable = False, label = 'Run label',
width = 80, springy = False ),
Item( 'run', show_label = False, resizable = False )
),
Tabbed(
VGroup(
Item( 'model.cached_dG' , label = 'Cached weight factors',
resizable = False,
springy = False ),
Item( 'model.compiled_QdG_loop' , label = 'Compiled loop over the integration product',
springy = False ),
Item( 'model.compiled_eps_loop' ,
enabled_when = 'model.compiled_QdG_loop',
label = 'Compiled loop over the control variable',
springy = False ),
scrollable = True,
label = 'Execution configuration',
id = 'spirrid.tview.exec_params',
dock = 'tab',
),
VGroup(
HGroup(
Item( 'min_eps' , label = 'Min',
springy = False, resizable = False ),
Item( 'max_eps' , label = 'Max',
springy = False, resizable = False ),
Item( 'n_eps' , label = 'N',
springy = False, resizable = False ),
label = 'Simulation range',
show_border = True
),
HGroup(
Item( 'stdev', label = 'plot standard deviation' ),
),
HSplit(
HGroup( VGroup( Item( 'mean_parallel_links', label = 'mean No of fibers' ),
Item( 'stdev_parallel_links', label = 'stdev No of fibers' ),
)
),
VGroup( Item( '@ins', label = 'evaluate No of fibers' , show_label = False ),
VGroup( HGroup( Item( 'eval', show_label = False, resizable = False, label = 'Evaluate No of Fibers' ),
Item( 'no_of_fibers_in_specimen',
label = 'No of Fibers in specimen',
style = 'readonly' ) ) )
),
label = 'number of parallel fibers',
show_border = True,
scrollable = True, ),
VGroup(
Item( 'plot_title' , label = 'title', resizable = False,
springy = False ),
Item( 'label_x' , label = 'x', resizable = False,
springy = False ),
Item( 'label_y' , label = 'y', resizable = False,
springy = False ),
label = 'title and axes labels',
show_border = True,
scrollable = True,
),
label = 'Execution control',
id = 'spirrid.tview.view_params',
dock = 'tab',
),
scrollable = True,
id = 'spirrid.tview.tabs',
dock = 'tab',
),
),
title = 'SPIRRID',
id = 'spirrid.viewmodel',
dock = 'tab',
resizable = True,
height = 1.0, width = 1.0
)
if __name__ == '__main__':
pass
|
PypiClean
|
/cosmos_ingest-0.1-py3-none-any.whl/ingest/process/detection/src/converters/html2xml.py
|
from bs4 import BeautifulSoup
import re
import os
import glob
import codecs
from postprocess.postprocess import not_ocr
from pascal_voc_writer import Writer
from argparse import ArgumentParser
def iterate_and_update_writer(soup, writer):
"""
Core function for outputting xml
:param soup: BeautifulSoup representation of html doc
:param writer: pascal_voc_writer.Writer object
:return: Updated pascal_voc_writer.Writer object
"""
for seg_type in soup.find_all('div', not_ocr):
seg_class = " ".join(seg_type["class"])
hocr = seg_type.find_next('div', 'hocr')
if hocr is None:
print(seg_type)
raise Exception('Invalid div found. Please account for said div')
score = hocr['data-score']
coordinates = hocr['data-coordinates']
coordinates = coordinates.split(' ')
coordinates = [int(x) for x in coordinates]
coordinates = tuple(coordinates)
writer.addObject(seg_class, *coordinates, difficult=float(score))
return writer
def htmlfile2xml(html_f_path, output_path):
"""
Take an HTML file as input and write an XML representation of it to the output path
:param html_f_path: Path to html file
:param output_path: Path to output new xml
"""
with codecs.open(html_f_path, "r", "utf-8") as fin:
soup = BeautifulSoup(fin, 'html.parser')
writer = Writer(f'{os.path.basename(html_f_path)[:-5]}.png', 1920, 1920)
writer = iterate_and_update_writer(soup, writer)
writer.save(f'{os.path.join(output_path, os.path.basename(html_f_path)[:-5])}.xml')
def html2xml(html_path, output_path):
"""
Helper function to sequentially call htmlfile2xml
:param html_path: path to html folder
:param output_path: path to output folder
"""
for f in glob.glob(os.path.join(html_path, "*.html")):
htmlfile2xml(f, output_path)
if __name__ == '__main__':
parser = ArgumentParser(description='Convert html output to xml output')
parser.add_argument('htmldir', type=str, help='Path to html directory')
parser.add_argument('outputdir', type=str, help='Path to output directory')
args = parser.parse_args()
html2xml(args.htmldir, args.outputdir)
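# Hedged usage sketch; the directories below are hypothetical examples, not paths
# used by this project:
# html2xml('/data/html_pages', '/data/xml_out')  # convert every .html file in a folder
# htmlfile2xml('/data/html_pages/page_0.html', '/data/xml_out')  # or a single file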
|
PypiClean
|
/aisy_sca-0.2.7-py3-none-any.whl/aisy_sca/optimization/RandomGridSearch.py
|
from aisy_sca.optimization.HyperparameterSearch import *
from aisy_sca.sca.Profiling import Profiling
from aisy_sca.utils import Utils
import tensorflow.keras.backend as backend
from termcolor import colored
import itertools
import json
import time
class RandomGridSearch(HyperparameterSearch):
"""
Class that performs random or grid hyper-parameter search.
The random search only supports MLPs and CNNs.
"""
def __init__(self, settings, dataset, random_search_definitions, labels_key_guesses_attack_set, labels_key_guesses_validation_set,
analysis_db_controls, random_states=None):
super().__init__(settings, dataset, random_search_definitions, labels_key_guesses_attack_set, labels_key_guesses_validation_set,
analysis_db_controls, random_states=random_states)
self.custom_callbacks = {}
def run(self, hp_combinations_reproducible, da_function=None, grid_search=False):
hp_search_ranges = self.random_search_definitions["hyper_parameters_search"]
hp_ids = []
model_descriptions = []
if self.random_states is None:
if grid_search:
hp_to_search = {}
hp_to_adjust = {}
for hp, hp_value in hp_search_ranges.items():
if type(hp_value) is str:
hp_to_adjust[hp] = hp_value
else:
hp_to_search[hp] = hp_value
keys, value = zip(*hp_to_search.items())
search_hp_combinations = [dict(zip(keys, v)) for v in itertools.product(*value)]
for idx in range(len(search_hp_combinations)):
for hp, hp_value in hp_to_adjust.items():
search_hp_combinations[idx][hp] = hp_value
self.random_search_definitions["max_trials"] = len(search_hp_combinations)
print("======================================================================================")
print(f"Number of hyperparameters combinations for grid search: {len(search_hp_combinations)}")
print("======================================================================================")
else:
search_hp_combinations = hp_search_ranges
else:
search_hp_combinations = hp_combinations_reproducible
start = time.time()
for search_index in range(self.random_search_definitions["max_trials"]):
""" Generate a random/grid CNN or MLP model """
if self.random_states is None:
hp_values = search_hp_combinations[search_index] if grid_search else search_hp_combinations
else:
hp_values = search_hp_combinations[search_index]
model, hp = self.generate_model(hp_values)
print(colored(f"Hyper-Parameters for Search {search_index}: {json.dumps(hp, sort_keys=True, indent=4)}", "blue"))
model_descriptions.append(Utils().keras_model_as_string(model, f"best_model_{search_index}"))
""" Run profiling/training phase """
profiling = Profiling(self.settings, self.dataset, da_function)
history = profiling.train_model(model)
self.custom_callbacks[f"{search_index}"] = profiling.get_custom_callbacks()
""" Save hyperparameters combination to database and receive the id of the insertion (hp_id)"""
hyper_parameters_list = self.set_hyper_parameters_list(hp, model)
hp_id = self.analysis_db_controls.save_hyper_parameters_to_database(hyper_parameters_list)
hp_ids.append(hp_id)
""" Retrieve metrics from model and save them to database """
self.loss_search.append(history.history["loss"])
self.acc_search.append(history.history["accuracy"])
""" Compute SCA metrics and save them to database """
self.compute_metrics(model, search_index, self.analysis_db_controls, profiling.get_builtin_callbacks(), hp_id)
ge = self.ge_search[search_index]
sr = self.sr_search[search_index]
hyper_parameters_list = self.update_hyper_parameters_list(hp, ge, sr)
self.analysis_db_controls.update_hyper_parameters_to_database(hyper_parameters_list, hp_id)
""" Save generic metrics (loss, accuracy, early stopping metrics) to database """
self.save_generic_metrics(profiling, history, search_index, hp_id)
""" Update results in database """
self.analysis_db_controls.update_results_in_database(time.time() - start)
backend.clear_session()
""" Find best model"""
self.get_best_model()
""" Save best model so far in h5 """
self.save_best_model(model, search_index)
""" Check if stopping condition is satisfied"""
if self.check_stop_condition(search_index, hp):
self.finalize_search(profiling, model_descriptions, hp_ids)
break
if search_index == self.random_search_definitions["max_trials"] - 1:
self.finalize_search(profiling, model_descriptions, hp_ids)
""" update database settings"""
self.analysis_db_controls.update_results_in_database(time.time() - start)
def get_custom_callbacks(self):
return self.custom_callbacks
|
PypiClean
|
/v2/__init__.py
|
from __future__ import absolute_import
# import EvsClient
from huaweicloudsdkevs.v2.evs_client import EvsClient
from huaweicloudsdkevs.v2.evs_async_client import EvsAsyncClient
# import models into sdk package
from huaweicloudsdkevs.v2.model.attachment import Attachment
from huaweicloudsdkevs.v2.model.az_info import AzInfo
from huaweicloudsdkevs.v2.model.batch_create_volume_tags_request import BatchCreateVolumeTagsRequest
from huaweicloudsdkevs.v2.model.batch_create_volume_tags_request_body import BatchCreateVolumeTagsRequestBody
from huaweicloudsdkevs.v2.model.batch_create_volume_tags_response import BatchCreateVolumeTagsResponse
from huaweicloudsdkevs.v2.model.batch_delete_volume_tags_request import BatchDeleteVolumeTagsRequest
from huaweicloudsdkevs.v2.model.batch_delete_volume_tags_request_body import BatchDeleteVolumeTagsRequestBody
from huaweicloudsdkevs.v2.model.batch_delete_volume_tags_response import BatchDeleteVolumeTagsResponse
from huaweicloudsdkevs.v2.model.bss_param_for_create_volume import BssParamForCreateVolume
from huaweicloudsdkevs.v2.model.bss_param_for_resize_volume import BssParamForResizeVolume
from huaweicloudsdkevs.v2.model.cinder_export_to_image_option import CinderExportToImageOption
from huaweicloudsdkevs.v2.model.cinder_export_to_image_request import CinderExportToImageRequest
from huaweicloudsdkevs.v2.model.cinder_export_to_image_request_body import CinderExportToImageRequestBody
from huaweicloudsdkevs.v2.model.cinder_export_to_image_response import CinderExportToImageResponse
from huaweicloudsdkevs.v2.model.cinder_list_availability_zones_request import CinderListAvailabilityZonesRequest
from huaweicloudsdkevs.v2.model.cinder_list_availability_zones_response import CinderListAvailabilityZonesResponse
from huaweicloudsdkevs.v2.model.cinder_list_quotas_request import CinderListQuotasRequest
from huaweicloudsdkevs.v2.model.cinder_list_quotas_response import CinderListQuotasResponse
from huaweicloudsdkevs.v2.model.cinder_list_volume_types_request import CinderListVolumeTypesRequest
from huaweicloudsdkevs.v2.model.cinder_list_volume_types_response import CinderListVolumeTypesResponse
from huaweicloudsdkevs.v2.model.create_snapshot_option import CreateSnapshotOption
from huaweicloudsdkevs.v2.model.create_snapshot_request import CreateSnapshotRequest
from huaweicloudsdkevs.v2.model.create_snapshot_request_body import CreateSnapshotRequestBody
from huaweicloudsdkevs.v2.model.create_snapshot_response import CreateSnapshotResponse
from huaweicloudsdkevs.v2.model.create_volume_option import CreateVolumeOption
from huaweicloudsdkevs.v2.model.create_volume_request import CreateVolumeRequest
from huaweicloudsdkevs.v2.model.create_volume_request_body import CreateVolumeRequestBody
from huaweicloudsdkevs.v2.model.create_volume_response import CreateVolumeResponse
from huaweicloudsdkevs.v2.model.delete_snapshot_request import DeleteSnapshotRequest
from huaweicloudsdkevs.v2.model.delete_snapshot_response import DeleteSnapshotResponse
from huaweicloudsdkevs.v2.model.delete_tags_option import DeleteTagsOption
from huaweicloudsdkevs.v2.model.delete_volume_request import DeleteVolumeRequest
from huaweicloudsdkevs.v2.model.delete_volume_response import DeleteVolumeResponse
from huaweicloudsdkevs.v2.model.image import Image
from huaweicloudsdkevs.v2.model.job_entities import JobEntities
from huaweicloudsdkevs.v2.model.link import Link
from huaweicloudsdkevs.v2.model.list_snapshots_details_request import ListSnapshotsDetailsRequest
from huaweicloudsdkevs.v2.model.list_snapshots_details_response import ListSnapshotsDetailsResponse
from huaweicloudsdkevs.v2.model.list_volume_tags_request import ListVolumeTagsRequest
from huaweicloudsdkevs.v2.model.list_volume_tags_response import ListVolumeTagsResponse
from huaweicloudsdkevs.v2.model.list_volumes_by_tags_request import ListVolumesByTagsRequest
from huaweicloudsdkevs.v2.model.list_volumes_by_tags_request_body import ListVolumesByTagsRequestBody
from huaweicloudsdkevs.v2.model.list_volumes_by_tags_response import ListVolumesByTagsResponse
from huaweicloudsdkevs.v2.model.list_volumes_details_request import ListVolumesDetailsRequest
from huaweicloudsdkevs.v2.model.list_volumes_details_response import ListVolumesDetailsResponse
from huaweicloudsdkevs.v2.model.match import Match
from huaweicloudsdkevs.v2.model.os_extend import OsExtend
from huaweicloudsdkevs.v2.model.quota_detail import QuotaDetail
from huaweicloudsdkevs.v2.model.quota_detail_backup_gigabytes import QuotaDetailBackupGigabytes
from huaweicloudsdkevs.v2.model.quota_detail_backups import QuotaDetailBackups
from huaweicloudsdkevs.v2.model.quota_detail_gigabytes import QuotaDetailGigabytes
from huaweicloudsdkevs.v2.model.quota_detail_gigabytes_gpssd import QuotaDetailGigabytesGPSSD
from huaweicloudsdkevs.v2.model.quota_detail_gigabytes_sas import QuotaDetailGigabytesSAS
from huaweicloudsdkevs.v2.model.quota_detail_gigabytes_sata import QuotaDetailGigabytesSATA
from huaweicloudsdkevs.v2.model.quota_detail_gigabytes_ssd import QuotaDetailGigabytesSSD
from huaweicloudsdkevs.v2.model.quota_detail_per_volume_gigabytes import QuotaDetailPerVolumeGigabytes
from huaweicloudsdkevs.v2.model.quota_detail_snapshots import QuotaDetailSnapshots
from huaweicloudsdkevs.v2.model.quota_detail_snapshots_gpssd import QuotaDetailSnapshotsGPSSD
from huaweicloudsdkevs.v2.model.quota_detail_snapshots_sas import QuotaDetailSnapshotsSAS
from huaweicloudsdkevs.v2.model.quota_detail_snapshots_sata import QuotaDetailSnapshotsSATA
from huaweicloudsdkevs.v2.model.quota_detail_snapshots_ssd import QuotaDetailSnapshotsSSD
from huaweicloudsdkevs.v2.model.quota_detail_volumes import QuotaDetailVolumes
from huaweicloudsdkevs.v2.model.quota_detail_volumes_gpssd import QuotaDetailVolumesGPSSD
from huaweicloudsdkevs.v2.model.quota_detail_volumes_sas import QuotaDetailVolumesSAS
from huaweicloudsdkevs.v2.model.quota_detail_volumes_sata import QuotaDetailVolumesSATA
from huaweicloudsdkevs.v2.model.quota_detail_volumes_ssd import QuotaDetailVolumesSSD
from huaweicloudsdkevs.v2.model.quota_list import QuotaList
from huaweicloudsdkevs.v2.model.resize_volume_request import ResizeVolumeRequest
from huaweicloudsdkevs.v2.model.resize_volume_request_body import ResizeVolumeRequestBody
from huaweicloudsdkevs.v2.model.resize_volume_response import ResizeVolumeResponse
from huaweicloudsdkevs.v2.model.resource import Resource
from huaweicloudsdkevs.v2.model.rollback_info import RollbackInfo
from huaweicloudsdkevs.v2.model.rollback_snapshot_option import RollbackSnapshotOption
from huaweicloudsdkevs.v2.model.rollback_snapshot_request import RollbackSnapshotRequest
from huaweicloudsdkevs.v2.model.rollback_snapshot_request_body import RollbackSnapshotRequestBody
from huaweicloudsdkevs.v2.model.rollback_snapshot_response import RollbackSnapshotResponse
from huaweicloudsdkevs.v2.model.show_job_request import ShowJobRequest
from huaweicloudsdkevs.v2.model.show_job_response import ShowJobResponse
from huaweicloudsdkevs.v2.model.show_snapshot_request import ShowSnapshotRequest
from huaweicloudsdkevs.v2.model.show_snapshot_response import ShowSnapshotResponse
from huaweicloudsdkevs.v2.model.show_volume_request import ShowVolumeRequest
from huaweicloudsdkevs.v2.model.show_volume_response import ShowVolumeResponse
from huaweicloudsdkevs.v2.model.show_volume_tags_request import ShowVolumeTagsRequest
from huaweicloudsdkevs.v2.model.show_volume_tags_response import ShowVolumeTagsResponse
from huaweicloudsdkevs.v2.model.snapshot_details import SnapshotDetails
from huaweicloudsdkevs.v2.model.snapshot_list import SnapshotList
from huaweicloudsdkevs.v2.model.sub_job import SubJob
from huaweicloudsdkevs.v2.model.sub_job_entities import SubJobEntities
from huaweicloudsdkevs.v2.model.tag import Tag
from huaweicloudsdkevs.v2.model.tags_for_list_volumes import TagsForListVolumes
from huaweicloudsdkevs.v2.model.update_snapshot_option import UpdateSnapshotOption
from huaweicloudsdkevs.v2.model.update_snapshot_request import UpdateSnapshotRequest
from huaweicloudsdkevs.v2.model.update_snapshot_request_body import UpdateSnapshotRequestBody
from huaweicloudsdkevs.v2.model.update_snapshot_response import UpdateSnapshotResponse
from huaweicloudsdkevs.v2.model.update_volume_option import UpdateVolumeOption
from huaweicloudsdkevs.v2.model.update_volume_request import UpdateVolumeRequest
from huaweicloudsdkevs.v2.model.update_volume_request_body import UpdateVolumeRequestBody
from huaweicloudsdkevs.v2.model.update_volume_response import UpdateVolumeResponse
from huaweicloudsdkevs.v2.model.volume_detail import VolumeDetail
from huaweicloudsdkevs.v2.model.volume_metadata import VolumeMetadata
from huaweicloudsdkevs.v2.model.volume_type import VolumeType
from huaweicloudsdkevs.v2.model.volume_type_extra_specs import VolumeTypeExtraSpecs
from huaweicloudsdkevs.v2.model.zone_state import ZoneState
|
PypiClean
|
/privy-presidio-utils-0.0.71.tar.gz/privy-presidio-utils-0.0.71/presidio_evaluator/data_generator/faker_extensions/data_objects.py
|
from dataclasses import dataclass
import dataclasses
import json
from typing import Optional, List, Dict
from collections import Counter
@dataclass(eq=True)
class FakerSpan:
"""FakerSpan holds the start, end, value and type of every element replaced."""
value: str
start: int
end: int
type: str
def __repr__(self):
return json.dumps(dataclasses.asdict(self))
@dataclass()
class FakerSpansResult:
"""FakerSpansResult holds the full fake sentence, the original template
and a list of spans for each element replaced."""
fake: str
spans: List[FakerSpan]
template: Optional[str] = None
template_id: Optional[int] = None
def __str__(self):
return self.fake
def __repr__(self):
return json.dumps(dataclasses.asdict(self))
def toJSON(self):
spans_dict = json.dumps([dataclasses.asdict(span) for span in self.spans])
return json.dumps(
{
"fake": self.fake,
"spans": spans_dict,
"template": self.template,
"template_id": self.template_id,
}
)
@classmethod
def fromJSON(cls, json_string):
"""Load a single FakerSpansResult from a JSON string."""
json_dict = json.loads(json_string)
converted_spans = []
for span_dict in json.loads(json_dict['spans']):
converted_spans.append(FakerSpan(**span_dict))
json_dict['spans'] = converted_spans
return cls(**json_dict)
@classmethod
def count_entities(cls, fake_records: List["FakerSpansResult"]) -> Counter:
count_per_entity_new = Counter()
for record in fake_records:
for span in record.spans:
count_per_entity_new[span.type] += 1
return count_per_entity_new.most_common()
@classmethod
def load_privy_dataset(cls, filename: str) -> List["FakerSpansResult"]:
"""Load a dataset of FakerSpansResult from a JSON file."""
with open(filename, "r", encoding="utf-8") as f:
return [cls.fromJSON(line) for line in f.readlines()]
@classmethod
def update_entity_types(cls, dataset: List["FakerSpansResult"], entity_mapping: Dict[str, str]):
"""Replace entity types using a translator dictionary."""
for sample in dataset:
# update entity types on spans
for span in sample.spans:
span.type = entity_mapping[span.type]
# update entity types on the template string
for key, value in entity_mapping.items():
sample.template = sample.template.replace(
"{{" + key + "}}", "{{" + value + "}}")
|
PypiClean
|
/retro_data_structures-0.23.0-py3-none-any.whl/retro_data_structures/properties/corruption/archetypes/HoverThenHomeProjectile.py
|
import dataclasses
import struct
import typing
from retro_data_structures.game_check import Game
from retro_data_structures.properties.base_property import BaseProperty
from retro_data_structures.properties.corruption.core.AssetId import AssetId, default_asset_id
@dataclasses.dataclass()
class HoverThenHomeProjectile(BaseProperty):
hover_time: float = dataclasses.field(default=1.0)
hover_speed: float = dataclasses.field(default=1.0)
hover_distance: float = dataclasses.field(default=5.0)
unknown_0x5a310c3b: float = dataclasses.field(default=10.0)
unknown_0x66284bf9: float = dataclasses.field(default=10.0)
initial_speed: float = dataclasses.field(default=-1.0)
final_speed: float = dataclasses.field(default=-1.0)
optional_homing_sound: AssetId = dataclasses.field(metadata={'asset_types': ['CAUD']}, default=default_asset_id)
@classmethod
def game(cls) -> Game:
return Game.CORRUPTION
@classmethod
def from_stream(cls, data: typing.BinaryIO, size: typing.Optional[int] = None, default_override: typing.Optional[dict] = None):
property_count = struct.unpack(">H", data.read(2))[0]
if default_override is None and (result := _fast_decode(data, property_count)) is not None:
return result
present_fields = default_override or {}
for _ in range(property_count):
property_id, property_size = struct.unpack(">LH", data.read(6))
start = data.tell()
try:
property_name, decoder = _property_decoder[property_id]
present_fields[property_name] = decoder(data, property_size)
except KeyError:
raise RuntimeError(f"Unknown property: 0x{property_id:08x}")
assert data.tell() - start == property_size
return cls(**present_fields)
def to_stream(self, data: typing.BinaryIO, default_override: typing.Optional[dict] = None):
default_override = default_override or {}
data.write(b'\x00\x08') # 8 properties
data.write(b'0\xaa\x9a\xf1') # 0x30aa9af1
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.hover_time))
data.write(b'\x84^\xf4\x89') # 0x845ef489
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.hover_speed))
data.write(b'E$&\xbb') # 0x452426bb
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.hover_distance))
data.write(b'Z1\x0c;') # 0x5a310c3b
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.unknown_0x5a310c3b))
data.write(b'f(K\xf9') # 0x66284bf9
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.unknown_0x66284bf9))
data.write(b'\xcb\x14\xd9|') # 0xcb14d97c
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.initial_speed))
data.write(b'\x80m\x06O') # 0x806d064f
data.write(b'\x00\x04') # size
data.write(struct.pack('>f', self.final_speed))
data.write(b'K\x1cWf') # 0x4b1c5766
data.write(b'\x00\x08') # size
data.write(struct.pack(">Q", self.optional_homing_sound))
@classmethod
def from_json(cls, data: dict):
return cls(
hover_time=data['hover_time'],
hover_speed=data['hover_speed'],
hover_distance=data['hover_distance'],
unknown_0x5a310c3b=data['unknown_0x5a310c3b'],
unknown_0x66284bf9=data['unknown_0x66284bf9'],
initial_speed=data['initial_speed'],
final_speed=data['final_speed'],
optional_homing_sound=data['optional_homing_sound'],
)
def to_json(self) -> dict:
return {
'hover_time': self.hover_time,
'hover_speed': self.hover_speed,
'hover_distance': self.hover_distance,
'unknown_0x5a310c3b': self.unknown_0x5a310c3b,
'unknown_0x66284bf9': self.unknown_0x66284bf9,
'initial_speed': self.initial_speed,
'final_speed': self.final_speed,
'optional_homing_sound': self.optional_homing_sound,
}
_FAST_FORMAT = None
_FAST_IDS = (0x30aa9af1, 0x845ef489, 0x452426bb, 0x5a310c3b, 0x66284bf9, 0xcb14d97c, 0x806d064f, 0x4b1c5766)
def _fast_decode(data: typing.BinaryIO, property_count: int) -> typing.Optional[HoverThenHomeProjectile]:
if property_count != 8:
return None
global _FAST_FORMAT
if _FAST_FORMAT is None:
_FAST_FORMAT = struct.Struct('>LHfLHfLHfLHfLHfLHfLHfLHQ')
before = data.tell()
dec = _FAST_FORMAT.unpack(data.read(84))
if (dec[0], dec[3], dec[6], dec[9], dec[12], dec[15], dec[18], dec[21]) != _FAST_IDS:
data.seek(before)
return None
return HoverThenHomeProjectile(
dec[2],
dec[5],
dec[8],
dec[11],
dec[14],
dec[17],
dec[20],
dec[23],
)
def _decode_hover_time(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_hover_speed(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_hover_distance(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_unknown_0x5a310c3b(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_unknown_0x66284bf9(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_initial_speed(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_final_speed(data: typing.BinaryIO, property_size: int):
return struct.unpack('>f', data.read(4))[0]
def _decode_optional_homing_sound(data: typing.BinaryIO, property_size: int):
return struct.unpack(">Q", data.read(8))[0]
_property_decoder: typing.Dict[int, typing.Tuple[str, typing.Callable[[typing.BinaryIO, int], typing.Any]]] = {
0x30aa9af1: ('hover_time', _decode_hover_time),
0x845ef489: ('hover_speed', _decode_hover_speed),
0x452426bb: ('hover_distance', _decode_hover_distance),
0x5a310c3b: ('unknown_0x5a310c3b', _decode_unknown_0x5a310c3b),
0x66284bf9: ('unknown_0x66284bf9', _decode_unknown_0x66284bf9),
0xcb14d97c: ('initial_speed', _decode_initial_speed),
0x806d064f: ('final_speed', _decode_final_speed),
0x4b1c5766: ('optional_homing_sound', _decode_optional_homing_sound),
}
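# Minimal sketch (illustrative only) of the dict and binary round trips defined above:
def _example_round_trip():
    import io
    proj = HoverThenHomeProjectile(hover_time=2.0, hover_speed=3.0)
    assert HoverThenHomeProjectile.from_json(proj.to_json()) == proj
    buffer = io.BytesIO()
    proj.to_stream(buffer)  # writes the property count followed by (id, size, payload) triples
    buffer.seek(0)
    assert HoverThenHomeProjectile.from_stream(buffer) == proj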
|
PypiClean
|
/bw_plex-0.1.0.tar.gz/bw_plex-0.1.0/bw_plex/credits.py
|
from __future__ import division
import glob
import os
import subprocess
import re
import click
import numpy as np
from bw_plex import LOG
from bw_plex.video import video_frame_by_frame
from bw_plex.misc import sec_to_hh_mm_ss
try:
import cv2
except ImportError:
cv2 = None
LOG.warning('Scanning for credits is not supported. '
'Install the package with pip install bw_plex[all] or bw_plex[video]')
try:
import pytesseract
except ImportError:
pytesseract = None
LOG.warning('Extracting text from images is not supported. '
'Install the package with pip install bw_plex[all] or bw_plex[video]')
try:
import Image
except ImportError:
from PIL import Image
color = {'yellow': (255, 255, 0),
'red': (255, 0, 0),
'blue': (0, 0, 255),
'lime': (0, 255, 0),
'white': (255, 255, 255),
'fuchsia': (255, 0, 255),
'black': (0, 0, 0)
}
def make_imgz(afile, start=600, dest=None, fps=1):
"""Helper to generate images."""
dest_path = os.path.join(dest, 'out%d.jpg')
fps = 'fps=%s' % fps
t = sec_to_hh_mm_ss(start)
cmd = [
'ffmpeg', '-ss', t, '-i',
afile, '-vf', fps, dest_path
]
# fix me
subprocess.call(cmd)
print(dest)
return dest
def extract_text(img, lang='eng', encoding='utf-8'):
if pytesseract is None:
return
if isinstance(img, str):
img = Image.open(img)
return pytesseract.image_to_string(img, lang=lang).encode(encoding, 'ignore')
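# Hedged usage sketch; 'frame_0001.png' is a hypothetical file name:
# text = extract_text('frame_0001.png', lang='eng')  # OCR'd bytes, or None when pytesseract is missing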
def calc_success(rectangles, img_height, img_width, success=0.9): # pragma: no cover
"""Helper to check the n percentage of the image is covered in text."""
t = sum([i[2] * i[3] for i in rectangles if i])
p = 100 * float(t) / float(img_height * img_width)
return p > success
def locate_text(image, debug=False):
"""Locate where and if there are text in the images.
Args:
image(numpy.ndarray, str): str would be path to image
debug(bool): Show each of the images using open cv.
Returns:
list of rectangles
"""
# Mostly ripped from https://github.com/hurdlea/Movie-Credits-Detect
# Thanks!
import cv2
# Compat so we can use a frame and img file..
if isinstance(image, str) and os.path.isfile(image):
image = cv2.imread(image)
if debug:
cv2.imshow('original image', image)
height, width, _ = image.shape
mser = cv2.MSER_create(4, 10, 8000, 0.8, 0.2, 200, 1.01, 0.003, 5)
# Convert to gray.
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
if debug:
cv2.imshow('grey', grey)
# Pull out graphically overlaid text from a video image
blur = cv2.GaussianBlur(grey, (3, 3), 0)
# test median blur
# blur = cv2.medianBlur(grey, 1)
if debug:
cv2.imshow('blur', blur)
adapt_threshold = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY, 5, -25)
contours, _ = mser.detectRegions(adapt_threshold)
# for each contour get a bounding box and remove
rects = []
for contour in contours:
# get rectangle bounding contour
[x, y, w, h] = cv2.boundingRect(contour)
# Remove small rects
if w < 5 or h < 5: # 2
continue
# Throw away rectangles which don't match a character aspect ratio
if (float(w * h) / (width * height)) > 0.005 or float(w) / h > 1:
continue
rects.append(cv2.boundingRect(contour))
# Mask of original image
mask = np.zeros((height, width, 1), np.uint8)
# To expand rectangles, i.e. increase sensitivity to nearby rectangles
# Add knobs?
# let's scale this a lot so we get mostly one big square
# TODO: revisit when/if we detect motion.
xscaleFactor = 14 # 14
yscaleFactor = 4 # 4
for box in rects:
[x, y, w, h] = box
# Draw filled bounding boxes on mask
cv2.rectangle(mask, (x - xscaleFactor, y - yscaleFactor),
(x + w + xscaleFactor, y + h + yscaleFactor),
color['white'], cv2.FILLED)
if debug:
cv2.imshow("Mask", mask)
# Find contours in mask if bounding boxes overlap,
# they will be joined by this function call
rectangles = []
contours = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
if cv2.__version__.startswith('4'):
contours = contours[0]
else:
contours = contours[1]
for contour in contours:
# This is disabled since we are not after the text but the text area.
# Only preserve "squarish" features
# peri = cv2.arcLength(contour, True)
# approx = cv2.approxPolyDP(contour, 0.01 * peri, True)
# the contour is 'bad' if it is not rectangular-ish
# This doesn't have to be bad, since we match more than one char.
# if len(approx) > 8:
# cv2.drawContours(image, [contour], -1, color['lime'])
# if debug:
# cv2.imshow("bad Rectangles check lime", image)
# continue
rect = cv2.boundingRect(contour)
x, y, w, h = rect
cv2.rectangle(image, (x, y), (x + w, y + h), color['blue'], 2)
# Remove small areas and areas that don't have text like features
# such as a long width.
if ((float(w * h) / (width * height)) < 0.006):
# remove small areas
if float(w * h) / (width * height) < 0.0018:
continue
# remove areas that aren't long
if (float(w) / h < 2.5):
continue
else:
pass
# This is disabled as we want to keep the large text area
# as a backup for movement detection
# General catch for larger identified areas that they have
# a text width profile
# and it does not fit for jap letters.
# if float(w) / h < 1.8:
# continue
rectangles.append(rect)
cv2.rectangle(image, (x, y), (x + w, y + h), color['fuchsia'], 2)
if debug:
cv2.imshow("Final image", image)
cv2.waitKey(0)
return rectangles
def find_credits(path, offset=0, fps=None, duration=None, check=7, step=1, frame_range=True):
"""Find the start/end of the credits and end in a videofile.
This only check frames so if there is any silence in the video this is simply skipped as
opencv only handles videofiles.
use frame_range to so we only check frames every 1 sec.
# TODO just ffmepg to check for silence so we calculate the correct time? :(
Args:
path (str): path to the videofile
offset(int): If given we should start from this one.
fps(float?): fps of the video file
duration(None, int): Duration of the vfile in seconds.
check(int): Stop after n frames with text, set a insane high number to check all.
end is not correct without this!
step(int): only use every n frame
frame_range(bool). default true, precalc the frames and only check thous frames.
Returns:
1, 2
"""
# LOG.debug('%r %r %r %r %r %r %r', path, offset, fps, duration, check, step, frame_range)
if cv2 is None:
return
frames = []
start = -1
end = -1
LOG.debug('Trying to find the credits for %s', path)
try:
if fps is None:
# we can just grab the fps from plex.
cap = cv2.VideoCapture(path)
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()
for _, (frame, millisec) in enumerate(video_frame_by_frame(path, offset=offset,
step=step, frame_range=frame_range)):
# LOG.debug('progress %s', millisec / 1000)
if frame is not None:
recs = locate_text(frame, debug=False)
if recs:
frames.append(millisec)
if check != -1 and len(frames) >= check:
break
if frames:
LOG.debug(frames)
start = min(frames) / 1000
end = max(frames) / 1000
LOG.debug('credits_start %s, credits_end %s', start, end)
except: # pragma: no cover
# We just want to log the exception not halt the entire process to db.
LOG.exception('There was a error in find_credits')
return start, end
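# Hedged usage sketch; the video path below is a hypothetical example:
# start, end = find_credits('/videos/show_s01e01.mkv', offset=1200)
# if start != -1:
#     LOG.info('credits run from %s to %s', sec_to_hh_mm_ss(start), sec_to_hh_mm_ss(end))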
def fill_rects(image, rects):
"""This is used to fill the rects (location of credits)
The idea if to mask the credits so we can check if there is any background
movement while the credits are running. Like the movie cars etc.
See if we can grab something usefull from
https://gist.github.com/luipillmann/d76eb4f4eea0320bb35dcd1b2a4575ee
"""
for rect in rects:
x, y, w, h = rect
cv2.rectangle(image, (x, y), (x + w, y + h), color['black'], cv2.FILLED)
return image
@click.command()
@click.argument('path')
@click.option('-c', type=float, default=0.0)
@click.option('-d', '--debug', is_flag=True, default=False)
@click.option('-p', '--profile', is_flag=True, default=False)
@click.option('-o', '--offset', default=0, type=int)
def cmd(path, c, debug, profile, offset): # pragma: no cover
if os.path.isfile(path):
files = [path]
else:
files = glob.glob(path)
d = {}
for f in files:
if f.endswith(('.png', '.jpeg', '.jpg')):
filename = os.path.basename(f)
hit = re.search(r'(\d+)', filename)
t = locate_text(f, debug=debug)
if hit:
d[int(hit.group()) + offset] = (bool(t), filename)
else:
t = find_credits(f, offset=offset)
if c:
img = cv2.imread(f, cv2.IMREAD_UNCHANGED)
height, width, _ = img.shape
t = calc_success(t, height, width)
if d:
click.echo('Image report')
for k, v in sorted(d.items()):
if v[0] is True:
fg = 'green'
else:
fg = 'red'
click.secho('%s %s %s %s' % (k, sec_to_hh_mm_ss(k), v[0], v[1]), fg=fg)
if __name__ == '__main__':
# cmd()
def test():
import cv2
i = r"C:\Users\alexa\OneDrive\Dokumenter\GitHub\bw_plex\tests\test_data\blacktext_whitebg_2.png"
i = r'C:\Users\alexa\.config\bw_plex\third_images\out165.jpg'
img = cv2.imread(i)
ffs = img.copy()
rects = locate_text(ffs, debug=True)
f = fill_rects(img, rects)
cv2.imshow('ass', f)
cv2.waitKey(0)
# test()
|
PypiClean
|
/cn-stock-holidays-1.10.tar.gz/cn-stock-holidays-1.10/cn_stock_holidays/zipline/exchange_calendar_hkex.py
|
from datetime import time
from cn_stock_holidays.data_hk import get_cached
from pandas import Timestamp, date_range, DatetimeIndex
import pytz
from zipline.utils.memoize import remember_last, lazyval
import warnings
from zipline.utils.calendars import TradingCalendar
from zipline.utils.calendars.trading_calendar import days_at_time, NANOS_IN_MINUTE
import numpy as np
import pandas as pd
# lunch break for the Hong Kong exchange
lunch_break_start = time(12, 30)
lunch_break_end = time(14, 31)
start_default = pd.Timestamp('2000-12-25', tz='UTC')
end_base = pd.Timestamp('today', tz='UTC')
end_default = end_base + pd.Timedelta(days=365)
class HKExchangeCalendar(TradingCalendar):
"""
Exchange calendar for the Hong Kong Exchange (HKEX)
Open Time 10:01 AM, Asia/Shanghai
Close Time 4:00 PM, Asia/Shanghai
One big difference between the Hong Kong/China exchanges and the US exchanges is the lunch break, which is handled here.
Sample Code in ipython:
> from zipline.utils.calendars import *
> from cn_stock_holidays.zipline.exchange_calendar_hkex import HKExchangeCalendar
> register_calendar("HKEX", HKExchangeCalendar(), force=True)
> c=get_calendar("HKEX")
To keep the holiday file up to date, add the `cn-stock-holiday-sync-hk` command to crontab.
"""
def __init__(self, start=start_default, end=end_default):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
_all_days = date_range(start, end, freq=self.day, tz='UTC')
self._lunch_break_starts = days_at_time(_all_days, lunch_break_start, self.tz, 0)
self._lunch_break_ends = days_at_time(_all_days, lunch_break_end, self.tz, 0)
TradingCalendar.__init__(self, start=start_default, end=end_default)
@property
def name(self):
return "HKEX"
@property
def tz(self):
return pytz.timezone("Asia/Shanghai")
@property
def open_time(self):
return time(10, 1)
@property
def close_time(self):
return time(16, 0)
@property
def adhoc_holidays(self):
return [Timestamp(t,tz=pytz.UTC) for t in get_cached(use_list=True)]
@property
@remember_last
def all_minutes(self):
"""
Returns a DatetimeIndex representing all the minutes in this calendar.
"""
opens_in_ns = \
self._opens.values.astype('datetime64[ns]')
closes_in_ns = \
self._closes.values.astype('datetime64[ns]')
lunch_break_start_in_ns = \
self._lunch_break_starts.values.astype('datetime64[ns]')
lunch_break_ends_in_ns = \
self._lunch_break_ends.values.astype('datetime64[ns]')
deltas_before_lunch = lunch_break_start_in_ns - opens_in_ns
deltas_after_lunch = closes_in_ns - lunch_break_ends_in_ns
daily_before_lunch_sizes = (deltas_before_lunch / NANOS_IN_MINUTE) + 1
daily_after_lunch_sizes = (deltas_after_lunch / NANOS_IN_MINUTE) + 1
daily_sizes = daily_before_lunch_sizes + daily_after_lunch_sizes
num_minutes = np.sum(daily_sizes).astype(np.int64)
# One allocation for the entire thing. This assumes that each day
# represents a contiguous block of minutes.
all_minutes = np.empty(num_minutes, dtype='datetime64[ns]')
idx = 0
for day_idx, size in enumerate(daily_sizes):
# lots of small allocations, but it's fast enough for now.
# size is a np.timedelta64, so we need to int it
size_int = int(size)
before_lunch_size_int = int(daily_before_lunch_sizes[day_idx])
after_lunch_size_int = int(daily_after_lunch_sizes[day_idx])
#print("idx:{}, before_lunch_size_int: {}".format(idx, before_lunch_size_int))
all_minutes[idx:(idx + before_lunch_size_int)] = \
np.arange(
opens_in_ns[day_idx],
lunch_break_start_in_ns[day_idx] + NANOS_IN_MINUTE,
NANOS_IN_MINUTE
)
all_minutes[(idx + before_lunch_size_int):(idx + size_int)] = \
np.arange(
lunch_break_ends_in_ns[day_idx],
closes_in_ns[day_idx] + NANOS_IN_MINUTE,
NANOS_IN_MINUTE
)
idx += size_int
return DatetimeIndex(all_minutes).tz_localize("UTC")
if __name__ == '__main__':
HKExchangeCalendar()
|
PypiClean
|
/Pint-0.22-py3-none-any.whl/pint/_vendor/flexparser.py
|
from __future__ import annotations
import collections
import dataclasses
import enum
import functools
import hashlib
import hmac
import inspect
import logging
import pathlib
import re
import sys
import typing as ty
from collections.abc import Iterator
from dataclasses import dataclass
from functools import cached_property
from importlib import resources
from typing import Optional, Tuple, Type
_LOGGER = logging.getLogger("flexparser")
_SENTINEL = object()
################
# Exceptions
################
@dataclass(frozen=True)
class Statement:
"""Base class for parsed elements within a source file."""
start_line: int = dataclasses.field(init=False, default=None)
start_col: int = dataclasses.field(init=False, default=None)
end_line: int = dataclasses.field(init=False, default=None)
end_col: int = dataclasses.field(init=False, default=None)
raw: str = dataclasses.field(init=False, default=None)
@classmethod
def from_statement(cls, statement: Statement):
out = cls()
out.set_position(*statement.get_position())
out.set_raw(statement.raw)
return out
@classmethod
def from_statement_iterator_element(cls, values: ty.Tuple[int, int, int, int, str]):
out = cls()
out.set_position(*values[:-1])
out.set_raw(values[-1])
return out
@property
def format_position(self):
if self.start_line is None:
return "N/A"
return "%d,%d-%d,%d" % self.get_position()
@property
def raw_strip(self):
return self.raw.strip()
def get_position(self):
return self.start_line, self.start_col, self.end_line, self.end_col
def set_position(self, start_line, start_col, end_line, end_col):
object.__setattr__(self, "start_line", start_line)
object.__setattr__(self, "start_col", start_col)
object.__setattr__(self, "end_line", end_line)
object.__setattr__(self, "end_col", end_col)
return self
def set_raw(self, raw):
object.__setattr__(self, "raw", raw)
return self
def set_simple_position(self, line, col, width):
return self.set_position(line, col, line, col + width)
@dataclass(frozen=True)
class ParsingError(Statement, Exception):
"""Base class for all parsing exceptions in this package."""
def __str__(self):
return Statement.__str__(self)
@dataclass(frozen=True)
class UnknownStatement(ParsingError):
"""A string statement could not bee parsed."""
def __str__(self):
return f"Could not parse '{self.raw}' ({self.format_position})"
@dataclass(frozen=True)
class UnhandledParsingError(ParsingError):
"""Base class for all parsing exceptions in this package."""
ex: Exception
def __str__(self):
return f"Unhandled exception while parsing '{self.raw}' ({self.format_position}): {self.ex}"
@dataclass(frozen=True)
class UnexpectedEOF(ParsingError):
"""End of file was found within an open block."""
#############################
# Useful methods and classes
#############################
@dataclass(frozen=True)
class Hash:
algorithm_name: str
hexdigest: str
def __eq__(self, other: Hash):
return (
isinstance(other, Hash)
and self.algorithm_name != ""
and self.algorithm_name == other.algorithm_name
and hmac.compare_digest(self.hexdigest, other.hexdigest)
)
@classmethod
def from_bytes(cls, algorithm, b: bytes):
hasher = algorithm(b)
return cls(hasher.name, hasher.hexdigest())
@classmethod
def from_file_pointer(cls, algorithm, fp: ty.BinaryIO):
return cls.from_bytes(algorithm, fp.read())
@classmethod
def nullhash(cls):
return cls("", "")
def _yield_types(
obj, valid_subclasses=(object,), recurse_origin=(tuple, list, ty.Union)
):
"""Recursively transverse type annotation if the
origin is any of the types in `recurse_origin`
and yield those type which are subclasses of `valid_subclasses`.
"""
if ty.get_origin(obj) in recurse_origin:
for el in ty.get_args(obj):
yield from _yield_types(el, valid_subclasses, recurse_origin)
else:
if inspect.isclass(obj) and issubclass(obj, valid_subclasses):
yield obj
class classproperty: # noqa N801
"""Decorator for a class property
In Python 3.9+ can be replaced by
@classmethod
@property
def myprop(self):
return 42
"""
def __init__(self, fget):
self.fget = fget
def __get__(self, owner_self, owner_cls):
return self.fget(owner_cls)
def is_relative_to(self, *other):
"""Return True if the path is relative to another path or False.
In Python 3.9+ can be replaced by
path.is_relative_to(other)
"""
try:
self.relative_to(*other)
return True
except ValueError:
return False
class DelimiterInclude(enum.IntEnum):
"""Specifies how to deal with delimiters while parsing."""
#: Split at delimiter, not including in any string
SPLIT = enum.auto()
#: Split after, keeping the delimiter with previous string.
SPLIT_AFTER = enum.auto()
#: Split before, keeping the delimiter with next string.
SPLIT_BEFORE = enum.auto()
#: Do not split at delimiter.
DO_NOT_SPLIT = enum.auto()
class DelimiterAction(enum.IntEnum):
"""Specifies how to deal with delimiters while parsing."""
#: Continue parsing normally.
CONTINUE = enum.auto()
#: Capture everything til end of line as a whole.
CAPTURE_NEXT_TIL_EOL = enum.auto()
#: Stop parsing line and move to next.
STOP_PARSING_LINE = enum.auto()
#: Stop parsing content.
STOP_PARSING = enum.auto()
DO_NOT_SPLIT_EOL = {
"\r\n": (DelimiterInclude.DO_NOT_SPLIT, DelimiterAction.CONTINUE),
"\n": (DelimiterInclude.DO_NOT_SPLIT, DelimiterAction.CONTINUE),
"\r": (DelimiterInclude.DO_NOT_SPLIT, DelimiterAction.CONTINUE),
}
SPLIT_EOL = {
"\r\n": (DelimiterInclude.SPLIT, DelimiterAction.CONTINUE),
"\n": (DelimiterInclude.SPLIT, DelimiterAction.CONTINUE),
"\r": (DelimiterInclude.SPLIT, DelimiterAction.CONTINUE),
}
_EOLs_set = set(DO_NOT_SPLIT_EOL.keys())
@functools.lru_cache
def _build_delimiter_pattern(delimiters: ty.Tuple[str, ...]) -> re.Pattern:
"""Compile a tuple of delimiters into a regex expression with a capture group
around the delimiter.
"""
return re.compile("|".join(f"({re.escape(el)})" for el in delimiters))
############
# Iterators
############
DelimiterDictT = ty.Dict[str, ty.Tuple[DelimiterInclude, DelimiterAction]]
class Spliter:
"""Content iterator splitting according to given delimiters.
The pattern can be changed dynamically by sending a new pattern to the generator,
see DelimiterInclude and DelimiterAction for more information.
The current scanning position can be changed at any time.
Parameters
----------
content : str
delimiters : ty.Dict[str, ty.Tuple[DelimiterInclude, DelimiterAction]]
Yields
------
start_line : int
line number of the start of the content (zero-based numbering).
start_col : int
column number of the start of the content (zero-based numbering).
end_line : int
line number of the end of the content (zero-based numbering).
end_col : int
column number of the end of the content (zero-based numbering).
part : str
part of the text between delimiters.
"""
_pattern: ty.Optional[re.Pattern]
_delimiters: DelimiterDictT
__stop_searching_in_line = False
__pending = ""
__first_line_col = None
__lines = ()
__lineno = 0
__colno = 0
def __init__(self, content: str, delimiters: DelimiterDictT):
self.set_delimiters(delimiters)
self.__lines = content.splitlines(keepends=True)
def set_position(self, lineno: int, colno: int):
self.__lineno, self.__colno = lineno, colno
def set_delimiters(self, delimiters: DelimiterDictT):
for k, v in delimiters.items():
if v == (DelimiterInclude.DO_NOT_SPLIT, DelimiterAction.STOP_PARSING):
raise ValueError(
f"The delimiter action for {k} is not a valid combination ({v})"
)
# Build a pattern but removing eols
_pat_dlm = tuple(set(delimiters.keys()) - _EOLs_set)
if _pat_dlm:
self._pattern = _build_delimiter_pattern(_pat_dlm)
else:
self._pattern = None
# We add the end of line as delimiters if not present.
self._delimiters = {**DO_NOT_SPLIT_EOL, **delimiters}
def __iter__(self):
return self
def __next__(self):
if self.__lineno >= len(self.__lines):
raise StopIteration
while True:
if self.__stop_searching_in_line:
# There must be part of a line pending to parse
# due to stop
line = self.__lines[self.__lineno]
mo = None
self.__stop_searching_in_line = False
else:
# We get the current line and the find the first delimiter.
line = self.__lines[self.__lineno]
if self._pattern is None:
mo = None
else:
mo = self._pattern.search(line, self.__colno)
if mo is None:
# No delimiter was found,
# which should happen at end of the content or end of line
for k in DO_NOT_SPLIT_EOL.keys():
if line.endswith(k):
dlm = line[-len(k) :]
end_col, next_col = len(line) - len(k), 0
break
else:
# No EOL found, this is end of content
dlm = None
end_col, next_col = len(line), 0
next_line = self.__lineno + 1
else:
next_line = self.__lineno
end_col, next_col = mo.span()
dlm = mo.group()
part = line[self.__colno : end_col]
include, action = self._delimiters.get(
dlm, (DelimiterInclude.SPLIT, DelimiterAction.STOP_PARSING)
)
if include == DelimiterInclude.SPLIT:
next_pending = ""
elif include == DelimiterInclude.SPLIT_AFTER:
end_col += len(dlm)
part = part + dlm
next_pending = ""
elif include == DelimiterInclude.SPLIT_BEFORE:
next_pending = dlm
elif include == DelimiterInclude.DO_NOT_SPLIT:
self.__pending += line[self.__colno : end_col] + dlm
next_pending = ""
else:
raise ValueError(f"Unknown action {include}.")
if action == DelimiterAction.STOP_PARSING:
# this will raise a StopIteration in the next call.
next_line = len(self.__lines)
elif action == DelimiterAction.STOP_PARSING_LINE:
next_line = self.__lineno + 1
next_col = 0
start_line = self.__lineno
start_col = self.__colno
end_line = self.__lineno
self.__lineno = next_line
self.__colno = next_col
if action == DelimiterAction.CAPTURE_NEXT_TIL_EOL:
self.__stop_searching_in_line = True
if include == DelimiterInclude.DO_NOT_SPLIT:
self.__first_line_col = start_line, start_col
else:
if self.__first_line_col is None:
out = (
start_line,
start_col - len(self.__pending),
end_line,
end_col,
self.__pending + part,
)
else:
out = (
*self.__first_line_col,
end_line,
end_col,
self.__pending + part,
)
self.__first_line_col = None
self.__pending = next_pending
return out
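# Minimal illustrative sketch (not part of the original module): split two lines
# using the end-of-line delimiters defined above.
def _spliter_example():
    parts = [item[-1] for item in Spliter("a\nb\n", SPLIT_EOL)]
    return parts  # -> ['a', 'b']; item[:-1] carries the zero-based position info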
class StatementIterator:
"""Content peekable iterator splitting according to given delimiters.
The pattern can be changed dynamically by sending a new pattern to the generator,
see DelimiterInclude and DelimiterAction for more information.
Parameters
----------
content : str
delimiters : dict[str, ty.Tuple[DelimiterInclude, DelimiterAction]]
Yields
------
Statement
"""
_cache: ty.Deque[Statement]
def __init__(
self, content: str, delimiters: DelimiterDictT, strip_spaces: bool = True
):
self._cache = collections.deque()
self._spliter = Spliter(content, delimiters)
self._strip_spaces = strip_spaces
def __iter__(self):
return self
def set_delimiters(self, delimiters: DelimiterDictT):
self._spliter.set_delimiters(delimiters)
if self._cache:
value = self.peek()
# Elements are 1 based indexing, while splitter is 0 based.
self._spliter.set_position(value.start_line - 1, value.start_col)
self._cache.clear()
def _get_next_strip(self) -> Statement:
part = ""
while not part:
start_line, start_col, end_line, end_col, part = next(self._spliter)
lo = len(part)
part = part.lstrip()
start_col += lo - len(part)
lo = len(part)
part = part.rstrip()
end_col -= lo - len(part)
return Statement.from_statement_iterator_element(
(start_line + 1, start_col, end_line + 1, end_col, part)
)
def _get_next(self) -> Statement:
if self._strip_spaces:
return self._get_next_strip()
part = ""
while not part:
start_line, start_col, end_line, end_col, part = next(self._spliter)
return Statement.from_statement_iterator_element(
(start_line + 1, start_col, end_line + 1, end_col, part)
)
def peek(self, default=_SENTINEL) -> Statement:
"""Return the item that will be next returned from ``next()``.
Return ``default`` if there are no items left. If ``default`` is not
provided, raise ``StopIteration``.
"""
if not self._cache:
try:
self._cache.append(self._get_next())
except StopIteration:
if default is _SENTINEL:
raise
return default
return self._cache[0]
def __next__(self) -> Statement:
if self._cache:
return self._cache.popleft()
else:
return self._get_next()
###########
# Parsing
###########
# Configuration type
CT = ty.TypeVar("CT")
PST = ty.TypeVar("PST", bound="ParsedStatement")
LineColStr = Tuple[int, int, str]
FromString = ty.Union[None, PST, ParsingError]
Consume = ty.Union[PST, ParsingError]
NullableConsume = ty.Union[None, PST, ParsingError]
Single = ty.Union[PST, ParsingError]
Multi = ty.Tuple[ty.Union[PST, ParsingError], ...]
@dataclass(frozen=True)
class ParsedStatement(ty.Generic[CT], Statement):
"""A single parsed statement.
In order to write your own, you need to subclass it as a
frozen dataclass and implement the parsing logic by overriding
`from_string` classmethod.
Takes two arguments: the string to parse and an object given
by the parser which can be used to store configuration information.
It should return an instance of this class if parsing
was successful or None otherwise
"""
@classmethod
def from_string(cls: Type[PST], s: str) -> FromString[PST]:
"""Parse a string into a ParsedStatement.
Possible return values and their meaning:
1. None: the string cannot be parsed with this class.
2. A subclass of ParsedStatement: the string was parsed successfully.
3. A subclass of ParsingError: the string could be parsed with this class but there is
an error.
"""
raise NotImplementedError(
"ParsedStatement subclasses must implement "
"'from_string' or 'from_string_and_config'"
)
@classmethod
def from_string_and_config(cls: Type[PST], s: str, config: CT) -> FromString[PST]:
"""Parse a string into a ParsedStatement.
Possible return values and their meaning:
1. None: the string cannot be parsed with this class.
2. A subclass of ParsedStatement: the string was parsed successfully.
3. A subclass of ParsingError: the string could be parsed with this class but there is
an error.
"""
return cls.from_string(s)
@classmethod
def from_statement_and_config(
cls: Type[PST], statement: Statement, config: CT
) -> FromString[PST]:
try:
out = cls.from_string_and_config(statement.raw, config)
except Exception as ex:
out = UnhandledParsingError(ex)
if out is None:
return None
out.set_position(*statement.get_position())
out.set_raw(statement.raw)
return out
@classmethod
def consume(
cls: Type[PST], statement_iterator: StatementIterator, config: CT
) -> NullableConsume[PST]:
"""Peek into the iterator and try to parse.
Possible return values and their meaning:
1. None: the string cannot be parsed with this class; the iterator is kept at the current place.
2. a subclass of ParsedStatement: the string was parsed successfully, advance the iterator.
3. a subclass of ParsingError: the string could be parsed with this class but there is
an error, advance the iterator.
"""
statement = statement_iterator.peek()
parsed_statement = cls.from_statement_and_config(statement, config)
if parsed_statement is None:
return None
next(statement_iterator)
return parsed_statement
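# Minimal illustrative sketch (not part of flexparser itself): a ParsedStatement
# subclass that recognises lines of the form "<name> = <value>".
@dataclass(frozen=True)
class _ExampleAssignment(ParsedStatement):
    lhs: str
    rhs: str
    @classmethod
    def from_string(cls, s: str) -> FromString["_ExampleAssignment"]:
        if "=" not in s:
            return None  # not recognised by this class
        lhs, _, rhs = s.partition("=")
        return cls(lhs.strip(), rhs.strip())
# Typical use together with StatementIterator (the config argument is unused here):
# si = StatementIterator("x = 1", SPLIT_EOL)
# _ExampleAssignment.consume(si, config=None)  # parses into lhs='x', rhs='1'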
OPST = ty.TypeVar("OPST", bound="ParsedStatement")
IPST = ty.TypeVar("IPST", bound="ParsedStatement")
CPST = ty.TypeVar("CPST", bound="ParsedStatement")
BT = ty.TypeVar("BT", bound="Block")
RBT = ty.TypeVar("RBT", bound="RootBlock")
@dataclass(frozen=True)
class Block(ty.Generic[OPST, IPST, CPST, CT]):
"""A sequence of statements with an opening, body and closing."""
opening: Consume[OPST]
body: Tuple[Consume[IPST], ...]
closing: Consume[CPST]
delimiters = {}
@property
def start_line(self):
return self.opening.start_line
@property
def start_col(self):
return self.opening.start_col
@property
def end_line(self):
return self.closing.end_line
@property
def end_col(self):
return self.closing.end_col
def get_position(self):
return self.start_line, self.start_col, self.end_line, self.end_col
@property
def format_position(self):
if self.start_line is None:
return "N/A"
return "%d,%d-%d,%d" % self.get_position()
@classmethod
def subclass_with(cls, *, opening=None, body=None, closing=None):
@dataclass(frozen=True)
class CustomBlock(Block):
pass
if opening:
CustomBlock.__annotations__["opening"] = Single[ty.Union[opening]]
if body:
CustomBlock.__annotations__["body"] = Multi[ty.Union[body]]
if closing:
CustomBlock.__annotations__["closing"] = Single[ty.Union[closing]]
return CustomBlock
def __iter__(self) -> Iterator[Statement]:
yield self.opening
for el in self.body:
if isinstance(el, Block):
yield from el
else:
yield el
yield self.closing
def iter_blocks(self) -> Iterator[ty.Union[Block, Statement]]:
yield self.opening
yield from self.body
yield self.closing
###################################################
# Convenience methods to iterate parsed statements
###################################################
_ElementT = ty.TypeVar("_ElementT", bound=Statement)
def filter_by(self, *klass: Type[_ElementT]) -> Iterator[_ElementT]:
"""Yield elements of a given class or classes."""
yield from (el for el in self if isinstance(el, klass)) # noqa Bug in pycharm.
@cached_property
def errors(self) -> ty.Tuple[ParsingError, ...]:
"""Tuple of errors found."""
return tuple(self.filter_by(ParsingError))
@property
def has_errors(self) -> bool:
"""True if errors were found during parsing."""
return bool(self.errors)
####################
# Statement classes
####################
@classproperty
def opening_classes(cls) -> Iterator[Type[OPST]]:
"""Classes representing any of the parsed statement that can open this block."""
opening = ty.get_type_hints(cls)["opening"]
yield from _yield_types(opening, ParsedStatement)
@classproperty
def body_classes(cls) -> Iterator[Type[IPST]]:
"""Classes representing any of the parsed statement that can be in the body."""
body = ty.get_type_hints(cls)["body"]
yield from _yield_types(body, (ParsedStatement, Block))
@classproperty
def closing_classes(cls) -> Iterator[Type[CPST]]:
"""Classes representing any of the parsed statement that can close this block."""
closing = ty.get_type_hints(cls)["closing"]
yield from _yield_types(closing, ParsedStatement)
##########
# Consume
##########
@classmethod
def consume_opening(
cls: Type[BT], statement_iterator: StatementIterator, config: CT
) -> NullableConsume[OPST]:
"""Peek into the iterator and try to parse with any of the opening classes.
See `ParsedStatement.consume` for more details.
"""
for c in cls.opening_classes:
el = c.consume(statement_iterator, config)
if el is not None:
return el
return None
@classmethod
def consume_body(
cls, statement_iterator: StatementIterator, config: CT
) -> Consume[IPST]:
"""Peek into the iterator and try to parse with any of the body classes.
If the statement cannot be parsed, an UnknownStatement is returned.
"""
for c in cls.body_classes:
el = c.consume(statement_iterator, config)
if el is not None:
return el
el = next(statement_iterator)
return UnknownStatement.from_statement(el)
@classmethod
def consume_closing(
cls: Type[BT], statement_iterator: StatementIterator, config: CT
) -> NullableConsume[CPST]:
"""Peek into the iterator and try to parse with any of the opening classes.
See `ParsedStatement.consume` for more details.
"""
for c in cls.closing_classes:
el = c.consume(statement_iterator, config)
if el is not None:
return el
return None
@classmethod
def consume_body_closing(
cls: Type[BT], opening: OPST, statement_iterator: StatementIterator, config: CT
) -> BT:
body = []
closing = None
last_line = opening.end_line
while closing is None:
try:
closing = cls.consume_closing(statement_iterator, config)
if closing is not None:
continue
el = cls.consume_body(statement_iterator, config)
body.append(el)
last_line = el.end_line
except StopIteration:
closing = cls.on_stop_iteration(config)
closing.set_position(last_line + 1, 0, last_line + 1, 0)
return cls(opening, tuple(body), closing)
@classmethod
def consume(
cls: Type[BT], statement_iterator: StatementIterator, config: CT
) -> Optional[BT]:
"""Try consume the block.
Possible outcomes:
1. The opening was not matched, return None.
2. A subclass of Block, where the body and closing might contain errors.
"""
opening = cls.consume_opening(statement_iterator, config)
if opening is None:
return None
return cls.consume_body_closing(opening, statement_iterator, config)
@classmethod
def on_stop_iteration(cls, config):
return UnexpectedEOF()
@dataclass(frozen=True)
class BOS(ParsedStatement[CT]):
"""Beginning of source."""
# Hasher algorithm name and hexdigest
content_hash: Hash
@classmethod
def from_string_and_config(cls: Type[PST], s: str, config: CT) -> FromString[PST]:
raise RuntimeError("BOS cannot be constructed from_string_and_config")
@property
def location(self) -> SourceLocationT:
return "<undefined>"
@dataclass(frozen=True)
class BOF(BOS):
"""Beginning of file."""
path: pathlib.Path
# Modification time of the file.
mtime: float
@property
def location(self) -> SourceLocationT:
return self.path
@dataclass(frozen=True)
class BOR(BOS):
"""Beginning of resource."""
package: str
resource_name: str
@property
def location(self) -> SourceLocationT:
return self.package, self.resource_name
@dataclass(frozen=True)
class EOS(ParsedStatement[CT]):
"""End of sequence."""
@classmethod
def from_string_and_config(cls: Type[PST], s: str, config: CT) -> FromString[PST]:
return cls()
class RootBlock(ty.Generic[IPST, CT], Block[BOS, IPST, EOS, CT]):
"""A sequence of statement flanked by the beginning and ending of stream."""
opening: Single[BOS]
closing: Single[EOS]
@classmethod
def subclass_with(cls, *, body=None):
@dataclass(frozen=True)
class CustomRootBlock(RootBlock):
pass
if body:
CustomRootBlock.__annotations__["body"] = Multi[ty.Union[body]]
return CustomRootBlock
@classmethod
def consume_opening(
cls: Type[RBT], statement_iterator: StatementIterator, config: CT
) -> NullableConsume[BOS]:
raise RuntimeError(
"Implementation error, 'RootBlock.consume_opening' should never be called"
)
@classmethod
def consume(
cls: Type[RBT], statement_iterator: StatementIterator, config: CT
) -> RBT:
block = super().consume(statement_iterator, config)
if block is None:
raise RuntimeError(
"Implementation error, 'RootBlock.consume' should never return None"
)
return block
@classmethod
def consume_closing(
cls: Type[RBT], statement_iterator: StatementIterator, config: CT
) -> NullableConsume[EOS]:
return None
@classmethod
def on_stop_iteration(cls, config):
return EOS()
#################
# Source parsing
#################
ResourceT = ty.Tuple[str, str] # package name, resource name
StrictLocationT = ty.Union[pathlib.Path, ResourceT]
SourceLocationT = ty.Union[str, StrictLocationT]
@dataclass(frozen=True)
class ParsedSource(ty.Generic[RBT, CT]):
parsed_source: RBT
# Parser configuration.
config: CT
@property
def location(self) -> StrictLocationT:
return self.parsed_source.opening.location
@cached_property
def has_errors(self) -> bool:
return self.parsed_source.has_errors
def errors(self):
yield from self.parsed_source.errors
@dataclass(frozen=True)
class CannotParseResourceAsFile(Exception):
"""The requested python package resource cannot be located as a file
in the file system.
"""
package: str
resource_name: str
class Parser(ty.Generic[RBT, CT]):
"""Parser class."""
#: class to iterate through statements in a source unit.
_statement_iterator_class: Type[StatementIterator] = StatementIterator
#: Delimiters.
_delimiters: DelimiterDictT = SPLIT_EOL
_strip_spaces: bool = True
#: root block class containing the statements and blocks that can be parsed.
_root_block_class: Type[RBT]
#: source file text encoding.
_encoding = "utf-8"
#: configuration passed to from_string functions.
_config: CT
#: try to open resources as files.
_prefer_resource_as_file: bool
#: hashing algorithm to use. Must be a callable member of hashlib.
_hasher = hashlib.blake2b
def __init__(self, config: CT, prefer_resource_as_file=True):
self._config = config
self._prefer_resource_as_file = prefer_resource_as_file
def parse(self, source_location: SourceLocationT) -> ParsedSource[RBT, CT]:
"""Parse a file into a ParsedSourceFile or ParsedResource.
Parameters
----------
source_location:
if str or pathlib.Path is interpreted as a file.
if (str, str) is interpreted as (package, resource) using the resource python api.
"""
if isinstance(source_location, tuple) and len(source_location) == 2:
if self._prefer_resource_as_file:
try:
return self.parse_resource_from_file(*source_location)
except CannotParseResourceAsFile:
pass
return self.parse_resource(*source_location)
if isinstance(source_location, str):
return self.parse_file(pathlib.Path(source_location))
if isinstance(source_location, pathlib.Path):
return self.parse_file(source_location)
raise TypeError(
f"Unknown type {type(source_location)}, "
"use str or pathlib.Path for files or "
"(package: str, resource_name: str) tuple "
"for a resource."
)
def parse_bytes(self, b: bytes, bos: Optional[BOS] = None) -> ParsedSource[RBT, CT]:
if bos is None:
bos = BOS(Hash.from_bytes(self._hasher, b)).set_simple_position(0, 0, 0)
sic = self._statement_iterator_class(
b.decode(self._encoding), self._delimiters, self._strip_spaces
)
parsed = self._root_block_class.consume_body_closing(bos, sic, self._config)
return ParsedSource(
parsed,
self._config,
)
def parse_file(self, path: pathlib.Path) -> ParsedSource[RBT, CT]:
"""Parse a file into a ParsedSourceFile.
Parameters
----------
path
path of the file.
"""
with path.open(mode="rb") as fi:
content = fi.read()
bos = BOF(
Hash.from_bytes(self._hasher, content), path, path.stat().st_mtime
).set_simple_position(0, 0, 0)
return self.parse_bytes(content, bos)
def parse_resource_from_file(
self, package: str, resource_name: str
) -> ParsedSource[RBT, CT]:
"""Parse a resource into a ParsedSourceFile, opening as a file.
Parameters
----------
package
package name where the resource is located.
resource_name
name of the resource
"""
if sys.version_info < (3, 9):
# Remove when Python 3.8 is dropped
with resources.path(package, resource_name) as p:
path = p.resolve()
else:
with resources.as_file(
resources.files(package).joinpath(resource_name)
) as p:
path = p.resolve()
if path.exists():
return self.parse_file(path)
raise CannotParseResourceAsFile(package, resource_name)
def parse_resource(self, package: str, resource_name: str) -> ParsedSource[RBT, CT]:
"""Parse a resource into a ParsedResource.
Parameters
----------
package
package name where the resource is located.
resource_name
name of the resource
"""
if sys.version_info < (3, 9):
# Remove when Python 3.8 is dropped
with resources.open_binary(package, resource_name) as fi:
content = fi.read()
else:
with resources.files(package).joinpath(resource_name).open("rb") as fi:
content = fi.read()
bos = BOR(
Hash.from_bytes(self._hasher, content), package, resource_name
).set_simple_position(0, 0, 0)
return self.parse_bytes(content, bos)
##########
# Project
##########
class IncludeStatement(ParsedStatement):
""" "Include statements allow to merge files."""
@property
def target(self) -> str:
raise NotImplementedError(
"IncludeStatement subclasses must implement target property."
)
class ParsedProject(
ty.Dict[
ty.Optional[ty.Tuple[StrictLocationT, str]],
ParsedSource,
]
):
"""Collection of files, independent or connected via IncludeStatement.
Keys are either an absolute pathname or a (package name, resource name) tuple.
The root entry is keyed by None.
"""
@cached_property
def has_errors(self) -> bool:
return any(el.has_errors for el in self.values())
def errors(self):
for el in self.values():
yield from el.errors()
def _iter_statements(self, items, seen, include_only_once):
"""Iter all definitions in the order they appear,
going into the included files.
"""
for source_location, parsed in items:
seen.add(source_location)
for parsed_statement in parsed.parsed_source:
if isinstance(parsed_statement, IncludeStatement):
location = parsed.location, parsed_statement.target
if location in seen and include_only_once:
raise ValueError(f"{location} was already included.")
yield from self._iter_statements(
((location, self[location]),), seen, include_only_once
)
else:
yield parsed_statement
def iter_statements(self, include_only_once=True):
"""Iter all definitions in the order they appear,
going into the included files.
Parameters
----------
include_only_once
if true, each file cannot be included more than once.
"""
yield from self._iter_statements([(None, self[None])], set(), include_only_once)
def _iter_blocks(self, items, seen, include_only_once):
"""Iter all definitions in the order they appear,
going into the included files.
"""
for source_location, parsed in items:
seen.add(source_location)
for parsed_statement in parsed.parsed_source.iter_blocks():
if isinstance(parsed_statement, IncludeStatement):
location = parsed.location, parsed_statement.target
if location in seen and include_only_once:
raise ValueError(f"{location} was already included.")
yield from self._iter_blocks(
((location, self[location]),), seen, include_only_once
)
else:
yield parsed_statement
def iter_blocks(self, include_only_once=True):
"""Iter all definitions in the order they appear,
going into the included files.
Parameters
----------
include_only_once
if true, each file cannot be included more than once.
"""
yield from self._iter_blocks([(None, self[None])], set(), include_only_once)
def default_locator(source_location: StrictLocationT, target: str) -> StrictLocationT:
"""Return a new location from current_location and target."""
if isinstance(source_location, pathlib.Path):
current_location = pathlib.Path(source_location).resolve()
if current_location.is_file():
current_path = current_location.parent
else:
current_path = current_location
target_path = pathlib.Path(target)
if target_path.is_absolute():
raise ValueError(
f"Cannot refer to absolute paths in import statements ({source_location}, {target})."
)
tmp = (current_path / target_path).resolve()
if not is_relative_to(tmp, current_path):
raise ValueError(
f"Cannot refer to locations above the current location ({source_location}, {target})"
)
return tmp.absolute()
elif isinstance(source_location, tuple) and len(source_location) == 2:
return source_location[0], target
raise TypeError(
f"Cannot handle type {type(source_location)}, "
"use str or pathlib.Path for files or "
"(package: str, resource_name: str) tuple "
"for a resource."
)
DefinitionT = ty.Union[ty.Type[Block], ty.Type[ParsedStatement]]
SpecT = ty.Union[
ty.Type[Parser],
DefinitionT,
ty.Iterable[DefinitionT],
ty.Type[RootBlock],
]
def build_parser_class(spec: SpecT, *, strip_spaces: bool = True, delimiters=None):
"""Build a custom parser class.
Parameters
----------
spec
specification of the content to parse. Can be one of the following things:
- Parser class.
- Block or ParsedStatement derived class.
- Iterable of Block or ParsedStatement derived class.
- RootBlock derived class.
strip_spaces : bool
if True, spaces will be stripped for each statement before calling
``from_string_and_config``.
delimiters : dict
Specify how the source file is split into statements (See below).
Delimiters dictionary
---------------------
The delimiters are specified with the keys of the delimiters dict.
The dict values can be used to further customize the iterator. Each
consists of a tuple of two elements:
1. A value of the DelimiterMode to indicate what to do with the
delimiter string: skip it, or keep it attached to the previous or next string.
2. A boolean indicating whether parsing should stop after first
encountering this delimiter.
"""
if delimiters is None:
delimiters = SPLIT_EOL
if isinstance(spec, type) and issubclass(spec, Parser):
CustomParser = spec
else:
if isinstance(spec, (tuple, list)):
for el in spec:
if not issubclass(el, (Block, ParsedStatement)):
raise TypeError(
"Elements in root_block_class must be of type Block or ParsedStatement, "
f"not {el}"
)
@dataclass(frozen=True)
class CustomRootBlock(RootBlock):
pass
CustomRootBlock.__annotations__["body"] = Multi[ty.Union[spec]]
elif isinstance(spec, type) and issubclass(spec, RootBlock):
CustomRootBlock = spec
elif isinstance(spec, type) and issubclass(spec, (Block, ParsedStatement)):
@dataclass(frozen=True)
class CustomRootBlock(RootBlock):
pass
CustomRootBlock.__annotations__["body"] = Multi[spec]
else:
raise TypeError(
"`spec` must be of type RootBlock or tuple of type Block or ParsedStatement, "
f"not {type(spec)}"
)
class CustomParser(Parser):
_delimiters = delimiters
_root_block_class = CustomRootBlock
_strip_spaces = strip_spaces
return CustomParser
def parse(
entry_point: SourceLocationT,
spec: SpecT,
config=None,
*,
strip_spaces: bool = True,
delimiters=None,
locator: ty.Callable[[StrictLocationT, str], StrictLocationT] = default_locator,
prefer_resource_as_file: bool = True,
**extra_parser_kwargs,
) -> ParsedProject:
"""Parse sources into a ParsedProject dictionary.
Parameters
----------
entry_point
file or resource, given as (package_name, resource_name).
spec
specification of the content to parse. Can be one of the following things:
- Parser class.
- Block or ParsedStatement derived class.
- Iterable of Block or ParsedStatement derived class.
- RootBlock derived class.
config
a configuration object that will be passed to `from_string_and_config`
classmethod.
strip_spaces : bool
if True, spaces will be stripped for each statement before calling
``from_string_and_config``.
delimiters : dict
Specify how the source file is split into statements (See below).
locator : Callable
function that takes the current location and a target of an IncludeStatement
and returns a new location.
prefer_resource_as_file : bool
if True, resources will try to be located in the filesystem if
available.
extra_parser_kwargs
extra keyword arguments to be given to the parser.
Delimiters dictionary
---------------------
The delimiters are specified with the keys of the delimiters dict.
The dict values can be used to further customize the iterator. Each
consists of a tuple of two elements:
1. A value of the DelimiterMode to indicate what to do with the
delimiter string: skip it, or keep it attached to the previous or next string.
2. A boolean indicating whether parsing should stop after first
encountering this delimiter.
"""
CustomParser = build_parser_class(
spec, strip_spaces=strip_spaces, delimiters=delimiters
)
parser = CustomParser(
config, prefer_resource_as_file=prefer_resource_as_file, **extra_parser_kwargs
)
pp = ParsedProject()
# pending (source location, include target) pairs still to be parsed
pending: ty.List[ty.Tuple[StrictLocationT, str]] = []
if isinstance(entry_point, (str, pathlib.Path)):
entry_point = pathlib.Path(entry_point)
if not entry_point.is_absolute():
entry_point = pathlib.Path.cwd() / entry_point
elif not (isinstance(entry_point, tuple) and len(entry_point) == 2):
raise TypeError(
f"Cannot handle type {type(entry_point)}, "
"use str or pathlib.Path for files or "
"(package: str, resource_name: str) tuple "
"for a resource."
)
pp[None] = parsed = parser.parse(entry_point)
pending.extend(
(parsed.location, el.target)
for el in parsed.parsed_source.filter_by(IncludeStatement)
)
while pending:
source_location, target = pending.pop(0)
pp[(source_location, target)] = parsed = parser.parse(
locator(source_location, target)
)
pending.extend(
(parsed.location, el.target)
for el in parsed.parsed_source.filter_by(IncludeStatement)
)
return pp
def parse_bytes(
content: bytes,
spec: SpecT,
config=None,
*,
strip_spaces: bool = True,
delimiters=None,
**extra_parser_kwargs,
) -> ParsedProject:
"""Parse sources into a ParsedProject dictionary.
Parameters
----------
content
bytes.
spec
specification of the content to parse. Can be one of the following things:
- Parser class.
- Block or ParsedStatement derived class.
- Iterable of Block or ParsedStatement derived class.
- RootBlock derived class.
config
a configuration object that will be passed to `from_string_and_config`
classmethod.
strip_spaces : bool
if True, spaces will be stripped for each statement before calling
``from_string_and_config``.
delimiters : dict
Specify how the source file is split into statements (See below).
"""
CustomParser = build_parser_class(
spec, strip_spaces=strip_spaces, delimiters=delimiters
)
parser = CustomParser(config, prefer_resource_as_file=False, **extra_parser_kwargs)
pp = ParsedProject()
pp[None] = parsed = parser.parse_bytes(content)
if any(parsed.parsed_source.filter_by(IncludeStatement)):
raise ValueError("parse_bytes does not support using an IncludeStatement")
return pp
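# ---------------------------------------------------------------------------
# Minimal usage sketch (illustrative, not part of the library). It relies on
# the ``ParsedStatement.from_string_and_config`` contract used throughout this
# module: return None when a line is not handled by the statement class, or an
# instance when it is. ``Assignment`` is a hypothetical statement class.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    @dataclass(frozen=True)
    class Assignment(ParsedStatement):
        """Hypothetical ``name = value`` statement."""
        name: str
        value: str
        @classmethod
        def from_string_and_config(cls, s, config):
            if "=" not in s:
                return None
            lhs, rhs = (part.strip() for part in s.split("=", 1))
            return cls(lhs, rhs)
    project = parse_bytes(b"a = 1\nb = 2", Assignment)
    # iter_statements yields BOS, the parsed Assignment statements, then EOS.
    for statement in project.iter_statements():
        print(statement)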
|
PypiClean
|
/robot_grpc-0.0.4.tar.gz/robot_grpc-0.0.4/robot_grpc/node_modules/lodash/_baseSortedIndexBy.js
|
var isSymbol = require('./isSymbol');
/** Used as references for the maximum length and index of an array. */
var MAX_ARRAY_LENGTH = 4294967295,
MAX_ARRAY_INDEX = MAX_ARRAY_LENGTH - 1;
/* Built-in method references for those with the same name as other `lodash` methods. */
var nativeFloor = Math.floor,
nativeMin = Math.min;
/**
* The base implementation of `_.sortedIndexBy` and `_.sortedLastIndexBy`
* which invokes `iteratee` for `value` and each element of `array` to compute
* their sort ranking. The iteratee is invoked with one argument: (value).
*
* @private
* @param {Array} array The sorted array to inspect.
* @param {*} value The value to evaluate.
* @param {Function} iteratee The iteratee invoked per element.
* @param {boolean} [retHighest] Specify returning the highest qualified index.
* @returns {number} Returns the index at which `value` should be inserted
* into `array`.
*/
function baseSortedIndexBy(array, value, iteratee, retHighest) {
value = iteratee(value);
var low = 0,
high = array == null ? 0 : array.length,
valIsNaN = value !== value,
valIsNull = value === null,
valIsSymbol = isSymbol(value),
valIsUndefined = value === undefined;
while (low < high) {
var mid = nativeFloor((low + high) / 2),
computed = iteratee(array[mid]),
othIsDefined = computed !== undefined,
othIsNull = computed === null,
othIsReflexive = computed === computed,
othIsSymbol = isSymbol(computed);
if (valIsNaN) {
var setLow = retHighest || othIsReflexive;
} else if (valIsUndefined) {
setLow = othIsReflexive && (retHighest || othIsDefined);
} else if (valIsNull) {
setLow = othIsReflexive && othIsDefined && (retHighest || !othIsNull);
} else if (valIsSymbol) {
setLow = othIsReflexive && othIsDefined && !othIsNull && (retHighest || !othIsSymbol);
} else if (othIsNull || othIsSymbol) {
setLow = false;
} else {
setLow = retHighest ? (computed <= value) : (computed < value);
}
if (setLow) {
low = mid + 1;
} else {
high = mid;
}
}
return nativeMin(high, MAX_ARRAY_INDEX);
}
module.exports = baseSortedIndexBy;
|
PypiClean
|
/certora_cli_alpha_antti_learned_lemma_passing_cvl2-20230517.9.39.262005-py3-none-any.whl/certora_cli/EVMVerifier/certoraNodeFilters.py
|
from typing import Any, Dict
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from Shared.certoraUtils import NoValEnum
class NodeFilters:
class NodeType(NoValEnum):
def is_this_node_type(self, type_name_node: Dict[str, Any]) -> bool:
return type_name_node["nodeType"] == self.value
class TypeNameNode(NodeType):
ELEMENTARY = "ElementaryTypeName"
FUNCTION = "FunctionTypeName"
USER_DEFINED = "UserDefinedTypeName"
MAPPING = "Mapping"
ARRAY = "ArrayTypeName"
class UserDefinedTypeDefNode(NodeType):
ENUM = "EnumDefinition"
STRUCT = "StructDefinition"
VALUE_TYPE = "UserDefinedValueTypeDefinition"
CONTRACT = "ContractDefinition"
@staticmethod
def CERTORA_CONTRACT_NAME() -> str:
return "certora_contract_name"
@staticmethod
def is_enum_definition(node: Dict[str, Any]) -> bool:
return node["nodeType"] == "EnumDefinition"
@staticmethod
def is_struct_definition(node: Dict[str, Any]) -> bool:
return node["nodeType"] == "StructDefinition"
@staticmethod
def is_user_defined_value_type_definition(node: Dict[str, Any]) -> bool:
return node["nodeType"] == "UserDefinedValueTypeDefinition"
@staticmethod
def is_contract_definition(node: Dict[str, Any]) -> bool:
return node["nodeType"] == "ContractDefinition"
@staticmethod
def is_user_defined_type_definition(node: Dict[str, Any]) -> bool:
return NodeFilters.is_enum_definition(node) or NodeFilters.is_struct_definition(
node) or NodeFilters.is_user_defined_value_type_definition(node)
@staticmethod
def is_import(node: Dict[str, Any]) -> bool:
return node["nodeType"] == "ImportDirective"
@staticmethod
def is_defined_in_a_contract_or_library(node: Dict[str, Any]) -> bool:
return NodeFilters.CERTORA_CONTRACT_NAME() in node
@staticmethod
def is_defined_in_contract(node: Dict[str, Any], contract_name: str) -> bool:
return node[NodeFilters.CERTORA_CONTRACT_NAME()] == contract_name
|
PypiClean
|
/delphix_dct_api-9.0.0rc1-py3-none-any.whl/delphix/api/gateway/model/provision_vdbby_timestamp_parameters_all_of.py
|
import re # noqa: F401
import sys # noqa: F401
from delphix.api.gateway.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from ..model_utils import OpenApiModel
from delphix.api.gateway.exceptions import ApiAttributeError
class ProvisionVDBByTimestampParametersAllOf(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
('source_data_id',): {
'max_length': 256,
'min_length': 1,
},
('engine_id',): {
'max_length': 256,
'min_length': 1,
},
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'source_data_id': (str,), # noqa: E501
'engine_id': (str,), # noqa: E501
'make_current_account_owner': (bool,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'source_data_id': 'source_data_id', # noqa: E501
'engine_id': 'engine_id', # noqa: E501
'make_current_account_owner': 'make_current_account_owner', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, source_data_id, *args, **kwargs): # noqa: E501
"""ProvisionVDBByTimestampParametersAllOf - a model defined in OpenAPI
Args:
source_data_id (str): The ID of the source object (dSource or VDB) to provision from. All other objects referenced by the parameters must live on the same engine as the source.
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
engine_id (str): The ID of the Engine onto which to provision. If the source ID unambiguously identifies a source object, this parameter is unnecessary and ignored.. [optional] # noqa: E501
make_current_account_owner (bool): Whether the account provisioning this VDB must be configured as owner of the VDB.. [optional] if omitted the server will use the default value of True # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.source_data_id = source_data_id
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, source_data_id, *args, **kwargs): # noqa: E501
"""ProvisionVDBByTimestampParametersAllOf - a model defined in OpenAPI
Args:
source_data_id (str): The ID of the source object (dSource or VDB) to provision from. All other objects referenced by the parameters must live on the same engine as the source.
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
engine_id (str): The ID of the Engine onto which to provision. If the source ID unambiguously identifies a source object, this parameter is unnecessary and ignored.. [optional] # noqa: E501
make_current_account_owner (bool): Whether the account provisioning this VDB must be configured as owner of the VDB.. [optional] if omitted the server will use the default value of True # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.source_data_id = source_data_id
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")
|
PypiClean
|
/PyOpenGL-Demo-3.1.1.tar.gz/PyOpenGL-Demo-3.1.1/OpenGL_Demo/proesch/simpleTexture/texturedQuad.py
|
from __future__ import absolute_import
from __future__ import print_function
import sys
import array
from PIL import Image
import random
from six.moves import range
try:
from OpenGL.GLUT import *
from OpenGL.GL import *
from OpenGL.GLU import *
except:
print(""" Error PyOpenGL not installed properly !!""")
sys.exit()
def image_as_bytes(im, *args, **named):
return (
im.tobytes(*args, **named)
if hasattr(im, "tobytes")
else im.tostring(*args, **named)
)
class Texture(object):
"""Texture either loaded from a file or initialised with random colors."""
def __init__(self):
self.xSize, self.ySize = 0, 0
self.rawReference = None
class RandomTexture(Texture):
"""Image with random RGB values."""
def __init__(self, xSizeP, ySizeP):
self.xSize, self.ySize = xSizeP, ySizeP
tmpList = [random.randint(0, 255) for i in range(3 * self.xSize * self.ySize)]
self.textureArray = array.array("B", tmpList)
self.rawReference = image_as_bytes(self.textureArray)
class FileTexture(Texture):
"""Texture loaded from a file."""
def __init__(self, fileName):
im = Image.open(fileName)
self.xSize = im.size[0]
self.ySize = im.size[1]
self.rawReference = im.tobytes("raw", "RGB", 0, -1)
def display():
"""Glut display function."""
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glColor3f(1, 1, 1)
glBegin(GL_QUADS)
glTexCoord2f(0, 1)
glVertex3f(-0.5, 0.5, 0)
glTexCoord2f(0, 0)
glVertex3f(-0.5, -0.5, 0)
glTexCoord2f(1, 0)
glVertex3f(0.5, -0.5, 0)
glTexCoord2f(1, 1)
glVertex3f(0.5, 0.5, 0)
glEnd()
glutSwapBuffers()
def init(fileName):
"""Glut init function."""
try:
texture = FileTexture(fileName)
except:
print("could not open ", fileName, "; using random texture")
texture = RandomTexture(256, 256)
glClearColor(0, 0, 0, 0)
glShadeModel(GL_SMOOTH)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(
GL_TEXTURE_2D,
0,
3,
texture.xSize,
texture.ySize,
0,
GL_RGB,
GL_UNSIGNED_BYTE,
texture.rawReference,
)
glEnable(GL_TEXTURE_2D)
glutInit(sys.argv)
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutInitWindowSize(250, 250)
glutInitWindowPosition(100, 100)
glutCreateWindow(sys.argv[0])
if len(sys.argv) > 1:
init(sys.argv[1])
else:
init(None)
glutDisplayFunc(display)
glutMainLoop()
|
PypiClean
|
/microt_preprocessing-0.1.61-py3-none-any.whl/time_study_preprocessing_main/preprocess_scripts/phone/daily_notes/parse.py
|
from os import sep
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
def parse_raw_df(df_hour_raw):
columns_names = ["LOG_TIME", "LOG_TYPE", "PARTICIPANT_ID", "FILE_NAME", "NOTES"]
df_burst_status = pd.DataFrame()
df_dnd_status = pd.DataFrame()
if df_hour_raw.shape[0] > 0:
df_hour_raw.fillna('-1', inplace=True)
df_hour_raw.columns = columns_names
# take independent copies so the three result frames do not alias each other
df_burst_status = df_hour_raw.copy()
df_dnd_status = df_hour_raw.copy()
df_hour_raw = df_hour_raw[~df_hour_raw["NOTES"].str.contains("Audio_profile")]
# df_hour_raw['LOG_TIME'] = [x + " " + time_zone for x in df_hour_raw['LOG_TIME']]
df_hour_raw[['SLEEP_OR_WAKE', 'SLEEP_OR_WAKE_TIME']] = df_hour_raw.NOTES.str.split("|", expand=True)
df_hour_raw['SLEEP_OR_WAKE'] = df_hour_raw['SLEEP_OR_WAKE'].str.strip()
df_hour_raw['SLEEP_OR_WAKE_TIME'] = df_hour_raw['SLEEP_OR_WAKE_TIME'].str.lstrip()
df_hour_raw = df_hour_raw[(df_hour_raw['SLEEP_OR_WAKE'] == 'Current sleep time') |
(df_hour_raw['SLEEP_OR_WAKE'] == 'Next wake time') |
(df_hour_raw['SLEEP_OR_WAKE'] == 'Current wake time')]
df_hour_raw = df_hour_raw.drop('LOG_TYPE', axis=1)
df_hour_raw = df_hour_raw.drop('FILE_NAME', axis=1)
df_hour_raw = df_hour_raw.drop('NOTES', axis=1)
df_hour_raw = df_hour_raw[['PARTICIPANT_ID', 'LOG_TIME', 'SLEEP_OR_WAKE', 'SLEEP_OR_WAKE_TIME']]
df_burst_status = df_burst_status[~df_burst_status["NOTES"].str.contains("Audio_profile")]
df_burst_status[['DAY_TYPE', 'IS_BURST']] = df_burst_status.NOTES.str.split("|", expand=True)
df_burst_status['DAY_TYPE'] = df_burst_status['DAY_TYPE'].str.strip()
df_burst_status['IS_BURST'] = df_burst_status['IS_BURST'].str.strip()
df_burst_status = df_burst_status[(df_burst_status['DAY_TYPE'] == 'BURST mode')]
# print("No of rows: " + str(df_burst_status.shape[0]))
if df_burst_status.shape[0] != 0:
df_burst_status = df_burst_status.drop('LOG_TYPE', axis=1)
df_burst_status = df_burst_status.drop('FILE_NAME', axis=1)
df_burst_status = df_burst_status.drop('NOTES', axis=1)
df_burst_status = df_burst_status.drop('DAY_TYPE', axis=1)
df_burst_status.loc[(df_burst_status.IS_BURST == 'true'), 'IS_BURST_DAY'] = 'True'
df_burst_status.loc[(df_burst_status.IS_BURST == 'false'), 'IS_BURST_DAY'] = 'False'
df_burst_status = df_burst_status.drop('IS_BURST', axis=1)
df_burst_status = df_burst_status[['PARTICIPANT_ID', 'LOG_TIME', 'IS_BURST_DAY']]
df_dnd_status = df_dnd_status[~df_dnd_status["NOTES"].str.contains("Audio_profile")]
df_dnd_status[['DND', 'IS_DND']] = df_dnd_status.NOTES.str.split("|", expand=True)
df_dnd_status['DND'] = df_dnd_status['DND'].str.strip()
df_dnd_status['IS_DND'] = df_dnd_status['IS_DND'].str.strip()
df_dnd_status = df_dnd_status[(df_dnd_status['DND'] == 'DND status')]
if df_dnd_status.shape[0] != 0:
df_dnd_status = df_dnd_status.drop('LOG_TYPE', axis=1)
df_dnd_status = df_dnd_status.drop('FILE_NAME', axis=1)
df_dnd_status = df_dnd_status.drop('NOTES', axis=1)
df_dnd_status = df_dnd_status.drop('DND', axis=1)
df_dnd_status.loc[(df_dnd_status.IS_DND == '1'), 'IS_IN_DND'] = 'False'
df_dnd_status.loc[(df_dnd_status.IS_DND != '1'), 'IS_IN_DND'] = 'True'
df_dnd_status = df_dnd_status.drop('IS_DND', axis=1)
df_dnd_status = df_dnd_status[['PARTICIPANT_ID', 'LOG_TIME', 'IS_IN_DND']]
return df_hour_raw, df_burst_status, df_dnd_status
if __name__ == "__main__":
target_file = "MicroTUploadManagerServiceNotes.log.csv"
hour_folder_path = r"E:\data\wocket\Wockets-win32-x64\resources\app\src\srv\MICROT\aditya4_internal@timestudy_com\data-watch\2020-06-02\02-EDT"
target_file_path = hour_folder_path + sep + target_file
output_file_path = r"C:\Users\jixin\Desktop\temp\temp_file.csv"
p_id = "aditya4_internal@timestudy_com"
df_hour_raw = pd.read_csv(target_file_path)
print(df_hour_raw)
df_hour_parsed, df_burst_status, df_dnd_status = parse_raw_df(df_hour_raw)
df_hour_parsed.to_csv(output_file_path, index=False)
|
PypiClean
|
/python-docx-2023-0.2.17.tar.gz/python-docx-2023-0.2.17/docx/oxml/coreprops.py
|
from __future__ import (
absolute_import, division, print_function, unicode_literals
)
import re
from datetime import datetime, timedelta
from docx.compat import is_string
from docx.oxml import parse_xml
from docx.oxml.ns import nsdecls, qn
from docx.oxml.xmlchemy import BaseOxmlElement, ZeroOrOne
class CT_CoreProperties(BaseOxmlElement):
"""
``<cp:coreProperties>`` element, the root element of the Core Properties
part stored as ``/docProps/core.xml``. Implements many of the Dublin Core
document metadata elements. String elements resolve to an empty string
('') if the element is not present in the XML. String elements are
limited in length to 255 unicode characters.
"""
category = ZeroOrOne('cp:category', successors=())
contentStatus = ZeroOrOne('cp:contentStatus', successors=())
created = ZeroOrOne('dcterms:created', successors=())
creator = ZeroOrOne('dc:creator', successors=())
description = ZeroOrOne('dc:description', successors=())
identifier = ZeroOrOne('dc:identifier', successors=())
keywords = ZeroOrOne('cp:keywords', successors=())
language = ZeroOrOne('dc:language', successors=())
lastModifiedBy = ZeroOrOne('cp:lastModifiedBy', successors=())
lastPrinted = ZeroOrOne('cp:lastPrinted', successors=())
modified = ZeroOrOne('dcterms:modified', successors=())
revision = ZeroOrOne('cp:revision', successors=())
subject = ZeroOrOne('dc:subject', successors=())
title = ZeroOrOne('dc:title', successors=())
version = ZeroOrOne('cp:version', successors=())
_coreProperties_tmpl = (
'<cp:coreProperties %s/>\n' % nsdecls('cp', 'dc', 'dcterms')
)
@classmethod
def new(cls):
"""
Return a new ``<cp:coreProperties>`` element
"""
xml = cls._coreProperties_tmpl
coreProperties = parse_xml(xml)
return coreProperties
@property
def author_text(self):
"""
The text in the `dc:creator` child element.
"""
return self._text_of_element('creator')
@author_text.setter
def author_text(self, value):
self._set_element_text('creator', value)
@property
def category_text(self):
return self._text_of_element('category')
@category_text.setter
def category_text(self, value):
self._set_element_text('category', value)
@property
def comments_text(self):
return self._text_of_element('description')
@comments_text.setter
def comments_text(self, value):
self._set_element_text('description', value)
@property
def contentStatus_text(self):
return self._text_of_element('contentStatus')
@contentStatus_text.setter
def contentStatus_text(self, value):
self._set_element_text('contentStatus', value)
@property
def created_datetime(self):
return self._datetime_of_element('created')
@created_datetime.setter
def created_datetime(self, value):
self._set_element_datetime('created', value)
@property
def identifier_text(self):
return self._text_of_element('identifier')
@identifier_text.setter
def identifier_text(self, value):
self._set_element_text('identifier', value)
@property
def keywords_text(self):
return self._text_of_element('keywords')
@keywords_text.setter
def keywords_text(self, value):
self._set_element_text('keywords', value)
@property
def language_text(self):
return self._text_of_element('language')
@language_text.setter
def language_text(self, value):
self._set_element_text('language', value)
@property
def lastModifiedBy_text(self):
return self._text_of_element('lastModifiedBy')
@lastModifiedBy_text.setter
def lastModifiedBy_text(self, value):
self._set_element_text('lastModifiedBy', value)
@property
def lastPrinted_datetime(self):
return self._datetime_of_element('lastPrinted')
@lastPrinted_datetime.setter
def lastPrinted_datetime(self, value):
self._set_element_datetime('lastPrinted', value)
@property
def modified_datetime(self):
return self._datetime_of_element('modified')
@modified_datetime.setter
def modified_datetime(self, value):
self._set_element_datetime('modified', value)
@property
def revision_number(self):
"""
Integer value of revision property.
"""
revision = self.revision
if revision is None:
return 0
revision_str = revision.text
try:
revision = int(revision_str)
except ValueError:
# non-integer revision strings also resolve to 0
revision = 0
# as do negative integers
if revision < 0:
revision = 0
return revision
@revision_number.setter
def revision_number(self, value):
"""
Set revision property to string value of integer *value*.
"""
if not isinstance(value, int) or value < 1:
tmpl = "revision property requires positive int, got '%s'"
raise ValueError(tmpl % value)
revision = self.get_or_add_revision()
revision.text = str(value)
@property
def subject_text(self):
return self._text_of_element('subject')
@subject_text.setter
def subject_text(self, value):
self._set_element_text('subject', value)
@property
def title_text(self):
return self._text_of_element('title')
@title_text.setter
def title_text(self, value):
self._set_element_text('title', value)
@property
def version_text(self):
return self._text_of_element('version')
@version_text.setter
def version_text(self, value):
self._set_element_text('version', value)
def _datetime_of_element(self, property_name):
element = getattr(self, property_name)
if element is None:
return None
datetime_str = element.text
try:
return self._parse_W3CDTF_to_datetime(datetime_str)
except ValueError:
# invalid datetime strings are ignored
return None
def _get_or_add(self, prop_name):
"""
Return element returned by 'get_or_add_' method for *prop_name*.
"""
get_or_add_method_name = 'get_or_add_%s' % prop_name
get_or_add_method = getattr(self, get_or_add_method_name)
element = get_or_add_method()
return element
@classmethod
def _offset_dt(cls, dt, offset_str):
"""
Return a |datetime| instance that is offset from datetime *dt* by
the timezone offset specified in *offset_str*, a string like
``'-07:00'``.
"""
match = cls._offset_pattern.match(offset_str)
if match is None:
raise ValueError(
"'%s' is not a valid offset string" % offset_str
)
sign, hours_str, minutes_str = match.groups()
sign_factor = -1 if sign == '+' else 1
hours = int(hours_str) * sign_factor
minutes = int(minutes_str) * sign_factor
td = timedelta(hours=hours, minutes=minutes)
return dt + td
_offset_pattern = re.compile(r'([+-])(\d\d):(\d\d)')
@classmethod
def _parse_W3CDTF_to_datetime(cls, w3cdtf_str):
# valid W3CDTF date cases:
# yyyy e.g. '2003'
# yyyy-mm e.g. '2003-12'
# yyyy-mm-dd e.g. '2003-12-31'
# UTC timezone e.g. '2003-12-31T10:14:55Z'
# numeric timezone e.g. '2003-12-31T10:14:55-08:00'
templates = (
'%Y-%m-%dT%H:%M:%S',
'%Y-%m-%d',
'%Y-%m',
'%Y',
)
# strptime isn't smart enough to parse literal timezone offsets like
# '-07:30', so we have to do it ourselves
parseable_part = w3cdtf_str[:19]
offset_str = w3cdtf_str[19:]
dt = None
for tmpl in templates:
try:
dt = datetime.strptime(parseable_part, tmpl)
except ValueError:
continue
if dt is None:
tmpl = "could not parse W3CDTF datetime string '%s'"
raise ValueError(tmpl % w3cdtf_str)
if len(offset_str) == 6:
return cls._offset_dt(dt, offset_str)
return dt
def _set_element_datetime(self, prop_name, value):
"""
Set date/time value of child element having *prop_name* to *value*.
"""
if not isinstance(value, datetime):
tmpl = (
"property requires <type 'datetime.datetime'> object, got %s"
)
raise ValueError(tmpl % type(value))
element = self._get_or_add(prop_name)
dt_str = value.strftime('%Y-%m-%dT%H:%M:%SZ')
element.text = dt_str
if prop_name in ('created', 'modified'):
# These two require an explicit 'xsi:type="dcterms:W3CDTF"'
# attribute. The first and last line are a hack required to add
# the xsi namespace to the root element rather than each child
# element in which it is referenced
self.set(qn('xsi:foo'), 'bar')
element.set(qn('xsi:type'), 'dcterms:W3CDTF')
del self.attrib[qn('xsi:foo')]
def _set_element_text(self, prop_name, value):
"""Set string value of *name* property to *value*."""
if not is_string(value):
value = str(value)
if len(value) > 255:
tmpl = (
"exceeded 255 char limit for property, got:\n\n'%s'"
)
raise ValueError(tmpl % value)
element = self._get_or_add(prop_name)
element.text = value
def _text_of_element(self, property_name):
"""
Return the text in the element matching *property_name*, or an empty
string if the element is not present or contains no text.
"""
element = getattr(self, property_name)
if element is None:
return ''
if element.text is None:
return ''
return element.text
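# Minimal usage sketch (illustrative): build an empty core-properties element
# and exercise a few of the typed accessors above. Assumes the python-docx
# oxml helpers imported at the top of this module behave as in python-docx.
if __name__ == "__main__":
    props = CT_CoreProperties.new()
    props.title_text = "Example document"
    props.revision_number = 3
    props.created_datetime = datetime(2024, 1, 1, 12, 0, 0)
    print(props.title_text, props.revision_number, props.created_datetime)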
|
PypiClean
|
/alipay-sdk-python-pycryptodome-3.3.202.tar.gz/alipay-sdk-python-pycryptodome-3.3.202/alipay/aop/api/domain/AlipayDataDataserviceAntdataassetsUploadjobCreateModel.py
|
import json
from alipay.aop.api.constant.ParamConstants import *
from alipay.aop.api.domain.AntdataassetsOdpsColumn import AntdataassetsOdpsColumn
class AlipayDataDataserviceAntdataassetsUploadjobCreateModel(object):
def __init__(self):
self._guid = None
self._odps_columns = None
@property
def guid(self):
return self._guid
@guid.setter
def guid(self, value):
self._guid = value
@property
def odps_columns(self):
return self._odps_columns
@odps_columns.setter
def odps_columns(self, value):
if isinstance(value, list):
self._odps_columns = list()
for i in value:
if isinstance(i, AntdataassetsOdpsColumn):
self._odps_columns.append(i)
else:
self._odps_columns.append(AntdataassetsOdpsColumn.from_alipay_dict(i))
def to_alipay_dict(self):
params = dict()
if self.guid:
if hasattr(self.guid, 'to_alipay_dict'):
params['guid'] = self.guid.to_alipay_dict()
else:
params['guid'] = self.guid
if self.odps_columns:
if isinstance(self.odps_columns, list):
for i in range(0, len(self.odps_columns)):
element = self.odps_columns[i]
if hasattr(element, 'to_alipay_dict'):
self.odps_columns[i] = element.to_alipay_dict()
if hasattr(self.odps_columns, 'to_alipay_dict'):
params['odps_columns'] = self.odps_columns.to_alipay_dict()
else:
params['odps_columns'] = self.odps_columns
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = AlipayDataDataserviceAntdataassetsUploadjobCreateModel()
if 'guid' in d:
o.guid = d['guid']
if 'odps_columns' in d:
o.odps_columns = d['odps_columns']
return o
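# Minimal usage sketch (illustrative): round-trip a plain dict through the
# model helpers. The guid value is made up for demonstration.
if __name__ == "__main__":
    model = AlipayDataDataserviceAntdataassetsUploadjobCreateModel.from_alipay_dict(
        {"guid": "odps.example_project.example_table"}
    )
    print(model.guid)
    print(json.dumps(model.to_alipay_dict()))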
|
PypiClean
|
/mitmproxy_lin_customization-5.2.2.1.tar.gz/mitmproxy_lin_customization-5.2.2.1/pathod/protocols/http2.py
|
import itertools
import time
import hyperframe.frame
from hpack.hpack import Encoder, Decoder
from mitmproxy.net.http import http2
import mitmproxy.net.http.headers
import mitmproxy.net.http.response
import mitmproxy.net.http.request
from mitmproxy.coretypes import bidi
from .. import language
class TCPHandler:
def __init__(self, rfile, wfile=None):
self.rfile = rfile
self.wfile = wfile
class HTTP2StateProtocol:
ERROR_CODES = bidi.BiDi(
NO_ERROR=0x0,
PROTOCOL_ERROR=0x1,
INTERNAL_ERROR=0x2,
FLOW_CONTROL_ERROR=0x3,
SETTINGS_TIMEOUT=0x4,
STREAM_CLOSED=0x5,
FRAME_SIZE_ERROR=0x6,
REFUSED_STREAM=0x7,
CANCEL=0x8,
COMPRESSION_ERROR=0x9,
CONNECT_ERROR=0xa,
ENHANCE_YOUR_CALM=0xb,
INADEQUATE_SECURITY=0xc,
HTTP_1_1_REQUIRED=0xd
)
CLIENT_CONNECTION_PREFACE = b'PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n'
HTTP2_DEFAULT_SETTINGS = {
hyperframe.frame.SettingsFrame.HEADER_TABLE_SIZE: 4096,
hyperframe.frame.SettingsFrame.ENABLE_PUSH: 1,
hyperframe.frame.SettingsFrame.MAX_CONCURRENT_STREAMS: None,
hyperframe.frame.SettingsFrame.INITIAL_WINDOW_SIZE: 2 ** 16 - 1,
hyperframe.frame.SettingsFrame.MAX_FRAME_SIZE: 2 ** 14,
hyperframe.frame.SettingsFrame.MAX_HEADER_LIST_SIZE: None,
}
def __init__(
self,
tcp_handler=None,
rfile=None,
wfile=None,
is_server=False,
dump_frames=False,
encoder=None,
decoder=None,
unhandled_frame_cb=None,
):
self.tcp_handler = tcp_handler or TCPHandler(rfile, wfile)
self.is_server = is_server
self.dump_frames = dump_frames
self.encoder = encoder or Encoder()
self.decoder = decoder or Decoder()
self.unhandled_frame_cb = unhandled_frame_cb
self.http2_settings = self.HTTP2_DEFAULT_SETTINGS.copy()
self.current_stream_id = None
self.connection_preface_performed = False
def read_request(
self,
__rfile,
include_body=True,
body_size_limit=None,
allow_empty=False,
):
if body_size_limit is not None:
raise NotImplementedError()
self.perform_connection_preface()
timestamp_start = time.time()
if hasattr(self.tcp_handler.rfile, "reset_timestamps"):
self.tcp_handler.rfile.reset_timestamps()
stream_id, headers, body = self._receive_transmission(
include_body=include_body,
)
if hasattr(self.tcp_handler.rfile, "first_byte_timestamp"):
# more accurate timestamp_start
timestamp_start = self.tcp_handler.rfile.first_byte_timestamp
timestamp_end = time.time()
first_line_format, method, scheme, host, port, path = http2.parse_headers(headers)
request = mitmproxy.net.http.request.Request(
first_line_format,
method,
scheme,
host,
port,
path,
b"HTTP/2.0",
headers,
body,
None,
timestamp_start=timestamp_start,
timestamp_end=timestamp_end,
)
request.stream_id = stream_id
return request
def read_response(
self,
__rfile,
request_method=b'',
body_size_limit=None,
include_body=True,
stream_id=None,
):
if body_size_limit is not None:
raise NotImplementedError()
self.perform_connection_preface()
timestamp_start = time.time()
if hasattr(self.tcp_handler.rfile, "reset_timestamps"):
self.tcp_handler.rfile.reset_timestamps()
stream_id, headers, body = self._receive_transmission(
stream_id=stream_id,
include_body=include_body,
)
if hasattr(self.tcp_handler.rfile, "first_byte_timestamp"):
# more accurate timestamp_start
timestamp_start = self.tcp_handler.rfile.first_byte_timestamp
if include_body:
timestamp_end = time.time()
else:
timestamp_end = None
response = mitmproxy.net.http.response.Response(
b"HTTP/2.0",
int(headers.get(':status', 502)),
b'',
headers,
body,
timestamp_start=timestamp_start,
timestamp_end=timestamp_end,
)
response.stream_id = stream_id
return response
def assemble(self, message):
if isinstance(message, mitmproxy.net.http.request.Request):
return self.assemble_request(message)
elif isinstance(message, mitmproxy.net.http.response.Response):
return self.assemble_response(message)
else:
raise ValueError("HTTP message not supported.")
def assemble_request(self, request):
assert isinstance(request, mitmproxy.net.http.request.Request)
authority = self.tcp_handler.sni if self.tcp_handler.sni else self.tcp_handler.address[0]
if self.tcp_handler.address[1] != 443:
authority += ":%d" % self.tcp_handler.address[1]
headers = request.headers.copy()
if ':authority' not in headers:
headers.insert(0, ':authority', authority)
headers.insert(0, ':scheme', request.scheme)
headers.insert(0, ':path', request.path)
headers.insert(0, ':method', request.method)
if hasattr(request, 'stream_id'):
stream_id = request.stream_id
else:
stream_id = self._next_stream_id()
return list(itertools.chain(
self._create_headers(headers, stream_id, end_stream=(request.content is None or len(request.content) == 0)),
self._create_body(request.content, stream_id)))
def assemble_response(self, response):
assert isinstance(response, mitmproxy.net.http.response.Response)
headers = response.headers.copy()
if ':status' not in headers:
headers.insert(0, b':status', str(response.status_code).encode())
if hasattr(response, 'stream_id'):
stream_id = response.stream_id
else:
stream_id = self._next_stream_id()
return list(itertools.chain(
self._create_headers(headers, stream_id, end_stream=(response.content is None or len(response.content) == 0)),
self._create_body(response.content, stream_id),
))
def perform_connection_preface(self, force=False):
if force or not self.connection_preface_performed:
if self.is_server:
self.perform_server_connection_preface(force)
else:
self.perform_client_connection_preface(force)
def perform_server_connection_preface(self, force=False):
if force or not self.connection_preface_performed:
self.connection_preface_performed = True
magic_length = len(self.CLIENT_CONNECTION_PREFACE)
magic = self.tcp_handler.rfile.safe_read(magic_length)
assert magic == self.CLIENT_CONNECTION_PREFACE
frm = hyperframe.frame.SettingsFrame(settings={
hyperframe.frame.SettingsFrame.ENABLE_PUSH: 0,
hyperframe.frame.SettingsFrame.MAX_CONCURRENT_STREAMS: 1,
})
self.send_frame(frm, hide=True)
self._receive_settings(hide=True)
def perform_client_connection_preface(self, force=False):
if force or not self.connection_preface_performed:
self.connection_preface_performed = True
self.tcp_handler.wfile.write(self.CLIENT_CONNECTION_PREFACE)
self.send_frame(hyperframe.frame.SettingsFrame(), hide=True)
self._receive_settings(hide=True) # server announces own settings
self._receive_settings(hide=True) # server acks my settings
def send_frame(self, frm, hide=False):
raw_bytes = frm.serialize()
self.tcp_handler.wfile.write(raw_bytes)
self.tcp_handler.wfile.flush()
if not hide and self.dump_frames: # pragma: no cover
print(">> " + repr(frm))
def read_frame(self, hide=False):
while True:
frm = http2.parse_frame(*http2.read_raw_frame(self.tcp_handler.rfile))
if not hide and self.dump_frames: # pragma: no cover
print("<< " + repr(frm))
if isinstance(frm, hyperframe.frame.PingFrame):
raw_bytes = hyperframe.frame.PingFrame(flags=['ACK'], payload=frm.payload).serialize()
self.tcp_handler.wfile.write(raw_bytes)
self.tcp_handler.wfile.flush()
continue
if isinstance(frm, hyperframe.frame.SettingsFrame) and 'ACK' not in frm.flags:
self._apply_settings(frm.settings, hide)
if isinstance(frm, hyperframe.frame.DataFrame) and frm.flow_controlled_length > 0:
self._update_flow_control_window(frm.stream_id, frm.flow_controlled_length)
return frm
def check_alpn(self):
alp = self.tcp_handler.get_alpn_proto_negotiated()
if alp != b'h2':
raise NotImplementedError(
"HTTP2Protocol can not handle unknown ALPN value: %s" % alp)
return True
def _handle_unexpected_frame(self, frm):
if isinstance(frm, hyperframe.frame.SettingsFrame):
return
if self.unhandled_frame_cb:
self.unhandled_frame_cb(frm)
def _receive_settings(self, hide=False):
while True:
frm = self.read_frame(hide)
if isinstance(frm, hyperframe.frame.SettingsFrame):
break
else:
self._handle_unexpected_frame(frm)
def _next_stream_id(self):
if self.current_stream_id is None:
if self.is_server:
# servers must use even stream ids
self.current_stream_id = 2
else:
# clients must use odd stream ids
self.current_stream_id = 1
else:
self.current_stream_id += 2
return self.current_stream_id
def _apply_settings(self, settings, hide=False):
for setting, value in settings.items():
old_value = self.http2_settings[setting]
if not old_value:
old_value = '-'
self.http2_settings[setting] = value
frm = hyperframe.frame.SettingsFrame(flags=['ACK'])
self.send_frame(frm, hide)
def _update_flow_control_window(self, stream_id, increment):
frm = hyperframe.frame.WindowUpdateFrame(stream_id=0, window_increment=increment)
self.send_frame(frm)
frm = hyperframe.frame.WindowUpdateFrame(stream_id=stream_id, window_increment=increment)
self.send_frame(frm)
def _create_headers(self, headers, stream_id, end_stream=True):
def frame_cls(chunks):
for i in chunks:
if i == 0:
yield hyperframe.frame.HeadersFrame, i
else:
yield hyperframe.frame.ContinuationFrame, i
header_block_fragment = self.encoder.encode(headers.fields)
chunk_size = self.http2_settings[hyperframe.frame.SettingsFrame.MAX_FRAME_SIZE]
chunks = range(0, len(header_block_fragment), chunk_size)
frms = [frm_cls(
flags=[],
stream_id=stream_id,
data=header_block_fragment[i:i + chunk_size]) for frm_cls, i in frame_cls(chunks)]
frms[-1].flags.add('END_HEADERS')
if end_stream:
frms[0].flags.add('END_STREAM')
if self.dump_frames: # pragma: no cover
for frm in frms:
print(">> ", repr(frm))
return [frm.serialize() for frm in frms]
def _create_body(self, body, stream_id):
if body is None or len(body) == 0:
return b''
chunk_size = self.http2_settings[hyperframe.frame.SettingsFrame.MAX_FRAME_SIZE]
chunks = range(0, len(body), chunk_size)
frms = [hyperframe.frame.DataFrame(
flags=[],
stream_id=stream_id,
data=body[i:i + chunk_size]) for i in chunks]
frms[-1].flags.add('END_STREAM')
if self.dump_frames: # pragma: no cover
for frm in frms:
print(">> ", repr(frm))
return [frm.serialize() for frm in frms]
def _receive_transmission(self, stream_id=None, include_body=True):
if not include_body:
raise NotImplementedError()
body_expected = True
header_blocks = b''
body = b''
while True:
frm = self.read_frame()
if (
(isinstance(frm, hyperframe.frame.HeadersFrame) or isinstance(frm, hyperframe.frame.ContinuationFrame)) and
(stream_id is None or frm.stream_id == stream_id)
):
stream_id = frm.stream_id
header_blocks += frm.data
if 'END_STREAM' in frm.flags:
body_expected = False
if 'END_HEADERS' in frm.flags:
break
else:
self._handle_unexpected_frame(frm)
while body_expected:
frm = self.read_frame()
if isinstance(frm, hyperframe.frame.DataFrame) and frm.stream_id == stream_id:
body += frm.data
if 'END_STREAM' in frm.flags:
break
else:
self._handle_unexpected_frame(frm)
headers = mitmproxy.net.http.headers.Headers(
[[k, v] for k, v in self.decoder.decode(header_blocks, raw=True)]
)
return stream_id, headers, body
class HTTP2Protocol:
def __init__(self, pathod_handler):
self.pathod_handler = pathod_handler
self.wire_protocol = HTTP2StateProtocol(
self.pathod_handler, is_server=True, dump_frames=self.pathod_handler.http2_framedump
)
def make_error_response(self, reason, body):
return language.http2.make_error_response(reason, body)
def read_request(self, lg=None):
self.wire_protocol.perform_server_connection_preface()
return self.wire_protocol.read_request(self.pathod_handler.rfile)
def assemble(self, message):
return self.wire_protocol.assemble(message)
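# Illustrative usage sketch (an assumption about the surrounding pathod code,
# not something defined in this module): once the handler's TLS connection has
# negotiated "h2" via ALPN, the protocol object performs the server connection
# preface and reads the next request.
#
#     protocol = HTTP2Protocol(pathod_handler)
#     protocol.wire_protocol.check_alpn()   # raises if ALPN is not "h2"
#     request = protocol.read_request()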
|
PypiClean
|
/nvme-spex-0.0.1.tar.gz/nvme-spex-0.0.1/src/spex/jsonspec/rowiter.py
|
import copy
from lxml import etree
from spex.xml import Xpath
def repr_elem(elem):
return etree.tostring(elem, encoding="unicode").strip()
def alst_count_decr(alst):
res = [(ndx, (elem, count - 1)) for ndx, (elem, count) in alst if count > 1]
return res
def alst_insert(alst, ndx, elem, count):
entry = (ndx, (elem, count))
for iter_ndx, (e_ndx, _) in enumerate(alst):
if ndx < e_ndx:
alst.insert(iter_ndx, entry)
return
alst.append(entry)
def sorted_row_children(row, alst):
col_off = 0
rest = alst
(elem_off, (elem, _)), rest = rest[0], rest[1:]
for td in Xpath.elems(row, "./td"):
while elem_off is not None and elem_off == col_off:
yield elem
col_off += int(elem.get("colspan", 1))
if rest:
(elem_off, (elem, _)), rest = rest[0], rest[1:]
else:
elem_off = None
yield td
colspan = int(td.get("colspan", 1))
col_off += colspan
if elem_off is not None and elem_off == col_off:
yield elem
if rest:
yield from (elem for (_, (elem, __)) in rest)
def alst_repr(alst):
return repr(
[
(ndx, (etree.tostring(elem, encoding="unicode").strip(), count))
for (ndx, (elem, count)) in alst
]
)
def row_iter(tbl):
it = Xpath.elems(tbl, "./tr/td[1]/parent::tr")
alst = []
row_cnt = 1
for row in it:
if alst:
row_ = row
# There's one or more cells to re-insert. Insert while respecting
# the colspan's of every cell.
row = etree.Element(row.tag)
for key, val in row_.attrib.items():
row.attrib[key] = val
for td in sorted_row_children(row_, alst):
                # lxml elements can have _exactly_ one parent; appending the
                # original td would re-parent it and make it disappear from
                # the original XML tree, so append a deep copy instead.
row.append(copy.deepcopy(td))
# Find and store any cells with a rowspan attribute so that we can
# re-insert these later.
column_offset = 0
for td in Xpath.elems(row, "./td"):
rowspan = int(td.get("rowspan", 1))
if rowspan > 1:
elem = copy.copy(td)
# avoid repeatedly processing this elem in subsequent rows
del elem.attrib["rowspan"]
alst_insert(alst, column_offset, elem, rowspan)
column_offset += int(td.get("colspan", 1))
yield row
alst = alst_count_decr(alst)
row_cnt += 1
def get_cell_of(row: etree._Element, col: int) -> etree._Element:
col_off = 0
for td in Xpath.elems(row, "./td"):
if col_off == col:
return td
col_off += int(td.get("colspan", 1))
if col > col_off:
raise IndexError(
f"index out of range (requested {col} in row with columns [0;{col_off}])"
)
raise KeyError(f"requested {col}, no element starting at this index")
|
PypiClean
|
/zc.relation-2.0.tar.gz/zc.relation-2.0/src/zc/relation/tokens.rst
|
======================================================
Tokens and Joins: zc.relation Catalog Extended Example
======================================================
Introduction and Set Up
=======================
This document assumes you have read the introductory README.rst and want
to learn a bit more by example. In it, we will explore a more
complicated set of relations that demonstrates most of the aspects of
working with tokens. In particular, we will look at joins, which will
also give us a chance to look more in depth at query factories and
search indexes, and introduce the idea of listeners. It will not explain
the basics that the README already addressed.
Imagine we are indexing security assertions in a system. In this
system, users may have roles within an organization. Each organization
may have multiple child organizations and may have a single parent
organization. A user with a role in a parent organization will have the
same role in all transitively connected child organizations.
We have two kinds of relations, then. One kind of relation will model
the hierarchy of organizations. We'll do it with an intrinsic relation
of organizations to their children: that reflects the fact that parent
organizations choose and are comprised of their children; children do
not choose their parents.
The other relation will model the (multiple) roles a (single) user has
in a (single) organization. This relation will be entirely extrinsic.
We could create two catalogs, one for each type. Or we could put them
both in the same catalog. Initially, we'll go with the single-catalog
approach for our examples. This single catalog, then, will be indexing
a heterogeneous collection of relations.
Let's define the two relations with interfaces. We'll include one
accessor, getOrganization, largely to show how to handle methods.
>>> import zope.interface
>>> class IOrganization(zope.interface.Interface):
... title = zope.interface.Attribute('the title')
... parts = zope.interface.Attribute(
... 'the organizations that make up this one')
...
>>> class IRoles(zope.interface.Interface):
... def getOrganization():
... 'return the organization in which this relation operates'
... principal_id = zope.interface.Attribute(
... 'the principal id whose roles this relation lists')
... role_ids = zope.interface.Attribute(
... 'the role ids that the principal explicitly has in the '
... 'organization. The principal may have other roles via '
... 'roles in parent organizations.')
...
Now we can create some classes. In the README example, the setup was a bit
of a toy. This time we will be just a bit more practical. We'll also expect
to be operating within the ZODB, with a root and transactions. [#ZODB]_
.. [#ZODB] Here we will set up a ZODB instance for us to use.
>>> from ZODB.tests.util import DB
>>> db = DB()
>>> conn = db.open()
>>> root = conn.root()
Here's how we will dump and load our relations: use a "registry"
object, similar to an intid utility. [#faux_intid]_
.. [#faux_intid] Here's a simple persistent keyreference. Notice that it is
not persistent itself: this is important for conflict resolution to be
able to work (which we don't show here, but we're trying to lean more
towards real usage for this example).
>>> from functools import total_ordering
>>> @total_ordering
... class Reference(object): # see zope.app.keyreference
... def __init__(self, obj):
... self.object = obj
... def _get_sorting_key(self):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if self.object._p_jar is None:
... raise ValueError(
... 'can only compare when both objects have connections')
... return self.object._p_oid or ''
... def __lt__(self, other):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if not isinstance(other, Reference):
... raise ValueError('can only compare with Reference objects')
... return self._get_sorting_key() < other._get_sorting_key()
... def __eq__(self, other):
... # this doesn't work during conflict resolution. See
... # zope.app.keyreference.persistent, 3.5 release, for current
... # best practice.
... if not isinstance(other, Reference):
... raise ValueError('can only compare with Reference objects')
... return self._get_sorting_key() == other._get_sorting_key()
Here's a simple integer identifier tool.
>>> import persistent
>>> import BTrees
>>> class Registry(persistent.Persistent): # see zope.app.intid
... def __init__(self, family=BTrees.family32):
... self.family = family
... self.ids = self.family.IO.BTree()
... self.refs = self.family.OI.BTree()
... def getId(self, obj):
... if not isinstance(obj, persistent.Persistent):
... raise ValueError('not a persistent object', obj)
... if obj._p_jar is None:
... self._p_jar.add(obj)
... ref = Reference(obj)
... id = self.refs.get(ref)
... if id is None:
... # naive for conflict resolution; see zope.app.intid
... if self.ids:
... id = self.ids.maxKey() + 1
... else:
... id = self.family.minint
... self.ids[id] = ref
... self.refs[ref] = id
... return id
... def __contains__(self, obj):
... if (not isinstance(obj, persistent.Persistent) or
... obj._p_oid is None):
... return False
... return Reference(obj) in self.refs
... def getObject(self, id, default=None):
... res = self.ids.get(id, None)
... if res is None:
... return default
... else:
... return res.object
... def remove(self, r):
... if isinstance(r, int):
... self.refs.pop(self.ids.pop(r))
... elif (not isinstance(r, persistent.Persistent) or
... r._p_oid is None):
... raise LookupError(r)
... else:
... self.ids.pop(self.refs.pop(Reference(r)))
...
>>> registry = root['registry'] = Registry()
>>> import transaction
>>> transaction.commit()
In this implementation of the "dump" method, we use the cache just to
show you how you might use it. It probably is overkill for this job,
and maybe even a speed loss, but you can see the idea.
>>> def dump(obj, catalog, cache):
... reg = cache.get('registry')
... if reg is None:
... reg = cache['registry'] = catalog._p_jar.root()['registry']
... return reg.getId(obj)
...
>>> def load(token, catalog, cache):
... reg = cache.get('registry')
... if reg is None:
... reg = cache['registry'] = catalog._p_jar.root()['registry']
... return reg.getObject(token)
...
Now we can create a relation catalog to hold these items.
>>> import zc.relation.catalog
>>> catalog = root['catalog'] = zc.relation.catalog.Catalog(dump, load)
>>> transaction.commit()
Now we set up our indexes. We'll start with just the organizations, and
set up the catalog with them. This part will be similar to the example
in README.rst, but will introduce more discussions of optimizations and
tokens. Then we'll add in the part about roles, and explore queries and
token-based "joins".
Organizations
=============
The organization will hold a set of organizations. This is actually not
inherently easy in the ZODB because this means that we need to compare
or hash persistent objects, which does not work reliably over time and
across machines out-of-the-box. To side-step the issue for this example,
and still do something a bit interesting and real-world, we'll use the
registry tokens introduced above. This will also give us a chance to
talk a bit more about optimizations and tokens. (If you would like
to sanely and transparently hold a set of persistent objects, try the
zc.set package XXX not yet.)
>>> import BTrees
>>> import persistent
>>> @zope.interface.implementer(IOrganization)
... @total_ordering
... class Organization(persistent.Persistent):
...
... def __init__(self, title):
... self.title = title
... self.parts = BTrees.family32.IF.TreeSet()
... # the next parts just make the tests prettier
... def __repr__(self):
... return '<Organization instance "' + self.title + '">'
... def __lt__(self, other):
... # pukes if other doesn't have name
... return self.title < other.title
... def __eq__(self, other):
... return self is other
... def __hash__(self):
... return 1 # dummy
...
OK, now we know how organizations will work. Now we can add the `parts`
index to the catalog. This will do a few new things from how we added
indexes in the README.
>>> catalog.addValueIndex(IOrganization['parts'], multiple=True,
... name="part")
So, what's different from the README examples?
First, we are using an interface element to define the value to be indexed.
It provides an interface to which objects will be adapted, a default name
for the index, and information as to whether the attribute should be used
directly or called.
Second, we are not specifying a dump or load. They are None. This
means that the indexed value can already be treated as a token. This
can allow a very significant optimization for reindexing if the indexed
value is a large collection using the same BTree family as the
index--which leads us to the next difference.
Third, we are specifying that `multiple=True`. This means that the value
on a given relation that provides or can be adapted to IOrganization will
have a collection of `parts`. These will always be regarded as a set,
whether the actual collection is a BTrees set or the keys of a BTree.
Last, we are specifying a name to be used for queries. I find that queries
read more easily when the query keys are singular, so I often rename plurals.
As in the README, we can add another simple transposing transitive query
factory, switching between 'part' and `None`.
>>> import zc.relation.queryfactory
>>> factory1 = zc.relation.queryfactory.TransposingTransitive(
... 'part', None)
>>> catalog.addDefaultQueryFactory(factory1)
Let's add a couple of search indexes in too, of the hierarchy looking up...
>>> import zc.relation.searchindex
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... 'part', None))
...and down.
>>> catalog.addSearchIndex(
... zc.relation.searchindex.TransposingTransitiveMembership(
... None, 'part'))
PLEASE NOTE: the search index that looks up the hierarchy is not a good idea
in practice. The index is designed for looking down [#verifyObjectTransitive]_.
.. [#verifyObjectTransitive] The TransposingTransitiveMembership indexes
provide ISearchIndex.
>>> from zope.interface.verify import verifyObject
>>> import zc.relation.interfaces
>>> index = list(catalog.iterSearchIndexes())[0]
>>> verifyObject(zc.relation.interfaces.ISearchIndex, index)
True
Let's create and add a few organizations.
We'll make a structure like this [#silliness]_::
             Ynod Corp Management                         Zookd Corp Management
               /      |       \                             /       |       \
     Ynod Devs    Ynod SAs    Ynod Admins        Zookd Admins    Zookd SAs    Zookd Devs
       /      \                      \               /                         /      \
   Y3L4 Proj   Bet Proj           Ynod Zookd Task Force             Zookd hOgnmd   Zookd Nbd
Here's the Python.
>>> orgs = root['organizations'] = BTrees.family32.OO.BTree()
>>> for nm, parts in (
... ('Y3L4 Proj', ()),
... ('Bet Proj', ()),
... ('Ynod Zookd Task Force', ()),
... ('Zookd hOgnmd', ()),
... ('Zookd Nbd', ()),
... ('Ynod Devs', ('Y3L4 Proj', 'Bet Proj')),
... ('Ynod SAs', ()),
... ('Ynod Admins', ('Ynod Zookd Task Force',)),
... ('Zookd Admins', ('Ynod Zookd Task Force',)),
... ('Zookd SAs', ()),
... ('Zookd Devs', ('Zookd hOgnmd', 'Zookd Nbd')),
... ('Ynod Corp Management', ('Ynod Devs', 'Ynod SAs', 'Ynod Admins')),
... ('Zookd Corp Management', ('Zookd Devs', 'Zookd SAs',
... 'Zookd Admins'))):
... org = Organization(nm)
... for part in parts:
... ignore = org.parts.insert(registry.getId(orgs[part]))
... orgs[nm] = org
... catalog.index(org)
...
Now the catalog knows about the relations.
>>> len(catalog)
13
>>> root['dummy'] = Organization('Foo')
>>> root['dummy'] in catalog
False
>>> orgs['Y3L4 Proj'] in catalog
True
Also, now we can search. To do this, we can use some of the token methods that
the catalog provides. The most commonly used is `tokenizeQuery`. It takes a
query with values that are not tokenized and converts them to values that are
tokenized.
>>> Ynod_SAs_id = registry.getId(orgs['Ynod SAs'])
>>> catalog.tokenizeQuery({None: orgs['Ynod SAs']}) == {
... None: Ynod_SAs_id}
True
>>> Zookd_SAs_id = registry.getId(orgs['Zookd SAs'])
>>> Zookd_Devs_id = registry.getId(orgs['Zookd Devs'])
>>> catalog.tokenizeQuery(
... {None: zc.relation.catalog.any(
... orgs['Zookd SAs'], orgs['Zookd Devs'])}) == {
... None: zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
True
Of course, right now doing this with 'part' alone is kind of silly, since it
does not change within the relation catalog (because we said that dump and
load were `None`, as discussed above).
>>> catalog.tokenizeQuery({'part': Ynod_SAs_id}) == {
... 'part': Ynod_SAs_id}
True
>>> catalog.tokenizeQuery(
... {'part': zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
... ) == {'part': zc.relation.catalog.any(Zookd_SAs_id, Zookd_Devs_id)}
True
The `tokenizeQuery` method is so common that we're going to assign it to
a variable in our example. Then we'll do a search or two.
So...find the relations that Ynod Devs supervise.
>>> t = catalog.tokenizeQuery
>>> res = list(catalog.findRelationTokens(t({None: orgs['Ynod Devs']})))
OK...we used `findRelationTokens`, as opposed to `findRelations`, so res
is a couple of numbers now. How do we convert them back?
`resolveRelationTokens` will do the trick.
>>> len(res)
3
>>> sorted(catalog.resolveRelationTokens(res))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Bet Proj">, <Organization instance "Y3L4 Proj">,
<Organization instance "Ynod Devs">]
`resolveQuery` is the mirror image of `tokenizeQuery`: it converts
tokenized queries to queries with "loaded" values.
>>> original = {'part': zc.relation.catalog.any(
... Zookd_SAs_id, Zookd_Devs_id),
... None: orgs['Zookd Devs']}
>>> tokenized = catalog.tokenizeQuery(original)
>>> original == catalog.resolveQuery(tokenized)
True
>>> original = {None: zc.relation.catalog.any(
... orgs['Zookd SAs'], orgs['Zookd Devs']),
... 'part': Zookd_Devs_id}
>>> tokenized = catalog.tokenizeQuery(original)
>>> original == catalog.resolveQuery(tokenized)
True
Likewise, `tokenizeRelations` is the mirror image of `resolveRelationTokens`.
>>> sorted(catalog.tokenizeRelations(
... [orgs["Bet Proj"], orgs["Y3L4 Proj"]])) == sorted(
... registry.getId(o) for o in
... [orgs["Bet Proj"], orgs["Y3L4 Proj"]])
True
The other token-related methods are as follows
[#show_remaining_token_methods]_:
.. [#show_remaining_token_methods] For what it's worth, here are some small
examples of the remaining token-related methods.
These two are the singular versions of `tokenizeRelations` and
`resolveRelationTokens`.
`tokenizeRelation` returns a token for the given relation.
>>> catalog.tokenizeRelation(orgs['Zookd Corp Management']) == (
... registry.getId(orgs['Zookd Corp Management']))
True
`resolveRelationToken` returns a relation for the given token.
>>> catalog.resolveRelationToken(registry.getId(
... orgs['Zookd Corp Management'])) is orgs['Zookd Corp Management']
True
The "values" ones are a bit lame to show now, since the only value
we have right now is not tokenized but used straight up. But here
goes, showing some fascinating no-ops.
`tokenizeValues`, returns an iterable of tokens for the values of
the given index name.
>>> list(catalog.tokenizeValues((1,2,3), 'part'))
[1, 2, 3]
`resolveValueTokens` returns an iterable of values for the tokens of
the given index name.
>>> list(catalog.resolveValueTokens((1,2,3), 'part'))
[1, 2, 3]
- `tokenizeValues`, which returns an iterable of tokens for the values
of the given index name;
- `resolveValueTokens`, which returns an iterable of values for the tokens of
the given index name;
- `tokenizeRelation`, which returns a token for the given relation; and
- `resolveRelationToken`, which returns a relation for the given token.
Why do we bother with these tokens, instead of hiding them away and
making the API prettier? By exposing them, we enable efficient joining,
and efficient use in other contexts. For instance, if you use the same
intid utility to tokenize in other catalogs, our results can be merged
with the results of other catalogs. Similarly, you can use the results
of queries to other catalogs--or even "joins" from earlier results of
querying this catalog--as query values here. We'll explore this in the
next section.
Roles
=====
We have set up the Organization relations. Now let's set up the roles, and
actually be able to answer the questions that we described at the beginning
of the document.
In our Roles object, roles and principals will simply be strings--ids, if
this were a real system. The organization will be a direct object reference.
>>> @zope.interface.implementer(IRoles)
... @total_ordering
... class Roles(persistent.Persistent):
...
... def __init__(self, principal_id, role_ids, organization):
... self.principal_id = principal_id
... self.role_ids = BTrees.family32.OI.TreeSet(role_ids)
... self._organization = organization
... def getOrganization(self):
... return self._organization
... # the rest is for prettier/easier tests
... def __repr__(self):
... return "<Roles instance (%s has %s in %s)>" % (
... self.principal_id, ', '.join(self.role_ids),
... self._organization.title)
... def __lt__(self, other):
... _self = (
... self.principal_id,
... tuple(self.role_ids),
... self._organization.title,
... )
... _other = (
... other.principal_id,
... tuple(other.role_ids),
... other._organization.title,
... )
... return _self <_other
... def __eq__(self, other):
... return self is other
... def __hash__(self):
... return 1 # dummy
...
Now let's add the value indexes to the relation catalog.
>>> catalog.addValueIndex(IRoles['principal_id'], btree=BTrees.family32.OI)
>>> catalog.addValueIndex(IRoles['role_ids'], btree=BTrees.family32.OI,
... multiple=True, name='role_id')
>>> catalog.addValueIndex(IRoles['getOrganization'], dump, load,
... name='organization')
Those are some slightly new variations of what we've seen in `addValueIndex`
before, but all mixing and matching on the same ingredients.
As a reminder, here is our organization structure::
             Ynod Corp Management                         Zookd Corp Management
               /      |       \                             /       |       \
     Ynod Devs    Ynod SAs    Ynod Admins        Zookd Admins    Zookd SAs    Zookd Devs
       /      \                      \               /                         /      \
   Y3L4 Proj   Bet Proj           Ynod Zookd Task Force             Zookd hOgnmd   Zookd Nbd
Now let's create and add some roles.
>>> principal_ids = [
... 'abe', 'bran', 'cathy', 'david', 'edgar', 'frank', 'gertrude',
... 'harriet', 'ignas', 'jacob', 'karyn', 'lettie', 'molly', 'nancy',
... 'ophelia', 'pat']
>>> role_ids = ['user manager', 'writer', 'reviewer', 'publisher']
>>> get_role = dict((v[0], v) for v in role_ids).__getitem__
>>> roles = root['roles'] = BTrees.family32.IO.BTree()
>>> next = 0
>>> for prin, org, role_ids in (
... ('abe', orgs['Zookd Corp Management'], 'uwrp'),
... ('bran', orgs['Ynod Corp Management'], 'uwrp'),
... ('cathy', orgs['Ynod Devs'], 'w'),
... ('cathy', orgs['Y3L4 Proj'], 'r'),
... ('david', orgs['Bet Proj'], 'wrp'),
... ('edgar', orgs['Ynod Devs'], 'up'),
... ('frank', orgs['Ynod SAs'], 'uwrp'),
... ('frank', orgs['Ynod Admins'], 'w'),
... ('gertrude', orgs['Ynod Zookd Task Force'], 'uwrp'),
... ('harriet', orgs['Ynod Zookd Task Force'], 'w'),
... ('harriet', orgs['Ynod Admins'], 'r'),
... ('ignas', orgs['Zookd Admins'], 'r'),
... ('ignas', orgs['Zookd Corp Management'], 'w'),
... ('karyn', orgs['Zookd Corp Management'], 'uwrp'),
... ('karyn', orgs['Ynod Corp Management'], 'uwrp'),
... ('lettie', orgs['Zookd Corp Management'], 'u'),
... ('lettie', orgs['Ynod Zookd Task Force'], 'w'),
... ('lettie', orgs['Zookd SAs'], 'w'),
... ('molly', orgs['Zookd SAs'], 'uwrp'),
... ('nancy', orgs['Zookd Devs'], 'wrp'),
... ('nancy', orgs['Zookd hOgnmd'], 'u'),
... ('ophelia', orgs['Zookd Corp Management'], 'w'),
... ('ophelia', orgs['Zookd Devs'], 'r'),
... ('ophelia', orgs['Zookd Nbd'], 'p'),
... ('pat', orgs['Zookd Nbd'], 'wrp')):
... assert prin in principal_ids
... role_ids = [get_role(l) for l in role_ids]
... role = roles[next] = Roles(prin, role_ids, org)
... role.key = next
... next += 1
... catalog.index(role)
...
Now we can begin to do searches [#real_value_tokens]_.
.. [#real_value_tokens] We can also show the values token methods more
sanely now.
>>> original = sorted((orgs['Zookd Devs'], orgs['Ynod SAs']))
>>> tokens = list(catalog.tokenizeValues(original, 'organization'))
>>> original == sorted(catalog.resolveValueTokens(tokens, 'organization'))
True
What are all the role settings for ophelia?
>>> sorted(catalog.findRelations({'principal_id': 'ophelia'}))
... # doctest: +NORMALIZE_WHITESPACE
[<Roles instance (ophelia has publisher in Zookd Nbd)>,
<Roles instance (ophelia has reviewer in Zookd Devs)>,
<Roles instance (ophelia has writer in Zookd Corp Management)>]
That answer does not need to be transitive: we're done.
Next question. Where does ophelia have the 'writer' role?
>>> list(catalog.findValues(
... 'organization', {'principal_id': 'ophelia',
... 'role_id': 'writer'}))
[<Organization instance "Zookd Corp Management">]
Well, that's correct intransitively. Do we need a transitive queries
factory? No! This is a great chance to look at the token join we talked
about in the previous section. This should actually be a two-step
operation: find all of the organizations in which ophelia has writer,
and then find all of the transitive parts to that organization.
>>> sorted(catalog.findRelations({None: zc.relation.catalog.Any(
... catalog.findValueTokens('organization',
... {'principal_id': 'ophelia',
... 'role_id': 'writer'}))}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Ynod Zookd Task Force">,
<Organization instance "Zookd Admins">,
<Organization instance "Zookd Corp Management">,
<Organization instance "Zookd Devs">,
<Organization instance "Zookd Nbd">,
<Organization instance "Zookd SAs">,
<Organization instance "Zookd hOgnmd">]
That's more like it.
Next question. What users have roles in the 'Zookd Devs' organization?
Intransitively, that's pretty easy.
>>> sorted(catalog.findValueTokens(
... 'principal_id', t({'organization': orgs['Zookd Devs']})))
['nancy', 'ophelia']
Transitively, we should do another join.
>>> org_id = registry.getId(orgs['Zookd Devs'])
>>> sorted(catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.any(
... org_id, *catalog.findRelationTokens({'part': org_id}))}))
['abe', 'ignas', 'karyn', 'lettie', 'nancy', 'ophelia']
That's a little awkward, but it does the trick.
Last question, and the kind of question that started the entire example.
What roles does ophelia have in the "Zookd Nbd" organization?
>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher']
Intransitively, that's correct. But, transitively, ophelia also has
reviewer and writer, and that's the answer we want to be able to get quickly.
We could ask the question a different way, then, again leveraging a join.
We'll set it up as a function, because we will want to use it a little later
without repeating the code.
>>> def getRolesInOrganization(principal_id, org):
... org_id = registry.getId(org)
... return sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': zc.relation.catalog.any(
... org_id,
... *catalog.findRelationTokens({'part': org_id})),
... 'principal_id': principal_id}))
...
>>> getRolesInOrganization('ophelia', orgs['Zookd Nbd'])
['publisher', 'reviewer', 'writer']
As you can see, then, working with tokens makes interesting joins possible,
as long as the tokens are the same across the two queries.
We have examined tokens methods and token techniques like joins. The example
story we have told can let us get into a few more advanced topics, such as
query factory joins and search indexes that can increase their read speed.
Query Factory Joins
===================
We can build a query factory that makes the join automatic. A query
factory is a callable that takes two arguments: a query (the one that
starts the search) and the catalog. The factory either returns None,
indicating that the query factory cannot be used for this query, or it
returns another callable that takes a chain of relations. The last
token in the relation chain is the most recent. The output of this
inner callable is expected to be an iterable of
BTrees.family32.OO.Bucket queries to search further from the given chain
of relations.
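To make that contract concrete, here is a minimal skeleton of a query factory
(purely illustrative; it is not one of the factories used in this document,
and the 'principal_id' check is only a placeholder condition):

>>> def skeleton_factory(query, catalog):
...     if 'principal_id' not in query:
...         return None  # decline queries this factory does not understand
...     def getQueries(relchain):
...         if not relchain:
...             # an empty chain means the search is just starting: yield
...             # the (possibly modified) initial query
...             yield BTrees.family32.OO.Bucket(query)
...         # yielding nothing for non-empty chains makes this factory
...         # effectively intransitive
...     return getQueries
...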
Here's a flawed approach to this problem.
>>> def flawed_factory(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... yield query
... return
... current = catalog.getValueTokens(
... 'organization', relchain[-1])
... if current:
... organizations = catalog.getRelationTokens(
... {'part': zc.relation.catalog.Any(current)})
... if organizations:
... res = BTrees.family32.OO.Bucket(query)
... res['organization'] = zc.relation.catalog.Any(
... organizations)
... yield res
... return getQueries
...
That works for our current example.
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}),
... queryFactory=flawed_factory))
['publisher', 'reviewer', 'writer']
However, it won't work for other similar queries.
>>> getRolesInOrganization('abe', orgs['Zookd Nbd'])
['publisher', 'reviewer', 'user manager', 'writer']
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}),
... queryFactory=flawed_factory))
[]
oops.
The flawed_factory is actually a useful pattern for more typical relation
traversal. It goes from relation to relation to relation, and ophelia has
connected relations all the way to the top. However, abe only has them at
the top, so nothing is traversed.
Instead, we can make a query factory that modifies the initial query.
>>> def factory2(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... res = BTrees.family32.OO.Bucket(query)
... org_id = query['organization']
... if org_id is not None:
... res['organization'] = zc.relation.catalog.any(
... org_id,
... *catalog.findRelationTokens({'part': org_id}))
... yield res
... return getQueries
...
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}),
... queryFactory=factory2))
['publisher', 'reviewer', 'writer']
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}),
... queryFactory=factory2))
['publisher', 'reviewer', 'user manager', 'writer']
A difference between this and the other approach is that it is essentially
intransitive: this query factory modifies the initial query, and then does
not give further queries. The catalog currently always stops calling the
query factory if the queries do not return any results, so an approach like
the flawed_factory simply won't work for this kind of problem.
We could add this query factory as another default.
>>> catalog.addDefaultQueryFactory(factory2)
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher', 'reviewer', 'writer']
>>> sorted(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'})))
['publisher', 'reviewer', 'user manager', 'writer']
The previously installed query factory is still available.
>>> list(catalog.iterDefaultQueryFactories()) == [factory1, factory2]
True
>>> list(catalog.findRelations(
... {'part': registry.getId(orgs['Y3L4 Proj'])}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Ynod Devs">,
<Organization instance "Ynod Corp Management">]
>>> sorted(catalog.findRelations(
... {None: registry.getId(orgs['Ynod Corp Management'])}))
... # doctest: +NORMALIZE_WHITESPACE
[<Organization instance "Bet Proj">, <Organization instance "Y3L4 Proj">,
<Organization instance "Ynod Admins">,
<Organization instance "Ynod Corp Management">,
<Organization instance "Ynod Devs">, <Organization instance "Ynod SAs">,
<Organization instance "Ynod Zookd Task Force">]
Search Index for Query Factory Joins
====================================
Now that we have written a query factory that encapsulates the join, we can
use a search index that speeds it up. We've only used transitive search
indexes so far. Now we will add an intransitive search index.
The intransitive search index generally just needs the search value
names it should be indexing, optionally the result name (defaulting to
relations), and optionally the query factory to be used.
We need to use two additional options because of the odd join trick we're
doing. We need to specify what organization and principal_id values need
to be changed when an object is indexed, and we need to indicate that we
should update when organization, principal_id, *or* parts changes.
`getValueTokens` specifies the values that need to be indexed. It gets
the index, the name for the tokens desired, the token, the catalog that
generated the token change (it may not be the same as the index's
catalog), the source dictionary of the values that will be used for the
tokens if you do not override them, a dict of the added values for this
token (keys are value names), a dict of the removed values for this
token, and whether the token has been removed. The method can return
None, which leaves the index to its default behavior (which should work
if no query factory is used), or an iterable of values.
>>> def getValueTokens(index, name, token, catalog, source,
... additions, removals, removed):
... if name == 'organization':
... orgs = source.get('organization')
... if not removed or not orgs:
... orgs = index.catalog.getValueTokens(
... 'organization', token)
... if not orgs:
... orgs = [token]
... orgs.extend(removals.get('part', ()))
... orgs = set(orgs)
... orgs.update(index.catalog.findValueTokens(
... 'part',
... {None: zc.relation.catalog.Any(
... t for t in orgs if t is not None)}))
... return orgs
... elif name == 'principal_id':
... # we only want custom behavior if this is an organization
... if 'principal_id' in source or index.catalog.getValueTokens(
... 'principal_id', token):
... return ''
... orgs = set((token,))
... orgs.update(index.catalog.findRelationTokens(
... {'part': token}))
... return set(index.catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.Any(orgs)}))
...
>>> index = zc.relation.searchindex.Intransitive(
... ('organization', 'principal_id'), 'role_id', factory2,
... getValueTokens,
... ('organization', 'principal_id', 'part', 'role_id'),
... unlimitedDepth=True)
>>> catalog.addSearchIndex(index)
>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'}))
>>> list(res)
['publisher', 'reviewer', 'writer']
>>> list(res)
['publisher', 'reviewer', 'writer']
>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
[#verifyObjectIntransitive]_
.. [#verifyObjectIntransitive] The Intransitive search index provides
ISearchIndex and IListener.
>>> from zope.interface.verify import verifyObject
>>> import zc.relation.interfaces
>>> verifyObject(zc.relation.interfaces.ISearchIndex, index)
True
>>> verifyObject(zc.relation.interfaces.IListener, index)
True
Now we can change and remove relations--both organizations and roles--and
have the index maintain correct state. Given the current state of
organizations--
::
             Ynod Corp Management                         Zookd Corp Management
               /      |       \                             /       |       \
     Ynod Devs    Ynod SAs    Ynod Admins        Zookd Admins    Zookd SAs    Zookd Devs
       /      \                      \               /                         /      \
   Y3L4 Proj   Bet Proj           Ynod Zookd Task Force             Zookd hOgnmd   Zookd Nbd
--first we will move Ynod Devs to beneath Zookd Devs, and back out. This will
briefly give abe full privileges to Y3L4 Proj., among others.
>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
>>> orgs['Zookd Devs'].parts.insert(registry.getId(orgs['Ynod Devs']))
1
>>> catalog.index(orgs['Zookd Devs'])
>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
>>> orgs['Zookd Devs'].parts.remove(registry.getId(orgs['Ynod Devs']))
>>> catalog.index(orgs['Zookd Devs'])
>>> list(catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
As another example, we will change the roles abe has, and see that it is
propagated down to Zookd Nbd.
>>> rels = list(catalog.findRelations(t(
... {'principal_id': 'abe',
... 'organization': orgs['Zookd Corp Management']})))
>>> len(rels)
1
>>> rels[0].role_ids.remove('reviewer')
>>> catalog.index(rels[0])
>>> res = catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'user manager', 'writer']
>>> list(res)
['publisher', 'user manager', 'writer']
Note that search index order matters. In our case, our intransitive search
index relies on our transitive index, so the transitive index needs to
come first: you want transitive relation indexes before the intransitive
indexes that depend on them. Right now, you are in charge of this order:
it would be difficult to come up with a reliable algorithm for guessing it.
Listeners, Catalog Administration, and Joining Across Relation Catalogs
=======================================================================
We've done all of our examples so far with a single catalog that indexes
both kinds of relations. What if we want to have two catalogs with
homogenous collections of relations? That can feel cleaner, but it also
introduces some new wrinkles.
Let's use our current catalog for organizations, removing the extra
information; and create a new one for roles.
>>> role_catalog = root['role_catalog'] = catalog.copy()
>>> transaction.commit()
>>> org_catalog = catalog
>>> del catalog
We'll need a slightly different query factory and a slightly different
search index `getValueTokens` function. We'll write those, then modify the
configuration of our two catalogs for the new world.
The transitive factory we write here is for the role catalog. It needs
access to the organization catalog. We could do this in a variety of
ways--relying on a utility, or finding the catalog from context. We will
make the role_catalog have a .org_catalog attribute, and rely on that.
>>> role_catalog.org_catalog = org_catalog
>>> def factory3(query, catalog):
... if (len(query) == 2 and
... 'organization' in query and
... 'principal_id' in query):
... def getQueries(relchain):
... if not relchain:
... res = BTrees.family32.OO.Bucket(query)
... org_id = query['organization']
... if org_id is not None:
... res['organization'] = zc.relation.catalog.any(
... org_id,
... *catalog.org_catalog.findRelationTokens(
... {'part': org_id}))
... yield res
... return getQueries
...
>>> def getValueTokens2(index, name, token, catalog, source,
... additions, removals, removed):
... is_role_catalog = catalog is index.catalog # role_catalog
... if name == 'organization':
... if is_role_catalog:
... orgs = set(source.get('organization') or
... index.catalog.getValueTokens(
... 'organization', token) or ())
... else:
... orgs = set((token,))
... orgs.update(removals.get('part', ()))
... orgs.update(index.catalog.org_catalog.findValueTokens(
... 'part',
... {None: zc.relation.catalog.Any(
... t for t in orgs if t is not None)}))
... return orgs
... elif name == 'principal_id':
... # we only want custom behavior if this is an organization
... if not is_role_catalog:
... orgs = set((token,))
... orgs.update(index.catalog.org_catalog.findRelationTokens(
... {'part': token}))
... return set(index.catalog.findValueTokens(
... 'principal_id', {
... 'organization': zc.relation.catalog.Any(orgs)}))
... return ''
If you are following along in the code and comparing to the originals, you may
see that this approach is a bit cleaner than the one when the relations were
in the same catalog.
Now we will fix up the organization catalog [#compare_copy]_.
.. [#compare_copy] Before we modify them, let's look at the copy we made.
The copy should currently behave identically to the original.
>>> len(org_catalog)
38
>>> len(role_catalog)
38
>>> indexed = list(org_catalog)
>>> len(indexed)
38
>>> orgs['Zookd Devs'] in indexed
True
>>> for r in indexed:
... if r not in role_catalog:
... print('bad')
... break
... else:
... print('good')
...
good
>>> org_names = set(dir(org_catalog))
>>> role_names = set(dir(role_catalog))
>>> sorted(org_names - role_names)
[]
>>> sorted(role_names - org_names)
['org_catalog']
>>> def checkYnodDevsParts(catalog):
... res = sorted(catalog.findRelations(t({None: orgs['Ynod Devs']})))
... if res != [
... orgs["Bet Proj"], orgs["Y3L4 Proj"], orgs["Ynod Devs"]]:
... print("bad", res)
...
>>> checkYnodDevsParts(org_catalog)
>>> checkYnodDevsParts(role_catalog)
>>> def checkOpheliaRoles(catalog):
... res = sorted(catalog.findRelations({'principal_id': 'ophelia'}))
... if repr(res) != (
... "[<Roles instance (ophelia has publisher in Zookd Nbd)>, " +
... "<Roles instance (ophelia has reviewer in Zookd Devs)>, " +
... "<Roles instance (ophelia has writer in " +
... "Zookd Corp Management)>]"):
... print("bad", res)
...
>>> checkOpheliaRoles(org_catalog)
>>> checkOpheliaRoles(role_catalog)
>>> def checkOpheliaWriterOrganizations(catalog):
... res = sorted(catalog.findRelations({None: zc.relation.catalog.Any(
... catalog.findValueTokens(
... 'organization', {'principal_id': 'ophelia',
... 'role_id': 'writer'}))}))
... if repr(res) != (
... '[<Organization instance "Ynod Zookd Task Force">, ' +
... '<Organization instance "Zookd Admins">, ' +
... '<Organization instance "Zookd Corp Management">, ' +
... '<Organization instance "Zookd Devs">, ' +
... '<Organization instance "Zookd Nbd">, ' +
... '<Organization instance "Zookd SAs">, ' +
... '<Organization instance "Zookd hOgnmd">]'):
... print("bad", res)
...
>>> checkOpheliaWriterOrganizations(org_catalog)
>>> checkOpheliaWriterOrganizations(role_catalog)
>>> def checkPrincipalsWithRolesInZookdDevs(catalog):
... org_id = registry.getId(orgs['Zookd Devs'])
... res = sorted(catalog.findValueTokens(
... 'principal_id',
... {'organization': zc.relation.catalog.any(
... org_id, *catalog.findRelationTokens({'part': org_id}))}))
... if res != ['abe', 'ignas', 'karyn', 'lettie', 'nancy', 'ophelia']:
... print("bad", res)
...
>>> checkPrincipalsWithRolesInZookdDevs(org_catalog)
>>> checkPrincipalsWithRolesInZookdDevs(role_catalog)
>>> def checkOpheliaRolesInZookdNbd(catalog):
... res = sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': registry.getId(orgs['Zookd Nbd']),
... 'principal_id': 'ophelia'}))
... if res != ['publisher', 'reviewer', 'writer']:
... print("bad", res)
...
>>> checkOpheliaRolesInZookdNbd(org_catalog)
>>> checkOpheliaRolesInZookdNbd(role_catalog)
>>> def checkAbeRolesInZookdNbd(catalog):
... res = sorted(catalog.findValueTokens(
... 'role_id', {
... 'organization': registry.getId(orgs['Zookd Nbd']),
... 'principal_id': 'abe'}))
... if res != ['publisher', 'user manager', 'writer']:
... print("bad", res)
...
>>> checkAbeRolesInZookdNbd(org_catalog)
>>> checkAbeRolesInZookdNbd(role_catalog)
>>> org_catalog.removeDefaultQueryFactory(None) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
LookupError: ('factory not found', None)
>>> org_catalog.removeValueIndex('organization')
>>> org_catalog.removeValueIndex('role_id')
>>> org_catalog.removeValueIndex('principal_id')
>>> org_catalog.removeDefaultQueryFactory(factory2)
>>> org_catalog.removeSearchIndex(index)
>>> org_catalog.clear()
>>> len(org_catalog)
0
>>> for v in orgs.values():
... org_catalog.index(v)
This also shows using the `removeDefaultQueryFactory` and `removeSearchIndex`
methods [#removeDefaultQueryFactoryExceptions]_.
.. [#removeDefaultQueryFactoryExceptions] You get errors by removing query
factories that are not registered.
>>> org_catalog.removeDefaultQueryFactory(factory2) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
LookupError: ('factory not found', <function factory2 at ...>)
Now we will set up the role catalog [#copy_unchanged]_.
.. [#copy_unchanged] Changes to one copy should not affect the other. That
means the role_catalog should still work as before.
>>> len(org_catalog)
13
>>> len(list(org_catalog))
13
>>> len(role_catalog)
38
>>> indexed = list(role_catalog)
>>> len(indexed)
38
>>> orgs['Zookd Devs'] in indexed
True
>>> orgs['Zookd Devs'] in role_catalog
True
>>> checkYnodDevsParts(role_catalog)
>>> checkOpheliaRoles(role_catalog)
>>> checkOpheliaWriterOrganizations(role_catalog)
>>> checkPrincipalsWithRolesInZookdDevs(role_catalog)
>>> checkOpheliaRolesInZookdNbd(role_catalog)
>>> checkAbeRolesInZookdNbd(role_catalog)
>>> role_catalog.removeValueIndex('part')
>>> for ix in list(role_catalog.iterSearchIndexes()):
... role_catalog.removeSearchIndex(ix)
...
>>> role_catalog.removeDefaultQueryFactory(factory1)
>>> role_catalog.removeDefaultQueryFactory(factory2)
>>> role_catalog.addDefaultQueryFactory(factory3)
>>> root['index2'] = index2 = zc.relation.searchindex.Intransitive(
... ('organization', 'principal_id'), 'role_id', factory3,
... getValueTokens2,
... ('organization', 'principal_id', 'part', 'role_id'),
... unlimitedDepth=True)
>>> role_catalog.addSearchIndex(index2)
The new role_catalog index needs to be updated from the org_catalog.
We'll set that up using listeners, a new concept.
>>> org_catalog.addListener(index2)
>>> list(org_catalog.iterListeners()) == [index2]
True
Now the role_catalog should be able to answer the same questions as the old
single catalog approach.
>>> t = role_catalog.tokenizeQuery
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'})))
['publisher', 'user manager', 'writer']
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'ophelia'})))
['publisher', 'reviewer', 'writer']
We can also make changes to both catalogs and the search indexes are
maintained.
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
>>> orgs['Zookd Devs'].parts.insert(registry.getId(orgs['Ynod Devs']))
1
>>> org_catalog.index(orgs['Zookd Devs'])
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
['publisher', 'user manager', 'writer']
>>> orgs['Zookd Devs'].parts.remove(registry.getId(orgs['Ynod Devs']))
>>> org_catalog.index(orgs['Zookd Devs'])
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Y3L4 Proj'],
... 'principal_id': 'abe'})))
[]
>>> rels = list(role_catalog.findRelations(t(
... {'principal_id': 'abe',
... 'organization': orgs['Zookd Corp Management']})))
>>> len(rels)
1
>>> rels[0].role_ids.insert('reviewer')
1
>>> role_catalog.index(rels[0])
>>> res = role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd Nbd'],
... 'principal_id': 'abe'}))
>>> list(res)
['publisher', 'reviewer', 'user manager', 'writer']
Here we add a new organization.
>>> orgs['Zookd hOnc'] = org = Organization('Zookd hOnc')
>>> orgs['Zookd Devs'].parts.insert(registry.getId(org))
1
>>> org_catalog.index(orgs['Zookd hOnc'])
>>> org_catalog.index(orgs['Zookd Devs'])
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd hOnc'],
... 'principal_id': 'abe'})))
['publisher', 'reviewer', 'user manager', 'writer']
>>> list(role_catalog.findValueTokens(
... 'role_id', t({'organization': orgs['Zookd hOnc'],
... 'principal_id': 'ophelia'})))
['reviewer', 'writer']
Now we'll remove it.
>>> orgs['Zookd Devs'].parts.remove(registry.getId(org))
>>> org_catalog.index(orgs['Zookd Devs'])
>>> org_catalog.unindex(orgs['Zookd hOnc'])
TODO make sure that intransitive copy looks the way we expect
[#administrivia]_
.. [#administrivia]
You can add listeners multiple times.
>>> org_catalog.addListener(index2)
>>> list(org_catalog.iterListeners()) == [index2, index2]
True
Now we will remove the listeners, to show we can.
>>> org_catalog.removeListener(index2)
>>> org_catalog.removeListener(index2)
>>> org_catalog.removeListener(index2)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('listener not found',
<zc.relation.searchindex.Intransitive object at ...>)
>>> org_catalog.removeListener(None)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('listener not found', None)
Here's the same for removing a search index we don't have
>>> org_catalog.removeSearchIndex(index2)
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
LookupError: ('index not found',
<zc.relation.searchindex.Intransitive object at ...>)
.. ......... ..
.. Footnotes ..
.. ......... ..
.. [#silliness] In "2001: A Space Odyssey", many people believe the name HAL
was chosen because it was ROT25 of IBM.... I cheat a bit sometimes and
use ROT1 because the result sounds better.
|
PypiClean
|
/Pyomo-6.6.2-cp39-cp39-win_amd64.whl/pyomo/scripting/plugins/download.py
|
import logging
import sys
from pyomo.common.download import FileDownloader, DownloadFactory
from pyomo.scripting.pyomo_parser import add_subparser
class GroupDownloader(object):
def __init__(self):
self.downloader = FileDownloader()
def create_parser(self, parser):
return self.downloader.create_parser(parser)
def call(self, args, unparsed):
logger = logging.getLogger('pyomo.common')
original_level = logger.level
logger.setLevel(logging.INFO)
try:
return self._call_impl(args, unparsed, logger)
finally:
logger.setLevel(original_level)
def _call_impl(self, args, unparsed, logger):
results = []
result_fmt = "[%s] %s"
returncode = 0
self.downloader.cacert = args.cacert
self.downloader.insecure = args.insecure
logger.info(
"As of February 9, 2023, AMPL GSL can no longer be downloaded\
through download-extensions. Visit https://portal.ampl.com/\
to download the AMPL GSL binaries."
)
for target in DownloadFactory:
try:
ext = DownloadFactory(target, downloader=self.downloader)
if hasattr(ext, 'skip') and ext.skip():
result = 'SKIP'
elif hasattr(ext, '__call__'):
ext()
result = ' OK '
else:
# Extension was a simple function and already ran
result = ' OK '
except SystemExit:
_info = sys.exc_info()
_cls = (
str(_info[0].__name__ if _info[0] is not None else "NoneType")
+ ": "
)
logger.error(_cls + str(_info[1]))
result = 'FAIL'
returncode |= 2
except:
_info = sys.exc_info()
_cls = (
str(_info[0].__name__ if _info[0] is not None else "NoneType")
+ ": "
)
logger.error(_cls + str(_info[1]))
result = 'FAIL'
returncode |= 1
results.append(result_fmt % (result, target))
logger.info("Finished downloading Pyomo extensions.")
logger.info(
"The following extensions were downloaded:\n " + "\n ".join(results)
)
return returncode
#
# Add a subparser for the download-extensions command
#
_group_downloader = GroupDownloader()
_parser = _group_downloader.create_parser(
add_subparser(
'download-extensions',
func=_group_downloader.call,
help='Download compiled extension modules',
add_help=False,
description='This downloads all registered (compiled) extension modules',
)
)
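# Typical invocation (a shell sketch, assuming Pyomo is installed and the
# `pyomo` console script is on PATH):
#
#     pyomo download-extensions
#
# Each registered downloader is reported as OK, SKIP, or FAIL, and
# GroupDownloader.call returns a non-zero code if any downloader failed.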
|
PypiClean
|
/gaeframework-2.0.10.tar.gz/gaeframework-2.0.10/google_appengine/lib/django_1_2/django/db/models/loading.py
|
"Utilities for loading models and the modules that contain them."
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.utils.datastructures import SortedDict
from django.utils.importlib import import_module
from django.utils.module_loading import module_has_submodule
import imp
import sys
import os
import threading
__all__ = ('get_apps', 'get_app', 'get_models', 'get_model', 'register_models',
'load_app', 'app_cache_ready')
class AppCache(object):
"""
A cache that stores installed applications and their models. Used to
provide reverse-relations and for app introspection (e.g. admin).
"""
# Use the Borg pattern to share state between all instances. Details at
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66531.
__shared_state = dict(
# Keys of app_store are the model modules for each application.
app_store = SortedDict(),
# Mapping of app_labels to a dictionary of model names to model code.
app_models = SortedDict(),
# Mapping of app_labels to errors raised when trying to import the app.
app_errors = {},
# -- Everything below here is only used when populating the cache --
loaded = False,
handled = {},
postponed = [],
nesting_level = 0,
write_lock = threading.RLock(),
_get_models_cache = {},
)
def __init__(self):
self.__dict__ = self.__shared_state
def _populate(self):
"""
Fill in all the cache information. This method is threadsafe, in the
sense that every caller will see the same state upon return, and if the
cache is already initialised, it does no work.
"""
if self.loaded:
return
self.write_lock.acquire()
try:
if self.loaded:
return
for app_name in settings.INSTALLED_APPS:
if app_name in self.handled:
continue
self.load_app(app_name, True)
if not self.nesting_level:
for app_name in self.postponed:
self.load_app(app_name)
self.loaded = True
finally:
self.write_lock.release()
def load_app(self, app_name, can_postpone=False):
"""
Loads the app with the provided fully qualified name, and returns the
model module.
"""
self.handled[app_name] = None
self.nesting_level += 1
app_module = import_module(app_name)
try:
models = import_module('.models', app_name)
except ImportError:
self.nesting_level -= 1
# If the app doesn't have a models module, we can just ignore the
# ImportError and return no models for it.
if not module_has_submodule(app_module, 'models'):
return None
# But if the app does have a models module, we need to figure out
# whether to suppress or propagate the error. If can_postpone is
# True then it may be that the package is still being imported by
# Python and the models module isn't available yet. So we add the
# app to the postponed list and we'll try it again after all the
# recursion has finished (in populate). If can_postpone is False
# then it's time to raise the ImportError.
else:
if can_postpone:
self.postponed.append(app_name)
return None
else:
raise
self.nesting_level -= 1
if models not in self.app_store:
self.app_store[models] = len(self.app_store)
return models
def app_cache_ready(self):
"""
Returns true if the model cache is fully populated.
Useful for code that wants to cache the results of get_models() for
themselves once it is safe to do so.
"""
return self.loaded
def get_apps(self):
"Returns a list of all installed modules that contain models."
self._populate()
# Ensure the returned list is always in the same order (with new apps
# added at the end). This avoids unstable ordering on the admin app
# list page, for example.
apps = [(v, k) for k, v in self.app_store.items()]
apps.sort()
return [elt[1] for elt in apps]
def get_app(self, app_label, emptyOK=False):
"""
Returns the module containing the models for the given app_label. If
the app has no models in it and 'emptyOK' is True, returns None.
"""
self._populate()
self.write_lock.acquire()
try:
for app_name in settings.INSTALLED_APPS:
if app_label == app_name.split('.')[-1]:
mod = self.load_app(app_name, False)
if mod is None:
if emptyOK:
return None
else:
return mod
raise ImproperlyConfigured("App with label %s could not be found" % app_label)
finally:
self.write_lock.release()
def get_app_errors(self):
"Returns the map of known problems with the INSTALLED_APPS."
self._populate()
return self.app_errors
def get_models(self, app_mod=None, include_auto_created=False, include_deferred=False):
"""
Given a module containing models, returns a list of the models.
Otherwise returns a list of all installed models.
By default, auto-created models (i.e., m2m models without an
explicit intermediate table) are not included. However, if you
specify include_auto_created=True, they will be.
By default, models created to satisfy deferred attribute
queries are *not* included in the list of models. However, if
you specify include_deferred, they will be.
"""
cache_key = (app_mod, include_auto_created, include_deferred)
try:
return self._get_models_cache[cache_key]
except KeyError:
pass
self._populate()
if app_mod:
app_list = [self.app_models.get(app_mod.__name__.split('.')[-2], SortedDict())]
else:
app_list = self.app_models.itervalues()
model_list = []
for app in app_list:
model_list.extend(
model for model in app.values()
if ((not model._deferred or include_deferred)
and (not model._meta.auto_created or include_auto_created))
)
self._get_models_cache[cache_key] = model_list
return model_list
def get_model(self, app_label, model_name, seed_cache=True):
"""
Returns the model matching the given app_label and case-insensitive
model_name.
Returns None if no model is found.
"""
if seed_cache:
self._populate()
return self.app_models.get(app_label, SortedDict()).get(model_name.lower())
def register_models(self, app_label, *models):
"""
Register a set of models as belonging to an app.
"""
for model in models:
# Store as 'name: model' pair in a dictionary
# in the app_models dictionary
model_name = model._meta.object_name.lower()
model_dict = self.app_models.setdefault(app_label, SortedDict())
if model_name in model_dict:
# The same model may be imported via different paths (e.g.
# appname.models and project.appname.models). We use the source
# filename as a means to detect identity.
fname1 = os.path.abspath(sys.modules[model.__module__].__file__)
fname2 = os.path.abspath(sys.modules[model_dict[model_name].__module__].__file__)
# Since the filename extension could be .py the first time and
# .pyc or .pyo the second time, ignore the extension when
# comparing.
if os.path.splitext(fname1)[0] == os.path.splitext(fname2)[0]:
continue
model_dict[model_name] = model
self._get_models_cache.clear()
cache = AppCache()
# These methods were always module level, so are kept that way for backwards
# compatibility.
get_apps = cache.get_apps
get_app = cache.get_app
get_app_errors = cache.get_app_errors
get_models = cache.get_models
get_model = cache.get_model
register_models = cache.register_models
load_app = cache.load_app
app_cache_ready = cache.app_cache_ready
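# Illustrative usage sketch (editor's addition, not part of the original module).
# It shows how the module-level helpers above are typically called; it assumes a
# configured Django project, and the 'blog' app label and 'entry' model name are
# placeholders, not names defined in this file.
if __name__ == "__main__":
    Entry = get_model('blog', 'entry')        # case-insensitive lookup, None if missing
    blog_app = get_app('blog', emptyOK=True)  # the app's models module, or None
    print(Entry)
    print(len(get_models(blog_app)) if blog_app else 0)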
|
PypiClean
|
/forked_ecephys_spike_sorting-0.1.1.tar.gz/forked_ecephys_spike_sorting-0.1.1/ecephys_spike_sorting/scripts/batch_processing_NP0_ks.py
|
import os
import subprocess
import glob
import shutil
from create_input_json import createInputJson
npx_directories = [r'N:\715093703_386129_20180627'
]
probe_type = '3A'
json_directory = r'C:\Users\svc_neuropix\Documents\json_files'
def copy_to_local(source, destination):
robocopy(source, destination)
def copy_to_remote(source, destination):
if not os.path.exists(os.path.dirname(destination)):
os.mkdir(os.path.dirname(destination))
robocopy(source, destination)
def copy_single_file(source, destination):
if not os.path.exists(destination):
print('Copying ' + source + ' to ' + destination)
shutil.copyfile(source, destination)
else:
print('file already exists at ' + destination)
def copy_sorted_files(source, destination):
files_to_copy = [f for f in os.listdir(source) if f[-3:] != 'dat' and f != 'ap_timestamps.npy']
if not os.path.exists(destination):
os.makedirs(destination, exist_ok=True)
for f in files_to_copy:
shutil.copyfile(os.path.join(source, f), os.path.join(destination, f))
def robocopy(source, destination):
print('Copying:')
print(' Source: ' + source)
print(' Destination: ' + destination)
if os.path.isdir(source):
command_string = "robocopy " + source + " "+ destination + r" /e /xc /xn /xo"
else:
command_string = "robocopy " + source + " "+ destination
subprocess.call(command_string.split(' '))
for local_directory in npx_directories:
remote_directory = glob.glob(os.path.join(r'\\10.128.50.151',
'SD4',
os.path.basename(local_directory)))[0]
sorted_directories = glob.glob(os.path.join(remote_directory, '*_sorted'))
sorted_directories.sort()
for sorted_directory in sorted_directories:
new_directories = glob.glob(os.path.join(r'\\10.128.50.77',
'sd5.3',
'RE-SORT',
'*',
os.path.basename(sorted_directory)))
if len(new_directories) > 0:
if os.path.exists(os.path.join(new_directories[0], 'continuous.dat')):
sd = new_directories[0] #os.path.join(new_directories[0], 'continuous','Neuropix-3a-100.0')
else:
sd = sorted_directory #os.path.join(sorted_directory, 'continuous','Neuropix-3a-100.0')
else:
sd = sorted_directory #os.path.join(sorted_directory, 'continuous','Neuropix-3a-100.0')
local_sorting_directory = os.path.join(local_directory, os.path.basename(sd))
os.makedirs(local_sorting_directory, exist_ok=True)
print('Copying data...')
copy_single_file(os.path.join(sd, 'continuous','Neuropix-3a-100.0', 'continuous.dat'),
os.path.join(local_sorting_directory,'continuous.dat'))
copy_single_file(os.path.join(sd, 'probe_info.json'),
os.path.join(local_sorting_directory,'probe_info.json'))
session_id = os.path.basename(local_sorting_directory)
target_directory = os.path.join(r'\\10.128.50.77',
'sd5.3',
'RE-SORT',
session_id[:-7],
session_id,
'continuous',
'Neuropix-3a-100.0')
input_json = os.path.join(json_directory, session_id + '_resort-input.json')
output_json = os.path.join(json_directory, session_id + '_resort-output.json')
info = createInputJson(input_json, kilosort_output_directory=local_sorting_directory,
extracted_data_directory=local_sorting_directory)
modules = [ 'kilosort_helper',
'kilosort_postprocessing',
'noise_templates',
'mean_waveforms',
'quality_metrics']
for module in modules:
command = "python -W ignore -m ecephys_spike_sorting.modules." + module + " --input_json " + input_json \
+ " --output_json " + output_json
subprocess.check_call(command.split(' '))
copy_sorted_files(local_sorting_directory, target_directory)
shutil.rmtree(local_sorting_directory)
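# Minimal sketch (editor's addition) of the per-module invocation pattern used in the
# loop above, pulled out as a standalone helper; the json paths passed in are
# placeholders produced by createInputJson.
def run_single_module(module, input_json, output_json):
    """Run one ecephys_spike_sorting module as a subprocess, exactly as the loop above does."""
    command = "python -W ignore -m ecephys_spike_sorting.modules." + module \
              + " --input_json " + input_json + " --output_json " + output_json
    subprocess.check_call(command.split(' '))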
|
PypiClean
|
/textworld-py-0.0.3.4.tar.gz/textworld-py-0.0.3.4/textworld/envs/zmachine/jericho.py
|
import io
import os
import sys
import warnings
import jericho
import textworld
from textworld.core import GameState
from textworld.core import GameNotRunningError
class JerichoUnsupportedGameWarning(UserWarning):
pass
class JerichoGameState(GameState):
@property
def nb_deaths(self):
""" Number of times the player has died. """
return -1
@property
def feedback(self):
""" Interpreter's response after issuing last command. """
if not hasattr(self, "_feedback"):
            # Extract feedback from the command's output.
self._feedback = self._raw
return self._feedback
@property
def inventory(self):
""" Player's inventory. """
if not hasattr(self, "_inventory"):
self._inventory = ""
if not self.game_ended:
# Issue the "inventory" command and parse its output.
self._inventory, _, _, _ = self._env._jericho.step("inventory")
return self._inventory
@property
def score(self):
""" Current score. """
if not hasattr(self, "_score"):
self._score = 0
return self._score
@property
def max_score(self):
""" Max score for this game. """
if not hasattr(self, "_max_score"):
self._max_score = 0
return self._max_score
@property
def description(self):
""" Description of the current location. """
if not hasattr(self, "_description"):
self._description = ""
if not self.game_ended:
# Issue the "look" command and parse its output.
            self._description, _, _, _ = self._env._jericho.step("look")
return self._description
@property
def has_won(self):
""" Whether the player has won the game or not. """
if not hasattr(self, "_has_won"):
self._has_won = False
return self._has_won
@property
def has_lost(self):
""" Whether the player has lost the game or not. """
if not hasattr(self, "_has_lost"):
self._has_lost = False
return self._has_lost
class JerichoEnvironment(textworld.Environment):
GAME_STATE_CLASS = JerichoGameState
metadata = {'render.modes': ['human', 'ansi', 'text']}
def __init__(self, game_filename):
"""
Parameters
----------
game_filename : str
The game's filename.
"""
self._seed = -1
self._jericho = None
self.game_filename = os.path.abspath(game_filename)
self.game_name, ext = os.path.splitext(os.path.basename(game_filename))
# Check if game is supported by Jericho.
if not ext.startswith(".z"):
raise ValueError("Only .z[1-8] files are supported!")
if not os.path.isfile(self.game_filename):
raise FileNotFoundError(game_filename)
# self.fully_supported = jericho.FrotzEnv.is_fully_supported(self.game_filename)
# if not self.fully_supported:
# msg = ("Game '{}' is not fully supported. Score, move, change"
# " detection are disabled.")
# warnings.warn(msg, JerichoUnsupportedGameWarning)
def __del__(self) -> None:
self.close()
@property
def game_running(self) -> bool:
""" Determines if the game is still running. """
return self._jericho is not None
def seed(self, seed=None):
self._seed = seed
return self._seed
def reset(self):
self.close() # In case, it is running.
self.game_state = self.GAME_STATE_CLASS(self)
# Start the game using Jericho.
self._jericho = jericho.FrotzEnv(self.game_filename, self._seed)
# Grab start info from game.
start_output = self._jericho.reset()
self.game_state.init(start_output)
self.game_state._score = self._jericho.get_score()
self.game_state._max_score = self._jericho.get_max_score()
self.game_state._has_won = self._jericho.victory()
self.game_state._has_lost = self._jericho.game_over()
return self.game_state
def step(self, command):
if not self.game_running:
raise GameNotRunningError()
command = command.strip()
output, _, _, _ = self._jericho.step(command)
self.game_state = self.game_state.update(command, output)
self.game_state._score = self._jericho.get_score()
self.game_state._max_score = self._jericho.get_max_score()
self.game_state._has_won = self._jericho.victory()
self.game_state._has_lost = self._jericho.game_over()
return self.game_state, self.game_state.score, self.game_state.game_ended
def close(self):
if self._jericho is not None:
self._jericho.close()
self._jericho = None
def render(self, mode='human', close=False):
if close:
return
outfile = io.StringIO() if mode in ['ansi', "text"] else sys.stdout
if self.display_command_during_render and self.game_state.command is not None:
command = "> " + self.game_state.command
outfile.write(command + "\n\n")
observation = self.game_state.feedback
outfile.write(observation + "\n")
if mode == "text":
outfile.seek(0)
return outfile.read()
if mode == 'ansi':
return outfile
# By default disable warning related to unsupported games.
warnings.simplefilter("ignore", JerichoUnsupportedGameWarning)
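# Illustrative usage sketch (editor's addition): a minimal interaction loop with the
# environment defined above. The game path is a placeholder; any .z[1-8] story file
# supported by Jericho would do.
if __name__ == "__main__":
    env = JerichoEnvironment("games/zork1.z5")
    state = env.reset()
    print(state.feedback)                      # opening text of the game
    state, score, done = env.step("look")      # step() returns (game_state, score, game_ended)
    print(score, done, state.description)
    env.close()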
|
PypiClean
|
/odoo13_addon_account_payment_return-13.0.1.0.6-py3-none-any.whl/odoo/addons/account_payment_return/models/account_move.py
|
from operator import itemgetter
from odoo import fields, models
class AccountMove(models.Model):
_inherit = "account.move"
returned_payment = fields.Boolean(
string="Payment returned",
help="Invoice has been included on a payment that has been returned later.",
copy=False,
)
def check_payment_return(self):
returned_invoices = (
self.env["account.partial.reconcile"]
.search([("origin_returned_move_ids.move_id", "in", self.ids)])
.mapped("origin_returned_move_ids.move_id")
)
returned_invoices.filtered(lambda x: not x.returned_payment).write(
{"returned_payment": True}
)
(self - returned_invoices).filtered("returned_payment").write(
{"returned_payment": False}
)
def _get_reconciled_info_JSON_values(self):
values = super()._get_reconciled_info_JSON_values()
if not self.returned_payment:
return values
returned_reconciles = self.env["account.partial.reconcile"].search(
[("origin_returned_move_ids.move_id", "=", self.id)]
)
for returned_reconcile in returned_reconciles:
payment = returned_reconcile.credit_move_id
payment_ret = returned_reconcile.debit_move_id
values.append(
{
"name": payment.name,
"journal_name": payment.journal_id.name,
"amount": returned_reconcile.amount,
"currency": self.currency_id.symbol,
"digits": [69, self.currency_id.decimal_places],
"position": self.currency_id.position,
"date": payment.date,
"payment_id": payment.id,
"move_id": payment.move_id.id,
"ref": payment.move_id.name,
}
)
values.append(
{
"name": payment_ret.name,
"journal_name": payment_ret.journal_id.name,
"amount": -returned_reconcile.amount,
"currency": self.currency_id.symbol,
"digits": [69, self.currency_id.decimal_places],
"position": self.currency_id.position,
"date": payment_ret.date,
"payment_id": payment_ret.id,
"move_id": payment_ret.move_id.id,
"ref": "{} ({})".format(
payment_ret.move_id.name, payment_ret.move_id.ref
),
"returned": True,
}
)
return sorted(values, key=itemgetter("date"), reverse=True)
class AccountMoveLine(models.Model):
_inherit = "account.move.line"
partial_reconcile_returned_ids = fields.Many2many(
comodel_name="account.partial.reconcile",
relation="account_partial_reconcile_account_move_line_rel",
column1="move_line_id",
column2="partial_reconcile_id",
copy=False,
)
class AccountPartialReconcile(models.Model):
_inherit = "account.partial.reconcile"
origin_returned_move_ids = fields.Many2many(
comodel_name="account.move.line",
relation="account_partial_reconcile_account_move_line_rel",
column1="partial_reconcile_id",
column2="move_line_id",
)
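# Illustrative sketch (editor's addition): recomputing the returned_payment flag from
# server-side code (e.g. a scheduled action). The empty search domain is a placeholder;
# a real job would restrict it to the invoices of interest.
def _recheck_returned_payments(env):
    moves = env["account.move"].search([])
    moves.check_payment_return()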
|
PypiClean
|
/azure-mgmt-appplatform-8.0.0.zip/azure-mgmt-appplatform-8.0.0/azure/mgmt/appplatform/v2023_01_01_preview/operations/_api_portals_operations.py
|
import sys
from typing import Any, Callable, Dict, IO, Iterable, Optional, TypeVar, Union, cast, overload
import urllib.parse
from azure.core.exceptions import (
ClientAuthenticationError,
HttpResponseError,
ResourceExistsError,
ResourceNotFoundError,
ResourceNotModifiedError,
map_error,
)
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.polling import LROPoller, NoPolling, PollingMethod
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from azure.core.utils import case_insensitive_dict
from azure.mgmt.core.exceptions import ARMErrorFormat
from azure.mgmt.core.polling.arm_polling import ARMPolling
from .. import models as _models
from ..._serialization import Serializer
from .._vendor import _convert_request, _format_url_section
if sys.version_info >= (3, 8):
from typing import Literal # pylint: disable=no-name-in-module, ungrouped-imports
else:
from typing_extensions import Literal # type: ignore # pylint: disable=ungrouped-imports
T = TypeVar("T")
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
def build_get_request(
resource_group_name: str, service_name: str, api_portal_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop(
"template_url",
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}",
) # pylint: disable=line-too-long
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
"serviceName": _SERIALIZER.url("service_name", service_name, "str", pattern=r"^[a-z][a-z0-9-]*[a-z0-9]$"),
"apiPortalName": _SERIALIZER.url("api_portal_name", api_portal_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)
def build_create_or_update_request(
resource_group_name: str, service_name: str, api_portal_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop(
"template_url",
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}",
) # pylint: disable=line-too-long
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
"serviceName": _SERIALIZER.url("service_name", service_name, "str", pattern=r"^[a-z][a-z0-9-]*[a-z0-9]$"),
"apiPortalName": _SERIALIZER.url("api_portal_name", api_portal_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
if content_type is not None:
_headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="PUT", url=_url, params=_params, headers=_headers, **kwargs)
def build_delete_request(
resource_group_name: str, service_name: str, api_portal_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop(
"template_url",
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}",
) # pylint: disable=line-too-long
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
"serviceName": _SERIALIZER.url("service_name", service_name, "str", pattern=r"^[a-z][a-z0-9-]*[a-z0-9]$"),
"apiPortalName": _SERIALIZER.url("api_portal_name", api_portal_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="DELETE", url=_url, params=_params, headers=_headers, **kwargs)
def build_list_request(resource_group_name: str, service_name: str, subscription_id: str, **kwargs: Any) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop(
"template_url",
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals",
) # pylint: disable=line-too-long
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
"serviceName": _SERIALIZER.url("service_name", service_name, "str", pattern=r"^[a-z][a-z0-9-]*[a-z0-9]$"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)
def build_validate_domain_request(
resource_group_name: str, service_name: str, api_portal_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop(
"template_url",
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}/validateDomain",
) # pylint: disable=line-too-long
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
"serviceName": _SERIALIZER.url("service_name", service_name, "str", pattern=r"^[a-z][a-z0-9-]*[a-z0-9]$"),
"apiPortalName": _SERIALIZER.url("api_portal_name", api_portal_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
if content_type is not None:
_headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs)
class ApiPortalsOperations:
"""
.. warning::
**DO NOT** instantiate this class directly.
Instead, you should access the following operations through
:class:`~azure.mgmt.appplatform.v2023_01_01_preview.AppPlatformManagementClient`'s
:attr:`api_portals` attribute.
"""
models = _models
def __init__(self, *args, **kwargs):
input_args = list(args)
self._client = input_args.pop(0) if input_args else kwargs.pop("client")
self._config = input_args.pop(0) if input_args else kwargs.pop("config")
self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer")
self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer")
@distributed_trace
def get(
self, resource_group_name: str, service_name: str, api_portal_name: str, **kwargs: Any
) -> _models.ApiPortalResource:
"""Get the API portal and its properties.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ApiPortalResource or the result of cls(response)
:rtype: ~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource
:raises ~azure.core.exceptions.HttpResponseError:
"""
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
cls: ClsType[_models.ApiPortalResource] = kwargs.pop("cls", None)
request = build_get_request(
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
subscription_id=self._config.subscription_id,
api_version=api_version,
template_url=self.get.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
_stream = False
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=_stream, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize("ApiPortalResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}"
}
def _create_or_update_initial(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
api_portal_resource: Union[_models.ApiPortalResource, IO],
**kwargs: Any
) -> _models.ApiPortalResource:
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
cls: ClsType[_models.ApiPortalResource] = kwargs.pop("cls", None)
content_type = content_type or "application/json"
_json = None
_content = None
if isinstance(api_portal_resource, (IO, bytes)):
_content = api_portal_resource
else:
_json = self._serialize.body(api_portal_resource, "ApiPortalResource")
request = build_create_or_update_request(
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
subscription_id=self._config.subscription_id,
api_version=api_version,
content_type=content_type,
json=_json,
content=_content,
template_url=self._create_or_update_initial.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
_stream = False
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=_stream, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if response.status_code == 200:
deserialized = self._deserialize("ApiPortalResource", pipeline_response)
if response.status_code == 201:
deserialized = self._deserialize("ApiPortalResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {}) # type: ignore
return deserialized # type: ignore
_create_or_update_initial.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}"
}
@overload
def begin_create_or_update(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
api_portal_resource: _models.ApiPortalResource,
*,
content_type: str = "application/json",
**kwargs: Any
) -> LROPoller[_models.ApiPortalResource]:
"""Create the default API portal or update the existing API portal.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:param api_portal_resource: The API portal for the create or update operation. Required.
:type api_portal_resource: ~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource
:keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
Default value is "application/json".
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
operation to not poll, or pass in your own initialized polling object for a personal polling
strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either ApiPortalResource or the result of
cls(response)
:rtype:
~azure.core.polling.LROPoller[~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
@overload
def begin_create_or_update(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
api_portal_resource: IO,
*,
content_type: str = "application/json",
**kwargs: Any
) -> LROPoller[_models.ApiPortalResource]:
"""Create the default API portal or update the existing API portal.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:param api_portal_resource: The API portal for the create or update operation. Required.
:type api_portal_resource: IO
:keyword content_type: Body Parameter content-type. Content type parameter for binary body.
Default value is "application/json".
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
operation to not poll, or pass in your own initialized polling object for a personal polling
strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either ApiPortalResource or the result of
cls(response)
:rtype:
~azure.core.polling.LROPoller[~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
@distributed_trace
def begin_create_or_update(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
api_portal_resource: Union[_models.ApiPortalResource, IO],
**kwargs: Any
) -> LROPoller[_models.ApiPortalResource]:
"""Create the default API portal or update the existing API portal.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
        :param api_portal_resource: The API portal for the create or update operation. Is either an
         ApiPortalResource type or an IO type. Required.
:type api_portal_resource: ~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource
or IO
:keyword content_type: Body Parameter content-type. Known values are: 'application/json'.
Default value is None.
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
operation to not poll, or pass in your own initialized polling object for a personal polling
strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either ApiPortalResource or the result of
cls(response)
:rtype:
~azure.core.polling.LROPoller[~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
cls: ClsType[_models.ApiPortalResource] = kwargs.pop("cls", None)
polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
cont_token: Optional[str] = kwargs.pop("continuation_token", None)
if cont_token is None:
raw_result = self._create_or_update_initial(
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
api_portal_resource=api_portal_resource,
api_version=api_version,
content_type=content_type,
cls=lambda x, y, z: x,
headers=_headers,
params=_params,
**kwargs
)
kwargs.pop("error_map", None)
def get_long_running_output(pipeline_response):
deserialized = self._deserialize("ApiPortalResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
if polling is True:
polling_method: PollingMethod = cast(PollingMethod, ARMPolling(lro_delay, **kwargs))
elif polling is False:
polling_method = cast(PollingMethod, NoPolling())
else:
polling_method = polling
if cont_token:
return LROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output,
)
return LROPoller(self._client, raw_result, get_long_running_output, polling_method) # type: ignore
begin_create_or_update.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}"
}
def _delete_initial( # pylint: disable=inconsistent-return-statements
self, resource_group_name: str, service_name: str, api_portal_name: str, **kwargs: Any
) -> None:
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
cls: ClsType[None] = kwargs.pop("cls", None)
request = build_delete_request(
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
subscription_id=self._config.subscription_id,
api_version=api_version,
template_url=self._delete_initial.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
_stream = False
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=_stream, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
_delete_initial.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}"
}
@distributed_trace
def begin_delete(
self, resource_group_name: str, service_name: str, api_portal_name: str, **kwargs: Any
) -> LROPoller[None]:
"""Delete the default API portal.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
operation to not poll, or pass in your own initialized polling object for a personal polling
strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.LROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
cls: ClsType[None] = kwargs.pop("cls", None)
polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
cont_token: Optional[str] = kwargs.pop("continuation_token", None)
if cont_token is None:
raw_result = self._delete_initial( # type: ignore
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
api_version=api_version,
cls=lambda x, y, z: x,
headers=_headers,
params=_params,
**kwargs
)
kwargs.pop("error_map", None)
def get_long_running_output(pipeline_response): # pylint: disable=inconsistent-return-statements
if cls:
return cls(pipeline_response, None, {})
if polling is True:
polling_method: PollingMethod = cast(PollingMethod, ARMPolling(lro_delay, **kwargs))
elif polling is False:
polling_method = cast(PollingMethod, NoPolling())
else:
polling_method = polling
if cont_token:
return LROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output,
)
return LROPoller(self._client, raw_result, get_long_running_output, polling_method) # type: ignore
begin_delete.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}"
}
@distributed_trace
def list(self, resource_group_name: str, service_name: str, **kwargs: Any) -> Iterable["_models.ApiPortalResource"]:
"""Handles requests to list all resources in a Service.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either ApiPortalResource or the result of cls(response)
:rtype:
~azure.core.paging.ItemPaged[~azure.mgmt.appplatform.v2023_01_01_preview.models.ApiPortalResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
cls: ClsType[_models.ApiPortalResourceCollection] = kwargs.pop("cls", None)
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
def prepare_request(next_link=None):
if not next_link:
request = build_list_request(
resource_group_name=resource_group_name,
service_name=service_name,
subscription_id=self._config.subscription_id,
api_version=api_version,
template_url=self.list.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
else:
# make call to next link with the client's api-version
_parsed_next_link = urllib.parse.urlparse(next_link)
_next_request_params = case_insensitive_dict(
{
key: [urllib.parse.quote(v) for v in value]
for key, value in urllib.parse.parse_qs(_parsed_next_link.query).items()
}
)
_next_request_params["api-version"] = self._config.api_version
request = HttpRequest(
"GET", urllib.parse.urljoin(next_link, _parsed_next_link.path), params=_next_request_params
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
request.method = "GET"
return request
def extract_data(pipeline_response):
deserialized = self._deserialize("ApiPortalResourceCollection", pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem) # type: ignore
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
_stream = False
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=_stream, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return ItemPaged(get_next, extract_data)
list.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals"
}
@overload
def validate_domain(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
validate_payload: _models.CustomDomainValidatePayload,
*,
content_type: str = "application/json",
**kwargs: Any
) -> _models.CustomDomainValidateResult:
"""Check the domains are valid as well as not in use.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:param validate_payload: Custom domain payload to be validated. Required.
:type validate_payload:
~azure.mgmt.appplatform.v2023_01_01_preview.models.CustomDomainValidatePayload
:keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
Default value is "application/json".
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: CustomDomainValidateResult or the result of cls(response)
:rtype: ~azure.mgmt.appplatform.v2023_01_01_preview.models.CustomDomainValidateResult
:raises ~azure.core.exceptions.HttpResponseError:
"""
@overload
def validate_domain(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
validate_payload: IO,
*,
content_type: str = "application/json",
**kwargs: Any
) -> _models.CustomDomainValidateResult:
"""Check the domains are valid as well as not in use.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
:param validate_payload: Custom domain payload to be validated. Required.
:type validate_payload: IO
:keyword content_type: Body Parameter content-type. Content type parameter for binary body.
Default value is "application/json".
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: CustomDomainValidateResult or the result of cls(response)
:rtype: ~azure.mgmt.appplatform.v2023_01_01_preview.models.CustomDomainValidateResult
:raises ~azure.core.exceptions.HttpResponseError:
"""
@distributed_trace
def validate_domain(
self,
resource_group_name: str,
service_name: str,
api_portal_name: str,
validate_payload: Union[_models.CustomDomainValidatePayload, IO],
**kwargs: Any
) -> _models.CustomDomainValidateResult:
"""Check the domains are valid as well as not in use.
:param resource_group_name: The name of the resource group that contains the resource. You can
obtain this value from the Azure Resource Manager API or the portal. Required.
:type resource_group_name: str
:param service_name: The name of the Service resource. Required.
:type service_name: str
:param api_portal_name: The name of API portal. Required.
:type api_portal_name: str
        :param validate_payload: Custom domain payload to be validated. Is either a
         CustomDomainValidatePayload type or an IO type. Required.
:type validate_payload:
~azure.mgmt.appplatform.v2023_01_01_preview.models.CustomDomainValidatePayload or IO
:keyword content_type: Body Parameter content-type. Known values are: 'application/json'.
Default value is None.
:paramtype content_type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: CustomDomainValidateResult or the result of cls(response)
:rtype: ~azure.mgmt.appplatform.v2023_01_01_preview.models.CustomDomainValidateResult
:raises ~azure.core.exceptions.HttpResponseError:
"""
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2023-01-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2023-01-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
cls: ClsType[_models.CustomDomainValidateResult] = kwargs.pop("cls", None)
content_type = content_type or "application/json"
_json = None
_content = None
if isinstance(validate_payload, (IO, bytes)):
_content = validate_payload
else:
_json = self._serialize.body(validate_payload, "CustomDomainValidatePayload")
request = build_validate_domain_request(
resource_group_name=resource_group_name,
service_name=service_name,
api_portal_name=api_portal_name,
subscription_id=self._config.subscription_id,
api_version=api_version,
content_type=content_type,
json=_json,
content=_content,
template_url=self.validate_domain.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
_stream = False
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=_stream, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize("CustomDomainValidateResult", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
validate_domain.metadata = {
"url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AppPlatform/Spring/{serviceName}/apiPortals/{apiPortalName}/validateDomain"
}
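# Illustrative usage sketch (editor's addition): these operations are normally reached
# through the generated management client rather than instantiated directly. The
# resource group, service and portal names below are placeholders.
if __name__ == "__main__":
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.appplatform.v2023_01_01_preview import AppPlatformManagementClient

    client = AppPlatformManagementClient(DefaultAzureCredential(), "<subscription-id>")
    portal = client.api_portals.get("my-rg", "my-spring-service", "default")
    print(portal.name)
    for p in client.api_portals.list("my-rg", "my-spring-service"):
        print(p.id)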
|
PypiClean
|
/quantlwsdk-0.0.24.tar.gz/quantlwsdk-0.0.24/rqalpha/const.py
|
from enum import Enum, EnumMeta
import six
class CustomEnumMeta(EnumMeta):
def __new__(metacls, cls, bases, classdict):
enum_class = super(CustomEnumMeta, metacls).__new__(metacls, cls, bases, classdict)
enum_class._member_reverse_map = {v.value: v for v in enum_class.__members__.values()}
return enum_class
def __contains__(cls, member):
if super(CustomEnumMeta, cls).__contains__(member):
return True
if isinstance(member, str):
return member in cls._member_reverse_map
return False
def __getitem__(self, item):
try:
return super(CustomEnumMeta, self).__getitem__(item)
except KeyError:
return self._member_reverse_map[item]
if six.PY2:
# six.with_metaclass not working
class CustomEnumCore(str, Enum):
__metaclass__ = CustomEnumMeta
else:
exec("class CustomEnumCore(str, Enum, metaclass=CustomEnumMeta): pass")
# noinspection PyUnresolvedReferences
class CustomEnum(CustomEnumCore):
def __repr__(self):
return "%s.%s" % (
self.__class__.__name__, self._name_)
# noinspection PyPep8Naming
class EXECUTION_PHASE(CustomEnum):
GLOBAL = "[全局]"
ON_INIT = "[程序初始化]"
BEFORE_TRADING = "[日内交易前]"
OPEN_AUCTION = "[集合竞价]"
ON_BAR = "[盘中 handle_bar 函数]"
ON_TICK = "[盘中 handle_tick 函数]"
AFTER_TRADING = "[日内交易后]"
FINALIZED = "[程序结束]"
SCHEDULED = "[scheduler函数内]"
# noinspection PyPep8Naming
class RUN_TYPE(CustomEnum):
    # TODO: remove RUN_TYPE; instead, control how a strategy runs by which Mods are enabled
# Back Test
BACKTEST = "BACKTEST"
# Paper Trading
PAPER_TRADING = "PAPER_TRADING"
# Live Trading
LIVE_TRADING = 'LIVE_TRADING'
# noinspection PyPep8Naming
class DEFAULT_ACCOUNT_TYPE(CustomEnum):
"""
    * ACCOUNT_TYPE currently denotes trading accounts. STOCK / FUTURE / OPTION all refer to the corresponding Chinese trading accounts.
    * ACCOUNT_TYPE does not distinguish between exchanges: A-shares trade on both the Shanghai and Shenzhen exchanges, but they map to a single account, so both are unified under STOCK.
    * No other DEFAULT_ACCOUNT_TYPE values are added for now; to add custom accounts and types, see https://github.com/ricequant/rqalpha/issues/160
"""
    # Stocks
    STOCK = "STOCK"
    # Futures
    FUTURE = "FUTURE"
    # Options
    OPTION = "OPTION"
    # Bonds
    BOND = "BOND"
# noinspection PyPep8Naming
class MATCHING_TYPE(CustomEnum):
CURRENT_BAR_CLOSE = "CURRENT_BAR_CLOSE"
NEXT_BAR_OPEN = "NEXT_BAR_OPEN"
NEXT_TICK_LAST = "NEXT_TICK_LAST"
NEXT_TICK_BEST_OWN = "NEXT_TICK_BEST_OWN"
NEXT_TICK_BEST_COUNTERPARTY = "NEXT_TICK_BEST_COUNTERPARTY"
# noinspection PyPep8Naming
class ORDER_TYPE(CustomEnum):
MARKET = "MARKET"
LIMIT = "LIMIT"
# noinspection PyPep8Naming
class ORDER_STATUS(CustomEnum):
PENDING_NEW = "PENDING_NEW"
ACTIVE = "ACTIVE"
FILLED = "FILLED"
REJECTED = "REJECTED"
PENDING_CANCEL = "PENDING_CANCEL"
CANCELLED = "CANCELLED"
# noinspection PyPep8Naming
class SIDE(CustomEnum):
BUY = "BUY" # 买
SELL = "SELL" # 卖
FINANCING = "FINANCING" # 正回购
MARGIN = "MARGIN" # 逆回购
CONVERT_STOCK = "CONVERT_STOCK" # 转股
# noinspection PyPep8Naming
class POSITION_EFFECT(CustomEnum):
OPEN = "OPEN"
CLOSE = "CLOSE"
CLOSE_TODAY = "CLOSE_TODAY"
EXERCISE = "EXERCISE"
MATCH = "MATCH"
# noinspection PyPep8Naming
class POSITION_DIRECTION(CustomEnum):
LONG = "LONG"
SHORT = "SHORT"
# noinspection PyPep8Naming
class EXC_TYPE(CustomEnum):
USER_EXC = "USER_EXC"
SYSTEM_EXC = "SYSTEM_EXC"
NOTSET = "NOTSET"
# noinspection PyPep8Naming
class INSTRUMENT_TYPE(CustomEnum):
CS = "CS"
FUTURE = "Future"
OPTION = "Option"
ETF = "ETF"
LOF = "LOF"
INDX = "INDX"
FENJI_MU = "FenjiMu"
FENJI_A = "FenjiA"
FENJI_B = "FenjiB"
PUBLIC_FUND = 'PublicFund'
BOND = "Bond"
CONVERTIBLE = "Convertible"
SPOT = "Spot"
REPO = "Repo"
# noinspection PyPep8Naming
class PERSIST_MODE(CustomEnum):
ON_CRASH = "ON_CRASH"
REAL_TIME = "REAL_TIME"
ON_NORMAL_EXIT = "ON_NORMAL_EXIT"
# noinspection PyPep8Naming
class MARGIN_TYPE(CustomEnum):
BY_MONEY = "BY_MONEY"
BY_VOLUME = "BY_VOLUME"
# noinspection PyPep8Naming
class COMMISSION_TYPE(CustomEnum):
BY_MONEY = "BY_MONEY"
BY_VOLUME = "BY_VOLUME"
# noinspection PyPep8Naming
class EXIT_CODE(CustomEnum):
EXIT_SUCCESS = "EXIT_SUCCESS"
EXIT_USER_ERROR = "EXIT_USER_ERROR"
EXIT_INTERNAL_ERROR = "EXIT_INTERNAL_ERROR"
# noinspection PyPep8Naming
class HEDGE_TYPE(CustomEnum):
HEDGE = "hedge"
SPECULATION = "speculation"
ARBITRAGE = "arbitrage"
# noinspection PyPep8Naming
class DAYS_CNT(object):
DAYS_A_YEAR = 365
TRADING_DAYS_A_YEAR = 252
# noinspection PyPep8Naming
class MARKET(CustomEnum):
CN = "CN"
HK = "HK"
# noinspection PyPep8Naming
class TRADING_CALENDAR_TYPE(CustomEnum):
EXCHANGE = "EXCHANGE"
INTER_BANK = "INTERBANK"
class CURRENCY(CustomEnum):
CNY = "CNY" # 人民币
USD = "USD" # 美元
EUR = "EUR" # 欧元
HKD = "HKD" # 港币
GBP = "GBP" # 英镑
JPY = "JPY" # 日元
KRW = "KWR" # 韩元
CAD = "CAD" # 加元
AUD = "AUD" # 澳元
CHF = "CHF" # 瑞郎
SGD = "SGD" # 新加坡元
MYR = "MYR" # 马拉西亚币
IDR = "IDR" # 印尼币
NZD = "NZD" # 新西兰币
VND = "VND" # 越南盾
THB = "THB" # 泰铢
PHP = "PHP" # 菲律宾币
UNDERLYING_SYMBOL_PATTERN = r"([a-zA-Z]+)\d+"
# ************************* Additions by Li Wen (lw) *************************
# Indicates where the backtest market data comes from, e.g. JUEJIN means the data comes from JueJin.
class HQDATATYPE(CustomEnum):
JUEJIN = "JUEJIN"
MongoHq="MongoHq"
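# Illustrative sketch (editor's addition) of what CustomEnumMeta adds on top of the
# standard Enum: members can also be looked up by their string values, and they compare
# equal to plain strings because CustomEnum subclasses str.
if __name__ == "__main__":
    assert SIDE["BUY"] is SIDE.BUY                               # lookup by member name
    assert EXECUTION_PHASE["[全局]"] is EXECUTION_PHASE.GLOBAL   # fallback lookup by value
    assert ORDER_STATUS.FILLED == "FILLED"                       # str-compatible members
    print(repr(POSITION_DIRECTION.LONG))                         # POSITION_DIRECTION.LONG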
|
PypiClean
|
/torvy-hass-0.1.tar.gz/torvy-hass-0.1/homeassistant/components/light/wink.py
|
import colorsys
from homeassistant.components.light import (
ATTR_BRIGHTNESS, ATTR_COLOR_TEMP, ATTR_RGB_COLOR, SUPPORT_BRIGHTNESS,
SUPPORT_COLOR_TEMP, SUPPORT_RGB_COLOR, Light)
from homeassistant.components.wink import WinkDevice
from homeassistant.util import color as color_util
from homeassistant.util.color import \
color_temperature_mired_to_kelvin as mired_to_kelvin
DEPENDENCIES = ['wink']
SUPPORT_WINK = SUPPORT_BRIGHTNESS | SUPPORT_COLOR_TEMP | SUPPORT_RGB_COLOR
def setup_platform(hass, config, add_devices, discovery_info=None):
"""Setup the Wink lights."""
import pywink
add_devices(WinkLight(light) for light in pywink.get_bulbs())
class WinkLight(WinkDevice, Light):
"""Representation of a Wink light."""
def __init__(self, wink):
"""Initialize the Wink device."""
WinkDevice.__init__(self, wink)
@property
def is_on(self):
"""Return true if light is on."""
return self.wink.state()
@property
def brightness(self):
"""Return the brightness of the light."""
return int(self.wink.brightness() * 255)
@property
def rgb_color(self):
"""Current bulb color in RGB."""
if not self.wink.supports_hue_saturation():
return None
else:
hue = self.wink.color_hue()
saturation = self.wink.color_saturation()
value = int(self.wink.brightness() * 255)
rgb = colorsys.hsv_to_rgb(hue, saturation, value)
r_value = int(round(rgb[0]))
g_value = int(round(rgb[1]))
b_value = int(round(rgb[2]))
return r_value, g_value, b_value
@property
def xy_color(self):
"""Current bulb color in CIE 1931 (XY) color space."""
if not self.wink.supports_xy_color():
return None
return self.wink.color_xy()
@property
def color_temp(self):
"""Current bulb color in degrees Kelvin."""
if not self.wink.supports_temperature():
return None
return color_util.color_temperature_kelvin_to_mired(
self.wink.color_temperature_kelvin())
@property
def supported_features(self):
"""Flag supported features."""
return SUPPORT_WINK
# pylint: disable=too-few-public-methods
def turn_on(self, **kwargs):
"""Turn the switch on."""
brightness = kwargs.get(ATTR_BRIGHTNESS)
rgb_color = kwargs.get(ATTR_RGB_COLOR)
color_temp_mired = kwargs.get(ATTR_COLOR_TEMP)
state_kwargs = {
}
if rgb_color:
if self.wink.supports_xy_color():
xyb = color_util.color_RGB_to_xy(*rgb_color)
state_kwargs['color_xy'] = xyb[0], xyb[1]
state_kwargs['brightness'] = xyb[2]
elif self.wink.supports_hue_saturation():
hsv = colorsys.rgb_to_hsv(rgb_color[0],
rgb_color[1], rgb_color[2])
state_kwargs['color_hue_saturation'] = hsv[0], hsv[1]
if color_temp_mired:
state_kwargs['color_kelvin'] = mired_to_kelvin(color_temp_mired)
if brightness:
state_kwargs['brightness'] = brightness / 255.0
self.wink.set_state(True, **state_kwargs)
def turn_off(self):
"""Turn the switch off."""
self.wink.set_state(False)
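# Illustrative sketch (editor's addition): how Home Assistant drives this entity once
# pywink has discovered a bulb. The pywink object is supplied by the wink component,
# so this helper is never called directly here.
def _demo(light):
    """light is a WinkLight wrapping a pywink bulb object."""
    light.turn_on(brightness=128, rgb_color=(255, 0, 0))
    print(light.is_on, light.brightness)
    light.turn_off()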
|
PypiClean
|
/taskcc-alipay-sdk-python-3.3.398.tar.gz/taskcc-alipay-sdk-python-3.3.398/alipay/aop/api/request/MybankPaymentTradeNormalpayTransferRequest.py
|
import json
from alipay.aop.api.FileItem import FileItem
from alipay.aop.api.constant.ParamConstants import *
from alipay.aop.api.domain.MybankPaymentTradeNormalpayTransferModel import MybankPaymentTradeNormalpayTransferModel
class MybankPaymentTradeNormalpayTransferRequest(object):
def __init__(self, biz_model=None):
self._biz_model = biz_model
self._biz_content = None
self._version = "1.0"
self._terminal_type = None
self._terminal_info = None
self._prod_code = None
self._notify_url = None
self._return_url = None
self._udf_params = None
self._need_encrypt = False
@property
def biz_model(self):
return self._biz_model
@biz_model.setter
def biz_model(self, value):
self._biz_model = value
@property
def biz_content(self):
return self._biz_content
@biz_content.setter
def biz_content(self, value):
if isinstance(value, MybankPaymentTradeNormalpayTransferModel):
self._biz_content = value
else:
self._biz_content = MybankPaymentTradeNormalpayTransferModel.from_alipay_dict(value)
@property
def version(self):
return self._version
@version.setter
def version(self, value):
self._version = value
@property
def terminal_type(self):
return self._terminal_type
@terminal_type.setter
def terminal_type(self, value):
self._terminal_type = value
@property
def terminal_info(self):
return self._terminal_info
@terminal_info.setter
def terminal_info(self, value):
self._terminal_info = value
@property
def prod_code(self):
return self._prod_code
@prod_code.setter
def prod_code(self, value):
self._prod_code = value
@property
def notify_url(self):
return self._notify_url
@notify_url.setter
def notify_url(self, value):
self._notify_url = value
@property
def return_url(self):
return self._return_url
@return_url.setter
def return_url(self, value):
self._return_url = value
@property
def udf_params(self):
return self._udf_params
@udf_params.setter
def udf_params(self, value):
if not isinstance(value, dict):
return
self._udf_params = value
@property
def need_encrypt(self):
return self._need_encrypt
@need_encrypt.setter
def need_encrypt(self, value):
self._need_encrypt = value
def add_other_text_param(self, key, value):
if not self.udf_params:
self.udf_params = dict()
self.udf_params[key] = value
def get_params(self):
params = dict()
params[P_METHOD] = 'mybank.payment.trade.normalpay.transfer'
params[P_VERSION] = self.version
if self.biz_model:
params[P_BIZ_CONTENT] = json.dumps(obj=self.biz_model.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
if self.biz_content:
if hasattr(self.biz_content, 'to_alipay_dict'):
params['biz_content'] = json.dumps(obj=self.biz_content.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
else:
params['biz_content'] = self.biz_content
if self.terminal_type:
params['terminal_type'] = self.terminal_type
if self.terminal_info:
params['terminal_info'] = self.terminal_info
if self.prod_code:
params['prod_code'] = self.prod_code
if self.notify_url:
params['notify_url'] = self.notify_url
if self.return_url:
params['return_url'] = self.return_url
if self.udf_params:
params.update(self.udf_params)
return params
def get_multipart_params(self):
multipart_params = dict()
return multipart_params
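# A hedged usage sketch, added for illustration and not part of the original
# module: building the request parameter dict from a populated biz model. The
# notify URL below is a placeholder.
if __name__ == "__main__":
    demo_model = MybankPaymentTradeNormalpayTransferModel()
    demo_request = MybankPaymentTradeNormalpayTransferRequest(biz_model=demo_model)
    demo_request.notify_url = "https://example.com/alipay/notify"
    print(demo_request.get_params())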
|
PypiClean
|
/odooku_odoo_base-11.0.7-py35-none-any.whl/odoo/addons/web/static/lib/moment/locale/es-do.js
|
;(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined'
&& typeof require === 'function' ? factory(require('../moment')) :
typeof define === 'function' && define.amd ? define(['../moment'], factory) :
factory(global.moment)
}(this, (function (moment) { 'use strict';
var monthsShortDot = 'ene._feb._mar._abr._may._jun._jul._ago._sep._oct._nov._dic.'.split('_');
var monthsShort = 'ene_feb_mar_abr_may_jun_jul_ago_sep_oct_nov_dic'.split('_');
var esDo = moment.defineLocale('es-do', {
months : 'enero_febrero_marzo_abril_mayo_junio_julio_agosto_septiembre_octubre_noviembre_diciembre'.split('_'),
monthsShort : function (m, format) {
if (/-MMM-/.test(format)) {
return monthsShort[m.month()];
} else {
return monthsShortDot[m.month()];
}
},
monthsParseExact : true,
weekdays : 'domingo_lunes_martes_miércoles_jueves_viernes_sábado'.split('_'),
weekdaysShort : 'dom._lun._mar._mié._jue._vie._sáb.'.split('_'),
weekdaysMin : 'do_lu_ma_mi_ju_vi_sá'.split('_'),
weekdaysParseExact : true,
longDateFormat : {
LT : 'h:mm A',
LTS : 'h:mm:ss A',
L : 'DD/MM/YYYY',
LL : 'D [de] MMMM [de] YYYY',
LLL : 'D [de] MMMM [de] YYYY h:mm A',
LLLL : 'dddd, D [de] MMMM [de] YYYY h:mm A'
},
calendar : {
sameDay : function () {
return '[hoy a la' + ((this.hours() !== 1) ? 's' : '') + '] LT';
},
nextDay : function () {
return '[mañana a la' + ((this.hours() !== 1) ? 's' : '') + '] LT';
},
nextWeek : function () {
return 'dddd [a la' + ((this.hours() !== 1) ? 's' : '') + '] LT';
},
lastDay : function () {
return '[ayer a la' + ((this.hours() !== 1) ? 's' : '') + '] LT';
},
lastWeek : function () {
return '[el] dddd [pasado a la' + ((this.hours() !== 1) ? 's' : '') + '] LT';
},
sameElse : 'L'
},
relativeTime : {
future : 'en %s',
past : 'hace %s',
s : 'unos segundos',
m : 'un minuto',
mm : '%d minutos',
h : 'una hora',
hh : '%d horas',
d : 'un día',
dd : '%d días',
M : 'un mes',
MM : '%d meses',
y : 'un año',
yy : '%d años'
},
ordinalParse : /\d{1,2}º/,
ordinal : '%dº',
week : {
dow : 1, // Monday is the first day of the week.
doy : 4 // The week that contains Jan 4th is the first week of the year.
}
});
return esDo;
})));
|
PypiClean
|
/testfixtures-7.1.0.tar.gz/testfixtures-7.1.0/README.rst
|
Testfixtures
============
|CircleCI|_ |Docs|_
.. |CircleCI| image:: https://circleci.com/gh/simplistix/testfixtures/tree/master.svg?style=shield
.. _CircleCI: https://circleci.com/gh/simplistix/testfixtures/tree/master
.. |Docs| image:: https://readthedocs.org/projects/testfixtures/badge/?version=latest
.. _Docs: http://testfixtures.readthedocs.org/en/latest/
Testfixtures is a collection of helpers and mock objects that are useful when
writing automated tests in Python.
The areas of testing this package can help with are listed below:
**Comparing objects and sequences**
Better feedback when the results aren't as you expected, along with
support for comparing objects that don't normally support comparison
and for comparing deeply nested data structures.
**Mocking out objects and methods**
Easy to use ways of stubbing out objects, classes or individual
methods. Specialised helpers and mock objects are provided, including sub-processes,
dates and times.
**Testing logging**
Helpers for capturing logging and checking what has been logged is what was expected.
**Testing stream output**
Helpers for capturing stream output, such as that from print function calls or even
stuff written directly to file descriptors, and making assertions about it.
**Testing with files and directories**
Support for creating and checking both files and directories in sandboxes
including support for other common path libraries.
**Testing exceptions and warnings**
Easy to use ways of checking that a certain exception is raised,
or a warning is issued, even down to the parameters provided.
**Testing when using django**
Helpers for comparing instances of django models.
**Testing when using Twisted**
Helpers for making assertions about logging when using Twisted.
**Testing when using zope.component**
An easy to use sterile component registry.
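As a small illustration (a minimal sketch rather than an exhaustive tour of the
API), the ``compare`` helper raises an ``AssertionError`` with a readable diff
when two objects differ::

    from testfixtures import compare

    compare({'a': 1, 'b': 2}, {'a': 1, 'b': 3})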
|
PypiClean
|
/asyncpg_trek-0.3.1.tar.gz/asyncpg_trek-0.3.1/asyncpg_trek/asyncpg.py
|
import pathlib
from contextlib import asynccontextmanager
from typing import AsyncContextManager, AsyncIterator, Optional
import asyncpg # type: ignore
from asyncpg_trek._types import Operation
CREATE_TABLE = """\
CREATE SCHEMA IF NOT EXISTS "{schema}";
CREATE TABLE IF NOT EXISTS "{schema}".migrations (
id SERIAL PRIMARY KEY,
from_revision TEXT,
to_revision TEXT,
timestamp TIMESTAMP NOT NULL DEFAULT current_timestamp
);
CREATE INDEX ON "{schema}".migrations(timestamp);
"""
GET_CURRENT_REVISION = """\
SELECT to_revision
FROM "{schema}".migrations
ORDER BY id DESC
LIMIT 1;
"""
RECORD_REVISION = """\
INSERT INTO "{schema}".migrations(from_revision, to_revision)
VALUES ($1, $2)
"""
class AsyncpgExecutor:
def __init__(self, connection: asyncpg.Connection, schema: str) -> None:
self.connection = connection
self.schema = schema
async def create_table_idempotent(self) -> None:
await self.connection.execute(CREATE_TABLE.format(schema=self.schema)) # type: ignore
async def get_current_revision(self) -> Optional[str]:
return await self.connection.fetchval(GET_CURRENT_REVISION.format(schema=self.schema)) # type: ignore
async def record_migration(
self, from_revision: Optional[str], to_revision: Optional[str]
) -> None:
await self.connection.execute(RECORD_REVISION.format(schema=self.schema), from_revision, to_revision) # type: ignore
async def execute_operation(self, operation: Operation[asyncpg.Connection]) -> None:
await operation(self.connection)
class AsyncpgBackend:
def __init__(self, connection: asyncpg.Connection, schema: str = "public") -> None:
self.connection = connection
self.schema = schema
def connect(self) -> AsyncContextManager[AsyncpgExecutor]:
@asynccontextmanager
async def cm() -> AsyncIterator[AsyncpgExecutor]:
async with self.connection.transaction(isolation="serializable"): # type: ignore
yield AsyncpgExecutor(self.connection, self.schema)
return cm()
def prepare_operation_from_sql_file(
self, path: pathlib.Path
) -> Operation[asyncpg.Connection]:
async def operation(connection: asyncpg.Connection) -> None:
with open(path) as f:
query = f.read()
await connection.execute(query) # type: ignore
return operation
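# A hedged usage sketch, added for illustration and not part of the original
# module: the backend wraps an existing asyncpg connection and the executor
# defined above can be driven directly. The DSN and schema below are placeholders.
if __name__ == "__main__":
    import asyncio

    async def _demo() -> None:
        conn = await asyncpg.connect("postgresql://localhost/example")
        backend = AsyncpgBackend(conn, schema="public")
        async with backend.connect() as executor:
            await executor.create_table_idempotent()
            print(await executor.get_current_revision())
        await conn.close()

    asyncio.run(_demo())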
|
PypiClean
|
/deoldify-0.0.1-py3-none-any.whl/fastai/callbacks/csv_logger.py
|
"A `Callback` that saves tracked metrics into a persistent file."
#Contribution from devforfu: https://nbviewer.jupyter.org/gist/devforfu/ea0b3fcfe194dad323c3762492b05cae
from ..torch_core import *
from ..basic_data import DataBunch
from ..callback import *
from ..basic_train import Learner, LearnerCallback
from time import time
from fastprogress.fastprogress import format_time
__all__ = ['CSVLogger']
class CSVLogger(LearnerCallback):
"A `LearnerCallback` that saves history of metrics while training `learn` into CSV `filename`."
def __init__(self, learn:Learner, filename: str = 'history', append: bool = False):
super().__init__(learn)
self.filename,self.path,self.append = filename,self.learn.path/f'{filename}.csv',append
self.add_time = True
def read_logged_file(self):
"Read the content of saved file"
return pd.read_csv(self.path)
def on_train_begin(self, **kwargs: Any) -> None:
"Prepare file with metric names."
self.path.parent.mkdir(parents=True, exist_ok=True)
self.file = self.path.open('a') if self.append else self.path.open('w')
self.file.write(','.join(self.learn.recorder.names[:(None if self.add_time else -1)]) + '\n')
def on_epoch_begin(self, **kwargs:Any)->None:
if self.add_time: self.start_epoch = time()
def on_epoch_end(self, epoch: int, smooth_loss: Tensor, last_metrics: MetricsList, **kwargs: Any) -> bool:
"Add a line with `epoch` number, `smooth_loss` and `last_metrics`."
last_metrics = ifnone(last_metrics, [])
stats = [str(stat) if isinstance(stat, int) else '#na#' if stat is None else f'{stat:.6f}'
for name, stat in zip(self.learn.recorder.names, [epoch, smooth_loss] + last_metrics)]
if self.add_time: stats.append(format_time(time() - self.start_epoch))
str_stats = ','.join(stats)
self.file.write(str_stats + '\n')
def on_train_end(self, **kwargs: Any) -> None:
"Close the file."
self.file.close()
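# A hedged usage sketch, added for illustration and not part of the original
# module. With a fastai v1 `Learner` already built (`learn` below is assumed to
# exist), the callback is attached at fit time and the logged metrics can be
# read back as a DataFrame afterwards:
#
#     logger = CSVLogger(learn, filename='history')
#     learn.fit_one_cycle(3, callbacks=[logger])
#     df = logger.read_logged_file()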
|
PypiClean
|
/china-1.0.1-py3-none-any.whl/yuan/utils/sql/hive.py
|
class Hive(object):
get_feats = staticmethod(lambda feats: [i.split('.')[1] if '.' in i else i for i in feats])
def gen_cols(self, feats_str: list = [], feats_num: list = [], all_feats: list = []):
if all_feats:
if feats_str or feats_num:
if feats_str:
z = zip(range(1024), ['string' if feat in feats_str else 'double' for feat in all_feats], all_feats)
else:
z = zip(range(1024), ['double' if feat in feats_num else 'string' for feat in all_feats], all_feats)
res = '\n'.join([f"{idx}:optional {dtype} {feat};//{feat}" for idx, dtype, feat in z])
else:
z = zip(range(1024), len(feats_str) * ['string'] + len(feats_num) * ['double'], feats_str + feats_num)
res = '\n'.join([f"{idx}:optional {dtype} {feat};//{feat}" for idx, dtype, feat in z])
print(res)
def gen_agg_cols(self, feats: list, funcs: list = None):
"""
:param column: str
:param funcs: list or tuple
'mean', 'max', 'min', ...
mad skew kurt待增加
:return:
"""
exprs = lambda column: {
'sum': f'SUM({column}) AS {column}_sum',
'mean': f'AVG({column}) AS {column}_mean',
'max': f'MAX({column}) AS {column}_max',
'min': f'MIN({column}) AS {column}_min',
'range': f'MAX({column}) - MIN({column}) AS {column}_range',
'std': f'STDDEV_SAMP({column}) AS {column}_std',
'per25': f'PERCENTILE_APPROX({column}, 0.25) AS {column}_per25',
'per50': f'PERCENTILE_APPROX({column}, 0.50) AS {column}_per50',
'per75': f'PERCENTILE_APPROX({column}, 0.75) AS {column}_per75',
'iqr': f'PERCENTILE_APPROX({column}, 0.75) - PERCENTILE_APPROX({column}, 0.25) AS {column}_iqr',
'cv': f'STDDEV_SAMP({column}) / (AVG({column}) + pow(10, -8)) AS {column}_cv',
'zeros_num': f'COUNT(CASE WHEN {column} = 0 THEN 1 ELSE NULL END) AS {column}_zeros_num',
'zeros_perc': f'COUNT(CASE WHEN {column} = 0 THEN 1 ELSE NULL END) / COUNT(1) AS {column}_zeros_perc'
}
if funcs is None:
res = ',\n'.join([',\n'.join(exprs(feat).values()) for feat in feats])
else:
assert isinstance(funcs, tuple) or isinstance(funcs, list)
res = ',\n'.join([',\n'.join([exprs(feat)[func] for func in funcs]) for feat in feats])
print(res)
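# A hedged usage sketch, added for illustration and not part of the original
# module: generating aggregate expressions for two numeric feature columns
# (the column names are placeholders).
if __name__ == "__main__":
    hive = Hive()
    hive.gen_agg_cols(feats=["amount", "age"], funcs=["mean", "max", "per50"])
    # prints, one expression per line, e.g.:
    #   AVG(amount) AS amount_mean,
    #   MAX(amount) AS amount_max,
    #   PERCENTILE_APPROX(amount, 0.50) AS amount_per50,
    #   ...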
|
PypiClean
|
/cdktf_cdktf_provider_aws-17.0.2-py3-none-any.whl/cdktf_cdktf_provider_aws/data_aws_ec2_host/__init__.py
|
import abc
import builtins
import datetime
import enum
import typing
import jsii
import publication
import typing_extensions
from typeguard import check_type
from .._jsii import *
import cdktf as _cdktf_9a9027ec
import constructs as _constructs_77d1e7e8
class DataAwsEc2Host(
_cdktf_9a9027ec.TerraformDataSource,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2Host",
):
'''Represents a {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host aws_ec2_host}.'''
def __init__(
self,
scope: _constructs_77d1e7e8.Construct,
id_: builtins.str,
*,
filter: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union["DataAwsEc2HostFilter", typing.Dict[builtins.str, typing.Any]]]]] = None,
host_id: typing.Optional[builtins.str] = None,
id: typing.Optional[builtins.str] = None,
tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
timeouts: typing.Optional[typing.Union["DataAwsEc2HostTimeouts", typing.Dict[builtins.str, typing.Any]]] = None,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
) -> None:
'''Create a new {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host aws_ec2_host} Data Source.
:param scope: The scope in which to define this construct.
:param id_: The scoped construct ID. Must be unique amongst siblings in the same scope
:param filter: filter block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#filter DataAwsEc2Host#filter}
:param host_id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#host_id DataAwsEc2Host#host_id}.
:param id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#id DataAwsEc2Host#id}. Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
:param tags: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#tags DataAwsEc2Host#tags}.
:param timeouts: timeouts block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#timeouts DataAwsEc2Host#timeouts}
:param connection:
:param count:
:param depends_on:
:param for_each:
:param lifecycle:
:param provider:
:param provisioners:
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__4baf6510698575bd3786a012e93d10ca0bf03014c366100f8bd02012167888aa)
check_type(argname="argument scope", value=scope, expected_type=type_hints["scope"])
check_type(argname="argument id_", value=id_, expected_type=type_hints["id_"])
config = DataAwsEc2HostConfig(
filter=filter,
host_id=host_id,
id=id,
tags=tags,
timeouts=timeouts,
connection=connection,
count=count,
depends_on=depends_on,
for_each=for_each,
lifecycle=lifecycle,
provider=provider,
provisioners=provisioners,
)
jsii.create(self.__class__, self, [scope, id_, config])
@jsii.member(jsii_name="putFilter")
def put_filter(
self,
value: typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union["DataAwsEc2HostFilter", typing.Dict[builtins.str, typing.Any]]]],
) -> None:
'''
:param value: -
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__a3475f5a4533c2a7647487b071b2850036eb863b5401e6870f53c452e80aac9a)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
return typing.cast(None, jsii.invoke(self, "putFilter", [value]))
@jsii.member(jsii_name="putTimeouts")
def put_timeouts(self, *, read: typing.Optional[builtins.str] = None) -> None:
'''
:param read: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#read DataAwsEc2Host#read}.
'''
value = DataAwsEc2HostTimeouts(read=read)
return typing.cast(None, jsii.invoke(self, "putTimeouts", [value]))
@jsii.member(jsii_name="resetFilter")
def reset_filter(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetFilter", []))
@jsii.member(jsii_name="resetHostId")
def reset_host_id(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetHostId", []))
@jsii.member(jsii_name="resetId")
def reset_id(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetId", []))
@jsii.member(jsii_name="resetTags")
def reset_tags(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetTags", []))
@jsii.member(jsii_name="resetTimeouts")
def reset_timeouts(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetTimeouts", []))
@jsii.member(jsii_name="synthesizeAttributes")
def _synthesize_attributes(self) -> typing.Mapping[builtins.str, typing.Any]:
return typing.cast(typing.Mapping[builtins.str, typing.Any], jsii.invoke(self, "synthesizeAttributes", []))
@jsii.python.classproperty
@jsii.member(jsii_name="tfResourceType")
def TF_RESOURCE_TYPE(cls) -> builtins.str:
return typing.cast(builtins.str, jsii.sget(cls, "tfResourceType"))
@builtins.property
@jsii.member(jsii_name="arn")
def arn(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "arn"))
@builtins.property
@jsii.member(jsii_name="assetId")
def asset_id(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "assetId"))
@builtins.property
@jsii.member(jsii_name="autoPlacement")
def auto_placement(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "autoPlacement"))
@builtins.property
@jsii.member(jsii_name="availabilityZone")
def availability_zone(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "availabilityZone"))
@builtins.property
@jsii.member(jsii_name="cores")
def cores(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "cores"))
@builtins.property
@jsii.member(jsii_name="filter")
def filter(self) -> "DataAwsEc2HostFilterList":
return typing.cast("DataAwsEc2HostFilterList", jsii.get(self, "filter"))
@builtins.property
@jsii.member(jsii_name="hostRecovery")
def host_recovery(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "hostRecovery"))
@builtins.property
@jsii.member(jsii_name="instanceFamily")
def instance_family(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "instanceFamily"))
@builtins.property
@jsii.member(jsii_name="instanceType")
def instance_type(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "instanceType"))
@builtins.property
@jsii.member(jsii_name="outpostArn")
def outpost_arn(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "outpostArn"))
@builtins.property
@jsii.member(jsii_name="ownerId")
def owner_id(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "ownerId"))
@builtins.property
@jsii.member(jsii_name="sockets")
def sockets(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "sockets"))
@builtins.property
@jsii.member(jsii_name="timeouts")
def timeouts(self) -> "DataAwsEc2HostTimeoutsOutputReference":
return typing.cast("DataAwsEc2HostTimeoutsOutputReference", jsii.get(self, "timeouts"))
@builtins.property
@jsii.member(jsii_name="totalVcpus")
def total_vcpus(self) -> jsii.Number:
return typing.cast(jsii.Number, jsii.get(self, "totalVcpus"))
@builtins.property
@jsii.member(jsii_name="filterInput")
def filter_input(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List["DataAwsEc2HostFilter"]]]:
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List["DataAwsEc2HostFilter"]]], jsii.get(self, "filterInput"))
@builtins.property
@jsii.member(jsii_name="hostIdInput")
def host_id_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "hostIdInput"))
@builtins.property
@jsii.member(jsii_name="idInput")
def id_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "idInput"))
@builtins.property
@jsii.member(jsii_name="tagsInput")
def tags_input(self) -> typing.Optional[typing.Mapping[builtins.str, builtins.str]]:
return typing.cast(typing.Optional[typing.Mapping[builtins.str, builtins.str]], jsii.get(self, "tagsInput"))
@builtins.property
@jsii.member(jsii_name="timeoutsInput")
def timeouts_input(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, "DataAwsEc2HostTimeouts"]]:
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, "DataAwsEc2HostTimeouts"]], jsii.get(self, "timeoutsInput"))
@builtins.property
@jsii.member(jsii_name="hostId")
def host_id(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "hostId"))
@host_id.setter
def host_id(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__92a3d8e3d051816eee572443f32cf0774da97c1ff04adf5e39eb5abd63da37ec)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "hostId", value)
@builtins.property
@jsii.member(jsii_name="id")
def id(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "id"))
@id.setter
def id(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__cede331a15ad0c321133c18923eaf588bfde4947e95b8672bd55e601d8ae5af7)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "id", value)
@builtins.property
@jsii.member(jsii_name="tags")
def tags(self) -> typing.Mapping[builtins.str, builtins.str]:
return typing.cast(typing.Mapping[builtins.str, builtins.str], jsii.get(self, "tags"))
@tags.setter
def tags(self, value: typing.Mapping[builtins.str, builtins.str]) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__1ae727ed2b6d8c1afbd8f8a4c943e1027faeda70cdbea0951e6c68c7b28bfc5f)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "tags", value)
@jsii.data_type(
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostConfig",
jsii_struct_bases=[_cdktf_9a9027ec.TerraformMetaArguments],
name_mapping={
"connection": "connection",
"count": "count",
"depends_on": "dependsOn",
"for_each": "forEach",
"lifecycle": "lifecycle",
"provider": "provider",
"provisioners": "provisioners",
"filter": "filter",
"host_id": "hostId",
"id": "id",
"tags": "tags",
"timeouts": "timeouts",
},
)
class DataAwsEc2HostConfig(_cdktf_9a9027ec.TerraformMetaArguments):
def __init__(
self,
*,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
filter: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union["DataAwsEc2HostFilter", typing.Dict[builtins.str, typing.Any]]]]] = None,
host_id: typing.Optional[builtins.str] = None,
id: typing.Optional[builtins.str] = None,
tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
timeouts: typing.Optional[typing.Union["DataAwsEc2HostTimeouts", typing.Dict[builtins.str, typing.Any]]] = None,
) -> None:
'''
:param connection:
:param count:
:param depends_on:
:param for_each:
:param lifecycle:
:param provider:
:param provisioners:
:param filter: filter block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#filter DataAwsEc2Host#filter}
:param host_id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#host_id DataAwsEc2Host#host_id}.
:param id: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#id DataAwsEc2Host#id}. Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2. If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
:param tags: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#tags DataAwsEc2Host#tags}.
:param timeouts: timeouts block. Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#timeouts DataAwsEc2Host#timeouts}
'''
if isinstance(lifecycle, dict):
lifecycle = _cdktf_9a9027ec.TerraformResourceLifecycle(**lifecycle)
if isinstance(timeouts, dict):
timeouts = DataAwsEc2HostTimeouts(**timeouts)
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__6cd88589fb9adb3aa0a75ac16b942c11789c55991e43a9487ce7dcfd38660df0)
check_type(argname="argument connection", value=connection, expected_type=type_hints["connection"])
check_type(argname="argument count", value=count, expected_type=type_hints["count"])
check_type(argname="argument depends_on", value=depends_on, expected_type=type_hints["depends_on"])
check_type(argname="argument for_each", value=for_each, expected_type=type_hints["for_each"])
check_type(argname="argument lifecycle", value=lifecycle, expected_type=type_hints["lifecycle"])
check_type(argname="argument provider", value=provider, expected_type=type_hints["provider"])
check_type(argname="argument provisioners", value=provisioners, expected_type=type_hints["provisioners"])
check_type(argname="argument filter", value=filter, expected_type=type_hints["filter"])
check_type(argname="argument host_id", value=host_id, expected_type=type_hints["host_id"])
check_type(argname="argument id", value=id, expected_type=type_hints["id"])
check_type(argname="argument tags", value=tags, expected_type=type_hints["tags"])
check_type(argname="argument timeouts", value=timeouts, expected_type=type_hints["timeouts"])
self._values: typing.Dict[builtins.str, typing.Any] = {}
if connection is not None:
self._values["connection"] = connection
if count is not None:
self._values["count"] = count
if depends_on is not None:
self._values["depends_on"] = depends_on
if for_each is not None:
self._values["for_each"] = for_each
if lifecycle is not None:
self._values["lifecycle"] = lifecycle
if provider is not None:
self._values["provider"] = provider
if provisioners is not None:
self._values["provisioners"] = provisioners
if filter is not None:
self._values["filter"] = filter
if host_id is not None:
self._values["host_id"] = host_id
if id is not None:
self._values["id"] = id
if tags is not None:
self._values["tags"] = tags
if timeouts is not None:
self._values["timeouts"] = timeouts
@builtins.property
def connection(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, _cdktf_9a9027ec.WinrmProvisionerConnection]]:
'''
:stability: experimental
'''
result = self._values.get("connection")
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, _cdktf_9a9027ec.WinrmProvisionerConnection]], result)
@builtins.property
def count(
self,
) -> typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]]:
'''
:stability: experimental
'''
result = self._values.get("count")
return typing.cast(typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]], result)
@builtins.property
def depends_on(
self,
) -> typing.Optional[typing.List[_cdktf_9a9027ec.ITerraformDependable]]:
'''
:stability: experimental
'''
result = self._values.get("depends_on")
return typing.cast(typing.Optional[typing.List[_cdktf_9a9027ec.ITerraformDependable]], result)
@builtins.property
def for_each(self) -> typing.Optional[_cdktf_9a9027ec.ITerraformIterator]:
'''
:stability: experimental
'''
result = self._values.get("for_each")
return typing.cast(typing.Optional[_cdktf_9a9027ec.ITerraformIterator], result)
@builtins.property
def lifecycle(self) -> typing.Optional[_cdktf_9a9027ec.TerraformResourceLifecycle]:
'''
:stability: experimental
'''
result = self._values.get("lifecycle")
return typing.cast(typing.Optional[_cdktf_9a9027ec.TerraformResourceLifecycle], result)
@builtins.property
def provider(self) -> typing.Optional[_cdktf_9a9027ec.TerraformProvider]:
'''
:stability: experimental
'''
result = self._values.get("provider")
return typing.cast(typing.Optional[_cdktf_9a9027ec.TerraformProvider], result)
@builtins.property
def provisioners(
self,
) -> typing.Optional[typing.List[typing.Union[_cdktf_9a9027ec.FileProvisioner, _cdktf_9a9027ec.LocalExecProvisioner, _cdktf_9a9027ec.RemoteExecProvisioner]]]:
'''
:stability: experimental
'''
result = self._values.get("provisioners")
return typing.cast(typing.Optional[typing.List[typing.Union[_cdktf_9a9027ec.FileProvisioner, _cdktf_9a9027ec.LocalExecProvisioner, _cdktf_9a9027ec.RemoteExecProvisioner]]], result)
@builtins.property
def filter(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List["DataAwsEc2HostFilter"]]]:
'''filter block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#filter DataAwsEc2Host#filter}
'''
result = self._values.get("filter")
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List["DataAwsEc2HostFilter"]]], result)
@builtins.property
def host_id(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#host_id DataAwsEc2Host#host_id}.'''
result = self._values.get("host_id")
return typing.cast(typing.Optional[builtins.str], result)
@builtins.property
def id(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#id DataAwsEc2Host#id}.
Please be aware that the id field is automatically added to all resources in Terraform providers using a Terraform provider SDK version below 2.
If you experience problems setting this value it might not be settable. Please take a look at the provider documentation to ensure it should be settable.
'''
result = self._values.get("id")
return typing.cast(typing.Optional[builtins.str], result)
@builtins.property
def tags(self) -> typing.Optional[typing.Mapping[builtins.str, builtins.str]]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#tags DataAwsEc2Host#tags}.'''
result = self._values.get("tags")
return typing.cast(typing.Optional[typing.Mapping[builtins.str, builtins.str]], result)
@builtins.property
def timeouts(self) -> typing.Optional["DataAwsEc2HostTimeouts"]:
'''timeouts block.
Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#timeouts DataAwsEc2Host#timeouts}
'''
result = self._values.get("timeouts")
return typing.cast(typing.Optional["DataAwsEc2HostTimeouts"], result)
def __eq__(self, rhs: typing.Any) -> builtins.bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs: typing.Any) -> builtins.bool:
return not (rhs == self)
def __repr__(self) -> str:
return "DataAwsEc2HostConfig(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
@jsii.data_type(
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostFilter",
jsii_struct_bases=[],
name_mapping={"name": "name", "values": "values"},
)
class DataAwsEc2HostFilter:
def __init__(
self,
*,
name: builtins.str,
values: typing.Sequence[builtins.str],
) -> None:
'''
:param name: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#name DataAwsEc2Host#name}.
:param values: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#values DataAwsEc2Host#values}.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__26d1f49947b01d2cd4fd95e417087f21a085b62a155bb7f547b077ff4b4a0e50)
check_type(argname="argument name", value=name, expected_type=type_hints["name"])
check_type(argname="argument values", value=values, expected_type=type_hints["values"])
self._values: typing.Dict[builtins.str, typing.Any] = {
"name": name,
"values": values,
}
@builtins.property
def name(self) -> builtins.str:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#name DataAwsEc2Host#name}.'''
result = self._values.get("name")
assert result is not None, "Required property 'name' is missing"
return typing.cast(builtins.str, result)
@builtins.property
def values(self) -> typing.List[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#values DataAwsEc2Host#values}.'''
result = self._values.get("values")
assert result is not None, "Required property 'values' is missing"
return typing.cast(typing.List[builtins.str], result)
def __eq__(self, rhs: typing.Any) -> builtins.bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs: typing.Any) -> builtins.bool:
return not (rhs == self)
def __repr__(self) -> str:
return "DataAwsEc2HostFilter(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
class DataAwsEc2HostFilterList(
_cdktf_9a9027ec.ComplexList,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostFilterList",
):
def __init__(
self,
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
wraps_set: builtins.bool,
) -> None:
'''
:param terraform_resource: The parent resource.
:param terraform_attribute: The attribute on the parent resource this class is referencing.
:param wraps_set: whether the list is wrapping a set (will add tolist() to be able to access an item via an index).
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__c7bf47d2bbaac1ce4725d31dca051f2ea8060159aa7e73e5b8dbbe1ea7a0eba5)
check_type(argname="argument terraform_resource", value=terraform_resource, expected_type=type_hints["terraform_resource"])
check_type(argname="argument terraform_attribute", value=terraform_attribute, expected_type=type_hints["terraform_attribute"])
check_type(argname="argument wraps_set", value=wraps_set, expected_type=type_hints["wraps_set"])
jsii.create(self.__class__, self, [terraform_resource, terraform_attribute, wraps_set])
@jsii.member(jsii_name="get")
def get(self, index: jsii.Number) -> "DataAwsEc2HostFilterOutputReference":
'''
:param index: the index of the item to return.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__98624d4abd9f4069eb921e05b396b3d766198c69b1262cda822eef5c92090016)
check_type(argname="argument index", value=index, expected_type=type_hints["index"])
return typing.cast("DataAwsEc2HostFilterOutputReference", jsii.invoke(self, "get", [index]))
@builtins.property
@jsii.member(jsii_name="terraformAttribute")
def _terraform_attribute(self) -> builtins.str:
'''The attribute on the parent resource this class is referencing.'''
return typing.cast(builtins.str, jsii.get(self, "terraformAttribute"))
@_terraform_attribute.setter
def _terraform_attribute(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__a94c915391ed372f2129927b02b948513f5bc3df6167d8576be34938dcb9c5a5)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "terraformAttribute", value)
@builtins.property
@jsii.member(jsii_name="terraformResource")
def _terraform_resource(self) -> _cdktf_9a9027ec.IInterpolatingParent:
'''The parent resource.'''
return typing.cast(_cdktf_9a9027ec.IInterpolatingParent, jsii.get(self, "terraformResource"))
@_terraform_resource.setter
def _terraform_resource(self, value: _cdktf_9a9027ec.IInterpolatingParent) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__ee1027e253d8941d238eeeb8e9ad5d426cdf961691b3f39bf9046372ede825d9)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "terraformResource", value)
@builtins.property
@jsii.member(jsii_name="wrapsSet")
def _wraps_set(self) -> builtins.bool:
'''whether the list is wrapping a set (will add tolist() to be able to access an item via an index).'''
return typing.cast(builtins.bool, jsii.get(self, "wrapsSet"))
@_wraps_set.setter
def _wraps_set(self, value: builtins.bool) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__514a150d4d1aaebbeef78bf315cf60d2ee46d2b98e81b20ec92700f837dfa040)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "wrapsSet", value)
@builtins.property
@jsii.member(jsii_name="internalValue")
def internal_value(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List[DataAwsEc2HostFilter]]]:
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List[DataAwsEc2HostFilter]]], jsii.get(self, "internalValue"))
@internal_value.setter
def internal_value(
self,
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List[DataAwsEc2HostFilter]]],
) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__0079bc96a95c8e178c2e06a944fd96e063031c9b615c74db5ff7181eb9a4d3d1)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "internalValue", value)
class DataAwsEc2HostFilterOutputReference(
_cdktf_9a9027ec.ComplexObject,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostFilterOutputReference",
):
def __init__(
self,
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
complex_object_index: jsii.Number,
complex_object_is_from_set: builtins.bool,
) -> None:
'''
:param terraform_resource: The parent resource.
:param terraform_attribute: The attribute on the parent resource this class is referencing.
:param complex_object_index: the index of this item in the list.
:param complex_object_is_from_set: whether the list is wrapping a set (will add tolist() to be able to access an item via an index).
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__cbfc7808d0969081fbe6a3d5a9e568f70cd9a5ca05a3e3b798b2209807c1e5d1)
check_type(argname="argument terraform_resource", value=terraform_resource, expected_type=type_hints["terraform_resource"])
check_type(argname="argument terraform_attribute", value=terraform_attribute, expected_type=type_hints["terraform_attribute"])
check_type(argname="argument complex_object_index", value=complex_object_index, expected_type=type_hints["complex_object_index"])
check_type(argname="argument complex_object_is_from_set", value=complex_object_is_from_set, expected_type=type_hints["complex_object_is_from_set"])
jsii.create(self.__class__, self, [terraform_resource, terraform_attribute, complex_object_index, complex_object_is_from_set])
@builtins.property
@jsii.member(jsii_name="nameInput")
def name_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "nameInput"))
@builtins.property
@jsii.member(jsii_name="valuesInput")
def values_input(self) -> typing.Optional[typing.List[builtins.str]]:
return typing.cast(typing.Optional[typing.List[builtins.str]], jsii.get(self, "valuesInput"))
@builtins.property
@jsii.member(jsii_name="name")
def name(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "name"))
@name.setter
def name(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__45d2cb6209059c7b6b0819aed37d023240ddbfbf3904c3e0d5242f3fb4e5e43d)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "name", value)
@builtins.property
@jsii.member(jsii_name="values")
def values(self) -> typing.List[builtins.str]:
return typing.cast(typing.List[builtins.str], jsii.get(self, "values"))
@values.setter
def values(self, value: typing.List[builtins.str]) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__3e8561e6c55fd4269197d94429f4ab2662037b9f1f5af645c06d4d8b061ce46b)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "values", value)
@builtins.property
@jsii.member(jsii_name="internalValue")
def internal_value(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostFilter]]:
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostFilter]], jsii.get(self, "internalValue"))
@internal_value.setter
def internal_value(
self,
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostFilter]],
) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__e3c7263821d7365d62798d52d2df21cf441e48b44cfa682be41d10f6b5c0c63b)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "internalValue", value)
@jsii.data_type(
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostTimeouts",
jsii_struct_bases=[],
name_mapping={"read": "read"},
)
class DataAwsEc2HostTimeouts:
def __init__(self, *, read: typing.Optional[builtins.str] = None) -> None:
'''
:param read: Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#read DataAwsEc2Host#read}.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__f0735fe808e9e0b153207e8ec20ee000a6cc4a16e70f3d7053625dbecf0d276d)
check_type(argname="argument read", value=read, expected_type=type_hints["read"])
self._values: typing.Dict[builtins.str, typing.Any] = {}
if read is not None:
self._values["read"] = read
@builtins.property
def read(self) -> typing.Optional[builtins.str]:
'''Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/aws/5.15.0/docs/data-sources/ec2_host#read DataAwsEc2Host#read}.'''
result = self._values.get("read")
return typing.cast(typing.Optional[builtins.str], result)
def __eq__(self, rhs: typing.Any) -> builtins.bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs: typing.Any) -> builtins.bool:
return not (rhs == self)
def __repr__(self) -> str:
return "DataAwsEc2HostTimeouts(%s)" % ", ".join(
k + "=" + repr(v) for k, v in self._values.items()
)
class DataAwsEc2HostTimeoutsOutputReference(
_cdktf_9a9027ec.ComplexObject,
metaclass=jsii.JSIIMeta,
jsii_type="@cdktf/provider-aws.dataAwsEc2Host.DataAwsEc2HostTimeoutsOutputReference",
):
def __init__(
self,
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
) -> None:
'''
:param terraform_resource: The parent resource.
:param terraform_attribute: The attribute on the parent resource this class is referencing.
'''
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__cae597bcf7caa4e10b00d83a97cb25852c25308aeff62788a95306abdc4e89f3)
check_type(argname="argument terraform_resource", value=terraform_resource, expected_type=type_hints["terraform_resource"])
check_type(argname="argument terraform_attribute", value=terraform_attribute, expected_type=type_hints["terraform_attribute"])
jsii.create(self.__class__, self, [terraform_resource, terraform_attribute])
@jsii.member(jsii_name="resetRead")
def reset_read(self) -> None:
return typing.cast(None, jsii.invoke(self, "resetRead", []))
@builtins.property
@jsii.member(jsii_name="readInput")
def read_input(self) -> typing.Optional[builtins.str]:
return typing.cast(typing.Optional[builtins.str], jsii.get(self, "readInput"))
@builtins.property
@jsii.member(jsii_name="read")
def read(self) -> builtins.str:
return typing.cast(builtins.str, jsii.get(self, "read"))
@read.setter
def read(self, value: builtins.str) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__7dd3c6bdbeba962fee074e39156ce7ad5edb175610906e4c001622bc619d66c7)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "read", value)
@builtins.property
@jsii.member(jsii_name="internalValue")
def internal_value(
self,
) -> typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostTimeouts]]:
return typing.cast(typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostTimeouts]], jsii.get(self, "internalValue"))
@internal_value.setter
def internal_value(
self,
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostTimeouts]],
) -> None:
if __debug__:
type_hints = typing.get_type_hints(_typecheckingstub__9b02837c90c2b886f2f00c6560ffd1af1bdbd874b137ff41f3a392c3c55781bd)
check_type(argname="argument value", value=value, expected_type=type_hints["value"])
jsii.set(self, "internalValue", value)
__all__ = [
"DataAwsEc2Host",
"DataAwsEc2HostConfig",
"DataAwsEc2HostFilter",
"DataAwsEc2HostFilterList",
"DataAwsEc2HostFilterOutputReference",
"DataAwsEc2HostTimeouts",
"DataAwsEc2HostTimeoutsOutputReference",
]
publication.publish()
def _typecheckingstub__4baf6510698575bd3786a012e93d10ca0bf03014c366100f8bd02012167888aa(
scope: _constructs_77d1e7e8.Construct,
id_: builtins.str,
*,
filter: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union[DataAwsEc2HostFilter, typing.Dict[builtins.str, typing.Any]]]]] = None,
host_id: typing.Optional[builtins.str] = None,
id: typing.Optional[builtins.str] = None,
tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
timeouts: typing.Optional[typing.Union[DataAwsEc2HostTimeouts, typing.Dict[builtins.str, typing.Any]]] = None,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__a3475f5a4533c2a7647487b071b2850036eb863b5401e6870f53c452e80aac9a(
value: typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union[DataAwsEc2HostFilter, typing.Dict[builtins.str, typing.Any]]]],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__92a3d8e3d051816eee572443f32cf0774da97c1ff04adf5e39eb5abd63da37ec(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__cede331a15ad0c321133c18923eaf588bfde4947e95b8672bd55e601d8ae5af7(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__1ae727ed2b6d8c1afbd8f8a4c943e1027faeda70cdbea0951e6c68c7b28bfc5f(
value: typing.Mapping[builtins.str, builtins.str],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__6cd88589fb9adb3aa0a75ac16b942c11789c55991e43a9487ce7dcfd38660df0(
*,
connection: typing.Optional[typing.Union[typing.Union[_cdktf_9a9027ec.SSHProvisionerConnection, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.WinrmProvisionerConnection, typing.Dict[builtins.str, typing.Any]]]] = None,
count: typing.Optional[typing.Union[jsii.Number, _cdktf_9a9027ec.TerraformCount]] = None,
depends_on: typing.Optional[typing.Sequence[_cdktf_9a9027ec.ITerraformDependable]] = None,
for_each: typing.Optional[_cdktf_9a9027ec.ITerraformIterator] = None,
lifecycle: typing.Optional[typing.Union[_cdktf_9a9027ec.TerraformResourceLifecycle, typing.Dict[builtins.str, typing.Any]]] = None,
provider: typing.Optional[_cdktf_9a9027ec.TerraformProvider] = None,
provisioners: typing.Optional[typing.Sequence[typing.Union[typing.Union[_cdktf_9a9027ec.FileProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.LocalExecProvisioner, typing.Dict[builtins.str, typing.Any]], typing.Union[_cdktf_9a9027ec.RemoteExecProvisioner, typing.Dict[builtins.str, typing.Any]]]]] = None,
filter: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.Sequence[typing.Union[DataAwsEc2HostFilter, typing.Dict[builtins.str, typing.Any]]]]] = None,
host_id: typing.Optional[builtins.str] = None,
id: typing.Optional[builtins.str] = None,
tags: typing.Optional[typing.Mapping[builtins.str, builtins.str]] = None,
timeouts: typing.Optional[typing.Union[DataAwsEc2HostTimeouts, typing.Dict[builtins.str, typing.Any]]] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__26d1f49947b01d2cd4fd95e417087f21a085b62a155bb7f547b077ff4b4a0e50(
*,
name: builtins.str,
values: typing.Sequence[builtins.str],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__c7bf47d2bbaac1ce4725d31dca051f2ea8060159aa7e73e5b8dbbe1ea7a0eba5(
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
wraps_set: builtins.bool,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__98624d4abd9f4069eb921e05b396b3d766198c69b1262cda822eef5c92090016(
index: jsii.Number,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__a94c915391ed372f2129927b02b948513f5bc3df6167d8576be34938dcb9c5a5(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__ee1027e253d8941d238eeeb8e9ad5d426cdf961691b3f39bf9046372ede825d9(
value: _cdktf_9a9027ec.IInterpolatingParent,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__514a150d4d1aaebbeef78bf315cf60d2ee46d2b98e81b20ec92700f837dfa040(
value: builtins.bool,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__0079bc96a95c8e178c2e06a944fd96e063031c9b615c74db5ff7181eb9a4d3d1(
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, typing.List[DataAwsEc2HostFilter]]],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__cbfc7808d0969081fbe6a3d5a9e568f70cd9a5ca05a3e3b798b2209807c1e5d1(
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
complex_object_index: jsii.Number,
complex_object_is_from_set: builtins.bool,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__45d2cb6209059c7b6b0819aed37d023240ddbfbf3904c3e0d5242f3fb4e5e43d(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__3e8561e6c55fd4269197d94429f4ab2662037b9f1f5af645c06d4d8b061ce46b(
value: typing.List[builtins.str],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__e3c7263821d7365d62798d52d2df21cf441e48b44cfa682be41d10f6b5c0c63b(
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostFilter]],
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__f0735fe808e9e0b153207e8ec20ee000a6cc4a16e70f3d7053625dbecf0d276d(
*,
read: typing.Optional[builtins.str] = None,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__cae597bcf7caa4e10b00d83a97cb25852c25308aeff62788a95306abdc4e89f3(
terraform_resource: _cdktf_9a9027ec.IInterpolatingParent,
terraform_attribute: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__7dd3c6bdbeba962fee074e39156ce7ad5edb175610906e4c001622bc619d66c7(
value: builtins.str,
) -> None:
"""Type checking stubs"""
pass
def _typecheckingstub__9b02837c90c2b886f2f00c6560ffd1af1bdbd874b137ff41f3a392c3c55781bd(
value: typing.Optional[typing.Union[_cdktf_9a9027ec.IResolvable, DataAwsEc2HostTimeouts]],
) -> None:
"""Type checking stubs"""
pass
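# A hedged usage sketch, added for illustration and not part of the generated
# bindings: looking up a dedicated host by id inside a cdktf stack. The region
# and host id below are placeholders.
#
#     from constructs import Construct
#     from cdktf import App, TerraformStack, TerraformOutput
#     from cdktf_cdktf_provider_aws.provider import AwsProvider
#     from cdktf_cdktf_provider_aws.data_aws_ec2_host import DataAwsEc2Host
#
#     class HostStack(TerraformStack):
#         def __init__(self, scope: Construct, id: str) -> None:
#             super().__init__(scope, id)
#             AwsProvider(self, "aws", region="us-east-1")
#             host = DataAwsEc2Host(self, "host", host_id="h-0123456789abcdef0")
#             TerraformOutput(self, "cores", value=host.cores)
#
#     app = App()
#     HostStack(app, "ec2-host-lookup")
#     app.synth()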
|
PypiClean
|
/echarts-china-counties-pypkg-0.0.2.tar.gz/echarts-china-counties-pypkg-0.0.2/echarts_china_counties_pypkg/resources/echarts-china-counties-js/8b845197731b3b5affc2159ca0bee819.js
|
(function (root, factory) {if (typeof define === 'function' && define.amd) {define(['exports', 'echarts'], factory);} else if (typeof exports === 'object' && typeof exports.nodeName !== 'string') {factory(exports, require('echarts'));} else {factory({}, root.echarts);}}(this, function (exports, echarts) {var log = function (msg) {if (typeof console !== 'undefined') {console && console.error && console.error(msg);}};if (!echarts) {log('ECharts is not Loaded');return;}if (!echarts.registerMap) {log('ECharts Map is not loaded');return;}echarts.registerMap('禄丰县', {"type":"FeatureCollection","features":[{"type":"Feature","id":"532331","properties":{"name":"禄丰县","cp":[102.079027,25.150111],"childNum":2},"geometry":{"type":"MultiPolygon","coordinates":[["@@BBB@B@FCDABA@CAEEEGDCDADABFH"],["@@DAFAH@F@DAFA@@@@BK@AAA@@AAAC@ABC@A@ABAB@CA@AA@@AAAAC@A@@AC@AAAAC@@@CACAAAAACA@AAAACCCC@@@EKQCM@C@EDEFC@@@C@@ACACGCA@A@A@A@A@C@C@A@ABA@CBC@CBC@A@C@A@E@CBABKH@@IAC@C@EBCFAFCDEAG@EBCD@D@FFDB@@FABCBC@A@ABCBCDCBA@A@C@A@ABEDEDABABEDAFCAIF@GCIAA@A@EBA@CBA@CAK@AAECEAACA@ACECECECCC@E@C@AACAAAGCOIAAII@EDGFG@GECEEAEACG@KBM@E@A@EACE@@ACA@A@MDGDCBCBMBCBQ@E@@@GAIHA@A@A@A@AAAAA@@@@FCHADCJEHCDEJIDIFE@C@KBC@C@A@C@E@E@E@C@ABGBCBE@CAA@A@AA@CBEAEAECGAACECACAEAAA@AAA@A@@A@E@A@EDCBEBEBEBCDCDAACACABAGEC@CHABA@E@ACGOACC@@@A@ADCFABA@AAA@AECIAE@EBE@ACEAAAAA@CAIAKAE@KAI@CBC@GBC@@@@BBD@DABILADC@AAA@C@C@CBAB@BB@DDBBBDDDBBADCBABCBA@AB@@@B@BBBBDFHDFBB@B@FCDABABCDGBE@CBABABC@ABC@E@CBCAKCMCAACBA@I@A@C@ABEBA@AB@FBH@DBD@BAD@D@DDBDDH@FBFDFDFD^JNFPJFFFFHJBDBFBJADEJKJQLGH@DBRFFBBBDA@ABCDAB@J@BA@A@A@A@@@CHALCDCDCBE@GAW@G@ABITCDCBC@CAEAC@ABAD@@AR@B@R@BABABCBC@G@E@EBCFAD@DBDFL@B@DADCH@F@DBBDBBB@BA@EFC@E@E@EDAB@BBPLPBH@@@D@DGPOPIJGFGHGFABAL@B@DADC@C@QIEAA@C@A@A@ABCBC@G@CAA@@@A@ADADBB@BB@B@B@@D@BABABBBDDNFDBDD@DCFCDKHGDQPAB@D@BD@FBDDHH@BAFEFA@@F@B@BAB@@@BAB@D@F@BBBB@B@B@BAAA@A@ABAD@BADEDCB@DBB@@B@@A@A@AB@@@BB@B@D@@F@BA@ABA@@B@B@B@BB@BABAFA@ABA@AAA@A@@@ABBB@BBB@B@B@DAB@DCBAD@B@@BDDB@@@B@BCB@BA@@BB@@@B@@@BA@AB@@BBH@DAB@DCB@B@@@BBAB@D@@@BB@D@@@B@@B@D@@ABCB@@@@HDDDBD@B@BABA@AB@@@BHJ@DABA@ABCBCAC@C@CDC@A@AA@DBNBLAJ@@AHEH@JEHCFCFADCFIFM@Q@OAKBEBCFCBDFD@D@FHFJHN@BB@@@B@B@BA@@B@HFBDABCD@BB@DB@@B@@A@AB@B@PFBB@B@B@@ADA@BBBBDBB@@@BA@C@CBCBABAB@JBD@DBFFJFJFDBBFBBB@DBB@@B@@A@EAC@@@@@BBHF@BA@EBA@EB@@@BBF@H@BABARE@GFAFCBC@E@BDBDADCBE@CAABBDFBLBFBH@DB@BDBF@D@DAJCNDDBH@JB@JDFDABCBID@BBFFLHFBBBFFDDBBLJLHJFDBHDDBHDLHHDPHNHRHLDHBPBB@BA@@FCDANEBA@A@AFC@A@@BABA@ABC@IBC@AACCG@A@EBAB@DABA@A@@AAA@@@@CBCDG@IBC@AB@@@BBFJDBFFB@BADC@@@@B@@@FBHBJFDDBDBBBB@BB@DABADCFCBAD@DBD@F@BA@ADABABAB@DA@@D@B@BABCBA@A@AAICI@C@CB@B@DBDBNHDBDDBFFH@@B@B@@@BA@E@ABAF@B@BC@E@@B@DBB@DD@BB@DBB@@A@AAAA@AC@A@C@CBA@AB@FC@@BA@A@GCE@ADCBABABEAI@CBCDGBA@@HFDBF@BADAD@JFFBH@BB@X@@BBDB@@D@DBJ@H@FDDJ@DATBHBDCNCBCBADBDBBFBDJ@JBLBBBBH@FCHAF@HBRLBO@I@M@IB[BOHIHEPAH@JAFEBI@ADKDIHKHCHAD@TCDDB@D@B@D@DBBAHABAFODA@@FAFCDEDEFEBAAEAC@KH@FBDHDDFBHCFCDAF@BDFDBDDD@@HBJ@J@HDHFFFDBBBJLBL@DABABA@@@E@CAEACCAE@C@@CFAJBBBJAFCDEFABAFJJLPJTBNBPJLFF@B@NDHBJFDFDFBDAD@FAF@PFHB@DADED@D@D@B@LBFDDABB@DHBB@BCBADADABDDFF@@BDBBBBDBFBDBFHD@NBJAH@DAFMDMXUJIBAJGNGBCBC@@FIFMBM@AB@DMDGLOFI@@@@FGFAF@LAD@BADGBCFAHW@@B@@@B@B@F@FBB@BAB@D@B@ECIOEIAC@C@C@ABABC@CBEACACEAEACACAAC@A@@BABADADABABCDCBALCFA@C@CAAEEAE@AAC@C@@AE@CBGFEDEHGDE@G@KAYDGHEH@B@B@FBXJHDD@D@@A@CCM@G@G@ADCFEHG@@BCBC@C@C@CAGACAE@C@AIECGEE@C@C@EDCFCAAA@E@E@EBEDC@CB@@AAAAACACGGGKAAAAEEOe@ACOIUOKBCBAHBFBDEBG@C@I@IAG@GDCFE@E@@AAACAA@CAC@CAA@@CAC@AAAACCAECE@EBC@CBCAC@AAAAE@CBCACCACCEGBCCAC@CDEBE@CAC@A@EBI@MBADCBCBEBC@A@@@CAGGCCCEECECCACCAEE@EAC@QMYKUCM@GFABBFFN@F@B@BCDABMJMDC@A@ECC@C@CBCDCDADAB
BD@DBB@D@D@D@DBD@DADA@C@C@C@CBCACAAA@E@A@CCEECGEACA@AEAEACEE@A@A@CBEDGFEFADCBEAEB@@C@E@CAEAECC@CBADCFCBCFEBA@E@A@CBCBA@ABAFEFAFAD@FBJDFBHBB@BDHBDBD@B@BABA@A@ACCAEBCDE"]],"encodeOffsets":[[[104838,26003]],[[104685,25587]]]}}],"UTF8Encoding":true});}));
|
PypiClean
|
/readme_metrics-3.0.3-py3-none-any.whl/readme_metrics/PayloadBuilder.py
|
from collections.abc import Mapping
import importlib
import json
from json import JSONDecodeError
from logging import Logger
import platform
import time
from typing import List, Optional
from urllib import parse
import uuid
from readme_metrics import ResponseInfoWrapper
class QueryNotFound(Exception):
pass
class BaseURLError(Exception):
pass
class PayloadBuilder:
"""
Internal builder class that handles the construction of the request and response
portions of the payload sent to the ReadMe API.
Attributes:
denylist (List[str]): Cached denylist for current PayloadBuilder instance
allowlist (List[str]): Cached allowlist for current PayloadBuilder instance
development_mode (bool): Cached development mode parameter for current
PayloadBuilder instance
grouping_function ([type]): Cached grouping function for current PayloadBuilder
instance
"""
def __init__(
self,
denylist: List[str],
allowlist: List[str],
development_mode: bool,
grouping_function,
logger: Logger,
):
"""Creates a PayloadBuilder instance with the supplied configuration
Args:
denylist (List[str]): Header/JSON body denylist
allowlist (List[str]): Header/JSON body allowlist
development_mode (bool): Development mode flag passed to ReadMe
grouping_function ([type]): Grouping function to generate an identity
payload
logger (Logger): Logging
"""
self.denylist = denylist
self.allowlist = allowlist
self.development_mode = development_mode
self.grouping_function = grouping_function
self.logger = logger
def __call__(self, request, response: ResponseInfoWrapper) -> Optional[dict]:
"""Builds a HAR payload encompassing the request & response data
Args:
request: Request information to use, either a `werkzeug.Request`
or a `django.core.handlers.wsgi.WSGIRequest`.
response (ResponseInfoWrapper): Response information to use
Returns:
dict: Payload object (ready to be serialized and sent to ReadMe), or None
if the grouping function did not return a usable identity
"""
group = self.grouping_function(request)
group = self._validate_group(group)
if group is None:
return None
payload = {
"_id": str(uuid.uuid4()),
"group": group,
"clientIPAddress": request.environ.get("REMOTE_ADDR"),
"development": self.development_mode,
"request": {
"log": {
"creator": {
"name": "readme-metrics (python)",
"version": importlib.import_module(__package__).__version__,
"comment": self._get_har_creator_comment(),
},
"entries": [
{
"pageref": self._build_base_url(request),
"startedDateTime": request.rm_start_dt,
"time": int(time.time() * 1000) - request.rm_start_ts,
"request": self._build_request_payload(request),
"response": self._build_response_payload(response),
}
],
}
},
}
return payload
def _get_har_creator_comment(self):
# arm64-darwin21.3.0/3.8.9
return (
platform.machine()
+ "-"
+ platform.system().lower()
+ platform.uname().release
+ "/"
+ platform.python_version()
)
def _validate_group(self, group: Optional[dict]):
if group is None:
return None
if not isinstance(group, dict):
self.logger.error(
"Grouping function returned %s but should return a dict; not logging this request",
type(group).__name__,
)
return None
if "api_key" in group:
# The public API for the grouping function now asks users to return
# an "api_key", but our Metrics API expects an "id" field. Quietly
# update it to "id".
group["id"] = group["api_key"]
del group["api_key"]
elif "id" not in group:
self.logger.error(
"Grouping function response missing 'api_key' field; not logging this request"
)
return None
for field in ["email", "label"]:
if field not in group:
self.logger.warning(
"Grouping function response missing %s field; logging request anyway",
field,
)
extra_fields = set(group.keys()).difference(["id", "email", "label"])
if extra_fields:
# pylint: disable=C0301
self.logger.warning(
"Grouping function included unexpected field(s) in response: %s; discarding those fields and logging request anyway",
extra_fields,
)
for field in extra_fields:
del group[field]
return group
def _get_content_type(self, headers):
return headers.get("content-type", "text/plain")
def _build_request_payload(self, request) -> dict:
"""Wraps the request portion of the payload
Args:
request (Request): Request object containing the request information, either
a `werkzeug.Request` or a `django.core.handlers.wsgi.WSGIRequest`.
Returns:
dict: Wrapped request payload
"""
headers = self.redact_dict(request.headers)
queryString = parse.parse_qsl(self._get_query_string(request))
content_type = self._get_content_type(headers)
post_data = False
if getattr(request, "content_length", None) or getattr(
request, "rm_content_length", None
):
if content_type == "application/x-www-form-urlencoded":
# Flask exposes form data on `request.form`, but Django puts it in
# `request.body`, which we then store in `request.rm_body` instead.
if hasattr(request, "form"):
params = [
# This branch is kept separate from the `rm_body` parsing below because
# calling `str(var, 'utf-8')` on data coming out of
# `request.form.items()` raises a "decoding str is not supported"
# exception, as those values are already strings.
{"name": k, "value": v}
for (k, v) in request.form.items()
]
else:
params = [
# `request.form.items` will give us already decoded UTF-8 data but
# `parse_qsl` gives us bytes. If we don't do this we'll be creating an
# invalid JSON payload.
{
"name": str(k, "utf-8"),
"value": str(v, "utf-8"),
}
for (k, v) in parse.parse_qsl(request.rm_body)
]
post_data = {
"mimeType": content_type,
"params": params,
}
else:
post_data = self._process_body(content_type, request.rm_body)
payload = {
"method": request.method,
"url": self._build_base_url(request),
"httpVersion": request.environ["SERVER_PROTOCOL"],
"headers": [{"name": k, "value": v} for (k, v) in headers.items()],
"headersSize": -1,
"queryString": [{"name": k, "value": v} for (k, v) in queryString],
"cookies": [],
"bodySize": -1,
}
if post_data is not False:
payload["postData"] = post_data
return payload
def _build_response_payload(self, response: ResponseInfoWrapper) -> dict:
"""Wraps the response portion of the payload
Args:
response (ResponseInfoWrapper): containing the response information
Returns:
dict: Wrapped response payload
"""
headers = self.redact_dict(response.headers)
content_type = self._get_content_type(response.headers)
body = self._process_body(content_type, response.body).get("text")
headers = [{"name": k, "value": v} for (k, v) in headers.items()]
status_string = str(response.status)
status_code = int(status_string.split(" ")[0])
status_text = status_string.replace(str(status_code) + " ", "")
return {
"status": status_code,
"statusText": status_text or "",
"headers": headers,
"headersSize": -1,
"bodySize": int(response.content_length),
"content": {
"text": body,
"size": int(response.content_length),
"mimeType": response.content_type,
},
}
def _get_query_string(self, request):
"""Helper function to get the query string for a request, translating fields from
either a Werkzeug Request object or a Django WSGIRequest object.
Args:
request (Request): Request object containing the request information, either
a `werkzeug.Request` or a `django.core.handlers.wsgi.WSGIRequest`.
Returns:
str: Query string, for example "field1=value1&field2=value2"
"""
if hasattr(request, "query_string"):
# works for Werkzeug request objects only
result = request.query_string
elif "QUERY_STRING" in request.environ:
# works for Django, and possibly other request objects too
result = request.environ["QUERY_STRING"]
else:
raise QueryNotFound(
"Don't know how to retrieve query string from this type of request"
)
if isinstance(result, bytes):
result = result.decode("utf-8")
return result
def _build_base_url(self, request):
"""Helper function to get the base URL for a request (full URL excluding the
query string), translating fields from either a Werkzeug Request object or a
Django WSGIRequest object.
Args:
request (Request): Request object containing the request information, either
a `werkzeug.Request` or a `django.core.handlers.wsgi.WSGIRequest`.
Returns:
str: Query string, for example "https://api.example.local:8080/v1/userinfo"
"""
query_string = self._get_query_string(request)
if hasattr(request, "base_url"):
# Werkzeug request objects already have exactly what we need
base_url = request.base_url
if len(query_string) > 0:
base_url += f"?{query_string}"
return base_url
scheme, host, path = None, None, None
if "wsgi.url_scheme" in request.environ:
scheme = request.environ["wsgi.url_scheme"]
# pylint: disable=protected-access
if hasattr(request, "_get_raw_host"):
# Django request objects already have a properly formatted host field
host = request._get_raw_host()
elif "HTTP_HOST" in request.environ:
host = request.environ["HTTP_HOST"]
if "PATH_INFO" in request.environ:
path = request.environ["PATH_INFO"]
if scheme and path and host:
if len(query_string) > 0:
return f"{scheme}://{host}{path}?{query_string}"
return f"{scheme}://{host}{path}"
raise BaseURLError("Don't know how to build URL from this type of request")
# always returns a dict with the fields: mimeType, text
def _process_body(self, content_type, body):
if isinstance(body, bytes):
# Non-unicode bytes cannot be directly serialized as a JSON
# payload to send to the ReadMe API, so we need to convert this to a
# unicode string first. But we don't know what encoding it might be
# using, if any (it could also just be raw bytes, like an image).
# We're going to assume that if it's possible to decode at all, then
# it's most likely UTF-8. If we can't decode it, just send an error
# with the JSON payload.
try:
body = body.decode("utf-8")
except UnicodeDecodeError:
return {"mimeType": content_type, "text": "[NOT VALID UTF-8]"}
if not isinstance(body, str):
# We don't know how to process this body. If it's safe to encode as
# JSON, return it unchanged; otherwise return an error.
try:
json.dumps(body)
return {"mimeType": content_type, "text": body}
except TypeError:
return {"mimeType": content_type, "text": "[ERROR: NOT SERIALIZABLE]"}
try:
body_data = json.loads(body)
except JSONDecodeError:
return {"mimeType": content_type, "text": body}
if (self.denylist or self.allowlist) and isinstance(body_data, dict):
redacted_data = self.redact_dict(body_data)
body = json.dumps(redacted_data)
return {"mimeType": content_type, "text": body}
def redact_dict(self, mapping: Mapping):
def _redact_value(val):
if isinstance(val, str):
return f"[REDACTED {len(val)}]"
return "[REDACTED]"
# Short-circuit this function if there's no allowlist or denylist
if not (self.allowlist or self.denylist):
return mapping
result = {}
for (key, value) in mapping.items():
if self.denylist and key in self.denylist:
result[key] = _redact_value(value)
elif self.allowlist and key not in self.allowlist:
result[key] = _redact_value(value)
else:
result[key] = value
return result
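# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the upstream module). The grouping
# function follows the contract described in _validate_group above; the
# concrete key, email, and label values are invented for illustration, and a
# real integration would pass the framework's request/response objects to
# builder(request, response) rather than calling the helpers directly.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import logging

    def example_grouping_function(request):
        # In a real app these values would be read from the incoming request.
        return {
            "api_key": "example-api-key",
            "email": "user@example.com",
            "label": "Example User",
        }

    builder = PayloadBuilder(
        denylist=["password"],
        allowlist=None,
        development_mode=True,
        grouping_function=example_grouping_function,
        logger=logging.getLogger("readme_metrics"),
    )

    # Denylisted string values are replaced with "[REDACTED <length>]".
    print(builder.redact_dict({"user": "alice", "password": "hunter2"}))
    # _validate_group renames "api_key" to "id" before the payload is built.
    print(builder._validate_group(example_grouping_function(None)))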
|
PypiClean
|
/mercurial-6.5.1.tar.gz/mercurial-6.5.1/hgext/convert/gnuarch.py
|
import os
import shutil
import stat
import tempfile
from mercurial.i18n import _
from mercurial import (
encoding,
error,
mail,
pycompat,
util,
)
from mercurial.utils import (
dateutil,
procutil,
)
from . import common
class gnuarch_source(common.converter_source, common.commandline):
class gnuarch_rev:
def __init__(self, rev):
self.rev = rev
self.summary = b''
self.date = None
self.author = b''
self.continuationof = None
self.add_files = []
self.mod_files = []
self.del_files = []
self.ren_files = {}
self.ren_dirs = {}
def __init__(self, ui, repotype, path, revs=None):
super(gnuarch_source, self).__init__(ui, repotype, path, revs=revs)
if not os.path.exists(os.path.join(path, b'{arch}')):
raise common.NoRepo(
_(b"%s does not look like a GNU Arch repository") % path
)
# Could use checktool, but we want to check for baz or tla.
self.execmd = None
if procutil.findexe(b'baz'):
self.execmd = b'baz'
else:
if procutil.findexe(b'tla'):
self.execmd = b'tla'
else:
raise error.Abort(_(b'cannot find a GNU Arch tool'))
common.commandline.__init__(self, ui, self.execmd)
self.path = os.path.realpath(path)
self.tmppath = None
self.treeversion = None
self.lastrev = None
self.changes = {}
self.parents = {}
self.tags = {}
self.encoding = encoding.encoding
self.archives = []
def before(self):
# Get registered archives
self.archives = [
i.rstrip(b'\n') for i in self.runlines0(b'archives', b'-n')
]
if self.execmd == b'tla':
output = self.run0(b'tree-version', self.path)
else:
output = self.run0(b'tree-version', b'-d', self.path)
self.treeversion = output.strip()
# Get name of temporary directory
version = self.treeversion.split(b'/')
self.tmppath = os.path.join(
pycompat.fsencode(tempfile.gettempdir()), b'hg-%s' % version[1]
)
# Generate parents dictionary
self.parents[None] = []
treeversion = self.treeversion
child = None
while treeversion:
self.ui.status(_(b'analyzing tree version %s...\n') % treeversion)
archive = treeversion.split(b'/')[0]
if archive not in self.archives:
self.ui.status(
_(
b'tree analysis stopped because it points to '
b'an unregistered archive %s...\n'
)
% archive
)
break
# Get the complete list of revisions for that tree version
output, status = self.runlines(
b'revisions', b'-r', b'-f', treeversion
)
self.checkexit(
status, b'failed retrieving revisions for %s' % treeversion
)
# No new iteration unless a revision has a continuation-of header
treeversion = None
for l in output:
rev = l.strip()
self.changes[rev] = self.gnuarch_rev(rev)
self.parents[rev] = []
# Read author, date and summary
catlog, status = self.run(b'cat-log', b'-d', self.path, rev)
if status:
catlog = self.run0(b'cat-archive-log', rev)
self._parsecatlog(catlog, rev)
# Populate the parents map
self.parents[child].append(rev)
# Keep track of the current revision as the child of the next
# revision scanned
child = rev
# Check if we have to follow the usual incremental history
# or if we have to 'jump' to a different treeversion given
# by the continuation-of header.
if self.changes[rev].continuationof:
treeversion = b'--'.join(
self.changes[rev].continuationof.split(b'--')[:-1]
)
break
# If we reached a base-0 revision w/o any continuation-of
# header, it means the tree history ends here.
if rev[-6:] == b'base-0':
break
def after(self):
self.ui.debug(b'cleaning up %s\n' % self.tmppath)
shutil.rmtree(self.tmppath, ignore_errors=True)
def getheads(self):
return self.parents[None]
def getfile(self, name, rev):
if rev != self.lastrev:
raise error.Abort(_(b'internal calling inconsistency'))
if not os.path.lexists(os.path.join(self.tmppath, name)):
return None, None
return self._getfile(name, rev)
def getchanges(self, rev, full):
if full:
raise error.Abort(_(b"convert from arch does not support --full"))
self._update(rev)
changes = []
copies = {}
for f in self.changes[rev].add_files:
changes.append((f, rev))
for f in self.changes[rev].mod_files:
changes.append((f, rev))
for f in self.changes[rev].del_files:
changes.append((f, rev))
for src in self.changes[rev].ren_files:
to = self.changes[rev].ren_files[src]
changes.append((src, rev))
changes.append((to, rev))
copies[to] = src
for src in self.changes[rev].ren_dirs:
to = self.changes[rev].ren_dirs[src]
chgs, cps = self._rendirchanges(src, to)
changes += [(f, rev) for f in chgs]
copies.update(cps)
self.lastrev = rev
return sorted(set(changes)), copies, set()
def getcommit(self, rev):
changes = self.changes[rev]
return common.commit(
author=changes.author,
date=changes.date,
desc=changes.summary,
parents=self.parents[rev],
rev=rev,
)
def gettags(self):
return self.tags
def _execute(self, cmd, *args, **kwargs):
cmdline = [self.execmd, cmd]
cmdline += args
cmdline = [procutil.shellquote(arg) for arg in cmdline]
bdevnull = pycompat.bytestr(os.devnull)
cmdline += [b'>', bdevnull, b'2>', bdevnull]
cmdline = b' '.join(cmdline)
self.ui.debug(cmdline, b'\n')
return os.system(pycompat.rapply(procutil.tonativestr, cmdline))
def _update(self, rev):
self.ui.debug(b'applying revision %s...\n' % rev)
changeset, status = self.runlines(b'replay', b'-d', self.tmppath, rev)
if status:
# Something went wrong while merging (baz or tla
# issue?), get latest revision and try from there
shutil.rmtree(self.tmppath, ignore_errors=True)
self._obtainrevision(rev)
else:
old_rev = self.parents[rev][0]
self.ui.debug(
b'computing changeset between %s and %s...\n' % (old_rev, rev)
)
self._parsechangeset(changeset, rev)
def _getfile(self, name, rev):
mode = os.lstat(os.path.join(self.tmppath, name)).st_mode
if stat.S_ISLNK(mode):
data = util.readlink(os.path.join(self.tmppath, name))
if mode:
mode = b'l'
else:
mode = b''
else:
data = util.readfile(os.path.join(self.tmppath, name))
mode = (mode & 0o111) and b'x' or b''
return data, mode
def _exclude(self, name):
exclude = [b'{arch}', b'.arch-ids', b'.arch-inventory']
for exc in exclude:
if name.find(exc) != -1:
return True
return False
def _readcontents(self, path):
files = []
contents = os.listdir(path)
while len(contents) > 0:
c = contents.pop()
p = os.path.join(path, c)
# os.walk could be used, but here we avoid internal GNU
# Arch files and directories, thus saving a lot of time.
if not self._exclude(p):
if os.path.isdir(p):
contents += [os.path.join(c, f) for f in os.listdir(p)]
else:
files.append(c)
return files
def _rendirchanges(self, src, dest):
changes = []
copies = {}
files = self._readcontents(os.path.join(self.tmppath, dest))
for f in files:
s = os.path.join(src, f)
d = os.path.join(dest, f)
changes.append(s)
changes.append(d)
copies[d] = s
return changes, copies
def _obtainrevision(self, rev):
self.ui.debug(b'obtaining revision %s...\n' % rev)
output = self._execute(b'get', rev, self.tmppath)
self.checkexit(output)
self.ui.debug(b'analyzing revision %s...\n' % rev)
files = self._readcontents(self.tmppath)
self.changes[rev].add_files += files
def _stripbasepath(self, path):
if path.startswith(b'./'):
return path[2:]
return path
def _parsecatlog(self, data, rev):
try:
catlog = mail.parsebytes(data)
# Commit date
self.changes[rev].date = dateutil.datestr(
dateutil.strdate(catlog['Standard-date'], b'%Y-%m-%d %H:%M:%S')
)
# Commit author
self.changes[rev].author = self.recode(catlog['Creator'])
# Commit description
self.changes[rev].summary = b'\n\n'.join(
(
self.recode(catlog['Summary']),
self.recode(catlog.get_payload()),
)
)
self.changes[rev].summary = self.recode(self.changes[rev].summary)
# Commit revision origin when dealing with a branch or tag
if 'Continuation-of' in catlog:
self.changes[rev].continuationof = self.recode(
catlog['Continuation-of']
)
except Exception:
raise error.Abort(_(b'could not parse cat-log of %s') % rev)
def _parsechangeset(self, data, rev):
for l in data:
l = l.strip()
# Added file (ignore added directory)
if l.startswith(b'A') and not l.startswith(b'A/'):
file = self._stripbasepath(l[1:].strip())
if not self._exclude(file):
self.changes[rev].add_files.append(file)
# Deleted file (ignore deleted directory)
elif l.startswith(b'D') and not l.startswith(b'D/'):
file = self._stripbasepath(l[1:].strip())
if not self._exclude(file):
self.changes[rev].del_files.append(file)
# Modified binary file
elif l.startswith(b'Mb'):
file = self._stripbasepath(l[2:].strip())
if not self._exclude(file):
self.changes[rev].mod_files.append(file)
# Modified link
elif l.startswith(b'M->'):
file = self._stripbasepath(l[3:].strip())
if not self._exclude(file):
self.changes[rev].mod_files.append(file)
# Modified file
elif l.startswith(b'M'):
file = self._stripbasepath(l[1:].strip())
if not self._exclude(file):
self.changes[rev].mod_files.append(file)
# Renamed file (or link)
elif l.startswith(b'=>'):
files = l[2:].strip().split(b' ')
if len(files) == 1:
files = l[2:].strip().split(b'\t')
src = self._stripbasepath(files[0])
dst = self._stripbasepath(files[1])
if not self._exclude(src) and not self._exclude(dst):
self.changes[rev].ren_files[src] = dst
# Conversion from file to link or from link to file (modified)
elif l.startswith(b'ch'):
file = self._stripbasepath(l[2:].strip())
if not self._exclude(file):
self.changes[rev].mod_files.append(file)
# Renamed directory
elif l.startswith(b'/>'):
dirs = l[2:].strip().split(b' ')
if len(dirs) == 1:
dirs = l[2:].strip().split(b'\t')
src = self._stripbasepath(dirs[0])
dst = self._stripbasepath(dirs[1])
if not self._exclude(src) and not self._exclude(dst):
self.changes[rev].ren_dirs[src] = dst
|
PypiClean
|
/headergen-1.1.1-py3-none-any.whl/callsites-jupyternb-micro-benchmark/utils.py
|
import jupytext
import re
from pathlib import Path
def create_input_py(filename):
py_ntbk_path = None
if filename.endswith(".ipynb"):
_file = Path(filename)
_filename = _file.name.split(".ipynb")[0]
ntbk = jupytext.read(_file)
# py_ntbk = jupytext.writes(ntbk, fmt='py:percent')
py_ntbk_path = "{}/{}.py".format(Path(_file).parent, _filename)
# write to python file for analysis
jupytext.write(ntbk, py_ntbk_path, fmt='py:percent')
filename = py_ntbk_path
return filename
# Find all blocks and their line numbers in a notebook script
def find_block_numbers(filename):
if filename.endswith(".ipynb"):
_file = Path(filename)
_filename = _file.name.split(".ipynb")[0]
ntbk = jupytext.read(_file)
py_ntbk = jupytext.writes(ntbk, fmt='py:percent')
py_source_split = py_ntbk.split("\n")
else:
py_source_split = filename.split("\n")
_start, _end = None, None
lineno = 1
block = 1
mapping = {}
_current_md = False
for _line in py_source_split:
if _line.startswith("# %%"):
if _start is None:
_start = lineno
if _line.startswith("# %% [markdown]"):
_current_md = True
else:
_current_md = False
else:
_end = lineno
if _end == (_start+1):
_start = lineno
continue
if not _current_md:
mapping[block] = {
"start": _start,
"end": _end - 1
}
block += 1
_start = _end
if _line.startswith("# %% [markdown]"):
_current_md = True
else:
_current_md = False
lineno += 1
if not _current_md:
mapping[block] = {
"start": _start,
"end": lineno - 1
}
return mapping
def get_block_of_lineno(lineno, block_mapping):
for map_key, map_value in block_mapping.items():
if map_value["start"] <= lineno <= map_value["end"]:
return map_key
return None
def get_cell_numbers(call_sites, filename, module_name):
block_mapping = find_block_numbers(filename)
cell_call_sites = {}
def cellid_repl(matchobj):
_cell_id = get_block_of_lineno(int(matchobj.group(0)), block_mapping)
return str(_cell_id)
for _cs_line, _cs_calls in call_sites.items():
# (:[\s\S]*?\.)|:[\s\S]*?$
# :(.*)\.|:(.*)$
_cell_id = get_block_of_lineno(int(_cs_line), block_mapping)
if _cell_id not in cell_call_sites:
cell_call_sites[_cell_id] = set()
# only keep cellid of calls found in the same notebook
cell_calls = [re.sub(r"(?s):.*?(?=\.)|:.*?(?=$)", "", _call) if not _call.startswith(module_name)
else re.sub(r"(?s)(?<=:).*?(?=\.)|(?<=:).*?(?=$)", cellid_repl, _call) for _call in _cs_calls]
cell_call_sites[_cell_id] = cell_call_sites[_cell_id].union(cell_calls)
return cell_call_sites
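# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the upstream module). It runs
# find_block_numbers / get_block_of_lineno on an inline "py:percent" style
# source string; the cell contents below are invented for illustration.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    example_source = "\n".join([
        "# %%",
        "import math",
        "print(math.pi)",
        "# %% [markdown]",
        "# A prose cell (markdown cells are skipped in the mapping)",
        "# %%",
        "result = 2 + 2",
    ])
    # Maps code-cell numbers to {"start": ..., "end": ...} line ranges.
    blocks = find_block_numbers(example_source)
    print(blocks)
    # Look up which cell a given line number belongs to.
    print(get_block_of_lineno(2, blocks))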
|
PypiClean
|
/huoyong_paoxue_siwei-2022.10.11.0-py3-none-any.whl/HuoyongPaoxueSiwei/js/reader.js
|
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? factory(exports) :
typeof define === 'function' && define.amd ? define(['exports'], factory) :
(factory((global.RSVP = global.RSVP || {})));
}(this, (function (exports) { 'use strict';
function indexOf(callbacks, callback) {
for (var i = 0, l = callbacks.length; i < l; i++) {
if (callbacks[i] === callback) {
return i;
}
}
return -1;
}
function callbacksFor(object) {
var callbacks = object._promiseCallbacks;
if (!callbacks) {
callbacks = object._promiseCallbacks = {};
}
return callbacks;
}
/**
@class RSVP.EventTarget
*/
var EventTarget = {
/**
`RSVP.EventTarget.mixin` extends an object with EventTarget methods. For
Example:
```javascript
let object = {};
RSVP.EventTarget.mixin(object);
object.on('finished', function(event) {
// handle event
});
object.trigger('finished', { detail: value });
```
`EventTarget.mixin` also works with prototypes:
```javascript
let Person = function() {};
RSVP.EventTarget.mixin(Person.prototype);
let yehuda = new Person();
let tom = new Person();
yehuda.on('poke', function(event) {
console.log('Yehuda says OW');
});
tom.on('poke', function(event) {
console.log('Tom says OW');
});
yehuda.trigger('poke');
tom.trigger('poke');
```
@method mixin
@for RSVP.EventTarget
@private
@param {Object} object object to extend with EventTarget methods
*/
mixin: function (object) {
object['on'] = this['on'];
object['off'] = this['off'];
object['trigger'] = this['trigger'];
object._promiseCallbacks = undefined;
return object;
},
/**
Registers a callback to be executed when `eventName` is triggered
```javascript
object.on('event', function(eventInfo){
// handle the event
});
object.trigger('event');
```
@method on
@for RSVP.EventTarget
@private
@param {String} eventName name of the event to listen for
@param {Function} callback function to be called when the event is triggered.
*/
on: function (eventName, callback) {
if (typeof callback !== 'function') {
throw new TypeError('Callback must be a function');
}
var allCallbacks = callbacksFor(this),
callbacks = void 0;
callbacks = allCallbacks[eventName];
if (!callbacks) {
callbacks = allCallbacks[eventName] = [];
}
if (indexOf(callbacks, callback) === -1) {
callbacks.push(callback);
}
},
/**
You can use `off` to stop firing a particular callback for an event:
```javascript
function doStuff() { /* do stuff! */ }
object.on('stuff', doStuff);
object.trigger('stuff'); // doStuff will be called
// Unregister ONLY the doStuff callback
object.off('stuff', doStuff);
object.trigger('stuff'); // doStuff will NOT be called
```
If you don't pass a `callback` argument to `off`, ALL callbacks for the
event will not be executed when the event fires. For example:
```javascript
let callback1 = function(){};
let callback2 = function(){};
object.on('stuff', callback1);
object.on('stuff', callback2);
object.trigger('stuff'); // callback1 and callback2 will be executed.
object.off('stuff');
object.trigger('stuff'); // callback1 and callback2 will not be executed!
```
@method off
@for RSVP.EventTarget
@private
@param {String} eventName event to stop listening to
@param {Function} callback optional argument. If given, only the function
given will be removed from the event's callback queue. If no `callback`
argument is given, all callbacks will be removed from the event's callback
queue.
*/
off: function (eventName, callback) {
var allCallbacks = callbacksFor(this),
callbacks = void 0,
index = void 0;
if (!callback) {
allCallbacks[eventName] = [];
return;
}
callbacks = allCallbacks[eventName];
index = indexOf(callbacks, callback);
if (index !== -1) {
callbacks.splice(index, 1);
}
},
/**
Use `trigger` to fire custom events. For example:
```javascript
object.on('foo', function(){
console.log('foo event happened!');
});
object.trigger('foo');
// 'foo event happened!' logged to the console
```
You can also pass a value as a second argument to `trigger` that will be
passed as an argument to all event listeners for the event:
```javascript
object.on('foo', function(value){
console.log(value.name);
});
object.trigger('foo', { name: 'bar' });
// 'bar' logged to the console
```
@method trigger
@for RSVP.EventTarget
@private
@param {String} eventName name of the event to be triggered
@param {*} options optional value to be passed to any event handlers for
the given `eventName`
*/
trigger: function (eventName, options, label) {
var allCallbacks = callbacksFor(this),
callbacks = void 0,
callback = void 0;
if (callbacks = allCallbacks[eventName]) {
// Don't cache the callbacks.length since it may grow
for (var i = 0; i < callbacks.length; i++) {
callback = callbacks[i];
callback(options, label);
}
}
}
};
var config = {
instrument: false
};
EventTarget['mixin'](config);
function configure(name, value) {
if (arguments.length === 2) {
config[name] = value;
} else {
return config[name];
}
}
function objectOrFunction(x) {
var type = typeof x;
return x !== null && (type === 'object' || type === 'function');
}
function isFunction(x) {
return typeof x === 'function';
}
function isObject(x) {
return x !== null && typeof x === 'object';
}
function isMaybeThenable(x) {
return x !== null && typeof x === 'object';
}
var _isArray = void 0;
if (Array.isArray) {
_isArray = Array.isArray;
} else {
_isArray = function (x) {
return Object.prototype.toString.call(x) === '[object Array]';
};
}
var isArray = _isArray;
// Date.now is not available in browsers < IE9
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now#Compatibility
var now = Date.now || function () {
return new Date().getTime();
};
var queue = [];
function scheduleFlush() {
setTimeout(function () {
for (var i = 0; i < queue.length; i++) {
var entry = queue[i];
var payload = entry.payload;
payload.guid = payload.key + payload.id;
payload.childGuid = payload.key + payload.childId;
if (payload.error) {
payload.stack = payload.error.stack;
}
config['trigger'](entry.name, entry.payload);
}
queue.length = 0;
}, 50);
}
function instrument(eventName, promise, child) {
if (1 === queue.push({
name: eventName,
payload: {
key: promise._guidKey,
id: promise._id,
eventName: eventName,
detail: promise._result,
childId: child && child._id,
label: promise._label,
timeStamp: now(),
error: config["instrument-with-stack"] ? new Error(promise._label) : null
} })) {
scheduleFlush();
}
}
/**
`RSVP.Promise.resolve` returns a promise that will become resolved with the
passed `value`. It is shorthand for the following:
```javascript
let promise = new RSVP.Promise(function(resolve, reject){
resolve(1);
});
promise.then(function(value){
// value === 1
});
```
Instead of writing the above, your code now simply becomes the following:
```javascript
let promise = RSVP.Promise.resolve(1);
promise.then(function(value){
// value === 1
});
```
@method resolve
@static
@param {*} object value that the returned promise will be resolved with
@param {String} label optional string for identifying the returned promise.
Useful for tooling.
@return {Promise} a promise that will become fulfilled with the given
`value`
*/
function resolve$1(object, label) {
/*jshint validthis:true */
var Constructor = this;
if (object && typeof object === 'object' && object.constructor === Constructor) {
return object;
}
var promise = new Constructor(noop, label);
resolve(promise, object);
return promise;
}
function withOwnPromise() {
return new TypeError('A promises callback cannot return that same promise.');
}
function noop() {}
var PENDING = void 0;
var FULFILLED = 1;
var REJECTED = 2;
var GET_THEN_ERROR = new ErrorObject();
function getThen(promise) {
try {
return promise.then;
} catch (error) {
GET_THEN_ERROR.error = error;
return GET_THEN_ERROR;
}
}
function tryThen(then$$1, value, fulfillmentHandler, rejectionHandler) {
try {
then$$1.call(value, fulfillmentHandler, rejectionHandler);
} catch (e) {
return e;
}
}
function handleForeignThenable(promise, thenable, then$$1) {
config.async(function (promise) {
var sealed = false;
var error = tryThen(then$$1, thenable, function (value) {
if (sealed) {
return;
}
sealed = true;
if (thenable !== value) {
resolve(promise, value, undefined);
} else {
fulfill(promise, value);
}
}, function (reason) {
if (sealed) {
return;
}
sealed = true;
reject(promise, reason);
}, 'Settle: ' + (promise._label || ' unknown promise'));
if (!sealed && error) {
sealed = true;
reject(promise, error);
}
}, promise);
}
function handleOwnThenable(promise, thenable) {
if (thenable._state === FULFILLED) {
fulfill(promise, thenable._result);
} else if (thenable._state === REJECTED) {
thenable._onError = null;
reject(promise, thenable._result);
} else {
subscribe(thenable, undefined, function (value) {
if (thenable !== value) {
resolve(promise, value, undefined);
} else {
fulfill(promise, value);
}
}, function (reason) {
return reject(promise, reason);
});
}
}
function handleMaybeThenable(promise, maybeThenable, then$$1) {
var isOwnThenable = maybeThenable.constructor === promise.constructor && then$$1 === then && promise.constructor.resolve === resolve$1;
if (isOwnThenable) {
handleOwnThenable(promise, maybeThenable);
} else if (then$$1 === GET_THEN_ERROR) {
reject(promise, GET_THEN_ERROR.error);
GET_THEN_ERROR.error = null;
} else if (isFunction(then$$1)) {
handleForeignThenable(promise, maybeThenable, then$$1);
} else {
fulfill(promise, maybeThenable);
}
}
function resolve(promise, value) {
if (promise === value) {
fulfill(promise, value);
} else if (objectOrFunction(value)) {
handleMaybeThenable(promise, value, getThen(value));
} else {
fulfill(promise, value);
}
}
function publishRejection(promise) {
if (promise._onError) {
promise._onError(promise._result);
}
publish(promise);
}
function fulfill(promise, value) {
if (promise._state !== PENDING) {
return;
}
promise._result = value;
promise._state = FULFILLED;
if (promise._subscribers.length === 0) {
if (config.instrument) {
instrument('fulfilled', promise);
}
} else {
config.async(publish, promise);
}
}
function reject(promise, reason) {
if (promise._state !== PENDING) {
return;
}
promise._state = REJECTED;
promise._result = reason;
config.async(publishRejection, promise);
}
function subscribe(parent, child, onFulfillment, onRejection) {
var subscribers = parent._subscribers;
var length = subscribers.length;
parent._onError = null;
subscribers[length] = child;
subscribers[length + FULFILLED] = onFulfillment;
subscribers[length + REJECTED] = onRejection;
if (length === 0 && parent._state) {
config.async(publish, parent);
}
}
function publish(promise) {
var subscribers = promise._subscribers;
var settled = promise._state;
if (config.instrument) {
instrument(settled === FULFILLED ? 'fulfilled' : 'rejected', promise);
}
if (subscribers.length === 0) {
return;
}
var child = void 0,
callback = void 0,
result = promise._result;
for (var i = 0; i < subscribers.length; i += 3) {
child = subscribers[i];
callback = subscribers[i + settled];
if (child) {
invokeCallback(settled, child, callback, result);
} else {
callback(result);
}
}
promise._subscribers.length = 0;
}
function ErrorObject() {
this.error = null;
}
var TRY_CATCH_ERROR = new ErrorObject();
function tryCatch(callback, result) {
try {
return callback(result);
} catch (e) {
TRY_CATCH_ERROR.error = e;
return TRY_CATCH_ERROR;
}
}
function invokeCallback(state, promise, callback, result) {
var hasCallback = isFunction(callback);
var value = void 0,
error = void 0;
if (hasCallback) {
value = tryCatch(callback, result);
if (value === TRY_CATCH_ERROR) {
error = value.error;
value.error = null; // release
} else if (value === promise) {
reject(promise, withOwnPromise());
return;
}
} else {
value = result;
}
if (promise._state !== PENDING) {
// noop
} else if (hasCallback && error === undefined) {
resolve(promise, value);
} else if (error !== undefined) {
reject(promise, error);
} else if (state === FULFILLED) {
fulfill(promise, value);
} else if (state === REJECTED) {
reject(promise, value);
}
}
function initializePromise(promise, resolver) {
var resolved = false;
try {
resolver(function (value) {
if (resolved) {
return;
}
resolved = true;
resolve(promise, value);
}, function (reason) {
if (resolved) {
return;
}
resolved = true;
reject(promise, reason);
});
} catch (e) {
reject(promise, e);
}
}
function then(onFulfillment, onRejection, label) {
var parent = this;
var state = parent._state;
if (state === FULFILLED && !onFulfillment || state === REJECTED && !onRejection) {
config.instrument && instrument('chained', parent, parent);
return parent;
}
parent._onError = null;
var child = new parent.constructor(noop, label);
var result = parent._result;
config.instrument && instrument('chained', parent, child);
if (state === PENDING) {
subscribe(parent, child, onFulfillment, onRejection);
} else {
var callback = state === FULFILLED ? onFulfillment : onRejection;
config.async(function () {
return invokeCallback(state, child, callback, result);
});
}
return child;
}
var Enumerator = function () {
function Enumerator(Constructor, input, abortOnReject, label) {
this._instanceConstructor = Constructor;
this.promise = new Constructor(noop, label);
this._abortOnReject = abortOnReject;
this._init.apply(this, arguments);
}
Enumerator.prototype._init = function _init(Constructor, input) {
var len = input.length || 0;
this.length = len;
this._remaining = len;
this._result = new Array(len);
this._enumerate(input);
if (this._remaining === 0) {
fulfill(this.promise, this._result);
}
};
Enumerator.prototype._enumerate = function _enumerate(input) {
var length = this.length;
var promise = this.promise;
for (var i = 0; promise._state === PENDING && i < length; i++) {
this._eachEntry(input[i], i);
}
};
Enumerator.prototype._settleMaybeThenable = function _settleMaybeThenable(entry, i) {
var c = this._instanceConstructor;
var resolve$$1 = c.resolve;
if (resolve$$1 === resolve$1) {
var then$$1 = getThen(entry);
if (then$$1 === then && entry._state !== PENDING) {
entry._onError = null;
this._settledAt(entry._state, i, entry._result);
} else if (typeof then$$1 !== 'function') {
this._remaining--;
this._result[i] = this._makeResult(FULFILLED, i, entry);
} else if (c === Promise) {
var promise = new c(noop);
handleMaybeThenable(promise, entry, then$$1);
this._willSettleAt(promise, i);
} else {
this._willSettleAt(new c(function (resolve$$1) {
return resolve$$1(entry);
}), i);
}
} else {
this._willSettleAt(resolve$$1(entry), i);
}
};
Enumerator.prototype._eachEntry = function _eachEntry(entry, i) {
if (isMaybeThenable(entry)) {
this._settleMaybeThenable(entry, i);
} else {
this._remaining--;
this._result[i] = this._makeResult(FULFILLED, i, entry);
}
};
Enumerator.prototype._settledAt = function _settledAt(state, i, value) {
var promise = this.promise;
if (promise._state === PENDING) {
if (this._abortOnReject && state === REJECTED) {
reject(promise, value);
} else {
this._remaining--;
this._result[i] = this._makeResult(state, i, value);
if (this._remaining === 0) {
fulfill(promise, this._result);
}
}
}
};
Enumerator.prototype._makeResult = function _makeResult(state, i, value) {
return value;
};
Enumerator.prototype._willSettleAt = function _willSettleAt(promise, i) {
var enumerator = this;
subscribe(promise, undefined, function (value) {
return enumerator._settledAt(FULFILLED, i, value);
}, function (reason) {
return enumerator._settledAt(REJECTED, i, reason);
});
};
return Enumerator;
}();
function makeSettledResult(state, position, value) {
if (state === FULFILLED) {
return {
state: 'fulfilled',
value: value
};
} else {
return {
state: 'rejected',
reason: value
};
}
}
/**
`RSVP.Promise.all` accepts an array of promises, and returns a new promise which
is fulfilled with an array of fulfillment values for the passed promises, or
rejected with the reason of the first passed promise to be rejected. It casts all
elements of the passed iterable to promises as it runs this algorithm.
Example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.resolve(2);
let promise3 = RSVP.resolve(3);
let promises = [ promise1, promise2, promise3 ];
RSVP.Promise.all(promises).then(function(array){
// The array here would be [ 1, 2, 3 ];
});
```
If any of the `promises` given to `RSVP.all` are rejected, the first promise
that is rejected will be given as an argument to the returned promises's
rejection handler. For example:
Example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.reject(new Error("2"));
let promise3 = RSVP.reject(new Error("3"));
let promises = [ promise1, promise2, promise3 ];
RSVP.Promise.all(promises).then(function(array){
// Code here never runs because there are rejected promises!
}, function(error) {
// error.message === "2"
});
```
@method all
@static
@param {Array} entries array of promises
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Promise} promise that is fulfilled when all `promises` have been
fulfilled, or rejected if any of them become rejected.
@static
*/
function all(entries, label) {
if (!isArray(entries)) {
return this.reject(new TypeError("Promise.all must be called with an array"), label);
}
return new Enumerator(this, entries, true /* abort on reject */, label).promise;
}
/**
`RSVP.Promise.race` returns a new promise which is settled in the same way as the
first passed promise to settle.
Example:
```javascript
let promise1 = new RSVP.Promise(function(resolve, reject){
setTimeout(function(){
resolve('promise 1');
}, 200);
});
let promise2 = new RSVP.Promise(function(resolve, reject){
setTimeout(function(){
resolve('promise 2');
}, 100);
});
RSVP.Promise.race([promise1, promise2]).then(function(result){
// result === 'promise 2' because it was resolved before promise1
// was resolved.
});
```
`RSVP.Promise.race` is deterministic in that only the state of the first
settled promise matters. For example, even if other promises given to the
`promises` array argument are resolved, but the first settled promise has
become rejected before the other promises became fulfilled, the returned
promise will become rejected:
```javascript
let promise1 = new RSVP.Promise(function(resolve, reject){
setTimeout(function(){
resolve('promise 1');
}, 200);
});
let promise2 = new RSVP.Promise(function(resolve, reject){
setTimeout(function(){
reject(new Error('promise 2'));
}, 100);
});
RSVP.Promise.race([promise1, promise2]).then(function(result){
// Code here never runs
}, function(reason){
// reason.message === 'promise 2' because promise 2 became rejected before
// promise 1 became fulfilled
});
```
An example real-world use case is implementing timeouts:
```javascript
RSVP.Promise.race([ajax('foo.json'), timeout(5000)])
```
@method race
@static
@param {Array} entries array of promises to observe
@param {String} label optional string for describing the promise returned.
Useful for tooling.
@return {Promise} a promise which settles in the same way as the first passed
promise to settle.
*/
function race(entries, label) {
/*jshint validthis:true */
var Constructor = this;
var promise = new Constructor(noop, label);
if (!isArray(entries)) {
reject(promise, new TypeError('Promise.race must be called with an array'));
return promise;
}
for (var i = 0; promise._state === PENDING && i < entries.length; i++) {
subscribe(Constructor.resolve(entries[i]), undefined, function (value) {
return resolve(promise, value);
}, function (reason) {
return reject(promise, reason);
});
}
return promise;
}
/**
`RSVP.Promise.reject` returns a promise rejected with the passed `reason`.
It is shorthand for the following:
```javascript
let promise = new RSVP.Promise(function(resolve, reject){
reject(new Error('WHOOPS'));
});
promise.then(function(value){
// Code here doesn't run because the promise is rejected!
}, function(reason){
// reason.message === 'WHOOPS'
});
```
Instead of writing the above, your code now simply becomes the following:
```javascript
let promise = RSVP.Promise.reject(new Error('WHOOPS'));
promise.then(function(value){
// Code here doesn't run because the promise is rejected!
}, function(reason){
// reason.message === 'WHOOPS'
});
```
@method reject
@static
@param {*} reason value that the returned promise will be rejected with.
@param {String} label optional string for identifying the returned promise.
Useful for tooling.
@return {Promise} a promise rejected with the given `reason`.
*/
function reject$1(reason, label) {
/*jshint validthis:true */
var Constructor = this;
var promise = new Constructor(noop, label);
reject(promise, reason);
return promise;
}
var guidKey = 'rsvp_' + now() + '-';
var counter = 0;
function needsResolver() {
throw new TypeError('You must pass a resolver function as the first argument to the promise constructor');
}
function needsNew() {
throw new TypeError("Failed to construct 'Promise': Please use the 'new' operator, this object constructor cannot be called as a function.");
}
/**
Promise objects represent the eventual result of an asynchronous operation. The
primary way of interacting with a promise is through its `then` method, which
registers callbacks to receive either a promise’s eventual value or the reason
why the promise cannot be fulfilled.
Terminology
-----------
- `promise` is an object or function with a `then` method whose behavior conforms to this specification.
- `thenable` is an object or function that defines a `then` method.
- `value` is any legal JavaScript value (including undefined, a thenable, or a promise).
- `exception` is a value that is thrown using the throw statement.
- `reason` is a value that indicates why a promise was rejected.
- `settled` the final resting state of a promise, fulfilled or rejected.
A promise can be in one of three states: pending, fulfilled, or rejected.
Promises that are fulfilled have a fulfillment value and are in the fulfilled
state. Promises that are rejected have a rejection reason and are in the
rejected state. A fulfillment value is never a thenable.
Promises can also be said to *resolve* a value. If this value is also a
promise, then the original promise's settled state will match the value's
settled state. So a promise that *resolves* a promise that rejects will
itself reject, and a promise that *resolves* a promise that fulfills will
itself fulfill.
Basic Usage:
------------
```js
let promise = new Promise(function(resolve, reject) {
// on success
resolve(value);
// on failure
reject(reason);
});
promise.then(function(value) {
// on fulfillment
}, function(reason) {
// on rejection
});
```
Advanced Usage:
---------------
Promises shine when abstracting away asynchronous interactions such as
`XMLHttpRequest`s.
```js
function getJSON(url) {
return new Promise(function(resolve, reject){
let xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onreadystatechange = handler;
xhr.responseType = 'json';
xhr.setRequestHeader('Accept', 'application/json');
xhr.send();
function handler() {
if (this.readyState === this.DONE) {
if (this.status === 200) {
resolve(this.response);
} else {
reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
}
}
};
});
}
getJSON('/posts.json').then(function(json) {
// on fulfillment
}, function(reason) {
// on rejection
});
```
Unlike callbacks, promises are great composable primitives.
```js
Promise.all([
getJSON('/posts'),
getJSON('/comments')
]).then(function(values){
values[0] // => postsJSON
values[1] // => commentsJSON
return values;
});
```
@class RSVP.Promise
@param {function} resolver
@param {String} label optional string for labeling the promise.
Useful for tooling.
@constructor
*/
var Promise = function () {
function Promise(resolver, label) {
this._id = counter++;
this._label = label;
this._state = undefined;
this._result = undefined;
this._subscribers = [];
config.instrument && instrument('created', this);
if (noop !== resolver) {
typeof resolver !== 'function' && needsResolver();
this instanceof Promise ? initializePromise(this, resolver) : needsNew();
}
}
Promise.prototype._onError = function _onError(reason) {
var _this = this;
config.after(function () {
if (_this._onError) {
config.trigger('error', reason, _this._label);
}
});
};
/**
`catch` is simply sugar for `then(undefined, onRejection)` which makes it the same
as the catch block of a try/catch statement.
```js
function findAuthor(){
throw new Error('couldn\'t find that author');
}
// synchronous
try {
findAuthor();
} catch(reason) {
// something went wrong
}
// async with promises
findAuthor().catch(function(reason){
// something went wrong
});
```
@method catch
@param {Function} onRejection
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Promise}
*/
Promise.prototype.catch = function _catch(onRejection, label) {
return this.then(undefined, onRejection, label);
};
/**
`finally` will be invoked regardless of the promise's fate just as native
try/catch/finally behaves
Synchronous example:
```js
findAuthor() {
if (Math.random() > 0.5) {
throw new Error();
}
return new Author();
}
try {
return findAuthor(); // succeed or fail
} catch(error) {
return findOtherAuthor();
} finally {
// always runs
// doesn't affect the return value
}
```
Asynchronous example:
```js
findAuthor().catch(function(reason){
return findOtherAuthor();
}).finally(function(){
// author was either found, or not
});
```
@method finally
@param {Function} callback
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Promise}
*/
Promise.prototype.finally = function _finally(callback, label) {
var promise = this;
var constructor = promise.constructor;
return promise.then(function (value) {
return constructor.resolve(callback()).then(function () {
return value;
});
}, function (reason) {
return constructor.resolve(callback()).then(function () {
throw reason;
});
}, label);
};
return Promise;
}();
Promise.cast = resolve$1; // deprecated
Promise.all = all;
Promise.race = race;
Promise.resolve = resolve$1;
Promise.reject = reject$1;
Promise.prototype._guidKey = guidKey;
/**
The primary way of interacting with a promise is through its `then` method,
which registers callbacks to receive either a promise's eventual value or the
reason why the promise cannot be fulfilled.
```js
findUser().then(function(user){
// user is available
}, function(reason){
// user is unavailable, and you are given the reason why
});
```
Chaining
--------
The return value of `then` is itself a promise. This second, 'downstream'
promise is resolved with the return value of the first promise's fulfillment
or rejection handler, or rejected if the handler throws an exception.
```js
findUser().then(function (user) {
return user.name;
}, function (reason) {
return 'default name';
}).then(function (userName) {
// If `findUser` fulfilled, `userName` will be the user's name, otherwise it
// will be `'default name'`
});
findUser().then(function (user) {
throw new Error('Found user, but still unhappy');
}, function (reason) {
throw new Error('`findUser` rejected and we\'re unhappy');
}).then(function (value) {
// never reached
}, function (reason) {
// if `findUser` fulfilled, `reason` will be 'Found user, but still unhappy'.
// If `findUser` rejected, `reason` will be '`findUser` rejected and we\'re unhappy'.
});
```
If the downstream promise does not specify a rejection handler, rejection reasons will be propagated further downstream.
```js
findUser().then(function (user) {
throw new PedagogicalException('Upstream error');
}).then(function (value) {
// never reached
}).then(function (value) {
// never reached
}, function (reason) {
// The `PedagogicalException` is propagated all the way down to here
});
```
Assimilation
------------
Sometimes the value you want to propagate to a downstream promise can only be
retrieved asynchronously. This can be achieved by returning a promise in the
fulfillment or rejection handler. The downstream promise will then be pending
until the returned promise is settled. This is called *assimilation*.
```js
findUser().then(function (user) {
return findCommentsByAuthor(user);
}).then(function (comments) {
// The user's comments are now available
});
```
If the assimilated promise rejects, then the downstream promise will also reject.
```js
findUser().then(function (user) {
return findCommentsByAuthor(user);
}).then(function (comments) {
// If `findCommentsByAuthor` fulfills, we'll have the value here
}, function (reason) {
// If `findCommentsByAuthor` rejects, we'll have the reason here
});
```
Simple Example
--------------
Synchronous Example
```javascript
let result;
try {
result = findResult();
// success
} catch(reason) {
// failure
}
```
Errback Example
```js
findResult(function(result, err){
if (err) {
// failure
} else {
// success
}
});
```
Promise Example
```javascript
findResult().then(function(result){
// success
}, function(reason){
// failure
});
```
Advanced Example
--------------
Synchronous Example
```javascript
let author, books;
try {
author = findAuthor();
books = findBooksByAuthor(author);
// success
} catch(reason) {
// failure
}
```
Errback Example
```js
function foundBooks(books) {
}
function failure(reason) {
}
findAuthor(function(author, err){
if (err) {
failure(err);
// failure
} else {
try {
findBooksByAuthor(author, function(books, err) {
if (err) {
failure(err);
} else {
try {
foundBooks(books);
} catch(reason) {
failure(reason);
}
}
});
} catch(error) {
failure(error);
}
// success
}
});
```
Promise Example
```javascript
findAuthor().
then(findBooksByAuthor).
then(function(books){
// found books
}).catch(function(reason){
// something went wrong
});
```
@method then
@param {Function} onFulfillment
@param {Function} onRejection
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Promise}
*/
Promise.prototype.then = then;
function Result() {
this.value = undefined;
}
var ERROR = new Result();
var GET_THEN_ERROR$1 = new Result();
function getThen$1(obj) {
try {
return obj.then;
} catch (error) {
ERROR.value = error;
return ERROR;
}
}
function tryApply(f, s, a) {
try {
f.apply(s, a);
} catch (error) {
ERROR.value = error;
return ERROR;
}
}
function makeObject(_, argumentNames) {
var obj = {};
var length = _.length;
var args = new Array(length);
for (var x = 0; x < length; x++) {
args[x] = _[x];
}
for (var i = 0; i < argumentNames.length; i++) {
var name = argumentNames[i];
obj[name] = args[i + 1];
}
return obj;
}
function arrayResult(_) {
var length = _.length;
var args = new Array(length - 1);
for (var i = 1; i < length; i++) {
args[i - 1] = _[i];
}
return args;
}
function wrapThenable(then, promise) {
return {
then: function (onFulFillment, onRejection) {
return then.call(promise, onFulFillment, onRejection);
}
};
}
/**
`RSVP.denodeify` takes a 'node-style' function and returns a function that
will return an `RSVP.Promise`. You can use `denodeify` in Node.js or the
browser when you'd prefer to use promises over using callbacks. For example,
`denodeify` transforms the following:
```javascript
let fs = require('fs');
fs.readFile('myfile.txt', function(err, data){
if (err) return handleError(err);
handleData(data);
});
```
into:
```javascript
let fs = require('fs');
let readFile = RSVP.denodeify(fs.readFile);
readFile('myfile.txt').then(handleData, handleError);
```
If the node function has multiple success parameters, then `denodeify`
just returns the first one:
```javascript
let request = RSVP.denodeify(require('request'));
request('http://example.com').then(function(res) {
// ...
});
```
However, if you need all success parameters, setting `denodeify`'s
second parameter to `true` causes it to return all success parameters
as an array:
```javascript
let request = RSVP.denodeify(require('request'), true);
request('http://example.com').then(function(result) {
// result[0] -> res
// result[1] -> body
});
```
Or if you pass it an array with names it returns the parameters as a hash:
```javascript
let request = RSVP.denodeify(require('request'), ['res', 'body']);
request('http://example.com').then(function(result) {
// result.res
// result.body
});
```
Sometimes you need to retain the `this`:
```javascript
let app = require('express')();
let render = RSVP.denodeify(app.render.bind(app));
```
The denodeified function inherits from the original function. It works in all
environments, except IE 10 and below. Consequently all properties of the original
function are available to you. However, any properties you change on the
denodeified function won't be changed on the original function. Example:
```javascript
let request = RSVP.denodeify(require('request')),
cookieJar = request.jar(); // <- Inheritance is used here
request('http://example.com', {jar: cookieJar}).then(function(res) {
// cookieJar.cookies holds now the cookies returned by example.com
});
```
Using `denodeify` makes it easier to compose asynchronous operations instead
of using callbacks. For example, instead of:
```javascript
let fs = require('fs');
fs.readFile('myfile.txt', function(err, data){
if (err) { ... } // Handle error
fs.writeFile('myfile2.txt', data, function(err){
if (err) { ... } // Handle error
console.log('done')
});
});
```
you can chain the operations together using `then` from the returned promise:
```javascript
let fs = require('fs');
let readFile = RSVP.denodeify(fs.readFile);
let writeFile = RSVP.denodeify(fs.writeFile);
readFile('myfile.txt').then(function(data){
return writeFile('myfile2.txt', data);
}).then(function(){
console.log('done')
}).catch(function(error){
// Handle error
});
```
@method denodeify
@static
@for RSVP
@param {Function} nodeFunc a 'node-style' function that takes a callback as
its last argument. The callback expects an error to be passed as its first
argument (if an error occurred, otherwise null), and the value from the
operation as its second argument ('function(err, value){ }').
  @param {Boolean|Array} [options] An optional parameter that if set
  to `true` causes the promise to fulfill with the callback's success arguments
  as an array. This is useful if the node function has multiple success
  parameters. If you set this parameter to an array with names, the promise will
fulfill with a hash with these names as keys and the success parameters as
values.
@return {Function} a function that wraps `nodeFunc` to return an
`RSVP.Promise`
@static
*/
function denodeify(nodeFunc, options) {
var fn = function () {
var self = this;
var l = arguments.length;
var args = new Array(l + 1);
var promiseInput = false;
for (var i = 0; i < l; ++i) {
var arg = arguments[i];
if (!promiseInput) {
// TODO: clean this up
promiseInput = needsPromiseInput(arg);
if (promiseInput === GET_THEN_ERROR$1) {
var p = new Promise(noop);
reject(p, GET_THEN_ERROR$1.value);
return p;
} else if (promiseInput && promiseInput !== true) {
arg = wrapThenable(promiseInput, arg);
}
}
args[i] = arg;
}
var promise = new Promise(noop);
args[l] = function (err, val) {
if (err) reject(promise, err);else if (options === undefined) resolve(promise, val);else if (options === true) resolve(promise, arrayResult(arguments));else if (isArray(options)) resolve(promise, makeObject(arguments, options));else resolve(promise, val);
};
if (promiseInput) {
return handlePromiseInput(promise, args, nodeFunc, self);
} else {
return handleValueInput(promise, args, nodeFunc, self);
}
};
fn.__proto__ = nodeFunc;
return fn;
}
function handleValueInput(promise, args, nodeFunc, self) {
var result = tryApply(nodeFunc, self, args);
if (result === ERROR) {
reject(promise, result.value);
}
return promise;
}
function handlePromiseInput(promise, args, nodeFunc, self) {
return Promise.all(args).then(function (args) {
var result = tryApply(nodeFunc, self, args);
if (result === ERROR) {
reject(promise, result.value);
}
return promise;
});
}
function needsPromiseInput(arg) {
if (arg && typeof arg === 'object') {
if (arg.constructor === Promise) {
return true;
} else {
return getThen$1(arg);
}
} else {
return false;
}
}
/**
This is a convenient alias for `RSVP.Promise.all`.
@method all
@static
@for RSVP
@param {Array} array Array of promises.
@param {String} label An optional label. This is useful
for tooling.
*/
function all$1(array, label) {
return Promise.all(array, label);
}
function _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); } return call && (typeof call === "object" || typeof call === "function") ? call : self; }
function _inherits(subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }
var AllSettled = function (_Enumerator) {
_inherits(AllSettled, _Enumerator);
function AllSettled(Constructor, entries, label) {
return _possibleConstructorReturn(this, _Enumerator.call(this, Constructor, entries, false /* don't abort on reject */, label));
}
return AllSettled;
}(Enumerator);
AllSettled.prototype._makeResult = makeSettledResult;
/**
`RSVP.allSettled` is similar to `RSVP.all`, but instead of implementing
a fail-fast method, it waits until all the promises have returned and
shows you all the results. This is useful if you want to handle multiple
promises' failure states together as a set.
Returns a promise that is fulfilled when all the given promises have been
settled. The return promise is fulfilled with an array of the states of
the promises passed into the `promises` array argument.
Each state object will either indicate fulfillment or rejection, and
provide the corresponding value or reason. The states will take one of
the following formats:
```javascript
{ state: 'fulfilled', value: value }
or
{ state: 'rejected', reason: reason }
```
Example:
```javascript
let promise1 = RSVP.Promise.resolve(1);
let promise2 = RSVP.Promise.reject(new Error('2'));
let promise3 = RSVP.Promise.reject(new Error('3'));
let promises = [ promise1, promise2, promise3 ];
RSVP.allSettled(promises).then(function(array){
// array == [
// { state: 'fulfilled', value: 1 },
// { state: 'rejected', reason: Error },
// { state: 'rejected', reason: Error }
// ]
// Note that for the second item, reason.message will be '2', and for the
// third item, reason.message will be '3'.
}, function(error) {
// Not run. (This block would only be called if allSettled had failed,
// for instance if passed an incorrect argument type.)
});
```
@method allSettled
@static
@for RSVP
@param {Array} entries
@param {String} label - optional string that describes the promise.
Useful for tooling.
@return {Promise} promise that is fulfilled with an array of the settled
states of the constituent promises.
*/
function allSettled(entries, label) {
if (!isArray(entries)) {
return Promise.reject(new TypeError("Promise.allSettled must be called with an array"), label);
}
return new AllSettled(Promise, entries, label).promise;
}
/**
This is a convenient alias for `RSVP.Promise.race`.
@method race
@static
@for RSVP
@param {Array} array Array of promises.
@param {String} label An optional label. This is useful
for tooling.
*/
function race$1(array, label) {
return Promise.race(array, label);
}
function _possibleConstructorReturn$1(self, call) { if (!self) { throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); } return call && (typeof call === "object" || typeof call === "function") ? call : self; }
function _inherits$1(subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }
var hasOwnProperty = Object.prototype.hasOwnProperty;
var PromiseHash = function (_Enumerator) {
_inherits$1(PromiseHash, _Enumerator);
function PromiseHash(Constructor, object) {
var abortOnReject = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : true;
var label = arguments[3];
return _possibleConstructorReturn$1(this, _Enumerator.call(this, Constructor, object, abortOnReject, label));
}
PromiseHash.prototype._init = function _init(Constructor, object) {
this._result = {};
this._enumerate(object);
if (this._remaining === 0) {
fulfill(this.promise, this._result);
}
};
PromiseHash.prototype._enumerate = function _enumerate(input) {
var promise = this.promise;
var results = [];
for (var key in input) {
if (hasOwnProperty.call(input, key)) {
results.push({
position: key,
entry: input[key]
});
}
}
var length = results.length;
this._remaining = length;
var result = void 0;
for (var i = 0; promise._state === PENDING && i < length; i++) {
result = results[i];
this._eachEntry(result.entry, result.position);
}
};
return PromiseHash;
}(Enumerator);
/**
`RSVP.hash` is similar to `RSVP.all`, but takes an object instead of an array
for its `promises` argument.
Returns a promise that is fulfilled when all the given promises have been
fulfilled, or rejected if any of them become rejected. The returned promise
is fulfilled with a hash that has the same key names as the `promises` object
argument. If any of the values in the object are not promises, they will
simply be copied over to the fulfilled object.
Example:
```javascript
let promises = {
myPromise: RSVP.resolve(1),
yourPromise: RSVP.resolve(2),
theirPromise: RSVP.resolve(3),
notAPromise: 4
};
RSVP.hash(promises).then(function(hash){
// hash here is an object that looks like:
// {
// myPromise: 1,
// yourPromise: 2,
// theirPromise: 3,
// notAPromise: 4
// }
});
  ```
If any of the `promises` given to `RSVP.hash` are rejected, the first promise
that is rejected will be given as the reason to the rejection handler.
Example:
```javascript
let promises = {
myPromise: RSVP.resolve(1),
rejectedPromise: RSVP.reject(new Error('rejectedPromise')),
anotherRejectedPromise: RSVP.reject(new Error('anotherRejectedPromise')),
};
RSVP.hash(promises).then(function(hash){
// Code here never runs because there are rejected promises!
}, function(reason) {
// reason.message === 'rejectedPromise'
});
```
An important note: `RSVP.hash` is intended for plain JavaScript objects that
are just a set of keys and values. `RSVP.hash` will NOT preserve prototype
chains.
Example:
```javascript
function MyConstructor(){
this.example = RSVP.resolve('Example');
}
MyConstructor.prototype = {
protoProperty: RSVP.resolve('Proto Property')
};
let myObject = new MyConstructor();
RSVP.hash(myObject).then(function(hash){
// protoProperty will not be present, instead you will just have an
// object that looks like:
// {
// example: 'Example'
// }
//
// hash.hasOwnProperty('protoProperty'); // false
// 'undefined' === typeof hash.protoProperty
});
```
@method hash
@static
@for RSVP
@param {Object} object
@param {String} label optional string that describes the promise.
Useful for tooling.
@return {Promise} promise that is fulfilled when all properties of `promises`
have been fulfilled, or rejected if any of them become rejected.
*/
function hash(object, label) {
if (!isObject(object)) {
return Promise.reject(new TypeError("Promise.hash must be called with an object"), label);
}
return new PromiseHash(Promise, object, label).promise;
}
function _possibleConstructorReturn$2(self, call) { if (!self) { throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); } return call && (typeof call === "object" || typeof call === "function") ? call : self; }
function _inherits$2(subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }
var HashSettled = function (_PromiseHash) {
_inherits$2(HashSettled, _PromiseHash);
function HashSettled(Constructor, object, label) {
return _possibleConstructorReturn$2(this, _PromiseHash.call(this, Constructor, object, false, label));
}
return HashSettled;
}(PromiseHash);
HashSettled.prototype._makeResult = makeSettledResult;
/**
`RSVP.hashSettled` is similar to `RSVP.allSettled`, but takes an object
instead of an array for its `promises` argument.
Unlike `RSVP.all` or `RSVP.hash`, which implement a fail-fast method,
but like `RSVP.allSettled`, `hashSettled` waits until all the
constituent promises have returned and then shows you all the results
with their states and values/reasons. This is useful if you want to
handle multiple promises' failure states together as a set.
Returns a promise that is fulfilled when all the given promises have been
settled, or rejected if the passed parameters are invalid.
The returned promise is fulfilled with a hash that has the same key names as
the `promises` object argument. If any of the values in the object are not
promises, they will be copied over to the fulfilled object and marked with state
'fulfilled'.
Example:
```javascript
let promises = {
myPromise: RSVP.Promise.resolve(1),
yourPromise: RSVP.Promise.resolve(2),
theirPromise: RSVP.Promise.resolve(3),
notAPromise: 4
};
RSVP.hashSettled(promises).then(function(hash){
// hash here is an object that looks like:
// {
// myPromise: { state: 'fulfilled', value: 1 },
// yourPromise: { state: 'fulfilled', value: 2 },
// theirPromise: { state: 'fulfilled', value: 3 },
// notAPromise: { state: 'fulfilled', value: 4 }
// }
});
```
  If any of the `promises` given to `RSVP.hashSettled` are rejected, the state will
be set to 'rejected' and the reason for rejection provided.
Example:
```javascript
let promises = {
myPromise: RSVP.Promise.resolve(1),
rejectedPromise: RSVP.Promise.reject(new Error('rejection')),
anotherRejectedPromise: RSVP.Promise.reject(new Error('more rejection')),
};
RSVP.hashSettled(promises).then(function(hash){
// hash here is an object that looks like:
// {
// myPromise: { state: 'fulfilled', value: 1 },
// rejectedPromise: { state: 'rejected', reason: Error },
// anotherRejectedPromise: { state: 'rejected', reason: Error },
// }
// Note that for rejectedPromise, reason.message == 'rejection',
// and for anotherRejectedPromise, reason.message == 'more rejection'.
});
```
An important note: `RSVP.hashSettled` is intended for plain JavaScript objects that
are just a set of keys and values. `RSVP.hashSettled` will NOT preserve prototype
chains.
Example:
```javascript
function MyConstructor(){
this.example = RSVP.Promise.resolve('Example');
}
MyConstructor.prototype = {
protoProperty: RSVP.Promise.resolve('Proto Property')
};
let myObject = new MyConstructor();
RSVP.hashSettled(myObject).then(function(hash){
// protoProperty will not be present, instead you will just have an
// object that looks like:
// {
// example: { state: 'fulfilled', value: 'Example' }
// }
//
// hash.hasOwnProperty('protoProperty'); // false
// 'undefined' === typeof hash.protoProperty
});
```
@method hashSettled
@for RSVP
@param {Object} object
@param {String} label optional string that describes the promise.
Useful for tooling.
  @return {Promise} promise that is fulfilled when all properties of `promises`
have been settled.
@static
*/
function hashSettled(object, label) {
if (!isObject(object)) {
return Promise.reject(new TypeError("RSVP.hashSettled must be called with an object"), label);
}
return new HashSettled(Promise, object, false, label).promise;
}
/**
`RSVP.rethrow` will rethrow an error on the next turn of the JavaScript event
loop in order to aid debugging.
Promises A+ specifies that any exceptions that occur with a promise must be
caught by the promises implementation and bubbled to the last handler. For
this reason, it is recommended that you always specify a second rejection
handler function to `then`. However, `RSVP.rethrow` will throw the exception
outside of the promise, so it bubbles up to your console if in the browser,
or domain/cause uncaught exception in Node. `rethrow` will also throw the
error again so the error can be handled by the promise per the spec.
```javascript
function throws(){
throw new Error('Whoops!');
}
let promise = new RSVP.Promise(function(resolve, reject){
throws();
});
promise.catch(RSVP.rethrow).then(function(){
// Code here doesn't run because the promise became rejected due to an
// error!
}, function (err){
// handle the error here
});
```
The 'Whoops' error will be thrown on the next turn of the event loop
and you can watch for it in your console. You can also handle it using a
rejection handler given to `.then` or `.catch` on the returned promise.
@method rethrow
@static
@for RSVP
@param {Error} reason reason the promise became rejected.
@throws Error
@static
*/
function rethrow(reason) {
setTimeout(function () {
throw reason;
});
throw reason;
}
/**
`RSVP.defer` returns an object similar to jQuery's `$.Deferred`.
`RSVP.defer` should be used when porting over code reliant on `$.Deferred`'s
interface. New code should use the `RSVP.Promise` constructor instead.
The object returned from `RSVP.defer` is a plain object with three properties:
* promise - an `RSVP.Promise`.
* reject - a function that causes the `promise` property on this object to
become rejected
* resolve - a function that causes the `promise` property on this object to
become fulfilled.
Example:
```javascript
let deferred = RSVP.defer();
deferred.resolve("Success!");
deferred.promise.then(function(value){
// value here is "Success!"
});
```
@method defer
@static
@for RSVP
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Object}
*/
function defer(label) {
var deferred = { resolve: undefined, reject: undefined };
deferred.promise = new Promise(function (resolve, reject) {
deferred.resolve = resolve;
deferred.reject = reject;
}, label);
return deferred;
}
/**
`RSVP.map` is similar to JavaScript's native `map` method, except that it
waits for all promises to become fulfilled before running the `mapFn` on
  each item given to `promises`. `RSVP.map` returns a promise that will
become fulfilled with the result of running `mapFn` on the values the promises
become fulfilled with.
For example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.resolve(2);
let promise3 = RSVP.resolve(3);
let promises = [ promise1, promise2, promise3 ];
let mapFn = function(item){
return item + 1;
};
RSVP.map(promises, mapFn).then(function(result){
// result is [ 2, 3, 4 ]
});
```
If any of the `promises` given to `RSVP.map` are rejected, the first promise
that is rejected will be given as an argument to the returned promise's
rejection handler. For example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.reject(new Error('2'));
let promise3 = RSVP.reject(new Error('3'));
let promises = [ promise1, promise2, promise3 ];
let mapFn = function(item){
return item + 1;
};
RSVP.map(promises, mapFn).then(function(array){
// Code here never runs because there are rejected promises!
}, function(reason) {
// reason.message === '2'
});
```
`RSVP.map` will also wait if a promise is returned from `mapFn`. For example,
say you want to get all comments from a set of blog posts, but you need
the blog posts first because they contain a url to those comments.
  ```javascript
let mapFn = function(blogPost){
// getComments does some ajax and returns an RSVP.Promise that is fulfilled
// with some comments data
return getComments(blogPost.comments_url);
};
// getBlogPosts does some ajax and returns an RSVP.Promise that is fulfilled
// with some blog post data
RSVP.map(getBlogPosts(), mapFn).then(function(comments){
// comments is the result of asking the server for the comments
// of all blog posts returned from getBlogPosts()
});
```
@method map
@static
@for RSVP
@param {Array} promises
@param {Function} mapFn function to be called on each fulfilled promise.
@param {String} label optional string for labeling the promise.
Useful for tooling.
@return {Promise} promise that is fulfilled with the result of calling
`mapFn` on each fulfilled promise or value when they become fulfilled.
The promise will be rejected if any of the given `promises` become rejected.
@static
*/
function map(promises, mapFn, label) {
if (!isArray(promises)) {
return Promise.reject(new TypeError("RSVP.map must be called with an array"), label);
}
if (!isFunction(mapFn)) {
return Promise.reject(new TypeError("RSVP.map expects a function as a second argument"), label);
}
return Promise.all(promises, label).then(function (values) {
var length = values.length;
var results = new Array(length);
for (var i = 0; i < length; i++) {
results[i] = mapFn(values[i]);
}
return Promise.all(results, label);
});
}
/**
This is a convenient alias for `RSVP.Promise.resolve`.
@method resolve
@static
@for RSVP
@param {*} value value that the returned promise will be resolved with
@param {String} label optional string for identifying the returned promise.
Useful for tooling.
@return {Promise} a promise that will become fulfilled with the given
`value`
*/
function resolve$2(value, label) {
return Promise.resolve(value, label);
}
/**
This is a convenient alias for `RSVP.Promise.reject`.
@method reject
@static
@for RSVP
@param {*} reason value that the returned promise will be rejected with.
@param {String} label optional string for identifying the returned promise.
Useful for tooling.
@return {Promise} a promise rejected with the given `reason`.
*/
function reject$2(reason, label) {
return Promise.reject(reason, label);
}
/**
`RSVP.filter` is similar to JavaScript's native `filter` method, except that it
waits for all promises to become fulfilled before running the `filterFn` on
  each item given to `promises`. `RSVP.filter` returns a promise that will
become fulfilled with the result of running `filterFn` on the values the
promises become fulfilled with.
For example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.resolve(2);
let promise3 = RSVP.resolve(3);
let promises = [promise1, promise2, promise3];
let filterFn = function(item){
return item > 1;
};
RSVP.filter(promises, filterFn).then(function(result){
// result is [ 2, 3 ]
});
```
If any of the `promises` given to `RSVP.filter` are rejected, the first promise
that is rejected will be given as an argument to the returned promise's
rejection handler. For example:
```javascript
let promise1 = RSVP.resolve(1);
let promise2 = RSVP.reject(new Error('2'));
let promise3 = RSVP.reject(new Error('3'));
let promises = [ promise1, promise2, promise3 ];
let filterFn = function(item){
return item > 1;
};
RSVP.filter(promises, filterFn).then(function(array){
// Code here never runs because there are rejected promises!
}, function(reason) {
// reason.message === '2'
});
```
`RSVP.filter` will also wait for any promises returned from `filterFn`.
For instance, you may want to fetch a list of users then return a subset
of those users based on some asynchronous operation:
```javascript
let alice = { name: 'alice' };
let bob = { name: 'bob' };
let users = [ alice, bob ];
let promises = users.map(function(user){
return RSVP.resolve(user);
});
let filterFn = function(user){
// Here, Alice has permissions to create a blog post, but Bob does not.
return getPrivilegesForUser(user).then(function(privs){
return privs.can_create_blog_post === true;
});
};
RSVP.filter(promises, filterFn).then(function(users){
// true, because the server told us only Alice can create a blog post.
users.length === 1;
// false, because Alice is the only user present in `users`
users[0] === bob;
});
```
@method filter
@static
@for RSVP
@param {Array} promises
@param {Function} filterFn - function to be called on each resolved value to
filter the final results.
@param {String} label optional string describing the promise. Useful for
tooling.
@return {Promise}
*/
function resolveAll(promises, label) {
return Promise.all(promises, label);
}
function resolveSingle(promise, label) {
return Promise.resolve(promise, label).then(function (promises) {
return resolveAll(promises, label);
});
}
function filter(promises, filterFn, label) {
if (!isArray(promises) && !(isObject(promises) && promises.then !== undefined)) {
return Promise.reject(new TypeError("RSVP.filter must be called with an array or promise"), label);
}
if (!isFunction(filterFn)) {
return Promise.reject(new TypeError("RSVP.filter expects function as a second argument"), label);
}
var promise = isArray(promises) ? resolveAll(promises, label) : resolveSingle(promises, label);
return promise.then(function (values) {
var length = values.length;
var filtered = new Array(length);
for (var i = 0; i < length; i++) {
filtered[i] = filterFn(values[i]);
}
return resolveAll(filtered, label).then(function (filtered) {
var results = new Array(length);
var newLength = 0;
for (var _i = 0; _i < length; _i++) {
if (filtered[_i]) {
results[newLength] = values[_i];
newLength++;
}
}
results.length = newLength;
return results;
});
});
}
var len = 0;
var vertxNext = void 0;
function asap(callback, arg) {
queue$1[len] = callback;
queue$1[len + 1] = arg;
len += 2;
if (len === 2) {
    // If len is 2, that means that we need to schedule an async flush.
// If additional callbacks are queued before the queue is flushed, they
// will be processed by this flush that we are scheduling.
scheduleFlush$1();
}
}
var browserWindow = typeof window !== 'undefined' ? window : undefined;
var browserGlobal = browserWindow || {};
var BrowserMutationObserver = browserGlobal.MutationObserver || browserGlobal.WebKitMutationObserver;
var isNode = typeof self === 'undefined' && typeof process !== 'undefined' && {}.toString.call(process) === '[object process]';
// test for web worker but not in IE10
var isWorker = typeof Uint8ClampedArray !== 'undefined' && typeof importScripts !== 'undefined' && typeof MessageChannel !== 'undefined';
// node
function useNextTick() {
var nextTick = process.nextTick;
// node version 0.10.x displays a deprecation warning when nextTick is used recursively
  // setImmediate should be used instead
var version = process.versions.node.match(/^(?:(\d+)\.)?(?:(\d+)\.)?(\*|\d+)$/);
if (Array.isArray(version) && version[1] === '0' && version[2] === '10') {
nextTick = setImmediate;
}
return function () {
return nextTick(flush);
};
}
// vertx
function useVertxTimer() {
if (typeof vertxNext !== 'undefined') {
return function () {
vertxNext(flush);
};
}
return useSetTimeout();
}
function useMutationObserver() {
var iterations = 0;
var observer = new BrowserMutationObserver(flush);
var node = document.createTextNode('');
observer.observe(node, { characterData: true });
return function () {
return node.data = iterations = ++iterations % 2;
};
}
// web worker
function useMessageChannel() {
var channel = new MessageChannel();
channel.port1.onmessage = flush;
return function () {
return channel.port2.postMessage(0);
};
}
function useSetTimeout() {
return function () {
return setTimeout(flush, 1);
};
}
var queue$1 = new Array(1000);
function flush() {
for (var i = 0; i < len; i += 2) {
var callback = queue$1[i];
var arg = queue$1[i + 1];
callback(arg);
queue$1[i] = undefined;
queue$1[i + 1] = undefined;
}
len = 0;
}
function attemptVertex() {
try {
var r = require;
var vertx = r('vertx');
vertxNext = vertx.runOnLoop || vertx.runOnContext;
return useVertxTimer();
} catch (e) {
return useSetTimeout();
}
}
var scheduleFlush$1 = void 0;
// Decide which async method to use to trigger processing of queued callbacks:
if (isNode) {
scheduleFlush$1 = useNextTick();
} else if (BrowserMutationObserver) {
scheduleFlush$1 = useMutationObserver();
} else if (isWorker) {
scheduleFlush$1 = useMessageChannel();
} else if (browserWindow === undefined && typeof require === 'function') {
scheduleFlush$1 = attemptVertex();
} else {
scheduleFlush$1 = useSetTimeout();
}
var platform = void 0;
/* global self */
if (typeof self === 'object') {
platform = self;
/* global global */
} else if (typeof global === 'object') {
platform = global;
} else {
throw new Error('no global: `self` or `global` found');
}
var _asap$cast$Promise$Ev;
function _defineProperty(obj, key, value) { if (key in obj) { Object.defineProperty(obj, key, { value: value, enumerable: true, configurable: true, writable: true }); } else { obj[key] = value; } return obj; }
// defaults
config.async = asap;
config.after = function (cb) {
return setTimeout(cb, 0);
};
var cast = resolve$2;
var async = function (callback, arg) {
return config.async(callback, arg);
};
function on() {
config['on'].apply(config, arguments);
}
function off() {
config['off'].apply(config, arguments);
}
// Set up instrumentation through `window.__PROMISE_INSTRUMENTATION__`
if (typeof window !== 'undefined' && typeof window['__PROMISE_INSTRUMENTATION__'] === 'object') {
var callbacks = window['__PROMISE_INSTRUMENTATION__'];
configure('instrument', true);
for (var eventName in callbacks) {
if (callbacks.hasOwnProperty(eventName)) {
on(eventName, callbacks[eventName]);
}
}
}
// the default export here is for backwards compat:
// https://github.com/tildeio/rsvp.js/issues/434
var rsvp = (_asap$cast$Promise$Ev = {
asap: asap,
cast: cast,
Promise: Promise,
EventTarget: EventTarget,
all: all$1,
allSettled: allSettled,
race: race$1,
hash: hash,
hashSettled: hashSettled,
rethrow: rethrow,
defer: defer,
denodeify: denodeify,
configure: configure,
on: on,
off: off,
resolve: resolve$2,
reject: reject$2,
map: map
}, _defineProperty(_asap$cast$Promise$Ev, 'async', async), _defineProperty(_asap$cast$Promise$Ev, 'filter', filter), _asap$cast$Promise$Ev);
exports['default'] = rsvp;
exports.asap = asap;
exports.cast = cast;
exports.Promise = Promise;
exports.EventTarget = EventTarget;
exports.all = all$1;
exports.allSettled = allSettled;
exports.race = race$1;
exports.hash = hash;
exports.hashSettled = hashSettled;
exports.rethrow = rethrow;
exports.defer = defer;
exports.denodeify = denodeify;
exports.configure = configure;
exports.on = on;
exports.off = off;
exports.resolve = resolve$2;
exports.reject = reject$2;
exports.map = map;
exports.async = async;
exports.filter = filter;
Object.defineProperty(exports, '__esModule', { value: true });
})));
//
var EPUBJS = EPUBJS || {};
EPUBJS.core = {};
var ELEMENT_NODE = 1;
var TEXT_NODE = 3;
var COMMENT_NODE = 8;
var DOCUMENT_NODE = 9;
//-- Get a element for an id
EPUBJS.core.getEl = function(elem) {
return document.getElementById(elem);
};
//-- Get all elements for a class
EPUBJS.core.getEls = function(classes) {
return document.getElementsByClassName(classes);
};
EPUBJS.core.request = function(url, type, withCredentials) {
var supportsURL = window.URL;
var BLOB_RESPONSE = supportsURL ? "blob" : "arraybuffer";
var deferred = new RSVP.defer();
var xhr = new XMLHttpRequest();
var uri;
//-- Check from PDF.js:
// https://github.com/mozilla/pdf.js/blob/master/web/compatibility.js
var xhrPrototype = XMLHttpRequest.prototype;
var handler = function() {
var r;
if (this.readyState != this.DONE) return;
if ((this.status === 200 || this.status === 0) && this.response) { // Android & Firefox reporting 0 for local & blob urls
if (type == 'xml'){
// If this.responseXML wasn't set, try to parse using a DOMParser from text
if(!this.responseXML) {
r = new DOMParser().parseFromString(this.response, "application/xml");
} else {
r = this.responseXML;
}
} else if (type == 'xhtml') {
if (!this.responseXML){
r = new DOMParser().parseFromString(this.response, "application/xhtml+xml");
} else {
r = this.responseXML;
}
} else if (type == 'html') {
if (!this.responseXML){
r = new DOMParser().parseFromString(this.response, "text/html");
} else {
r = this.responseXML;
}
} else if (type == 'json') {
r = JSON.parse(this.response);
} else if (type == 'blob') {
if (supportsURL) {
r = this.response;
} else {
//-- Safari doesn't support responseType blob, so create a blob from arraybuffer
r = new Blob([this.response]);
}
} else {
r = this.response;
}
deferred.resolve(r);
} else {
deferred.reject({
message : this.response,
stack : new Error().stack
});
}
};
if (!('overrideMimeType' in xhrPrototype)) {
// IE10 might have response, but not overrideMimeType
Object.defineProperty(xhrPrototype, 'overrideMimeType', {
value: function xmlHttpRequestOverrideMimeType(mimeType) {}
});
}
xhr.onreadystatechange = handler;
xhr.open("GET", url, true);
if(withCredentials) {
xhr.withCredentials = true;
}
// If type isn't set, determine it from the file extension
if(!type) {
uri = EPUBJS.core.uri(url);
type = uri.extension;
type = {
'htm': 'html'
}[type] || type;
}
if(type == 'blob'){
xhr.responseType = BLOB_RESPONSE;
}
if(type == "json") {
xhr.setRequestHeader("Accept", "application/json");
}
if(type == 'xml') {
xhr.responseType = "document";
xhr.overrideMimeType('text/xml'); // for OPF parsing
}
if(type == 'xhtml') {
xhr.responseType = "document";
}
if(type == 'html') {
xhr.responseType = "document";
}
if(type == "binary") {
xhr.responseType = "arraybuffer";
}
xhr.send();
return deferred.promise;
};
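//-- Illustrative usage (editor's sketch, not part of the library): fetch and
//-- parse a hypothetical "OPS/package.opf" as XML via the returned promise.
/*
EPUBJS.core.request("OPS/package.opf", "xml").then(function(doc) {
	console.log(doc.documentElement.nodeName); // the parsed XML Document, e.g. "package"
}, function(err) {
	console.error("request failed", err.message);
});
*/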
EPUBJS.core.toArray = function(obj) {
var arr = [];
for (var member in obj) {
var newitm;
if ( obj.hasOwnProperty(member) ) {
newitm = obj[member];
newitm.ident = member;
arr.push(newitm);
}
}
return arr;
};
//-- Parse the different parts of a url, returning a object
EPUBJS.core.uri = function(url){
var uri = {
protocol : '',
host : '',
path : '',
origin : '',
directory : '',
base : '',
filename : '',
extension : '',
fragment : '',
href : url
},
blob = url.indexOf('blob:'),
doubleSlash = url.indexOf('://'),
search = url.indexOf('?'),
fragment = url.indexOf("#"),
withoutProtocol,
dot,
firstSlash;
if(blob === 0) {
uri.protocol = "blob";
		uri.base = fragment != -1 ? url.slice(0, fragment) : url; // blob url without any fragment
return uri;
}
if(fragment != -1) {
uri.fragment = url.slice(fragment + 1);
url = url.slice(0, fragment);
}
if(search != -1) {
uri.search = url.slice(search + 1);
url = url.slice(0, search);
href = uri.href;
}
if(doubleSlash != -1) {
uri.protocol = url.slice(0, doubleSlash);
withoutProtocol = url.slice(doubleSlash+3);
firstSlash = withoutProtocol.indexOf('/');
if(firstSlash === -1) {
			uri.host = withoutProtocol; // no path after the host
			uri.path = "";
} else {
uri.host = withoutProtocol.slice(0, firstSlash);
uri.path = withoutProtocol.slice(firstSlash);
}
uri.origin = uri.protocol + "://" + uri.host;
uri.directory = EPUBJS.core.folder(uri.path);
uri.base = uri.origin + uri.directory;
// return origin;
} else {
uri.path = url;
uri.directory = EPUBJS.core.folder(url);
uri.base = uri.directory;
}
//-- Filename
uri.filename = url.replace(uri.base, '');
dot = uri.filename.lastIndexOf('.');
if(dot != -1) {
uri.extension = uri.filename.slice(dot+1);
}
return uri;
};
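//-- Illustrative breakdown (editor's sketch, not part of the library) of the
//-- object returned for a sample url:
/*
var parts = EPUBJS.core.uri("http://example.com/books/moby/chapter1.html#part2");
// parts.protocol  === "http"
// parts.host      === "example.com"
// parts.path      === "/books/moby/chapter1.html"
// parts.origin    === "http://example.com"
// parts.directory === "/books/moby/"
// parts.base      === "http://example.com/books/moby/"
// parts.filename  === "chapter1.html"
// parts.extension === "html"
// parts.fragment  === "part2"
*/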
//-- Parse out the folder, will return everything before the last slash
EPUBJS.core.folder = function(url){
var lastSlash = url.lastIndexOf('/');
	if(lastSlash == -1) return '';
	return url.slice(0, lastSlash + 1);
};
//-- https://github.com/ebidel/filer.js/blob/master/src/filer.js#L128
EPUBJS.core.dataURLToBlob = function(dataURL) {
var BASE64_MARKER = ';base64,',
parts, contentType, raw, rawLength, uInt8Array;
if (dataURL.indexOf(BASE64_MARKER) == -1) {
parts = dataURL.split(',');
contentType = parts[0].split(':')[1];
raw = parts[1];
return new Blob([raw], {type: contentType});
}
parts = dataURL.split(BASE64_MARKER);
contentType = parts[0].split(':')[1];
raw = window.atob(parts[1]);
rawLength = raw.length;
uInt8Array = new Uint8Array(rawLength);
for (var i = 0; i < rawLength; ++i) {
uInt8Array[i] = raw.charCodeAt(i);
}
return new Blob([uInt8Array], {type: contentType});
};
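//-- Illustrative usage (editor's sketch, not part of the library): convert a
//-- small base64 data URL into a Blob.
/*
var blob = EPUBJS.core.dataURLToBlob("data:text/plain;base64,SGVsbG8=");
// blob.type === "text/plain"; its contents decode to "Hello"
*/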
//-- Load scripts async: http://stackoverflow.com/questions/7718935/load-scripts-asynchronously
EPUBJS.core.addScript = function(src, callback, target) {
var s, r;
r = false;
s = document.createElement('script');
s.type = 'text/javascript';
s.async = false;
s.src = src;
s.onload = s.onreadystatechange = function() {
if ( !r && (!this.readyState || this.readyState == 'complete') ) {
r = true;
if(callback) callback();
}
};
target = target || document.body;
target.appendChild(s);
};
EPUBJS.core.addScripts = function(srcArr, callback, target) {
var total = srcArr.length,
curr = 0,
cb = function(){
curr++;
if(total == curr){
if(callback) callback();
}else{
EPUBJS.core.addScript(srcArr[curr], cb, target);
}
};
EPUBJS.core.addScript(srcArr[curr], cb, target);
};
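//-- Illustrative usage (editor's sketch, not part of the library): load two
//-- hypothetical scripts one after the other, then run a callback.
/*
EPUBJS.core.addScripts(["js/first.js", "js/second.js"], function() {
	console.log("both scripts loaded");
}, document.head);
*/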
EPUBJS.core.addCss = function(src, callback, target) {
var s, r;
r = false;
s = document.createElement('link');
s.type = 'text/css';
s.rel = "stylesheet";
s.href = src;
s.onload = s.onreadystatechange = function() {
if ( !r && (!this.readyState || this.readyState == 'complete') ) {
r = true;
if(callback) callback();
}
};
target = target || document.body;
target.appendChild(s);
};
EPUBJS.core.prefixed = function(unprefixed) {
var vendors = ["Webkit", "Moz", "O", "ms" ],
prefixes = ['-Webkit-', '-moz-', '-o-', '-ms-'],
upper = unprefixed[0].toUpperCase() + unprefixed.slice(1),
length = vendors.length;
if (typeof(document.documentElement.style[unprefixed]) != 'undefined') {
return unprefixed;
}
for ( var i=0; i < length; i++ ) {
if (typeof(document.documentElement.style[vendors[i] + upper]) != 'undefined') {
return vendors[i] + upper;
}
}
return unprefixed;
};
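//-- Illustrative usage (editor's sketch, not part of the library): resolve a
//-- CSS property name to the form supported by the current browser.
/*
var transformProp = EPUBJS.core.prefixed("transform");
// "transform" in modern browsers, or e.g. "WebkitTransform" where only the
// vendor-prefixed property exists
*/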
EPUBJS.core.resolveUrl = function(base, path) {
var url,
segments = [],
uri = EPUBJS.core.uri(path),
folders = base.split("/"),
paths;
if(uri.host) {
return path;
}
folders.pop();
paths = path.split("/");
paths.forEach(function(p){
if(p === ".."){
folders.pop();
}else{
segments.push(p);
}
});
url = folders.concat(segments);
return url.join("/");
};
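//-- Illustrative usage (editor's sketch, not part of the library): resolve a
//-- relative path (including "..") against a base path.
/*
EPUBJS.core.resolveUrl("/books/moby/OPS/package.opf", "chapter1.html");
// -> "/books/moby/OPS/chapter1.html"
EPUBJS.core.resolveUrl("/books/moby/OPS/package.opf", "../images/cover.jpg");
// -> "/books/moby/images/cover.jpg"
*/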
// http://stackoverflow.com/questions/105034/how-to-create-a-guid-uuid-in-javascript
EPUBJS.core.uuid = function() {
var d = new Date().getTime();
var uuid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
var r = (d + Math.random()*16)%16 | 0;
d = Math.floor(d/16);
return (c=='x' ? r : (r&0x7|0x8)).toString(16);
});
return uuid;
};
// Fast binary-search insert for a sorted array -- based on:
// http://stackoverflow.com/questions/1344500/efficient-way-to-insert-a-number-into-a-sorted-array-of-numbers
EPUBJS.core.insert = function(item, array, compareFunction) {
var location = EPUBJS.core.locationOf(item, array, compareFunction);
array.splice(location, 0, item);
return location;
};
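//-- Illustrative usage (editor's sketch, not part of the library): insert a
//-- value into an already-sorted array, keeping it sorted.
/*
var sorted = [1, 2, 4, 5];
var at = EPUBJS.core.insert(3, sorted); // at === 2
// sorted is now [1, 2, 3, 4, 5]
*/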
EPUBJS.core.locationOf = function(item, array, compareFunction, _start, _end) {
var start = _start || 0;
var end = _end || array.length;
var pivot = parseInt(start + (end - start) / 2);
var compared;
if(!compareFunction){
compareFunction = function(a, b) {
if(a > b) return 1;
if(a < b) return -1;
			if(a === b) return 0;
};
}
if(end-start <= 0) {
return pivot;
}
compared = compareFunction(array[pivot], item);
if(end-start === 1) {
return compared > 0 ? pivot : pivot + 1;
}
if(compared === 0) {
return pivot;
}
if(compared === -1) {
return EPUBJS.core.locationOf(item, array, compareFunction, pivot, end);
} else{
return EPUBJS.core.locationOf(item, array, compareFunction, start, pivot);
}
};
EPUBJS.core.indexOfSorted = function(item, array, compareFunction, _start, _end) {
var start = _start || 0;
var end = _end || array.length;
var pivot = parseInt(start + (end - start) / 2);
var compared;
if(!compareFunction){
compareFunction = function(a, b) {
if(a > b) return 1;
if(a < b) return -1;
			if(a === b) return 0;
};
}
if(end-start <= 0) {
return -1; // Not found
}
compared = compareFunction(array[pivot], item);
if(end-start === 1) {
return compared === 0 ? pivot : -1;
}
if(compared === 0) {
return pivot; // Found
}
if(compared === -1) {
return EPUBJS.core.indexOfSorted(item, array, compareFunction, pivot, end);
} else{
return EPUBJS.core.indexOfSorted(item, array, compareFunction, start, pivot);
}
};
EPUBJS.core.queue = function(_scope){
var _q = [];
var scope = _scope;
// Add an item to the queue
var enqueue = function(funcName, args, context) {
_q.push({
"funcName" : funcName,
"args" : args,
"context" : context
});
return _q;
};
// Run one item
var dequeue = function(){
var inwait;
if(_q.length) {
inwait = _q.shift();
// Defer to any current tasks
// setTimeout(function(){
scope[inwait.funcName].apply(inwait.context || scope, inwait.args);
// }, 0);
}
};
// Run All
var flush = function(){
while(_q.length) {
dequeue();
}
};
// Clear all items in wait
var clear = function(){
_q = [];
};
var length = function(){
return _q.length;
};
return {
"enqueue" : enqueue,
"dequeue" : dequeue,
"flush" : flush,
"clear" : clear,
"length" : length
};
};
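//-- Illustrative usage (editor's sketch, not part of the library): queue up
//-- method calls on a scope object and run them later with flush().
/*
var player = {
	play: function(track) { console.log("playing", track); }
};
var q = EPUBJS.core.queue(player);
q.enqueue("play", ["intro.mp3"]);
q.enqueue("play", ["chapter1.mp3"]);
q.flush(); // runs player.play("intro.mp3"), then player.play("chapter1.mp3")
*/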
// From: https://code.google.com/p/fbug/source/browse/branches/firebug1.10/content/firebug/lib/xpath.js
/**
* Gets an XPath for an element which describes its hierarchical location.
*/
EPUBJS.core.getElementXPath = function(element) {
if (element && element.id) {
return '//*[@id="' + element.id + '"]';
} else {
return EPUBJS.core.getElementTreeXPath(element);
}
};
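//-- Illustrative usage (editor's sketch, not part of the library): elements
//-- with an id get a short id-based XPath, others get a tree path.
/*
var el = document.getElementById("chapter-title");
EPUBJS.core.getElementXPath(el); // -> '//*[@id="chapter-title"]'
*/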
EPUBJS.core.getElementTreeXPath = function(element) {
var paths = [];
var isXhtml = (element.ownerDocument.documentElement.getAttribute('xmlns') === "http://www.w3.org/1999/xhtml");
var index, nodeName, tagName, pathIndex;
if(element.nodeType === Node.TEXT_NODE){
// index = Array.prototype.indexOf.call(element.parentNode.childNodes, element) + 1;
index = EPUBJS.core.indexOfTextNode(element) + 1;
paths.push("text()["+index+"]");
element = element.parentNode;
}
// Use nodeName (instead of localName) so namespace prefix is included (if any).
for (; element && element.nodeType == 1; element = element.parentNode)
{
index = 0;
for (var sibling = element.previousSibling; sibling; sibling = sibling.previousSibling)
{
// Ignore document type declaration.
if (sibling.nodeType == Node.DOCUMENT_TYPE_NODE) {
continue;
}
if (sibling.nodeName == element.nodeName) {
++index;
}
}
nodeName = element.nodeName.toLowerCase();
tagName = (isXhtml ? "xhtml:" + nodeName : nodeName);
pathIndex = (index ? "[" + (index+1) + "]" : "");
paths.splice(0, 0, tagName + pathIndex);
}
return paths.length ? "./" + paths.join("/") : null;
};
EPUBJS.core.nsResolver = function(prefix) {
var ns = {
'xhtml' : 'http://www.w3.org/1999/xhtml',
'epub': 'http://www.idpf.org/2007/ops'
};
return ns[prefix] || null;
};
//https://stackoverflow.com/questions/13482352/xquery-looking-for-text-with-single-quote/13483496#13483496
EPUBJS.core.cleanStringForXpath = function(str) {
var parts = str.match(/[^'"]+|['"]/g);
parts = parts.map(function(part){
if (part === "'") {
return '\"\'\"'; // output "'"
}
if (part === '"') {
return "\'\"\'"; // output '"'
}
return "\'" + part + "\'";
});
return "concat(\'\'," + parts.join(",") + ")";
};
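//-- Illustrative usage (editor's sketch, not part of the library): build an
//-- XPath string expression that can safely contain both quote characters.
/*
EPUBJS.core.cleanStringForXpath("it's");
// -> concat('','it',"'",'s')  -- usable inside an XPath expression such as
//    //text()[contains(., concat('','it',"'",'s'))]
*/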
EPUBJS.core.indexOfTextNode = function(textNode){
var parent = textNode.parentNode;
var children = parent.childNodes;
var sib;
var index = -1;
for (var i = 0; i < children.length; i++) {
sib = children[i];
if(sib.nodeType === Node.TEXT_NODE){
index++;
}
if(sib == textNode) break;
}
return index;
};
// Underscore
EPUBJS.core.defaults = function(obj) {
for (var i = 1, length = arguments.length; i < length; i++) {
var source = arguments[i];
for (var prop in source) {
if (obj[prop] === void 0) obj[prop] = source[prop];
}
}
return obj;
};
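//-- Illustrative usage (editor's sketch, not part of the library): fill in
//-- missing keys from one or more defaults objects without overwriting values.
/*
var options = EPUBJS.core.defaults({ width: "100%" }, { width: "600px", height: "800px" });
// options -> { width: "100%", height: "800px" }
*/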
EPUBJS.core.extend = function(target) {
var sources = [].slice.call(arguments, 1);
sources.forEach(function (source) {
if(!source) return;
Object.getOwnPropertyNames(source).forEach(function(propName) {
Object.defineProperty(target, propName, Object.getOwnPropertyDescriptor(source, propName));
});
});
return target;
};
EPUBJS.core.clone = function(obj) {
return EPUBJS.core.isArray(obj) ? obj.slice() : EPUBJS.core.extend({}, obj);
};
EPUBJS.core.isElement = function(obj) {
return !!(obj && obj.nodeType == 1);
};
EPUBJS.core.isNumber = function(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
};
EPUBJS.core.isString = function(str) {
return (typeof str === 'string' || str instanceof String);
};
EPUBJS.core.isArray = Array.isArray || function(obj) {
return Object.prototype.toString.call(obj) === '[object Array]';
};
// Lodash
EPUBJS.core.values = function(object) {
var index = -1;
var props, length, result;
if(!object) return [];
props = Object.keys(object);
length = props.length;
result = Array(length);
while (++index < length) {
result[index] = object[props[index]];
}
return result;
};
EPUBJS.core.indexOfNode = function(node, typeId) {
var parent = node.parentNode;
var children = parent.childNodes;
var sib;
var index = -1;
for (var i = 0; i < children.length; i++) {
sib = children[i];
if (sib.nodeType === typeId) {
index++;
}
if (sib == node) break;
}
return index;
}
EPUBJS.core.indexOfTextNode = function(textNode) {
return EPUBJS.core.indexOfNode(textNode, TEXT_NODE);
}
EPUBJS.core.indexOfElementNode = function(elementNode) {
return EPUBJS.core.indexOfNode(elementNode, ELEMENT_NODE);
}
var EPUBJS = EPUBJS || {};
EPUBJS.reader = {};
EPUBJS.reader.plugins = {}; //-- Attach extra Controllers as plugins (like search?)
(function(root, $) {
var previousReader = root.ePubReader || {};
var ePubReader = root.ePubReader = function(path, options) {
return new EPUBJS.Reader(path, options);
};
//exports to multiple environments
if (typeof define === 'function' && define.amd) {
//AMD
define(function(){ return Reader; });
} else if (typeof module != "undefined" && module.exports) {
//Node
module.exports = ePubReader;
}
})(window, jQuery);
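//-- Illustrative usage (editor's sketch, not part of the library): create a
//-- reader for a hypothetical unpacked epub directory, restoring saved state.
/*
var reader = ePubReader("books/moby-dick/", { restore: true });
*/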
EPUBJS.Reader = function(bookPath, _options) {
var reader = this;
var book;
var plugin;
var $viewer = $("#viewer");
var search = window.location.search;
var parameters;
this.settings = EPUBJS.core.defaults(_options || {}, {
bookPath : bookPath,
restore : false,
reload : false,
bookmarks : undefined,
annotations : undefined,
contained : undefined,
bookKey : undefined,
styles : undefined,
sidebarReflow: false,
generatePagination: false,
history: true
});
	// Override options with search parameters
if(search) {
parameters = search.slice(1).split("&");
parameters.forEach(function(p){
var split = p.split("=");
var name = split[0];
var value = split[1] || '';
reader.settings[name] = decodeURIComponent(value);
});
}
this.setBookKey(this.settings.bookPath); //-- This could be username + path or any unique string
if(this.settings.restore && this.isSaved()) {
this.applySavedSettings();
}
this.settings.styles = this.settings.styles || {
fontSize : "100%"
};
this.book = book = new ePub(this.settings.bookPath, this.settings);
this.offline = false;
this.sidebarOpen = false;
if(!this.settings.bookmarks) {
this.settings.bookmarks = [];
}
if(!this.settings.annotations) {
this.settings.annotations = [];
}
if(this.settings.generatePagination) {
book.generatePagination($viewer.width(), $viewer.height());
}
this.rendition = book.renderTo("viewer", {
ignoreClass: "annotator-hl",
width: "100%",
height: "100%"
});
if(this.settings.previousLocationCfi) {
this.displayed = this.rendition.display(this.settings.previousLocationCfi);
} else {
this.displayed = this.rendition.display();
}
book.ready.then(function () {
reader.ReaderController = EPUBJS.reader.ReaderController.call(reader, book);
reader.SettingsController = EPUBJS.reader.SettingsController.call(reader, book);
reader.ControlsController = EPUBJS.reader.ControlsController.call(reader, book);
reader.SidebarController = EPUBJS.reader.SidebarController.call(reader, book);
reader.BookmarksController = EPUBJS.reader.BookmarksController.call(reader, book);
reader.NotesController = EPUBJS.reader.NotesController.call(reader, book);
window.addEventListener("hashchange", this.hashChanged.bind(this), false);
document.addEventListener('keydown', this.adjustFontSize.bind(this), false);
this.rendition.on("keydown", this.adjustFontSize.bind(this));
this.rendition.on("keydown", reader.ReaderController.arrowKeys.bind(this));
this.rendition.on("selected", this.selectedRange.bind(this));
}.bind(this)).then(function() {
reader.ReaderController.hideLoader();
}.bind(this));
// Call Plugins
for(plugin in EPUBJS.reader.plugins) {
if(EPUBJS.reader.plugins.hasOwnProperty(plugin)) {
reader[plugin] = EPUBJS.reader.plugins[plugin].call(reader, book);
}
}
book.loaded.metadata.then(function(meta) {
reader.MetaController = EPUBJS.reader.MetaController.call(reader, meta);
});
book.loaded.navigation.then(function(navigation) {
reader.TocController = EPUBJS.reader.TocController.call(reader, navigation);
});
window.addEventListener("beforeunload", this.unload.bind(this), false);
return this;
};
EPUBJS.Reader.prototype.adjustFontSize = function(e) {
var fontSize;
var interval = 2;
var PLUS = 187;
var MINUS = 189;
var ZERO = 48;
var MOD = (e.ctrlKey || e.metaKey );
if(!this.settings.styles) return;
if(!this.settings.styles.fontSize) {
this.settings.styles.fontSize = "100%";
}
fontSize = parseInt(this.settings.styles.fontSize.slice(0, -1));
if(MOD && e.keyCode == PLUS) {
e.preventDefault();
this.book.setStyle("fontSize", (fontSize + interval) + "%");
}
if(MOD && e.keyCode == MINUS){
e.preventDefault();
this.book.setStyle("fontSize", (fontSize - interval) + "%");
}
if(MOD && e.keyCode == ZERO){
e.preventDefault();
this.book.setStyle("fontSize", "100%");
}
};
EPUBJS.Reader.prototype.addBookmark = function(cfi) {
var present = this.isBookmarked(cfi);
if(present > -1 ) return;
this.settings.bookmarks.push(cfi);
this.trigger("reader:bookmarked", cfi);
};
EPUBJS.Reader.prototype.removeBookmark = function(cfi) {
var bookmark = this.isBookmarked(cfi);
if( bookmark === -1 ) return;
this.settings.bookmarks.splice(bookmark, 1);
this.trigger("reader:unbookmarked", bookmark);
};
EPUBJS.Reader.prototype.isBookmarked = function(cfi) {
var bookmarks = this.settings.bookmarks;
return bookmarks.indexOf(cfi);
};
/*
EPUBJS.Reader.prototype.searchBookmarked = function(cfi) {
var bookmarks = this.settings.bookmarks,
len = bookmarks.length,
i;
for(i = 0; i < len; i++) {
if (bookmarks[i]['cfi'] === cfi) return i;
}
return -1;
};
*/
EPUBJS.Reader.prototype.clearBookmarks = function() {
this.settings.bookmarks = [];
};
//-- Notes
EPUBJS.Reader.prototype.addNote = function(note) {
this.settings.annotations.push(note);
};
EPUBJS.Reader.prototype.removeNote = function(note) {
var index = this.settings.annotations.indexOf(note);
if( index === -1 ) return;
delete this.settings.annotations[index];
};
EPUBJS.Reader.prototype.clearNotes = function() {
this.settings.annotations = [];
};
//-- Settings
EPUBJS.Reader.prototype.setBookKey = function(identifier){
if(!this.settings.bookKey) {
this.settings.bookKey = "epubjsreader:" + EPUBJS.VERSION + ":" + window.location.host + ":" + identifier;
}
return this.settings.bookKey;
};
//-- Checks if the book setting can be retrieved from localStorage
EPUBJS.Reader.prototype.isSaved = function(bookPath) {
var storedSettings;
if(!localStorage) {
return false;
}
storedSettings = localStorage.getItem(this.settings.bookKey);
if(storedSettings === null) {
return false;
} else {
return true;
}
};
EPUBJS.Reader.prototype.removeSavedSettings = function() {
if(!localStorage) {
return false;
}
localStorage.removeItem(this.settings.bookKey);
};
EPUBJS.Reader.prototype.applySavedSettings = function() {
var stored;
if(!localStorage) {
return false;
}
try {
stored = JSON.parse(localStorage.getItem(this.settings.bookKey));
} catch (e) { // parsing error of localStorage
return false;
}
if(stored) {
// Merge styles
if(stored.styles) {
this.settings.styles = EPUBJS.core.defaults(this.settings.styles || {}, stored.styles);
}
// Merge the rest
this.settings = EPUBJS.core.defaults(this.settings, stored);
return true;
} else {
return false;
}
};
EPUBJS.Reader.prototype.saveSettings = function(){
if(this.book) {
this.settings.previousLocationCfi = this.rendition.currentLocation().start.cfi;
}
if(!localStorage) {
return false;
}
localStorage.setItem(this.settings.bookKey, JSON.stringify(this.settings));
};
EPUBJS.Reader.prototype.unload = function(){
if(this.settings.restore && localStorage) {
this.saveSettings();
}
};
EPUBJS.Reader.prototype.hashChanged = function(){
var hash = window.location.hash.slice(1);
this.rendition.display(hash);
};
EPUBJS.Reader.prototype.selectedRange = function(cfiRange){
var cfiFragment = "#"+cfiRange;
// Update the History Location
if(this.settings.history &&
window.location.hash != cfiFragment) {
// Add CFI fragment to the history
history.pushState({}, '', cfiFragment);
this.currentLocationCfi = cfiRange;
}
};
//-- Enable binding events to reader
RSVP.EventTarget.mixin(EPUBJS.Reader.prototype);
EPUBJS.reader.BookmarksController = function() {
var reader = this;
var book = this.book;
var rendition = this.rendition;
var $bookmarks = $("#bookmarksView"),
$list = $bookmarks.find("#bookmarks");
var docfrag = document.createDocumentFragment();
var show = function() {
$bookmarks.show();
};
var hide = function() {
$bookmarks.hide();
};
var counter = 0;
var createBookmarkItem = function(cfi) {
var listitem = document.createElement("li"),
link = document.createElement("a");
listitem.id = "bookmark-"+counter;
listitem.classList.add('list_item');
var spineItem = book.spine.get(cfi);
var tocItem;
if (spineItem.index in book.navigation.toc) {
tocItem = book.navigation.toc[spineItem.index];
link.textContent = tocItem.label;
} else {
link.textContent = cfi;
}
link.href = cfi;
link.classList.add('bookmark_link');
link.addEventListener("click", function(event){
var cfi = this.getAttribute('href');
rendition.display(cfi);
event.preventDefault();
}, false);
listitem.appendChild(link);
counter++;
return listitem;
};
this.settings.bookmarks.forEach(function(cfi) {
var bookmark = createBookmarkItem(cfi);
docfrag.appendChild(bookmark);
});
$list.append(docfrag);
this.on("reader:bookmarked", function(cfi) {
var item = createBookmarkItem(cfi);
$list.append(item);
});
this.on("reader:unbookmarked", function(index) {
var $item = $("#bookmark-"+index);
$item.remove();
});
return {
"show" : show,
"hide" : hide
};
};
EPUBJS.reader.ControlsController = function(book) {
var reader = this;
var rendition = this.rendition;
var $store = $("#store"),
$fullscreen = $("#fullscreen"),
$fullscreenicon = $("#fullscreenicon"),
$cancelfullscreenicon = $("#cancelfullscreenicon"),
$slider = $("#slider"),
$main = $("#main"),
$sidebar = $("#sidebar"),
$settings = $("#setting"),
$bookmark = $("#bookmark");
/*
var goOnline = function() {
reader.offline = false;
// $store.attr("src", $icon.data("save"));
};
var goOffline = function() {
reader.offline = true;
// $store.attr("src", $icon.data("saved"));
};
var fullscreen = false;
book.on("book:online", goOnline);
book.on("book:offline", goOffline);
*/
$slider.on("click", function () {
if(reader.sidebarOpen) {
reader.SidebarController.hide();
$slider.addClass("icon-menu");
$slider.removeClass("icon-right");
} else {
reader.SidebarController.show();
$slider.addClass("icon-right");
$slider.removeClass("icon-menu");
}
});
if(typeof screenfull !== 'undefined') {
$fullscreen.on("click", function() {
screenfull.toggle($('#container')[0]);
});
if(screenfull.raw) {
document.addEventListener(screenfull.raw.fullscreenchange, function() {
fullscreen = screenfull.isFullscreen;
if(fullscreen) {
$fullscreen
.addClass("icon-resize-small")
.removeClass("icon-resize-full");
} else {
$fullscreen
.addClass("icon-resize-full")
.removeClass("icon-resize-small");
}
});
}
}
$settings.on("click", function() {
reader.SettingsController.show();
});
$bookmark.on("click", function() {
var cfi = reader.rendition.currentLocation().start.cfi;
var bookmarked = reader.isBookmarked(cfi);
if(bookmarked === -1) { //-- Add bookmark
reader.addBookmark(cfi);
$bookmark
.addClass("icon-bookmark")
.removeClass("icon-bookmark-empty");
} else { //-- Remove Bookmark
reader.removeBookmark(cfi);
$bookmark
.removeClass("icon-bookmark")
.addClass("icon-bookmark-empty");
}
});
rendition.on('relocated', function(location){
var cfi = location.start.cfi;
var cfiFragment = "#" + cfi;
//-- Check if bookmarked
var bookmarked = reader.isBookmarked(cfi);
if(bookmarked === -1) { //-- Not bookmarked
$bookmark
.removeClass("icon-bookmark")
.addClass("icon-bookmark-empty");
} else { //-- Bookmarked
$bookmark
.addClass("icon-bookmark")
.removeClass("icon-bookmark-empty");
}
reader.currentLocationCfi = cfi;
// Update the History Location
if(reader.settings.history &&
window.location.hash != cfiFragment) {
// Add CFI fragment to the history
history.pushState({}, '', cfiFragment);
}
});
return {
};
};
EPUBJS.reader.MetaController = function(meta) {
var title = meta.title,
author = meta.creator;
var $title = $("#book-title"),
$author = $("#chapter-title"),
$dash = $("#title-seperator");
document.title = title+" – "+author;
$title.html(title);
$author.html(author);
$dash.show();
};
EPUBJS.reader.NotesController = function() {
var book = this.book;
var rendition = this.rendition;
var reader = this;
var $notesView = $("#notesView");
var $notes = $("#notes");
var $text = $("#note-text");
var $anchor = $("#note-anchor");
var annotations = reader.settings.annotations;
var renderer = book.renderer;
var popups = [];
var epubcfi = new ePub.CFI();
var show = function() {
$notesView.show();
};
var hide = function() {
$notesView.hide();
}
var insertAtPoint = function(e) {
var range;
var textNode;
var offset;
var doc = book.renderer.doc;
var cfi;
var annotation;
// standard
if (doc.caretPositionFromPoint) {
range = doc.caretPositionFromPoint(e.clientX, e.clientY);
textNode = range.offsetNode;
offset = range.offset;
// WebKit
} else if (doc.caretRangeFromPoint) {
range = doc.caretRangeFromPoint(e.clientX, e.clientY);
textNode = range.startContainer;
offset = range.startOffset;
}
if (textNode.nodeType !== 3) {
for (var i=0; i < textNode.childNodes.length; i++) {
if (textNode.childNodes[i].nodeType == 3) {
textNode = textNode.childNodes[i];
break;
}
}
}
		// Find the end of the sentence
offset = textNode.textContent.indexOf(".", offset);
if(offset === -1){
offset = textNode.length; // Last item
} else {
offset += 1; // After the period
}
cfi = epubcfi.generateCfiFromTextNode(textNode, offset, book.renderer.currentChapter.cfiBase);
annotation = {
annotatedAt: new Date(),
anchor: cfi,
body: $text.val()
}
// add to list
reader.addNote(annotation);
// attach
addAnnotation(annotation);
placeMarker(annotation);
// clear
$text.val('');
$anchor.text("Attach");
$text.prop("disabled", false);
rendition.off("click", insertAtPoint);
};
var addAnnotation = function(annotation){
var note = document.createElement("li");
var link = document.createElement("a");
note.innerHTML = annotation.body;
// note.setAttribute("ref", annotation.anchor);
link.innerHTML = " context »";
link.href = "#"+annotation.anchor;
link.onclick = function(){
rendition.display(annotation.anchor);
return false;
};
note.appendChild(link);
$notes.append(note);
};
var placeMarker = function(annotation){
var doc = book.renderer.doc;
var marker = document.createElement("span");
var mark = document.createElement("a");
marker.classList.add("footnotesuperscript", "reader_generated");
marker.style.verticalAlign = "super";
marker.style.fontSize = ".75em";
// marker.style.position = "relative";
marker.style.lineHeight = "1em";
// mark.style.display = "inline-block";
mark.style.padding = "2px";
mark.style.backgroundColor = "#fffa96";
mark.style.borderRadius = "5px";
mark.style.cursor = "pointer";
marker.id = "note-"+EPUBJS.core.uuid();
mark.innerHTML = annotations.indexOf(annotation) + 1 + "[Reader]";
marker.appendChild(mark);
epubcfi.addMarker(annotation.anchor, doc, marker);
markerEvents(marker, annotation.body);
}
var markerEvents = function(item, txt){
var id = item.id;
var showPop = function(){
var poppos,
iheight = renderer.height,
iwidth = renderer.width,
tip,
pop,
maxHeight = 225,
itemRect,
left,
top,
pos;
//-- create a popup with endnote inside of it
if(!popups[id]) {
popups[id] = document.createElement("div");
popups[id].setAttribute("class", "popup");
pop_content = document.createElement("div");
popups[id].appendChild(pop_content);
pop_content.innerHTML = txt;
pop_content.setAttribute("class", "pop_content");
renderer.render.document.body.appendChild(popups[id]);
//-- TODO: will these leak memory? - Fred
popups[id].addEventListener("mouseover", onPop, false);
popups[id].addEventListener("mouseout", offPop, false);
//-- Add hide on page change
rendition.on("locationChanged", hidePop, this);
rendition.on("locationChanged", offPop, this);
// chapter.book.on("renderer:chapterDestroy", hidePop, this);
}
pop = popups[id];
//-- get location of item
itemRect = item.getBoundingClientRect();
left = itemRect.left;
top = itemRect.top;
//-- show the popup
pop.classList.add("show");
//-- locations of popup
var popRect = pop.getBoundingClientRect();
//-- position the popup
pop.style.left = left - popRect.width / 2 + "px";
pop.style.top = top + "px";
//-- Adjust max height
if(maxHeight > iheight / 2.5) {
maxHeight = iheight / 2.5;
pop_content.style.maxHeight = maxHeight + "px";
}
//-- switch above / below
if(popRect.height + top >= iheight - 25) {
pop.style.top = top - popRect.height + "px";
pop.classList.add("above");
}else{
pop.classList.remove("above");
}
//-- switch left
if(left - popRect.width <= 0) {
pop.style.left = left + "px";
pop.classList.add("left");
}else{
pop.classList.remove("left");
}
//-- switch right
if(left + popRect.width / 2 >= iwidth) {
//-- TEMP MOVE: 300
pop.style.left = left - 300 + "px";
popRect = pop.getBoundingClientRect();
pop.style.left = left - popRect.width + "px";
//-- switch above / below again
if(popRect.height + top >= iheight - 25) {
pop.style.top = top - popRect.height + "px";
pop.classList.add("above");
}else{
pop.classList.remove("above");
}
pop.classList.add("right");
}else{
pop.classList.remove("right");
}
}
var onPop = function(){
popups[id].classList.add("on");
}
var offPop = function(){
popups[id].classList.remove("on");
}
var hidePop = function(){
setTimeout(function(){
popups[id].classList.remove("show");
}, 100);
}
var openSidebar = function(){
reader.ReaderController.slideOut();
show();
};
item.addEventListener("mouseover", showPop, false);
item.addEventListener("mouseout", hidePop, false);
item.addEventListener("click", openSidebar, false);
}
$anchor.on("click", function(e){
$anchor.text("Cancel");
$text.prop("disabled", "true");
// listen for selection
rendition.on("click", insertAtPoint);
});
annotations.forEach(function(note) {
addAnnotation(note);
});
/*
renderer.registerHook("beforeChapterDisplay", function(callback, renderer){
var chapter = renderer.currentChapter;
annotations.forEach(function(note) {
var cfi = epubcfi.parse(note.anchor);
if(cfi.spinePos === chapter.spinePos) {
try {
placeMarker(note);
} catch(e) {
console.log("anchoring failed", note.anchor);
}
}
});
callback();
}, true);
*/
return {
"show" : show,
"hide" : hide
};
};
EPUBJS.reader.ReaderController = function(book) {
var $main = $("#main"),
$divider = $("#divider"),
$loader = $("#loader"),
$next = $("#next"),
$prev = $("#prev");
var reader = this;
var book = this.book;
var rendition = this.rendition;
var slideIn = function() {
var currentPosition = rendition.currentLocation().start.cfi;
if (reader.settings.sidebarReflow){
$main.removeClass('single');
$main.one("transitionend", function(){
rendition.resize();
});
} else {
$main.removeClass("closed");
}
};
var slideOut = function() {
var location = rendition.currentLocation();
if (!location) {
return;
}
var currentPosition = location.start.cfi;
if (reader.settings.sidebarReflow){
$main.addClass('single');
$main.one("transitionend", function(){
rendition.resize();
});
} else {
$main.addClass("closed");
}
};
var showLoader = function() {
$loader.show();
hideDivider();
};
var hideLoader = function() {
$loader.hide();
//-- If the book is using spreads, show the divider
// if(book.settings.spreads) {
// showDivider();
// }
};
var showDivider = function() {
$divider.addClass("show");
};
var hideDivider = function() {
$divider.removeClass("show");
};
var keylock = false;
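// keyCode 37 is the left-arrow key and 39 the right-arrow key; for right-to-left
// books the page direction is swapped below so "next" always follows reading order.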
var arrowKeys = function(e) {
if(e.keyCode == 37) {
if(book.package.metadata.direction === "rtl") {
rendition.next();
} else {
rendition.prev();
}
$prev.addClass("active");
keylock = true;
setTimeout(function(){
keylock = false;
$prev.removeClass("active");
}, 100);
e.preventDefault();
}
if(e.keyCode == 39) {
if(book.package.metadata.direction === "rtl") {
rendition.prev();
} else {
rendition.next();
}
$next.addClass("active");
keylock = true;
setTimeout(function(){
keylock = false;
$next.removeClass("active");
}, 100);
e.preventDefault();
}
}
document.addEventListener('keydown', arrowKeys, false);
$next.on("click", function(e){
if(book.package.metadata.direction === "rtl") {
rendition.prev();
} else {
rendition.next();
}
e.preventDefault();
});
$prev.on("click", function(e){
if(book.package.metadata.direction === "rtl") {
rendition.next();
} else {
rendition.prev();
}
e.preventDefault();
});
rendition.on("layout", function(props){
if(props.spread === true) {
showDivider();
} else {
hideDivider();
}
});
rendition.on('relocated', function(location){
if (location.atStart) {
$prev.addClass("disabled");
}
if (location.atEnd) {
$next.addClass("disabled");
}
});
return {
"slideOut" : slideOut,
"slideIn" : slideIn,
"showLoader" : showLoader,
"hideLoader" : hideLoader,
"showDivider" : showDivider,
"hideDivider" : hideDivider,
"arrowKeys" : arrowKeys
};
};
EPUBJS.reader.SettingsController = function() {
var book = this.book;
var reader = this;
var $settings = $("#settings-modal"),
$overlay = $(".overlay");
var show = function() {
$settings.addClass("md-show");
};
var hide = function() {
$settings.removeClass("md-show");
};
var $sidebarReflowSetting = $('#sidebarReflow');
$sidebarReflowSetting.on('click', function() {
reader.settings.sidebarReflow = !reader.settings.sidebarReflow;
});
$settings.find(".closer").on("click", function() {
hide();
});
$overlay.on("click", function() {
hide();
});
return {
"show" : show,
"hide" : hide
};
};
EPUBJS.reader.SidebarController = function(book) {
var reader = this;
var $sidebar = $("#sidebar"),
$panels = $("#panels");
var activePanel = "Toc";
var changePanelTo = function(viewName) {
var controllerName = viewName + "Controller";
if(activePanel == viewName || typeof reader[controllerName] === 'undefined' ) return;
reader[activePanel+ "Controller"].hide();
reader[controllerName].show();
activePanel = viewName;
$panels.find('.active').removeClass("active");
$panels.find("#show-" + viewName ).addClass("active");
};
var getActivePanel = function() {
return activePanel;
};
var show = function() {
reader.sidebarOpen = true;
reader.ReaderController.slideOut();
$sidebar.addClass("open");
}
var hide = function() {
reader.sidebarOpen = false;
reader.ReaderController.slideIn();
$sidebar.removeClass("open");
}
$panels.find(".show_view").on("click", function(event) {
var view = $(this).data("view");
changePanelTo(view);
event.preventDefault();
});
return {
'show' : show,
'hide' : hide,
'getActivePanel' : getActivePanel,
'changePanelTo' : changePanelTo
};
};
EPUBJS.reader.TocController = function(toc) {
var book = this.book;
var rendition = this.rendition;
var $list = $("#tocView"),
docfrag = document.createDocumentFragment();
var currentChapter = false;
var generateTocItems = function(toc, level) {
var container = document.createElement("ul");
if(!level) level = 1;
toc.forEach(function(chapter) {
var listitem = document.createElement("li"),
link = document.createElement("a"),
toggle = document.createElement("a");
var subitems;
listitem.id = "toc-"+chapter.id;
listitem.classList.add('list_item');
link.textContent = chapter.label;
link.href = chapter.href;
link.classList.add('toc_link');
listitem.appendChild(link);
if(chapter.subitems && chapter.subitems.length > 0) {
level++;
subitems = generateTocItems(chapter.subitems, level);
toggle.classList.add('toc_toggle');
listitem.insertBefore(toggle, link);
listitem.appendChild(subitems);
}
container.appendChild(listitem);
});
return container;
};
var onShow = function() {
$list.show();
};
var onHide = function() {
$list.hide();
};
var chapterChange = function(e) {
var id = e.id,
$item = $list.find("#toc-"+id),
$current = $list.find(".currentChapter"),
$open = $list.find('.openChapter');
if($item.length){
if($item != $current && $item.has(currentChapter).length > 0) {
$current.removeClass("currentChapter");
}
$item.addClass("currentChapter");
// $open.removeClass("openChapter");
$item.parents('li').addClass("openChapter");
}
};
rendition.on('rendered', chapterChange);
var tocitems = generateTocItems(toc);
docfrag.appendChild(tocitems);
$list.append(docfrag);
$list.find(".toc_link").on("click", function(event){
var url = this.getAttribute('href');
event.preventDefault();
//-- Provide the Book with the url to show
// The Url must be found in the books manifest
rendition.display(url);
$list.find(".currentChapter")
.addClass("openChapter")
.removeClass("currentChapter");
$(this).parent('li').addClass("currentChapter");
});
$list.find(".toc_toggle").on("click", function(event){
var $el = $(this).parent('li'),
open = $el.hasClass("openChapter");
event.preventDefault();
if(open){
$el.removeClass("openChapter");
} else {
$el.addClass("openChapter");
}
});
return {
"show" : onShow,
"hide" : onHide
};
};
//# sourceMappingURL=reader.js.map
/dsin100daysv33-6.0.1.tar.gz/dsin100daysv33-6.0.1/notebook/utils.py
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from __future__ import print_function
import asyncio
import concurrent.futures
import ctypes
import errno
import inspect
import os
import stat
import sys
from distutils.version import LooseVersion
try:
from urllib.parse import quote, unquote, urlparse, urljoin
from urllib.request import pathname2url
except ImportError:
from urllib import quote, unquote, pathname2url
from urlparse import urlparse, urljoin
# tornado.concurrent.Future is asyncio.Future
# in tornado >=5 with Python 3
from tornado.concurrent import Future as TornadoFuture
from tornado import gen
from ipython_genutils import py3compat
# UF_HIDDEN is a stat flag not defined in the stat module.
# It is used by BSD to indicate hidden files.
UF_HIDDEN = getattr(stat, 'UF_HIDDEN', 32768)
def exists(path):
"""Replacement for `os.path.exists` which works for host mapped volumes
on Windows containers
"""
try:
os.lstat(path)
except OSError:
return False
return True
def url_path_join(*pieces):
"""Join components of url into a relative url
Use to prevent double slash when joining subpath. This will leave the
initial and final / in place
"""
initial = pieces[0].startswith('/')
final = pieces[-1].endswith('/')
stripped = [s.strip('/') for s in pieces]
result = '/'.join(s for s in stripped if s)
if initial: result = '/' + result
if final: result = result + '/'
if result == '//': result = '/'
return result
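# Illustrative calls (examples added here, not part of the original module):
#   url_path_join('/base/', '/api/', 'contents')  -> '/base/api/contents'
#   url_path_join('a', '', 'b/')                  -> 'a/b/'
#   url_path_join('/')                            -> '/'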
def url_is_absolute(url):
"""Determine whether a given URL is absolute"""
return urlparse(url).path.startswith("/")
def path2url(path):
"""Convert a local file path to a URL"""
pieces = [ quote(p) for p in path.split(os.sep) ]
# preserve trailing /
if pieces[-1] == '':
pieces[-1] = '/'
url = url_path_join(*pieces)
return url
def url2path(url):
"""Convert a URL to a local file path"""
pieces = [ unquote(p) for p in url.split('/') ]
path = os.path.join(*pieces)
return path
def url_escape(path):
"""Escape special characters in a URL path
Turns '/foo bar/' into '/foo%20bar/'
"""
parts = py3compat.unicode_to_str(path, encoding='utf8').split('/')
return u'/'.join([quote(p) for p in parts])
def url_unescape(path):
"""Unescape special characters in a URL path
Turns '/foo%20bar/' into '/foo bar/'
"""
return u'/'.join([
py3compat.str_to_unicode(unquote(p), encoding='utf8')
for p in py3compat.unicode_to_str(path, encoding='utf8').split('/')
])
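# Example round trip (illustrative): url_escape('/foo bar/') -> '/foo%20bar/' and
# url_unescape('/foo%20bar/') -> '/foo bar/'.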
def is_file_hidden_win(abs_path, stat_res=None):
"""Is a file hidden?
This only checks the file itself; it should be called in combination with
checking the directory containing the file.
Use is_hidden() instead to check the file and its parent directories.
Parameters
----------
abs_path : unicode
The absolute path to check.
stat_res : os.stat_result, optional
Ignored on Windows, exists for compatibility with POSIX version of the
function.
"""
if os.path.basename(abs_path).startswith('.'):
return True
win32_FILE_ATTRIBUTE_HIDDEN = 0x02
try:
attrs = ctypes.windll.kernel32.GetFileAttributesW(
py3compat.cast_unicode(abs_path)
)
except AttributeError:
pass
else:
if attrs > 0 and attrs & win32_FILE_ATTRIBUTE_HIDDEN:
return True
return False
def is_file_hidden_posix(abs_path, stat_res=None):
"""Is a file hidden?
This only checks the file itself; it should be called in combination with
checking the directory containing the file.
Use is_hidden() instead to check the file and its parent directories.
Parameters
----------
abs_path : unicode
The absolute path to check.
stat_res : os.stat_result, optional
The result of calling stat() on abs_path. If not passed, this function
will call stat() internally.
"""
if os.path.basename(abs_path).startswith('.'):
return True
if stat_res is None or stat.S_ISLNK(stat_res.st_mode):
try:
stat_res = os.stat(abs_path)
except OSError as e:
if e.errno == errno.ENOENT:
return False
raise
# check that dirs can be listed
if stat.S_ISDIR(stat_res.st_mode):
# use x-access, not actual listing, in case of slow/large listings
if not os.access(abs_path, os.X_OK | os.R_OK):
return True
# check UF_HIDDEN
if getattr(stat_res, 'st_flags', 0) & UF_HIDDEN:
return True
return False
if sys.platform == 'win32':
is_file_hidden = is_file_hidden_win
else:
is_file_hidden = is_file_hidden_posix
def is_hidden(abs_path, abs_root=''):
"""Is a file hidden or contained in a hidden directory?
This will start with the rightmost path element and work backwards to the
given root to see if a path is hidden or in a hidden directory. Hidden is
determined by either name starting with '.' or the UF_HIDDEN flag as
reported by stat.
If abs_path is the same directory as abs_root, it will be visible even if
that is a hidden folder. This only checks the visibility of files
and directories *within* abs_root.
Parameters
----------
abs_path : unicode
The absolute path to check for hidden directories.
abs_root : unicode
The absolute path of the root directory in which hidden directories
should be checked for.
"""
if os.path.normpath(abs_path) == os.path.normpath(abs_root):
return False
if is_file_hidden(abs_path):
return True
if not abs_root:
abs_root = abs_path.split(os.sep, 1)[0] + os.sep
inside_root = abs_path[len(abs_root):]
if any(part.startswith('.') for part in inside_root.split(os.sep)):
return True
# check UF_HIDDEN on any location up to root.
# is_file_hidden() already checked the file, so start from its parent dir
path = os.path.dirname(abs_path)
while path and path.startswith(abs_root) and path != abs_root:
if not exists(path):
path = os.path.dirname(path)
continue
try:
# may fail on Windows junctions
st = os.lstat(path)
except OSError:
return True
if getattr(st, 'st_flags', 0) & UF_HIDDEN:
return True
path = os.path.dirname(path)
return False
def samefile_simple(path, other_path):
"""
Fill in for os.path.samefile when it is unavailable (Windows+py2).
Do a case-insensitive string comparison in this case
plus comparing the full stat result (including times)
because Windows + py2 doesn't support the stat fields
needed for identifying if it's the same file (st_ino, st_dev).
Only to be used if os.path.samefile is not available.
Parameters
-----------
path: String representing a path to a file
other_path: String representing a path to another file
Returns
-----------
same: Boolean that is True if both path and other path are the same
"""
path_stat = os.stat(path)
other_path_stat = os.stat(other_path)
return (path.lower() == other_path.lower()
and path_stat == other_path_stat)
def to_os_path(path, root=''):
"""Convert an API path to a filesystem path
If given, root will be prepended to the path.
root must be a filesystem path already.
"""
parts = path.strip('/').split('/')
parts = [p for p in parts if p != ''] # remove duplicate splits
path = os.path.join(root, *parts)
return path
def to_api_path(os_path, root=''):
"""Convert a filesystem path to an API path
If given, root will be removed from the path.
root must be a filesystem path already.
"""
if os_path.startswith(root):
os_path = os_path[len(root):]
parts = os_path.strip(os.path.sep).split(os.path.sep)
parts = [p for p in parts if p != ''] # remove duplicate splits
path = '/'.join(parts)
return path
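# Illustrative round trip, assuming a POSIX filesystem where os.path.sep == '/':
#   to_os_path('notebooks/demo.ipynb', '/srv')        -> '/srv/notebooks/demo.ipynb'
#   to_api_path('/srv/notebooks/demo.ipynb', '/srv')  -> 'notebooks/demo.ipynb'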
def check_version(v, check):
"""check version string v >= check
If dev/prerelease tags result in TypeError for string-number comparison,
it is assumed that the dependency is satisfied.
Users on dev branches are responsible for keeping their own packages up to date.
"""
try:
return LooseVersion(v) >= LooseVersion(check)
except TypeError:
return True
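# Example (illustrative): check_version('6.0.1', '5.7.0') is True; a dev tag such as
# check_version('1.0.dev0', '1.0.1') also returns True on Python 3 because comparing a
# string component against a number raises TypeError, which is treated as satisfied.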
# Copy of IPython.utils.process.check_pid:
def _check_pid_win32(pid):
import ctypes
# OpenProcess returns 0 if no such process (of ours) exists
# positive int otherwise
return bool(ctypes.windll.kernel32.OpenProcess(1,0,pid))
def _check_pid_posix(pid):
"""Copy of IPython.utils.process.check_pid"""
try:
os.kill(pid, 0)
except OSError as err:
if err.errno == errno.ESRCH:
return False
elif err.errno == errno.EPERM:
# Don't have permission to signal the process - probably means it exists
return True
raise
else:
return True
if sys.platform == 'win32':
check_pid = _check_pid_win32
else:
check_pid = _check_pid_posix
def maybe_future(obj):
"""Like tornado's deprecated gen.maybe_future
but more compatible with asyncio for recent versions
of tornado
"""
if inspect.isawaitable(obj):
return asyncio.ensure_future(obj)
elif isinstance(obj, concurrent.futures.Future):
return asyncio.wrap_future(obj)
else:
# not awaitable, wrap scalar in future
f = asyncio.Future()
f.set_result(obj)
return f
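# Minimal usage sketch (hypothetical coroutine name, for illustration only):
#
#   async def handler():
#       value = await maybe_future(42)            # plain value wrapped in a Future
#       result = await maybe_future(some_coro())  # awaitable scheduled as a task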
/nautobot_version_control-1.0.0a0-py3-none-any.whl/nautobot_version_control/middleware.py
from django.contrib import messages
from django.core.exceptions import ObjectDoesNotExist
from django.db.models.signals import m2m_changed, post_save, pre_delete
from django.http import HttpResponse
from django.shortcuts import redirect
from django.utils.safestring import mark_safe
from nautobot.extras.models.change_logging import ObjectChange
from nautobot_version_control.constants import (
DOLT_BRANCH_KEYWORD,
DOLT_DEFAULT_BRANCH,
)
from nautobot_version_control.models import Branch, Commit
from nautobot_version_control.utils import DoltError
def dolt_health_check_intercept_middleware(get_response):
"""
Intercept health check calls and disregard.
TODO: fix health-check and remove
"""
def middleware(request):
if "/health" in request.path:
return HttpResponse(status=201)
return get_response(request)
return middleware
class DoltBranchMiddleware:
"""DoltBranchMiddleware keeps track of which branch the dolt database is on."""
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
"""Override __call__."""
return self.get_response(request)
def process_view(self, request, view_func, view_args, view_kwargs): # pylint: disable=R0201
"""
process_view maintains the dolt branch session cookie, checks out the requested branch, and then
returns the rendered view.
"""
# Check whether the desired branch was passed in as a querystring
query_string_branch = request.GET.get(DOLT_BRANCH_KEYWORD, None)
if query_string_branch is not None:
# update the session Cookie
request.session[DOLT_BRANCH_KEYWORD] = query_string_branch
return redirect(request.path)
branch = DoltBranchMiddleware.get_branch(request)
try:
branch.checkout()
except Exception as e:
msg = f"could not checkout branch {branch}: {str(e)}"
messages.error(request, mark_safe(msg))
try:
return view_func(request, *view_args, **view_kwargs)
except DoltError as e:
messages.error(request, mark_safe(e))
return redirect(request.path)
@staticmethod
def get_branch(request):
"""get_branch returns the Branch object of the branch stored in the session cookie."""
# lookup the active branch in the session cookie
requested = branch_from_request(request)
try:
return Branch.objects.get(pk=requested)
except ObjectDoesNotExist:
messages.warning(
request,
mark_safe(f"""<div class="text-center">branch not found: {requested}</div>"""), # nosec
)
request.session[DOLT_BRANCH_KEYWORD] = DOLT_DEFAULT_BRANCH
return Branch.objects.get(pk=DOLT_DEFAULT_BRANCH)
class DoltAutoCommitMiddleware:
"""
DoltAutoCommitMiddleware calls the AutoDoltCommit class on a request.
- adapted from nautobot.extras.middleware.ObjectChangeMiddleware.
"""
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
"""Override call."""
# Process the request with auto-dolt-commit enabled
with AutoDoltCommit(request):
return self.get_response(request)
class AutoDoltCommit:
"""
AutoDoltCommit handles automatic dolt commits when objects are created, updated or deleted.
- adapted from `nautobot.extras.context_managers`.
"""
def __init__(self, request):
self.request = request
self.commit = False
self.changes_for_db = {}
def __enter__(self):
# Connect our receivers to the post_save and post_delete signals.
post_save.connect(self._handle_update, dispatch_uid="dolt_commit_update")
m2m_changed.connect(self._handle_update, dispatch_uid="dolt_commit_update")
pre_delete.connect(self._handle_delete, dispatch_uid="dolt_commit_delete")
def __exit__(self, type, value, traceback): # pylint: disable=W0622
if self.commit:
self.make_commits()
# Disconnect change logging signals. This is necessary to avoid recording any errant
# changes during test cleanup.
post_save.disconnect(self._handle_update, dispatch_uid="dolt_commit_update")
m2m_changed.disconnect(self._handle_update, dispatch_uid="dolt_commit_update")
pre_delete.disconnect(self._handle_delete, dispatch_uid="dolt_commit_delete")
def _handle_update(self, sender, instance, **kwargs): # pylint: disable=W0613
"""Fires when an object is created or updated."""
if isinstance(instance, ObjectChange):
# ignore ObjectChange instances
return
msg = self.change_msg_for_update(instance, kwargs)
self.collect_change(instance, msg)
self.commit = True
def _handle_delete(self, sender, instance, **kwargs): # pylint: disable=W0613
"""Fires when an object is deleted."""
if isinstance(instance, ObjectChange):
# ignore ObjectChange instances
return
msg = self.change_msg_for_delete(instance)
self.collect_change(instance, msg)
self.commit = True
def make_commits(self):
"""make_commits creates and saves a Commit object."""
for db, msgs in self.changes_for_db.items():
msg = "; ".join(msgs)
Commit(message=msg).save(
user=self.request.user,
using=db,
)
def collect_change(self, instance, msg):
"""collect_change stores changes messages for each db."""
db = self.database_from_instance(instance)
if db not in self.changes_for_db:
self.changes_for_db[db] = []
self.changes_for_db[db].append(msg)
@staticmethod
def database_from_instance(instance):
"""database_from_instance returns a database from an instance type."""
return instance._state.db # pylint: disable=W0212
@staticmethod
def change_msg_for_update(instance, kwargs):
"""Generates a commit message for create or update."""
created = "created" in kwargs and kwargs["created"]
verb = "Created" if created else "Updated"
return f"""{verb} {instance._meta.verbose_name} "{instance}" """
@staticmethod
def change_msg_for_delete(instance):
"""Generates a commit message for delete."""
return f"""Deleted {instance._meta.verbose_name} "{instance}" """
def branch_from_request(request):
"""
Returns the active branch from a request
:param request: A django request
:return: Branch name
"""
if DOLT_BRANCH_KEYWORD in request.session:
return request.session.get(DOLT_BRANCH_KEYWORD)
if DOLT_BRANCH_KEYWORD in request.headers:
return request.headers.get(DOLT_BRANCH_KEYWORD)
return DOLT_DEFAULT_BRANCH
/pygame-engine-0.0.6.tar.gz/pygame-engine-0.0.6/game/game_objects/controllers/items_controller/invencible_power_up_controller.py
from pygame.math import Vector2
from pygame import mixer
from game_engine.time import Time
from game_engine.game_object import GameObject
from random import uniform as randfloat
from game.game_objects.mesh_objects.invencible_circle import InvencibleCircle
from game_engine.material import Material
from game_engine.basic_objects.text import Text
from game_engine.color import Color
from game.scripts.constants import Constants
from game.animations.text_up_fade_out_animation import TextUpFadeOutAnimation
from game_engine.components.animator import Animator
class InvenciblePowerUpController(GameObject):
def start(self):
self.fall_velocity = 150
self.radius = Constants.screen_width * 0.025
self.game_object_list = []
self.sound_collect = mixer.Sound('game/assets/soundtrack/powerup_collect_01.ogg')
self.time_of_last_invencibily = -1000
self.invecible_time = 3.5
self.current_color = "normal"
self.animation_ticks_times = [0.4, 0.5, 0.6, 0.7, 0.75, 0.80, 0.85, 0.90, 0.95, 1.00, 1.10]
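# Fractions of invecible_time at which the player colors are toggled below so they flash while invincible.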
self.current_animation_tick_index = 0
self.should_delete_power_up_text = False
self.power_up_text_gen_time = 0.0
def awake(self):
self.player_controller = GameObject.find_by_type("PlayerController")[0]
def update(self):
if Time.time_scale == 0.0:
#Paused game. Adjust timers
self.time_of_last_invencibily += Time.delta_time(True)
difference_time = Time.now() - self.time_of_last_invencibily
if difference_time > self.invecible_time:
for i in range(2):
self.player_controller.game_object_list[i].is_invencible = False
self.get_back_to_original_colors()
self.current_animation_tick_index = 0
else:
value = min(difference_time / self.invecible_time, 1) # Just to convert between 0 and 1
diff = abs(value - self.animation_ticks_times[self.current_animation_tick_index])
if(diff < 0.01):
self.current_animation_tick_index += 1
self.tick_colors()
for obstacle in self.game_object_list:
if obstacle.transform.position.y > Constants.screen_height:
self.game_object_list.remove(obstacle)
obstacle.destroy(obstacle)
GameObject.destroy(obstacle)
else:
self.fall(obstacle)
self.delete_power_up_text()
def fall(self, obstacle):
obstacle.transform.position.y = obstacle.transform.position.y + (self.fall_velocity * Time.delta_time())
def get_power_up(self):
self.sound_collect.play()
power_up = self.game_object_list[0]
#Power up text effect
font_path = "game/assets/fonts/neuropolxrg.ttf"
text_size = 15
power_up_text = Text(power_up.transform.position, "INVENCIBLE!", Material(Color.purple, alpha=255), text_size, font_path)
power_up_text.transform.position.x -= power_up_text.text_mesh.size
power_up_text.animation = TextUpFadeOutAnimation(power_up_text)
power_up_text.animator = Animator(power_up_text, [power_up_text.animation])
power_up_text.animator.play()
for i in range(2):
self.player_controller.game_object_list[i].is_invencible = True
self.change_colors_to_green()
self.time_of_last_invencibily = Time.now()
self.power_up_text = power_up_text
self.should_delete_power_up_text = True
def delete_power_up_text(self):
if self.should_delete_power_up_text:
if Time.now() - self.time_of_last_invencibily > 1.0:
self.should_delete_power_up_text = False
self.power_up_text.destroy_me()
def generate_obstacle(self):
random_pos = int(randfloat(self.radius + Constants.circCenter_x - Constants.circRadius,
Constants.screen_width -
(self.radius + Constants.circCenter_x - Constants.circRadius)))
circle = InvencibleCircle(Vector2(random_pos, -2 * self.radius), self.radius,
Material(Color.purple))
self.game_object_list.append(circle)
def tick_colors(self):
if(self.current_color == "normal"):
self.current_color = "green"
self.change_colors_to_green()
else:
self.current_color = "normal"
self.get_back_to_original_colors()
def get_back_to_original_colors(self):
self.player_controller.game_object_list[0].change_color(Color.orange)
self.player_controller.game_object_list[1].change_color(Color.blue)
def change_colors_to_green(self):
for i in range(2):
self.player_controller.game_object_list[i].change_color(Color.purple)
/sat_mapping_cyborg_ai-0.0.37-py3-none-any.whl/sat_mapping/Lib/gsutil/third_party/pyasn1/docs/source/pyasn1/type/namedtype/defaultednamedtype.rst
.. _namedtype.DefaultedNamedType:
.. |NamedType| replace:: DefaultedNamedType
DefaultedNamedType
------------------
.. autoclass:: pyasn1.type.namedtype.DefaultedNamedType
:members:
.. note::
The *DefaultedNamedType* class models named field of a constructed
ASN.1 type which has a default value.
The *DefaultedNamedType* objects are normally utilized
by the :ref:`NamedTypes <namedtype.NamedTypes>` objects
to model individual fields of the constructed ASN.1
types.
/pretty_simple_namespace-0.1.1-py3-none-any.whl/pretty_simple_namespace/_vendor/ordered_set.py
import itertools as it
from collections import deque
try:
# Python 3
from collections.abc import MutableSet, Sequence
except ImportError:
# Python 2.7
from collections import MutableSet, Sequence
SLICE_ALL = slice(None)
__version__ = "3.1"
def is_iterable(obj):
"""
Are we being asked to look up a list of things, instead of a single thing?
We check for the `__iter__` attribute so that this can cover types that
don't have to be known by this module, such as NumPy arrays.
Strings, however, should be considered as atomic values to look up, not
iterables. The same goes for tuples, since they are immutable and therefore
valid entries.
We don't need to check for the Python 2 `unicode` type, because it doesn't
have an `__iter__` attribute anyway.
"""
return (
hasattr(obj, "__iter__")
and not isinstance(obj, str)
and not isinstance(obj, tuple)
)
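# Illustrative calls: is_iterable([1, 2]) and is_iterable({1, 2}) are True, while
# is_iterable("abc") and is_iterable((1, 2)) are False, as described above.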
class OrderedSet(MutableSet, Sequence):
"""
An OrderedSet is a custom MutableSet that remembers its order, so that
every entry has an index that can be looked up.
Example:
>>> OrderedSet([1, 1, 2, 3, 2])
OrderedSet([1, 2, 3])
"""
def __init__(self, iterable=None):
self.items = []
self.map = {}
if iterable is not None:
self |= iterable
def __len__(self):
"""
Returns the number of unique elements in the ordered set
Example:
>>> len(OrderedSet([]))
0
>>> len(OrderedSet([1, 2]))
2
"""
return len(self.items)
def __getitem__(self, index):
"""
Get the item at a given index.
If `index` is a slice, you will get back that slice of items, as a
new OrderedSet.
If `index` is a list or a similar iterable, you'll get a list of
items corresponding to those indices. This is similar to NumPy's
"fancy indexing". The result is not an OrderedSet because you may ask
for duplicate indices, and the number of elements returned should be
the number of elements asked for.
Example:
>>> oset = OrderedSet([1, 2, 3])
>>> oset[1]
2
"""
if isinstance(index, slice) and index == SLICE_ALL:
return self.copy()
elif is_iterable(index):
return [self.items[i] for i in index]
elif hasattr(index, "__index__") or isinstance(index, slice):
result = self.items[index]
if isinstance(result, list):
return self.__class__(result)
else:
return result
else:
raise TypeError("Don't know how to index an OrderedSet by %r" % index)
def copy(self):
"""
Return a shallow copy of this object.
Example:
>>> this = OrderedSet([1, 2, 3])
>>> other = this.copy()
>>> this == other
True
>>> this is other
False
"""
return self.__class__(self)
def __getstate__(self):
if len(self) == 0:
# The state can't be an empty list.
# We need to return a truthy value, or else __setstate__ won't be run.
#
# This could have been done more gracefully by always putting the state
# in a tuple, but this way is backwards- and forwards- compatible with
# previous versions of OrderedSet.
return (None,)
else:
return list(self)
def __setstate__(self, state):
if state == (None,):
self.__init__([])
else:
self.__init__(state)
def __contains__(self, key):
"""
Test if the item is in this ordered set
Example:
>>> 1 in OrderedSet([1, 3, 2])
True
>>> 5 in OrderedSet([1, 3, 2])
False
"""
return key in self.map
def add(self, key):
"""
Add `key` as an item to this OrderedSet, then return its index.
If `key` is already in the OrderedSet, return the index it already
had.
Example:
>>> oset = OrderedSet()
>>> oset.append(3)
0
>>> print(oset)
OrderedSet([3])
"""
if key not in self.map:
self.map[key] = len(self.items)
self.items.append(key)
return self.map[key]
append = add
def update(self, sequence):
"""
Update the set with the given iterable sequence, then return the index
of the last element inserted.
Example:
>>> oset = OrderedSet([1, 2, 3])
>>> oset.update([3, 1, 5, 1, 4])
4
>>> print(oset)
OrderedSet([1, 2, 3, 5, 4])
"""
item_index = None
try:
for item in sequence:
item_index = self.add(item)
except TypeError:
raise ValueError(
"Argument needs to be an iterable, got %s" % type(sequence)
)
return item_index
def index(self, key):
"""
Get the index of a given entry, raising an IndexError if it's not
present.
`key` can be an iterable of entries that is not a string, in which case
this returns a list of indices.
Example:
>>> oset = OrderedSet([1, 2, 3])
>>> oset.index(2)
1
"""
if is_iterable(key):
return [self.index(subkey) for subkey in key]
return self.map[key]
# Provide some compatibility with pd.Index
get_loc = index
get_indexer = index
def pop(self):
"""
Remove and return the last element from the set.
Raises KeyError if the set is empty.
Example:
>>> oset = OrderedSet([1, 2, 3])
>>> oset.pop()
3
"""
if not self.items:
raise KeyError("Set is empty")
elem = self.items[-1]
del self.items[-1]
del self.map[elem]
return elem
def discard(self, key):
"""
Remove an element. Do not raise an exception if absent.
The MutableSet mixin uses this to implement the .remove() method, which
*does* raise an error when asked to remove a non-existent item.
Example:
>>> oset = OrderedSet([1, 2, 3])
>>> oset.discard(2)
>>> print(oset)
OrderedSet([1, 3])
>>> oset.discard(2)
>>> print(oset)
OrderedSet([1, 3])
"""
if key in self:
i = self.map[key]
del self.items[i]
del self.map[key]
for k, v in self.map.items():
if v >= i:
self.map[k] = v - 1
def clear(self):
"""
Remove all items from this OrderedSet.
"""
del self.items[:]
self.map.clear()
def __iter__(self):
"""
Example:
>>> list(iter(OrderedSet([1, 2, 3])))
[1, 2, 3]
"""
return iter(self.items)
def __reversed__(self):
"""
Example:
>>> list(reversed(OrderedSet([1, 2, 3])))
[3, 2, 1]
"""
return reversed(self.items)
def __repr__(self):
if not self:
return "%s()" % (self.__class__.__name__,)
return "%s(%r)" % (self.__class__.__name__, list(self))
def __eq__(self, other):
"""
Returns true if the containers have the same items. If `other` is a
Sequence, then order is checked, otherwise it is ignored.
Example:
>>> oset = OrderedSet([1, 3, 2])
>>> oset == [1, 3, 2]
True
>>> oset == [1, 2, 3]
False
>>> oset == [2, 3]
False
>>> oset == OrderedSet([3, 2, 1])
False
"""
# In Python 2 deque is not a Sequence, so treat it as one for
# consistent behavior with Python 3.
if isinstance(other, (Sequence, deque)):
# Check that this OrderedSet contains the same elements, in the
# same order, as the other object.
return list(self) == list(other)
try:
other_as_set = set(other)
except TypeError:
# If `other` can't be converted into a set, it's not equal.
return False
else:
return set(self) == other_as_set
def union(self, *sets):
"""
Combines all unique items.
Each item's order is defined by its first appearance.
Example:
>>> oset = OrderedSet.union(OrderedSet([3, 1, 4, 1, 5]), [1, 3], [2, 0])
>>> print(oset)
OrderedSet([3, 1, 4, 5, 2, 0])
>>> oset.union([8, 9])
OrderedSet([3, 1, 4, 5, 2, 0, 8, 9])
>>> oset | {10}
OrderedSet([3, 1, 4, 5, 2, 0, 10])
"""
cls = self.__class__ if isinstance(self, OrderedSet) else OrderedSet
containers = map(list, it.chain([self], sets))
items = it.chain.from_iterable(containers)
return cls(items)
def __and__(self, other):
# the parent implementation of this is backwards
return self.intersection(other)
def intersection(self, *sets):
"""
Returns elements in common between all sets. Order is defined only
by the first set.
Example:
>>> oset = OrderedSet.intersection(OrderedSet([0, 1, 2, 3]), [1, 2, 3])
>>> print(oset)
OrderedSet([1, 2, 3])
>>> oset.intersection([2, 4, 5], [1, 2, 3, 4])
OrderedSet([2])
>>> oset.intersection()
OrderedSet([1, 2, 3])
"""
cls = self.__class__ if isinstance(self, OrderedSet) else OrderedSet
if sets:
common = set.intersection(*map(set, sets))
items = (item for item in self if item in common)
else:
items = self
return cls(items)
def difference(self, *sets):
"""
Returns all elements that are in this set but not the others.
Example:
>>> OrderedSet([1, 2, 3]).difference(OrderedSet([2]))
OrderedSet([1, 3])
>>> OrderedSet([1, 2, 3]).difference(OrderedSet([2]), OrderedSet([3]))
OrderedSet([1])
>>> OrderedSet([1, 2, 3]) - OrderedSet([2])
OrderedSet([1, 3])
>>> OrderedSet([1, 2, 3]).difference()
OrderedSet([1, 2, 3])
"""
cls = self.__class__
if sets:
other = set.union(*map(set, sets))
items = (item for item in self if item not in other)
else:
items = self
return cls(items)
def issubset(self, other):
"""
Report whether another set contains this set.
Example:
>>> OrderedSet([1, 2, 3]).issubset({1, 2})
False
>>> OrderedSet([1, 2, 3]).issubset({1, 2, 3, 4})
True
>>> OrderedSet([1, 2, 3]).issubset({1, 4, 3, 5})
False
"""
if len(self) > len(other): # Fast check for obvious cases
return False
return all(item in other for item in self)
def issuperset(self, other):
"""
Report whether this set contains another set.
Example:
>>> OrderedSet([1, 2]).issuperset([1, 2, 3])
False
>>> OrderedSet([1, 2, 3, 4]).issuperset({1, 2, 3})
True
>>> OrderedSet([1, 4, 3, 5]).issuperset({1, 2, 3})
False
"""
if len(self) < len(other): # Fast check for obvious cases
return False
return all(item in self for item in other)
def symmetric_difference(self, other):
"""
Return the symmetric difference of two OrderedSets as a new set.
That is, the new set will contain all elements that are in exactly
one of the sets.
Their order will be preserved, with elements from `self` preceding
elements from `other`.
Example:
>>> this = OrderedSet([1, 4, 3, 5, 7])
>>> other = OrderedSet([9, 7, 1, 3, 2])
>>> this.symmetric_difference(other)
OrderedSet([4, 5, 9, 2])
"""
cls = self.__class__ if isinstance(self, OrderedSet) else OrderedSet
diff1 = cls(self).difference(other)
diff2 = cls(other).difference(self)
return diff1.union(diff2)
def _update_items(self, items):
"""
Replace the 'items' list of this OrderedSet with a new one, updating
self.map accordingly.
"""
self.items = items
self.map = {item: idx for (idx, item) in enumerate(items)}
def difference_update(self, *sets):
"""
Update this OrderedSet to remove items from one or more other sets.
Example:
>>> this = OrderedSet([1, 2, 3])
>>> this.difference_update(OrderedSet([2, 4]))
>>> print(this)
OrderedSet([1, 3])
>>> this = OrderedSet([1, 2, 3, 4, 5])
>>> this.difference_update(OrderedSet([2, 4]), OrderedSet([1, 4, 6]))
>>> print(this)
OrderedSet([3, 5])
"""
items_to_remove = set()
for other in sets:
items_to_remove |= set(other)
self._update_items([item for item in self.items if item not in items_to_remove])
def intersection_update(self, other):
"""
Update this OrderedSet to keep only items in another set, preserving
their order in this set.
Example:
>>> this = OrderedSet([1, 4, 3, 5, 7])
>>> other = OrderedSet([9, 7, 1, 3, 2])
>>> this.intersection_update(other)
>>> print(this)
OrderedSet([1, 3, 7])
"""
other = set(other)
self._update_items([item for item in self.items if item in other])
def symmetric_difference_update(self, other):
"""
Update this OrderedSet to remove items from another set, then
add items from the other set that were not present in this set.
Example:
>>> this = OrderedSet([1, 4, 3, 5, 7])
>>> other = OrderedSet([9, 7, 1, 3, 2])
>>> this.symmetric_difference_update(other)
>>> print(this)
OrderedSet([4, 5, 9, 2])
"""
items_to_add = [item for item in other if item not in self]
items_to_remove = set(other)
self._update_items(
[item for item in self.items if item not in items_to_remove] + items_to_add
)
/ncrar-abr-1.0.1.tar.gz/ncrar-abr-1.0.1/ncrar_abr/app.py
import argparse
from collections import Counter
import json
from pathlib import Path
from matplotlib import pylab as pl
from numpy import random
import pandas as pd
from scipy import stats
import enaml
from enaml.application import deferred_call
from enaml.qt.qt_application import QtApplication
from enaml.qt.QtCore import QStandardPaths
with enaml.imports():
from enaml.stdlib.message_box import information
from ncrar_abr.compare import Compare
from ncrar_abr.compare_window import CompareWindow
from ncrar_abr.launch_window import LaunchWindow, Settings
from ncrar_abr.main_window import (DNDWindow, load_files, SerialWindow)
from ncrar_abr.presenter import SerialWaveformPresenter, WaveformPresenter
from ncrar_abr import parsers, __version__
from ncrar_abr.parsers import Parser
def config_path():
config_path = Path(QStandardPaths.standardLocations(QStandardPaths.GenericConfigLocation)[0])
return config_path / 'NCRAR' / 'abr'
def config_file():
config_file = config_path() / 'config.json'
config_file.parent.mkdir(exist_ok=True, parents=True)
return config_file
def read_config():
filename = config_file()
if not filename.exists():
return {}
return json.loads(filename.read_text())
def write_config(config):
filename = config_file()
filename.write_text(json.dumps(config, indent=2))
def add_default_arguments(parser, waves=True):
parser.add_argument('--nofilter', action='store_false', dest='filter',
default=True, help='Do not filter waveform')
parser.add_argument('--lowpass',
help='Lowpass cutoff (Hz), default 3000 Hz',
default=3000, type=float)
parser.add_argument('--highpass',
help='Highpass cutoff (Hz), default 300 Hz',
default=300, type=float)
parser.add_argument('--order',
help='Filter order, default 1st order', default=1,
type=int)
parser.add_argument('--parser', default='HDF5', help='Parser to use')
parser.add_argument('--user', help='Name of person analyzing data')
parser.add_argument('--calibration', help='Calibration file')
parser.add_argument('--latency', help='Latency file')
if waves:
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--threshold-only', action='store_true')
group.add_argument('--all-waves', action='store_true')
group.add_argument('--waves', type=int, nargs='+')
def parse_args(parser, waves=True):
options = parser.parse_args()
exclude = ('filter', 'lowpass', 'highpass', 'order', 'parser', 'user',
'waves', 'all_waves', 'threshold_only')
new_options = {k: v for k, v in vars(options).items() if k not in exclude}
filter_settings = None
if options.filter:
filter_settings = {
'lowpass': options.lowpass,
'highpass': options.highpass,
'order': options.order,
}
if waves:
if options.all_waves:
waves = [1, 2, 3, 4, 5]
elif options.threshold_only:
waves = []
else:
waves = options.waves[:]
else:
waves = []
new_options['parser'] = Parser(file_format=options.parser,
filter_settings=filter_settings,
user=options.user,
calibration=options.calibration,
waves=waves,
latency=options.latency)
return new_options
def main_launcher():
parser = argparse.ArgumentParser('ncrar-abr')
args = parser.parse_args()
app = QtApplication()
settings = Settings()
settings.set_state(read_config())
window = LaunchWindow(settings=settings)
window.show()
app.start()
app.stop()
write_config(settings.get_state())
def main_gui():
parser = argparse.ArgumentParser('ncrar-abr-gui')
add_default_arguments(parser)
parser.add_argument('--demo', action='store_true', dest='demo',
default=False, help='Load demo data')
parser.add_argument('filenames', nargs='*')
options = parse_args(parser)
app = QtApplication()
view = DNDWindow(parser=options['parser'])
filenames = [(Path(f), None) for f in options['filenames']]
deferred_call(load_files, options['parser'], filenames, view.find('dock_area'))
view.show()
app.start()
app.stop()
def main_batch():
parser = argparse.ArgumentParser("ncrar-abr-batch")
add_default_arguments(parser)
parser.add_argument('dirnames', nargs='*')
parser.add_argument('--list', action='store_true')
parser.add_argument('--skip-errors', action='store_true')
parser.add_argument('--frequencies', nargs='*', type=float)
parser.add_argument('--shuffle', action='store_true')
options = parse_args(parser)
parser = options['parser']
unprocessed = []
for dirname in options['dirnames']:
files = parser.find_unprocessed(dirname, frequencies=options['frequencies'])
unprocessed.extend(files)
if options['shuffle']:
random.shuffle(unprocessed)
if options['list']:
counts = Counter(f for f, _ in unprocessed)
for filename, n in counts.items():
filename = filename.stem
print(f'{filename} ({n})')
return
app = QtApplication()
if len(unprocessed) == 0:
information(None, 'Data', 'No datasets to process.')
return
presenter = SerialWaveformPresenter(parser=parser, unprocessed=unprocessed)
view = SerialWindow(presenter=presenter)
view.show()
app.start()
app.stop()
def aggregate(study_directory, output_file):
output_file = Path(output_file).with_suffix('.xlsx')
study_directory = Path(study_directory)
analyzed = list(study_directory.glob('*analyzed*.txt'))
keys = []
thresholds = []
waves = []
for a in analyzed:
f, _, w = parsers.load_analysis(a)
parts = a.stem.split('-')
if parts[-2].endswith('kHz'):
analyzer = 'Unknown'
subject = '-'.join(parts[:-2])
else:
analyzer = parts[-2]
subject = '-'.join(parts[:-3])
keys.append((subject, analyzer, f))
waves.append(w)
index = pd.MultiIndex.from_tuples(keys, names=['subject', 'analyzer', 'frequency'])
waves = pd.concat(waves, keys=keys, names=['subject', 'analyzer', 'frequency']).reset_index()
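# Derived per-wave metrics: peak-to-trough amplitude (P minus N) and peak amplitude
# relative to the 1 msec baseline average; waves absent from a file are skipped.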
for i in range(1, 7):
try:
waves[f'W{i} Amplitude'] = waves[f'P{i} Amplitude'] - waves[f'N{i} Amplitude']
waves[f'W{i} Amplitude re baseline'] = waves[f'P{i} Amplitude'] - waves[f'1msec Avg']
except KeyError:
pass
cols = ['frequency'] + [c for c in waves.columns if c.startswith('P') and c.endswith('Latency')]
latencies = waves[cols].copy()
latency_summary = latencies.rename(columns={
'frequency': 'stimulus',
'P1 Latency': 1,
'P2 Latency': 2,
'P3 Latency': 3,
'P4 Latency': 4,
'P5 Latency': 5,
}).groupby('stimulus').agg(['mean', 'std'])
with pd.ExcelWriter(output_file) as writer:
waves.to_excel(writer, sheet_name='waves', index=False)
latency_summary.to_excel(writer, sheet_name='latencies')
def main_aggregate():
parser = argparse.ArgumentParser('ncrar-abr-aggregate')
parser.add_argument('study_directory')
parser.add_argument('output_file')
args = parser.parse_args()
aggregate(args.study_directory, args.output_file)
def make_shortcuts():
from importlib.resources import files
import os
import sys
from pyshortcuts import make_shortcut, platform
bindir = 'bin'
if platform.startswith('win'):
bindir = 'Scripts'
icon_file = files('ncrar_abr').joinpath('abr-icon.ico')
shortcut = make_shortcut(
os.path.normpath(os.path.join(sys.prefix, bindir, 'ncrar-abr')),
name=f'ABR {__version__}',
folder='NCRAR',
description='Auditory Wave Analysis customized for NCRAR',
icon=icon_file,
terminal=False,
desktop=False,
startmenu=True,
)
def main_compare():
parser = argparse.ArgumentParser("ncrar-abr-compare")
add_default_arguments(parser, waves=False)
parser.add_argument('directory')
options = parse_args(parser, waves=False)
cols = ['filename', 'analyzed_filename', 'subject', 'frequency', 'Level', 'Replicate', 'Channel', 'analyzer']
app = QtApplication()
_, waves = options['parser'].load_analyses(options['directory'])
waves = waves.set_index(cols).sort_index()
presenter_a = WaveformPresenter(parser=options['parser'], interactive=False)
presenter_b = WaveformPresenter(parser=options['parser'], interactive=False)
presenter_c = WaveformPresenter(parser=options['parser'])
compare = Compare(waves=waves)
view = CompareWindow(parser=options['parser'],
compare=compare,
presenter_a=presenter_a,
presenter_b=presenter_b,
presenter_c=presenter_c,
)
view.show()
app.start()
app.stop()
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/utils/events.py
import datetime
import json
import logging
import os
import time
from collections import defaultdict
from contextlib import contextmanager
from typing import Optional
import torch
from fvcore.common.history_buffer import HistoryBuffer
from detectron2.utils.file_io import PathManager
__all__ = [
"get_event_storage",
"JSONWriter",
"TensorboardXWriter",
"CommonMetricPrinter",
"EventStorage",
]
_CURRENT_STORAGE_STACK = []
def get_event_storage():
"""
Returns:
The :class:`EventStorage` object that's currently being used.
Throws an error if no :class:`EventStorage` is currently enabled.
"""
assert len(
_CURRENT_STORAGE_STACK
), "get_event_storage() has to be called inside a 'with EventStorage(...)' context!"
return _CURRENT_STORAGE_STACK[-1]
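# Usage sketch (illustrative names; EventStorage is entered as a context manager so that
# get_event_storage() can find it, as the assertion above requires):
#
#   with EventStorage(start_iter=0) as storage:
#       storage.put_scalar("loss", 0.5)
#       writer = JSONWriter("./metrics.json")
#       writer.write()
#       writer.close()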
class EventWriter:
"""
Base class for writers that obtain events from :class:`EventStorage` and process them.
"""
def write(self):
raise NotImplementedError
def close(self):
pass
class JSONWriter(EventWriter):
"""
Write scalars to a json file.
It saves scalars as one json per line (instead of a big json) for easy parsing.
Examples parsing such a json file:
::
$ cat metrics.json | jq -s '.[0:2]'
[
{
"data_time": 0.008433341979980469,
"iteration": 19,
"loss": 1.9228371381759644,
"loss_box_reg": 0.050025828182697296,
"loss_classifier": 0.5316952466964722,
"loss_mask": 0.7236229181289673,
"loss_rpn_box": 0.0856662318110466,
"loss_rpn_cls": 0.48198649287223816,
"lr": 0.007173333333333333,
"time": 0.25401854515075684
},
{
"data_time": 0.007216215133666992,
"iteration": 39,
"loss": 1.282649278640747,
"loss_box_reg": 0.06222952902317047,
"loss_classifier": 0.30682939291000366,
"loss_mask": 0.6970193982124329,
"loss_rpn_box": 0.038663312792778015,
"loss_rpn_cls": 0.1471673548221588,
"lr": 0.007706666666666667,
"time": 0.2490077018737793
}
]
$ cat metrics.json | jq '.loss_mask'
0.7126231789588928
0.689423680305481
0.6776131987571716
...
"""
def __init__(self, json_file, window_size=20):
"""
Args:
json_file (str): path to the json file. New data will be appended if the file exists.
window_size (int): the window size of median smoothing for the scalars whose
`smoothing_hint` are True.
"""
self._file_handle = PathManager.open(json_file, "a")
self._window_size = window_size
self._last_write = -1
def write(self):
storage = get_event_storage()
to_save = defaultdict(dict)
for k, (v, iter) in storage.latest_with_smoothing_hint(self._window_size).items():
# keep scalars that have not been written
if iter <= self._last_write:
continue
to_save[iter][k] = v
if len(to_save):
all_iters = sorted(to_save.keys())
self._last_write = max(all_iters)
for itr, scalars_per_iter in to_save.items():
scalars_per_iter["iteration"] = itr
self._file_handle.write(json.dumps(scalars_per_iter, sort_keys=True) + "\n")
self._file_handle.flush()
try:
os.fsync(self._file_handle.fileno())
except AttributeError:
pass
def close(self):
self._file_handle.close()
class TensorboardXWriter(EventWriter):
"""
Write all scalars to a tensorboard file.
"""
def __init__(self, log_dir: str, window_size: int = 20, **kwargs):
"""
Args:
log_dir (str): the directory to save the output events
window_size (int): the scalars will be median-smoothed by this window size
kwargs: other arguments passed to `torch.utils.tensorboard.SummaryWriter(...)`
"""
self._window_size = window_size
from torch.utils.tensorboard import SummaryWriter
self._writer = SummaryWriter(log_dir, **kwargs)
self._last_write = -1
def write(self):
storage = get_event_storage()
new_last_write = self._last_write
for k, (v, iter) in storage.latest_with_smoothing_hint(self._window_size).items():
if iter > self._last_write:
self._writer.add_scalar(k, v, iter)
new_last_write = max(new_last_write, iter)
self._last_write = new_last_write
# storage.put_{image,histogram} is only meant to be used by
# tensorboard writer. So we access its internal fields directly from here.
if len(storage._vis_data) >= 1:
for img_name, img, step_num in storage._vis_data:
self._writer.add_image(img_name, img, step_num)
# Storage stores all image data and rely on this writer to clear them.
# As a result it assumes only one writer will use its image data.
# An alternative design is to let storage store limited recent
# data (e.g. only the most recent image) that all writers can access.
# In that case a writer may not see all image data if its period is long.
storage.clear_images()
if len(storage._histograms) >= 1:
for params in storage._histograms:
self._writer.add_histogram_raw(**params)
storage.clear_histograms()
def close(self):
if hasattr(self, "_writer"): # doesn't exist when the code fails at import
self._writer.close()
class CommonMetricPrinter(EventWriter):
"""
Print **common** metrics to the terminal, including
iteration time, ETA, memory, all losses, and the learning rate.
It also applies smoothing using a window of 20 elements.
It's meant to print common metrics in common ways.
To print something in more customized ways, please implement a similar printer by yourself.
"""
def __init__(self, max_iter: Optional[int] = None, window_size: int = 20):
"""
Args:
max_iter: the maximum number of iterations to train.
Used to compute ETA. If not given, ETA will not be printed.
window_size (int): the losses will be median-smoothed by this window size
"""
self.logger = logging.getLogger(__name__)
self._max_iter = max_iter
self._window_size = window_size
self._last_write = None # (step, time) of last call to write(). Used to compute ETA
def _get_eta(self, storage) -> Optional[str]:
if self._max_iter is None:
return ""
iteration = storage.iter
try:
eta_seconds = storage.history("time").median(1000) * (self._max_iter - iteration - 1)
storage.put_scalar("eta_seconds", eta_seconds, smoothing_hint=False)
return str(datetime.timedelta(seconds=int(eta_seconds)))
except KeyError:
# estimate eta on our own - more noisy
eta_string = None
if self._last_write is not None:
estimate_iter_time = (time.perf_counter() - self._last_write[1]) / (
iteration - self._last_write[0]
)
eta_seconds = estimate_iter_time * (self._max_iter - iteration - 1)
eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
self._last_write = (iteration, time.perf_counter())
return eta_string
def write(self):
storage = get_event_storage()
iteration = storage.iter
if iteration == self._max_iter:
# This hook only reports training progress (loss, ETA, etc) but not other data,
# therefore do not write anything after training succeeds, even if this method
# is called.
return
try:
data_time = storage.history("data_time").avg(20)
except KeyError:
# they may not exist in the first few iterations (due to warmup)
# or when SimpleTrainer is not used
data_time = None
try:
iter_time = storage.history("time").global_avg()
except KeyError:
iter_time = None
try:
lr = "{:.5g}".format(storage.history("lr").latest())
except KeyError:
lr = "N/A"
eta_string = self._get_eta(storage)
if torch.cuda.is_available():
max_mem_mb = torch.cuda.max_memory_allocated() / 1024.0 / 1024.0
else:
max_mem_mb = None
# NOTE: max_mem is parsed by grep in "dev/parse_results.sh"
self.logger.info(
" {eta}iter: {iter} {losses} {time}{data_time}lr: {lr} {memory}".format(
eta=f"eta: {eta_string} " if eta_string else "",
iter=iteration,
losses=" ".join(
[
"{}: {:.4g}".format(k, v.median(self._window_size))
for k, v in storage.histories().items()
if "loss" in k
]
),
time="time: {:.4f} ".format(iter_time) if iter_time is not None else "",
data_time="data_time: {:.4f} ".format(data_time) if data_time is not None else "",
lr=lr,
memory="max_mem: {:.0f}M".format(max_mem_mb) if max_mem_mb is not None else "",
)
)
class EventStorage:
"""
The user-facing class that provides metric storage functionalities.
In the future we may add support for storing / logging other types of data if needed.
"""
def __init__(self, start_iter=0):
"""
Args:
start_iter (int): the iteration number to start with
"""
self._history = defaultdict(HistoryBuffer)
self._smoothing_hints = {}
self._latest_scalars = {}
self._iter = start_iter
self._current_prefix = ""
self._vis_data = []
self._histograms = []
def put_image(self, img_name, img_tensor):
"""
Add an `img_tensor` associated with `img_name`, to be shown on
tensorboard.
Args:
img_name (str): The name of the image to put into tensorboard.
img_tensor (torch.Tensor or numpy.array): A `uint8` or `float`
Tensor of shape `[channel, height, width]` where `channel` is
3. The image format should be RGB. The elements in img_tensor
can either have values in [0, 1] (float32) or [0, 255] (uint8).
The `img_tensor` will be visualized in tensorboard.
"""
self._vis_data.append((img_name, img_tensor, self._iter))
def put_scalar(self, name, value, smoothing_hint=True):
"""
Add a scalar `value` to the `HistoryBuffer` associated with `name`.
Args:
smoothing_hint (bool): a 'hint' on whether this scalar is noisy and should be
smoothed when logged. The hint will be accessible through
:meth:`EventStorage.smoothing_hints`. A writer may ignore the hint
and apply its own smoothing rule.
It defaults to True because most scalars we save need to be smoothed to
provide any useful signal.
"""
name = self._current_prefix + name
history = self._history[name]
value = float(value)
history.update(value, self._iter)
self._latest_scalars[name] = (value, self._iter)
existing_hint = self._smoothing_hints.get(name)
if existing_hint is not None:
assert (
existing_hint == smoothing_hint
), "Scalar {} was put with a different smoothing_hint!".format(name)
else:
self._smoothing_hints[name] = smoothing_hint
def put_scalars(self, *, smoothing_hint=True, **kwargs):
"""
Put multiple scalars from keyword arguments.
Examples:
storage.put_scalars(loss=my_loss, accuracy=my_accuracy, smoothing_hint=True)
"""
for k, v in kwargs.items():
self.put_scalar(k, v, smoothing_hint=smoothing_hint)
def put_histogram(self, hist_name, hist_tensor, bins=1000):
"""
Create a histogram from a tensor.
Args:
hist_name (str): The name of the histogram to put into tensorboard.
hist_tensor (torch.Tensor): A Tensor of arbitrary shape to be converted
into a histogram.
bins (int): Number of histogram bins.
"""
ht_min, ht_max = hist_tensor.min().item(), hist_tensor.max().item()
# Create a histogram with PyTorch
hist_counts = torch.histc(hist_tensor, bins=bins)
hist_edges = torch.linspace(start=ht_min, end=ht_max, steps=bins + 1, dtype=torch.float32)
# Parameter for the add_histogram_raw function of SummaryWriter
hist_params = dict(
tag=hist_name,
min=ht_min,
max=ht_max,
num=len(hist_tensor),
sum=float(hist_tensor.sum()),
sum_squares=float(torch.sum(hist_tensor ** 2)),
bucket_limits=hist_edges[1:].tolist(),
bucket_counts=hist_counts.tolist(),
global_step=self._iter,
)
self._histograms.append(hist_params)
def history(self, name):
"""
Returns:
HistoryBuffer: the scalar history for name
"""
ret = self._history.get(name, None)
if ret is None:
raise KeyError("No history metric available for {}!".format(name))
return ret
def histories(self):
"""
Returns:
dict[name -> HistoryBuffer]: the HistoryBuffer for all scalars
"""
return self._history
def latest(self):
"""
Returns:
dict[str -> (float, int)]: mapping from the name of each scalar to its most
recent value and the iteration number at which it was added.
"""
return self._latest_scalars
def latest_with_smoothing_hint(self, window_size=20):
"""
Similar to :meth:`latest`, but the returned values
are either the un-smoothed original latest value
or a median over the given window_size,
depending on whether the smoothing_hint is True.
This provides a default behavior that other writers can use.
"""
result = {}
for k, (v, itr) in self._latest_scalars.items():
result[k] = (
self._history[k].median(window_size) if self._smoothing_hints[k] else v,
itr,
)
return result
def smoothing_hints(self):
"""
Returns:
dict[name -> bool]: the user-provided hint on whether the scalar
is noisy and needs smoothing.
"""
return self._smoothing_hints
def step(self):
"""
User should either: (1) Call this function to increment storage.iter when needed. Or
(2) Set `storage.iter` to the correct iteration number before each iteration.
The storage will then be able to associate the new data with an iteration number.
"""
self._iter += 1
@property
def iter(self):
"""
Returns:
int: The current iteration number. When used together with a trainer,
this is ensured to be the same as trainer.iter.
"""
return self._iter
@iter.setter
def iter(self, val):
self._iter = int(val)
@property
def iteration(self):
# for backward compatibility
return self._iter
def __enter__(self):
_CURRENT_STORAGE_STACK.append(self)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
assert _CURRENT_STORAGE_STACK[-1] == self
_CURRENT_STORAGE_STACK.pop()
@contextmanager
def name_scope(self, name):
"""
Yields:
A context within which all the events added to this storage
will be prefixed by the name scope.
"""
old_prefix = self._current_prefix
self._current_prefix = name.rstrip("/") + "/"
yield
self._current_prefix = old_prefix
def clear_images(self):
"""
Delete all the stored images for visualization. This should be called
after images are written to tensorboard.
"""
self._vis_data = []
def clear_histograms(self):
"""
Delete all the stored histograms for visualization.
This should be called after histograms are written to tensorboard.
"""
self._histograms = []
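if __name__ == "__main__":
    # Illustrative usage sketch (assumed typical usage, not taken from the package docs):
    # drive EventStorage directly with dummy values to exercise put_scalar,
    # name_scope and step defined above.
    with EventStorage(start_iter=0) as storage:
        for it in range(3):
            storage.put_scalar("total_loss", 1.0 / (it + 1))  # smoothed by default
            with storage.name_scope("opt"):
                storage.put_scalar("lr", 0.01, smoothing_hint=False)  # stored as "opt/lr"
            storage.step()
        print(storage.latest_with_smoothing_hint(window_size=20))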
|
PypiClean
|
/gino-admin-0.3.0.tar.gz/gino-admin-0.3.0/gino_admin/routes/main.py
|
import os
from ast import literal_eval
from typing import Text
import asyncpg
from sanic import response
from sanic.request import Request
from sqlalchemy.engine.url import URL
from gino_admin import auth, config, utils
from gino_admin.core import admin
from gino_admin.history import log_history_event, write_history_after_response
from gino_admin.routes.crud import model_view_table
from gino_admin.routes.logic import (count_elements_in_db, create_object_copy,
deepcopy_recursive,
drop_and_recreate_all_tables,
insert_data_from_csv_file,
render_model_view, upload_from_csv_data)
from gino_admin.users import add_users_model
cfg = config.cfg
jinja = cfg.jinja
@admin.listener("after_server_start")
async def before_server_start(_, loop):
if cfg.app.config.get("DB_DSN"):
dsn = cfg.app.config.DB_DSN
else:
dsn = URL(
drivername=cfg.app.config.setdefault("DB_DRIVER", "asyncpg"),
host=cfg.app.config.setdefault("DB_HOST", "localhost"),
port=cfg.app.config.setdefault("DB_PORT", 5432),
username=cfg.app.config.setdefault("DB_USER", "postgres"),
password=cfg.app.config.setdefault("DB_PASSWORD", ""),
database=cfg.app.config.setdefault("DB_DATABASE", "postgres"),
)
await cfg.app.db.set_bind(
dsn,
echo=cfg.app.config.setdefault("DB_ECHO", False),
min_size=cfg.app.config.setdefault("DB_POOL_MIN_SIZE", 5),
max_size=cfg.app.config.setdefault("DB_POOL_MAX_SIZE", 10),
ssl=cfg.app.config.setdefault("DB_SSL"),
loop=loop,
**cfg.app.config.setdefault("DB_KWARGS", dict()),
)
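# Illustrative configuration consumed by the listener above (the values are examples,
# only the keys come from the code):
#
#     app.config.update(
#         DB_HOST="localhost", DB_PORT=5432,
#         DB_USER="postgres", DB_PASSWORD="secret", DB_DATABASE="postgres",
#         DB_POOL_MIN_SIZE=5, DB_POOL_MAX_SIZE=10,
#     )
#     # or a single DSN string:
#     # app.config.DB_DSN = "postgresql://postgres:secret@localhost:5432/postgres"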
@admin.middleware("request")
async def middleware_request(request):
request.ctx.flash_messages = []
request.ctx.history_action = {}
conn = await cfg.app.db.acquire(lazy=True)
request.ctx.connection = conn
@admin.middleware("response")
async def middleware_response(request, response):
if (
request.endpoint.split(".")[-1] in cfg.track_history_endpoints
and request.method == "POST"
):
await write_history_after_response(request)
conn = getattr(request.ctx, "connection", None)
if conn is not None:
try:
await conn.release()
except ValueError:
pass
@admin.route("/")
@auth.token_validation()
async def bp_root(request):
return jinja.render("index.html", request)
@admin.route("/logout", methods=["GET"])
async def logout(request: Request):
request = auth.logout_user(request)
return jinja.render("login.html", request)
@admin.route("/logout", methods=["POST"])
async def logout_post(request: Request):
return await login(request)
@admin.route("/login", methods=["GET", "POST"])
async def login(request):
if not cfg.admin_user_model:
await add_users_model(cfg.app.db)
_login, request = await auth.validate_login(request, cfg.app.config)
if _login:
_token = utils.generate_token(request.ip)
cfg.sessions[_token] = {
"user_agent": request.headers["User-Agent"],
"user": _login,
}
request.cookies["auth-token"] = _token
request.ctx.session = {"_auth": True}
_response = jinja.render("index.html", request)
_response.cookies["auth-token"] = _token
return _response
request.ctx.session = {"_flashes": request.ctx.flash_messages}
return jinja.render("login.html", request)
@admin.listener("before_server_stop")
async def before_server_stop(_, loop):
conn = cfg.app.db.bind.pop("connection", None)
if conn is not None:
await conn.release()
@admin.route("/<model_id>/deepcopy", methods=["POST"])
@auth.token_validation()
async def model_deepcopy(request, model_id):
"""
Recursively creates copies of the whole chain of entities that reference the given model and instance id through
foreign keys.
:param request:
:param model_id:
:return:
"""
request_params = {key: request.form[key][0] for key in request.form}
columns_data = cfg.models[model_id]["columns_data"]
base_obj_id = utils.extract_obj_id_from_query(request_params["_id"])
try:
# todo: fix deepcopy
new_id = utils.extract_obj_id_from_query(request_params["new_id"])
new_id = utils.correct_types(new_id, columns_data)
except ValueError as e:
request.ctx.flash(e, "error")
return await render_model_view(request, model_id)
try:
async with cfg.app.db.acquire() as conn:
async with conn.transaction() as _:
new_base_obj_id = await deepcopy_recursive(
cfg.models[model_id]["model"],
base_obj_id,
new_id=new_id,
model_data=cfg.models[model_id],
)
if isinstance(new_base_obj_id, tuple):
request.ctx.flash(new_base_obj_id, "error")
else:
message = f"Object with {request_params['_id']} was deepcopied with new id {new_base_obj_id}"
request.ctx.flash(message, "success")
log_history_event(request, message, new_base_obj_id)
except asyncpg.exceptions.PostgresError as e:
request.ctx.flash(e.args, "error")
return await render_model_view(request, model_id)
@admin.route("/<model_id>/copy", methods=["POST"])
@auth.token_validation()
async def model_copy(request, model_id):
""" route for copy item per row """
request_params = {elem: request.form[elem][0] for elem in request.form}
base_obj_id = utils.extract_obj_id_from_query(request_params["_id"])
try:
new_obj_key = await create_object_copy(
model_id, base_obj_id, cfg.models[model_id]
)
message = f"Object with {base_obj_id} key was copied as {new_obj_key}"
flash_message = (message, "success")
log_history_event(request, message, new_obj_key)
except asyncpg.exceptions.UniqueViolationError as e:
flash_message = (
f"Duplicate in Unique column Error during copy: {e.args}. \n"
f"Try to rename existed id or add manual.",
"error",
)
except asyncpg.exceptions.ForeignKeyViolationError as e:
flash_message = (e.args, "error")
return await model_view_table(request, model_id, flash_message)
@admin.route("/init_db", methods=["GET"])
@auth.token_validation()
async def init_db_view(request: Request):
return jinja.render("init_db.html", request, data=await count_elements_in_db())
@admin.route("/init_db", methods=["POST"])
@auth.token_validation()
async def init_db_run(request: Request):
data = literal_eval(request.form["data"][0])
count = 0
for _, value in data.items():
if isinstance(value, int):
count += value
await drop_and_recreate_all_tables()
message = f"{count} object was deleted. DB was Init from Scratch"
request.ctx.flash(message, "success")
log_history_event(request, message, "system: init_db")
return jinja.render("init_db.html", request, data=await count_elements_in_db())
@admin.route("/presets", methods=["GET"])
@auth.token_validation()
async def presets_view(request: Request):
return jinja.render(
"presets.html",
request,
presets_folder=cfg.presets_folder,
presets=utils.get_presets()["presets"],
)
@admin.route("/settings", methods=["GET"])
@auth.token_validation()
async def settings_view(request: Request):
return jinja.render("settings.html", request, settings=utils.get_settings())
@admin.route("/presets/", methods=["POST"])
@auth.token_validation()
async def presets_use(request: Request):
preset = utils.get_preset_by_id(request.form["preset"][0])
with_drop = "with_db" in request.form
if with_drop:
await drop_and_recreate_all_tables()
request.ctx.flash("DB was successful Dropped", "success")
try:
for model_id, file_path in preset["files"].items():
request, is_success = await insert_data_from_csv_file(
os.path.join(cfg.presets_folder, file_path), model_id.lower(), request
)
for message in request.ctx.flash_messages:
request.ctx.flash(*message)
history_message = (
f"Loaded preset {preset['id']} {' with DB drop' if with_drop else ''}"
)
log_history_event(request, history_message, "system: load_preset")
except FileNotFoundError:
request.ctx.flash(f"Wrong file path in Preset {preset['name']}.", "error")
return jinja.render("presets.html", request, presets=utils.get_presets()["presets"])
@admin.route("/<model_id>/upload/", methods=["POST"])
@auth.token_validation()
async def file_upload(request: Request, model_id: Text):
upload_file = request.files.get("file_names")
# guard against a missing upload before calling .name on it
file_name = utils.secure_filename(upload_file.name) if upload_file else None
if not upload_file or not file_name:
flash_message = ("No file chosen to upload", "error")
return await model_view_table(request, model_id, flash_message)
if not utils.valid_file_size(upload_file.body, cfg.max_file_size):
return response.redirect("/?error=invalid_file_size")
else:
request, is_success = await upload_from_csv_data(
upload_file, file_name, request, model_id
)
return await model_view_table(request, model_id, request.ctx.flash_messages)
@admin.route("/sql_run", methods=["GET"])
@auth.token_validation()
async def sql_query_run_view(request):
return jinja.render("sql_runner.html", request)
@admin.route("/sql_run", methods=["POST"])
@auth.token_validation()
async def sql_query_run(request):
result = []
if not request.form.get("sql_query"):
request.ctx.flash("SQL query cannot be empty", "error")
else:
sql_query = request.form["sql_query"][0]
try:
result = await cfg.app.db.status(cfg.app.db.text(sql_query))
log_history_event(request, f"Query run '{sql_query}'", "system: sql_run")
except asyncpg.exceptions.PostgresSyntaxError as e:
request.ctx.flash(f"{e.args}", "error")
except asyncpg.exceptions.UndefinedTableError as e:
request.ctx.flash(f"{e.args}", "error")
if result:
return jinja.render(
"sql_runner.html", request, columns=result[1], result=result[1]
)
else:
return jinja.render("sql_runner.html", request)
@admin.route("/history", methods=["GET"])
@auth.token_validation()
async def history_display(request):
model = cfg.app.db.tables[cfg.history_table_name]
query = cfg.app.db.select([model])
try:
rows = await query.gino.all()
except asyncpg.exceptions.UndefinedTableError:
await cfg.app.db.gino.create_all(tables=[model])
rows = await query.gino.all()
history_data = []
for row in rows:
row = {cfg.history_data_columns[num]: field for num, field in enumerate(row)}
history_data.append(row)
return jinja.render(
"history.html",
request,
history_data_columns=cfg.history_data_columns,
history_data=history_data,
)
|
PypiClean
|
/certora-cli-alpha-jtoman-gmx-set-data-20230501.21.4.812906.tar.gz/certora-cli-alpha-jtoman-gmx-set-data-20230501.21.4.812906/certora_cli/EVMVerifier/Compiler/CompilerCollectorSol.py
|
from pathlib import Path
from typing import Any, List, Tuple, Dict, Set
from EVMVerifier.Compiler.CompilerCollector import CompilerLang, CompilerCollector, CompilerLangFunc
from Shared.certoraUtils import Singleton
import EVMVerifier.certoraType as CT
class CompilerLangSol(CompilerLang, metaclass=Singleton):
"""
[CompilerLang] for Solidity.
"""
@property
def name(self) -> str:
return "Solidity"
@property
def compiler_name(self) -> str:
return "solc"
@staticmethod
def get_contract_def_node_ref(contract_file_ast: Dict[int, Any], contract_file: str, contract_name: str) -> \
int:
contract_def_refs = list(filter(
lambda node_id: contract_file_ast[node_id].get("nodeType") == "ContractDefinition" and
contract_file_ast[node_id].get("name") == contract_name, contract_file_ast))
assert len(contract_def_refs) != 0, \
f'Failed to find a "ContractDefinition" ast node id for the contract {contract_name}'
assert len(
contract_def_refs) == 1, f'Found multiple "ContractDefinition" ast node ids for the same contract ' \
f'{contract_name}: {contract_def_refs}'
return contract_def_refs[0]
@staticmethod
def compilation_output_path(sdc_name: str, config_path: Path) -> Path:
return config_path / f"{sdc_name}.standard.json.stdout"
@staticmethod
def get_supports_imports() -> bool:
return True
# Todo - add this for Vyper too and make it a CompilerLang class method one day
@staticmethod
def compilation_error_path(sdc_name: str, config_path: Path) -> Path:
return config_path / f"{sdc_name}.standard.json.stderr"
@staticmethod
def all_compilation_artifacts(sdc_name: str, config_path: Path) -> Set[Path]:
"""
Returns the set of paths for all files generated after compilation.
"""
return {CompilerLangSol.compilation_output_path(sdc_name, config_path),
CompilerLangSol.compilation_error_path(sdc_name, config_path)}
@staticmethod
def collect_source_type_descriptions_and_funcs(asts: Dict[str, Dict[str, Dict[int, Any]]],
data: Dict[str, Any],
contract_file: str,
contract_name: str,
build_arg_contract_file: str) -> \
Tuple[List[CT.Type], List[CompilerLangFunc]]:
assert False, "collect_source_type_descriptions() has not yet been implemented in CompilerLangSol"
# This class is intended for calculations of compiler-settings related queries
class CompilerCollectorSol(CompilerCollector):
def __init__(self, version: Tuple[int, int, int], solc_flags: str = ""):
self._compiler_version = version
self._optimization_flags = solc_flags # optimize, optimize_runs, solc_mapping
@property
def compiler_name(self) -> str:
return self.smart_contract_lang.compiler_name
@property
def smart_contract_lang(self) -> CompilerLangSol:
return CompilerLangSol()
@property
def compiler_version(self) -> Tuple[int, int, int]:
return self._compiler_version
@property
def optimization_flags(self) -> str:
return self._optimization_flags
def normalize_storage(self, is_storage: bool, arg_name: str) -> str:
if not is_storage:
return arg_name
if self._compiler_version[0] == 0 and self._compiler_version[1] < 7:
return arg_name + "_slot"
else:
return arg_name + ".slot"
def supports_calldata_assembly(self, arg_name: str) -> bool:
return (self._compiler_version[1] > 7 or (
self._compiler_version[1] == 7 and self._compiler_version[2] >= 5)) and arg_name != ""
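if __name__ == "__main__":
    # Illustrative sketch of the version-dependent behaviour implemented above;
    # the version tuples are arbitrary examples.
    old = CompilerCollectorSol((0, 6, 12))
    new = CompilerCollectorSol((0, 8, 17))
    print(old.normalize_storage(True, "balance"))   # "balance_slot" (solc < 0.7)
    print(new.normalize_storage(True, "balance"))   # "balance.slot"
    print(old.supports_calldata_assembly("x"))      # False (requires >= 0.7.5)
    print(new.supports_calldata_assembly("x"))      # True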
|
PypiClean
|
/uoft_core-1.0.1.tar.gz/uoft_core-1.0.1/uoft_core/yaml/constructor.py
|
import datetime
import base64
import binascii
import sys
import types
import warnings
from collections.abc import Hashable, MutableSequence, MutableMapping
from typing import TYPE_CHECKING, Tuple, cast, Optional
# fmt: off
from .error import (MarkedYAMLError, MarkedYAMLFutureWarning,
MantissaNoDotYAML1_1Warning)
from .nodes import * # NOQA
from .nodes import (SequenceNode, MappingNode, ScalarNode)
from .compat import (_F, builtins_module, # NOQA
nprint, nprintf, version_tnf)
from .compat import ordereddict
from .comments import * # NOQA
from .comments import (CommentedMap, CommentedOrderedMap, CommentedSet,
CommentedKeySeq, CommentedSeq, TaggedScalar,
CommentedKeyMap,
C_KEY_PRE, C_KEY_EOL, C_KEY_POST,
C_VALUE_PRE, C_VALUE_EOL, C_VALUE_POST,
)
from .scalarstring import (SingleQuotedScalarString, DoubleQuotedScalarString,
LiteralScalarString, FoldedScalarString,
PlainScalarString, ScalarString,)
from .scalarint import ScalarInt, BinaryInt, OctalInt, HexInt, HexCapsInt
from .scalarfloat import ScalarFloat
from .scalarbool import ScalarBoolean
from .timestamp import TimeStamp
from .util import timestamp_regexp, create_timestamp
if TYPE_CHECKING: # MYPY
from typing import Any, Dict, List, Set, Generator, Union, Optional # NOQA
from . import YAML
__all__ = ['BaseConstructor', 'SafeConstructor',
'ConstructorError', 'RoundTripConstructor']
# fmt: on
class ConstructorError(MarkedYAMLError):
pass
class DuplicateKeyFutureWarning(MarkedYAMLFutureWarning):
pass
class DuplicateKeyError(MarkedYAMLError):
pass
class BaseConstructor:
yaml_constructors = {}
yaml_multi_constructors = {}
def __init__(self, loader: "YAML"):
self.loader = loader
self.yaml_base_dict_type = dict
self.yaml_base_list_type = list
self.constructed_objects = {}
self.recursive_objects = {}
self.state_generators = []
self.deep_construct = False
self._preserve_quotes = self.loader.preserve_quotes
self.allow_duplicate_keys = version_tnf((0, 15, 1), (0, 16))
@property
def composer(self):
return self.loader.composer
@property
def resolver(self):
return self.loader.resolver
@property
def scanner(self):
return self.loader.scanner
def check_data(self):
# If there are more documents available?
return self.composer.check_node()
def get_data(self):
# Construct and return the next document.
if self.composer.check_node():
return self.construct_document(self.composer.get_node())
def get_single_data(self):
# Ensure that the stream contains a single document and construct it.
node = self.composer.get_single_node()
if node is not None:
return self.construct_document(node)
return None
def construct_document(self, node: Node):
# type: (Any) -> Any
data = self.construct_object(node)
while bool(self.state_generators):
state_generators = self.state_generators
self.state_generators = []
for generator in state_generators:
for _dummy in generator:
pass
self.constructed_objects = {}
self.recursive_objects = {}
self.deep_construct = False
return data
def construct_object(self, node: Node, deep=False):
# type: (Any, bool) -> Any
"""deep is True when creating an object/mapping recursively,
in that case want the underlying elements available during construction
"""
original_deep_construct = None
if node in self.constructed_objects:
return self.constructed_objects[node]
if deep:
original_deep_construct = self.deep_construct
self.deep_construct = True
if node in self.recursive_objects:
return self.recursive_objects[node]
# raise ConstructorError(
# None, None, 'found unconstructable recursive node', node.start_mark
# )
self.recursive_objects[node] = None
data = self.construct_non_recursive_object(node)
self.constructed_objects[node] = data
del self.recursive_objects[node]
if original_deep_construct is not None:
# restore the original value for deep_construct
self.deep_construct = original_deep_construct
return data
def construct_non_recursive_object(self, node: Node, tag: Optional[str] = None):
constructor = None
tag_suffix = None
if tag is None:
tag = node.tag
if tag in self.yaml_constructors:
constructor = self.yaml_constructors[tag]
else:
for tag_prefix in self.yaml_multi_constructors:
if tag and tag.startswith(tag_prefix):
tag_suffix = tag[len(tag_prefix) :]
constructor = self.yaml_multi_constructors[tag_prefix]
break
else:
if None in self.yaml_multi_constructors:
tag_suffix = tag
constructor = self.yaml_multi_constructors[None]
elif None in self.yaml_constructors:
constructor = self.yaml_constructors[None]
elif isinstance(node, ScalarNode):
constructor = self.__class__.construct_scalar
elif isinstance(node, SequenceNode):
constructor = self.__class__.construct_sequence
elif isinstance(node, MappingNode):
constructor = self.__class__.construct_mapping
else:
raise ConstructorError(
problem=f"Could not find a constructor for node of type {type(node)}",
problem_mark=node.start_mark,
)
if tag_suffix is None:
data = constructor(self, node) # type: ignore
else:
data = constructor(self, tag_suffix, node) # type: ignore
if isinstance(data, types.GeneratorType):
generator = data
data = next(generator)
if self.deep_construct:
for _dummy in generator:
pass
else:
self.state_generators.append(generator)
return data
def construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(
None,
None,
_F("expected a scalar node, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
return node.value
def construct_sequence(self, node, deep=False):
"""deep is True when creating an object/mapping recursively,
in that case want the underlying elements available during construction
"""
if not isinstance(node, SequenceNode):
raise ConstructorError(
problem=f"expected a sequence node, but found {node.id!s}",
problem_mark=node.start_mark,
)
return [self.construct_object(child, deep=deep) for child in node.value]
def construct_mapping(self, node: MappingNode, deep=False):
# type: (Any, bool) -> Any
"""deep is True when creating an object/mapping recursively,
in that case want the underlying elements available during construction
"""
if not isinstance(node, MappingNode):
raise ConstructorError(
problem=f"expected a mapping node, but found {node.id!s}",
problem_mark=node.start_mark,
)
total_mapping = self.yaml_base_dict_type()
if getattr(node, "merge", None) is not None:
todo = [(node.merge, False), (node.value, False)]
else:
todo = [(node.value, True)]
todo = cast(list[Tuple[list[Tuple[Node, Node]], bool]], todo)
for values, check in todo:
mapping = self.yaml_base_dict_type()
for key_node, value_node in values:
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if isinstance(key, list):
key = tuple(key)
if not isinstance(key, Hashable):
raise ConstructorError(
context="while constructing a mapping",
context_mark=node.start_mark,
problem="found unhashable key",
problem_mark=key_node.start_mark,
)
value = self.construct_object(value_node, deep=deep)
if check:
if self.check_mapping_key(node, key_node, mapping, key, value):
mapping[key] = value
else:
mapping[key] = value
total_mapping.update(mapping)
return total_mapping
def check_mapping_key(self, node: Node, key_node: Node, mapping, key, value) -> bool:
"""return True if key is unique"""
if key in mapping:
if not self.allow_duplicate_keys:
mk = mapping.get(key)
args = [
"while constructing a mapping",
node.start_mark,
'found duplicate key "{}" with value "{}" '
'(original value: "{}")'.format(key, value, mk),
key_node.start_mark,
"""
To suppress this check see:
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
""",
"""\
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
""",
]
if self.allow_duplicate_keys is None:
warnings.warn(DuplicateKeyFutureWarning(*args))
else:
raise DuplicateKeyError(*args)
return False
return True
def check_set_key(self, node: Node, key_node, setting, key):
if key in setting:
if not self.allow_duplicate_keys:
args = [
"while constructing a set",
node.start_mark,
'found duplicate key "{}"'.format(key),
key_node.start_mark,
"""
To suppress this check see:
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
""",
"""\
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
""",
]
if self.allow_duplicate_keys is None:
warnings.warn(DuplicateKeyFutureWarning(*args))
else:
raise DuplicateKeyError(*args)
def construct_pairs(self, node: Node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(
problem=f"expected a mapping node, but found {node.id!s}",
problem_mark=node.start_mark,
)
pairs = []
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
value = self.construct_object(value_node, deep=deep)
pairs.append((key, value))
return pairs
@classmethod
def add_constructor(cls, tag, constructor):
if "yaml_constructors" not in cls.__dict__:
cls.yaml_constructors = cls.yaml_constructors.copy()
cls.yaml_constructors[tag] = constructor
@classmethod
def add_multi_constructor(cls, tag_prefix, multi_constructor):
if "yaml_multi_constructors" not in cls.__dict__:
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
class SafeConstructor(BaseConstructor):
def construct_scalar(self, node):
if isinstance(node, MappingNode):
for key_node, value_node in node.value:
if key_node.tag == "tag:yaml.org,2002:value":
return self.construct_scalar(value_node)
return BaseConstructor.construct_scalar(self, node)
def flatten_mapping(self, node: Node):
"""
This implements the merge key feature http://yaml.org/type/merge.html
by inserting keys from the merge dict/list of dicts if not yet
available in this node
"""
merge = []
index = 0
while index < len(node.value):
key_node, value_node = node.value[index]
if key_node.tag == "tag:yaml.org,2002:merge":
if merge: # double << key
if self.allow_duplicate_keys:
del node.value[index]
index += 1
continue
args = [
"while constructing a mapping",
node.start_mark,
'found duplicate key "{}"'.format(key_node.value),
key_node.start_mark,
"""
To suppress this check see:
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
""",
"""\
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
""",
]
if self.allow_duplicate_keys is None:
warnings.warn(DuplicateKeyFutureWarning(*args))
else:
raise DuplicateKeyError(*args)
del node.value[index]
if isinstance(value_node, MappingNode):
self.flatten_mapping(value_node)
merge.extend(value_node.value)
elif isinstance(value_node, SequenceNode):
submerge = []
for subnode in value_node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError(
context="while constructing a mapping",
context_mark=node.start_mark,
problem=f"expected a mapping for merging, but found {subnode.id!s}",
problem_mark=subnode.start_mark,
)
self.flatten_mapping(subnode)
submerge.append(subnode.value)
submerge.reverse()
for value in submerge:
merge.extend(value)
else:
raise ConstructorError(
context="while constructing a mapping",
context_mark=node.start_mark,
problem=f"expected a mapping or list of mappings for merging, but found {value_node.id!s}",
problem_mark=value_node.start_mark,
)
elif key_node.tag == "tag:yaml.org,2002:value":
key_node.tag = "tag:yaml.org,2002:str"
index += 1
else:
index += 1
if bool(merge):
node.merge = (
merge # separate merge keys to be able to update without duplicate
)
node.value = merge + node.value
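# Illustrative YAML using the merge-key feature handled by flatten_mapping() above:
#
#     defaults: &defaults
#       adapter: postgres
#       host: localhost
#     development:
#       <<: *defaults
#       database: dev_db
#
# "adapter" and "host" are inserted into "development" because they are not
# defined there explicitly.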
def construct_mapping(self, node, deep=False):
# type: (Any, bool) -> Any
"""deep is True when creating an object/mapping recursively,
in that case want the underlying elements available during construction
"""
if isinstance(node, MappingNode):
self.flatten_mapping(node)
return BaseConstructor.construct_mapping(self, node, deep=deep)
def construct_yaml_null(self, node):
# type: (Any) -> Any
self.construct_scalar(node)
return None
# YAML 1.2 spec doesn't mention yes/no etc any more, 1.1 does
bool_values = {
"yes": True,
"no": False,
"y": True,
"n": False,
"true": True,
"false": False,
"on": True,
"off": False,
}
def construct_yaml_bool(self, node):
# type: (Any) -> bool
value = self.construct_scalar(node)
return self.bool_values[value.lower()]
def construct_yaml_int(self, node):
# type: (Any) -> int
value_s = self.construct_scalar(node)
value_s = value_s.replace("_", "")
sign = +1
if value_s[0] == "-":
sign = -1
if value_s[0] in "+-":
value_s = value_s[1:]
if value_s == "0":
return 0
elif value_s.startswith("0b"):
return sign * int(value_s[2:], 2)
elif value_s.startswith("0x"):
return sign * int(value_s[2:], 16)
elif value_s.startswith("0o"):
return sign * int(value_s[2:], 8)
elif self.resolver.processing_version == (1, 1) and value_s[0] == "0":
return sign * int(value_s, 8)
elif self.resolver.processing_version == (1, 1) and ":" in value_s:
digits = [int(part) for part in value_s.split(":")]
digits.reverse()
base = 1
value = 0
for digit in digits:
value += digit * base
base *= 60
return sign * value
else:
return sign * int(value_s)
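# Examples of scalar forms accepted by construct_yaml_int() above:
#   "0b1010"  -> 10    (binary)
#   "0x1A"    -> 26    (hexadecimal)
#   "0o17"    -> 15    (octal)
#   "1_000"   -> 1000  (underscores are stripped)
#   "1:30:00" -> 5400  (YAML 1.1 sexagesimal: 1*3600 + 30*60 + 0)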
inf_value = 1e300
while inf_value != inf_value * inf_value:
inf_value *= inf_value
nan_value = -inf_value / inf_value # Trying to make a quiet NaN (like C99).
def construct_yaml_float(self, node):
# type: (Any) -> float
value_so = self.construct_scalar(node)
value_s = value_so.replace("_", "").lower()
sign = +1
if value_s[0] == "-":
sign = -1
if value_s[0] in "+-":
value_s = value_s[1:]
if value_s == ".inf":
return sign * self.inf_value
elif value_s == ".nan":
return self.nan_value
elif self.resolver.processing_version != (1, 2) and ":" in value_s:
digits = [float(part) for part in value_s.split(":")]
digits.reverse()
base = 1
value = 0.0
for digit in digits:
value += digit * base
base *= 60
return sign * value
else:
if self.resolver.processing_version != (1, 2) and "e" in value_s:
# value_s is lower case independent of input
mantissa, exponent = value_s.split("e")
if "." not in mantissa:
warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so))
return sign * float(value_s)
def construct_yaml_binary(self, node):
# type: (Any) -> Any
try:
value = self.construct_scalar(node).encode("ascii")
except UnicodeEncodeError as exc:
raise ConstructorError(
None,
None,
_F("failed to convert base64 data into ascii: {exc!s}", exc=exc),
node.start_mark,
)
try:
return base64.decodebytes(value)
except binascii.Error as exc:
raise ConstructorError(
None,
None,
_F("failed to decode base64 data: {exc!s}", exc=exc),
node.start_mark,
)
timestamp_regexp = timestamp_regexp # moved to util 0.17.17
def construct_yaml_timestamp(self, node, values=None):
# type: (Any, Any) -> Any
if values is None:
try:
match = self.timestamp_regexp.match(node.value)
except TypeError:
match = None
if match is None:
raise ConstructorError(
None,
None,
'failed to construct timestamp from "{}"'.format(node.value),
node.start_mark,
)
values = match.groupdict()
return create_timestamp(**values)
def construct_yaml_pairs(self, node):
# type: (Any) -> Any
# Note: the same code as `construct_yaml_omap`.
pairs = [] # type: List[Any]
yield pairs
if not isinstance(node, SequenceNode):
raise ConstructorError(
"while constructing pairs",
node.start_mark,
_F("expected a sequence, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError(
"while constructing pairs",
node.start_mark,
_F(
"expected a mapping of length 1, but found {subnode_id!s}",
subnode_id=subnode.id,
),
subnode.start_mark,
)
if len(subnode.value) != 1:
raise ConstructorError(
"while constructing pairs",
node.start_mark,
_F(
"expected a single mapping item, but found {len_subnode_val:d} items",
len_subnode_val=len(subnode.value),
),
subnode.start_mark,
)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
pairs.append((key, value))
class RoundTripConstructor(SafeConstructor):
"""need to store the comments on the node itself,
as well as on the items
"""
def comment(self, idx):
# type: (Any) -> Any
assert self.loader.comment_handling is not None
x = self.scanner.comments[idx]
x.set_assigned()
return x
def comments(self, list_of_comments, idx=None):
# type: (Any, Optional[Any]) -> Any
# hand in the comment and optional pre, eol, post segment
if list_of_comments is None:
return []
if idx is not None:
if list_of_comments[idx] is None:
return []
list_of_comments = list_of_comments[idx]
for x in list_of_comments:
yield self.comment(x)
def construct_scalar(self, node):
# type: (Any) -> Any
if not isinstance(node, ScalarNode):
raise ConstructorError(
None,
None,
_F("expected a scalar node, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
if node.style == "|" and isinstance(node.value, str):
lss = LiteralScalarString(node.value, anchor=node.anchor)
if self.loader and self.loader.comment_handling is None:
if node.comment and node.comment[1]:
lss.comment = node.comment[1][0] # type: ignore
else:
# NEWCMNT
if node.comment is not None and node.comment[1]:
# nprintf('>>>>nc1', node.comment)
# EOL comment after |
lss.comment = self.comment(node.comment[1][0]) # type: ignore
return lss
if node.style == ">" and isinstance(node.value, str):
fold_positions = [] # type: List[int]
idx = -1
while True:
idx = node.value.find("\a", idx + 1)
if idx < 0:
break
fold_positions.append(idx - len(fold_positions))
fss = FoldedScalarString(node.value.replace("\a", ""), anchor=node.anchor)
if self.loader and self.loader.comment_handling is None:
if node.comment and node.comment[1]:
fss.comment = node.comment[1][0] # type: ignore
else:
# NEWCMNT
if node.comment is not None and node.comment[1]:
# nprintf('>>>>nc2', node.comment)
# EOL comment after >
fss.comment = self.comment(node.comment[1][0]) # type: ignore
if fold_positions:
fss.fold_pos = fold_positions # type: ignore
return fss
elif bool(self._preserve_quotes) and isinstance(node.value, str):
if node.style == "'":
return SingleQuotedScalarString(node.value, anchor=node.anchor)
if node.style == '"':
return DoubleQuotedScalarString(node.value, anchor=node.anchor)
if node.anchor:
return PlainScalarString(node.value, anchor=node.anchor)
return node.value
def construct_yaml_int(self, node):
# type: (Any) -> Any
width = None # type: Any
value_su = self.construct_scalar(node)
try:
sx = value_su.rstrip("_")
underscore = [len(sx) - sx.rindex("_") - 1, False, False] # type: Any
except ValueError:
underscore = None
except IndexError:
underscore = None
value_s = value_su.replace("_", "")
sign = +1
if value_s[0] == "-":
sign = -1
if value_s[0] in "+-":
value_s = value_s[1:]
if value_s == "0":
return 0
elif value_s.startswith("0b"):
if self.resolver.processing_version > (1, 1) and value_s[2] == "0":
width = len(value_s[2:])
if underscore is not None:
underscore[1] = value_su[2] == "_"
underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_"
return BinaryInt(
sign * int(value_s[2:], 2),
width=width,
underscore=underscore,
anchor=node.anchor,
)
elif value_s.startswith("0x"):
# default to lower-case if no a-fA-F in string
if self.resolver.processing_version > (1, 1) and value_s[2] == "0":
width = len(value_s[2:])
hex_fun = HexInt # type: Any
for ch in value_s[2:]:
if ch in "ABCDEF": # first non-digit is capital
hex_fun = HexCapsInt
break
if ch in "abcdef":
break
if underscore is not None:
underscore[1] = value_su[2] == "_"
underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_"
return hex_fun(
sign * int(value_s[2:], 16),
width=width,
underscore=underscore,
anchor=node.anchor,
)
elif value_s.startswith("0o"):
if self.resolver.processing_version > (1, 1) and value_s[2] == "0":
width = len(value_s[2:])
if underscore is not None:
underscore[1] = value_su[2] == "_"
underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_"
return OctalInt(
sign * int(value_s[2:], 8),
width=width,
underscore=underscore,
anchor=node.anchor,
)
elif self.resolver.processing_version != (1, 2) and value_s[0] == "0":
return sign * int(value_s, 8)
elif self.resolver.processing_version != (1, 2) and ":" in value_s:
digits = [int(part) for part in value_s.split(":")]
digits.reverse()
base = 1
value = 0
for digit in digits:
value += digit * base
base *= 60
return sign * value
elif self.resolver.processing_version > (1, 1) and value_s[0] == "0":
# not an octal, an integer with leading zero(s)
if underscore is not None:
# cannot have a leading underscore
underscore[2] = len(value_su) > 1 and value_su[-1] == "_"
return ScalarInt(
sign * int(value_s), width=len(value_s), underscore=underscore
)
elif underscore:
# cannot have a leading underscore
underscore[2] = len(value_su) > 1 and value_su[-1] == "_"
return ScalarInt(
sign * int(value_s),
width=None,
underscore=underscore,
anchor=node.anchor,
)
elif node.anchor:
return ScalarInt(sign * int(value_s), width=None, anchor=node.anchor)
else:
return sign * int(value_s)
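# Examples of what the round-trip int constructor above preserves (illustrative):
#   "0x00FF" -> HexCapsInt(255, width=4)  (leading zero keeps the field width,
#                                          capital digits select HexCapsInt)
#   "0b0011" -> BinaryInt(3, width=4)
#   "1_000"  -> ScalarInt(1000) with the underscore position remembered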
def construct_yaml_float(self, node):
# type: (Any) -> Any
def leading_zeros(v):
# type: (Any) -> int
lead0 = 0
idx = 0
while idx < len(v) and v[idx] in "0.":
if v[idx] == "0":
lead0 += 1
idx += 1
return lead0
# underscore = None
m_sign = False # type: Any
value_so = self.construct_scalar(node)
value_s = value_so.replace("_", "").lower()
sign = +1
if value_s[0] == "-":
sign = -1
if value_s[0] in "+-":
m_sign = value_s[0]
value_s = value_s[1:]
if value_s == ".inf":
return sign * self.inf_value
if value_s == ".nan":
return self.nan_value
if self.resolver.processing_version != (1, 2) and ":" in value_s:
digits = [float(part) for part in value_s.split(":")]
digits.reverse()
base = 1
value = 0.0
for digit in digits:
value += digit * base
base *= 60
return sign * value
if "e" in value_s:
try:
mantissa, exponent = value_so.split("e")
exp = "e"
except ValueError:
mantissa, exponent = value_so.split("E")
exp = "E"
if self.resolver.processing_version != (1, 2):
# value_s is lower case independent of input
if "." not in mantissa:
warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so))
lead0 = leading_zeros(mantissa)
width = len(mantissa)
prec = mantissa.find(".")
if m_sign:
width -= 1
e_width = len(exponent)
e_sign = exponent[0] in "+-"
# nprint('sf', width, prec, m_sign, exp, e_width, e_sign)
return ScalarFloat(
sign * float(value_s),
width=width,
prec=prec,
m_sign=m_sign,
m_lead0=lead0,
exp=exp,
e_width=e_width,
e_sign=e_sign,
anchor=node.anchor,
)
width = len(value_so)
prec = value_so.index(
"."
) # you can use index, this would not be float without dot
lead0 = leading_zeros(value_so)
return ScalarFloat(
sign * float(value_s),
width=width,
prec=prec,
m_sign=m_sign,
m_lead0=lead0,
anchor=node.anchor,
)
def construct_yaml_str(self, node):
# type: (Any) -> Any
value = self.construct_scalar(node)
# ScalarString subclasses are returned unchanged, the same as a plain str
return value
def construct_rt_sequence(self, node, seqtyp, deep=False):
# type: (Any, Any, bool) -> Any
if not isinstance(node, SequenceNode):
raise ConstructorError(
None,
None,
_F("expected a sequence node, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
ret_val = []
if self.loader and self.loader.comment_handling is None:
if node.comment:
seqtyp._yaml_add_comment(node.comment[:2])
if len(node.comment) > 2:
# this happens e.g. if you have a sequence element that is a flow-style
# mapping and that has no EOL comment but a following commentline or
# empty line
seqtyp.yaml_end_comment_extend(node.comment[2], clear=True)
else:
# NEWCMNT
if node.comment:
nprintf("nc3", node.comment)
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
seqtyp.yaml_set_anchor(node.anchor)
for idx, child in enumerate(node.value):
if child.comment:
seqtyp._yaml_add_comment(child.comment, key=idx)
child.comment = None # if moved to sequence remove from child
ret_val.append(self.construct_object(child, deep=deep))
seqtyp._yaml_set_idx_line_col(
idx, [child.start_mark.line, child.start_mark.column]
)
return ret_val
def flatten_mapping(self, node):
# type: (Any) -> Any
"""
This implements the merge key feature http://yaml.org/type/merge.html
by inserting keys from the merge dict/list of dicts if not yet
available in this node
"""
def constructed(value_node):
# type: (Any) -> Any
# If the contents of a merge are defined within the
# merge marker, then they won't have been constructed
# yet. But if they were already constructed, we need to use
# the existing object.
if value_node in self.constructed_objects:
value = self.constructed_objects[value_node]
else:
value = self.construct_object(value_node, deep=False)
return value
# merge = []
merge_map_list = [] # type: List[Any]
index = 0
while index < len(node.value):
key_node, value_node = node.value[index]
if key_node.tag == "tag:yaml.org,2002:merge":
if merge_map_list: # double << key
if self.allow_duplicate_keys:
del node.value[index]
index += 1
continue
args = [
"while constructing a mapping",
node.start_mark,
'found duplicate key "{}"'.format(key_node.value),
key_node.start_mark,
"""
To suppress this check see:
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
""",
"""\
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
""",
]
if self.allow_duplicate_keys is None:
warnings.warn(DuplicateKeyFutureWarning(*args))
else:
raise DuplicateKeyError(*args)
del node.value[index]
if isinstance(value_node, MappingNode):
merge_map_list.append((index, constructed(value_node)))
# self.flatten_mapping(value_node)
# merge.extend(value_node.value)
elif isinstance(value_node, SequenceNode):
# submerge = []
for subnode in value_node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError(
"while constructing a mapping",
node.start_mark,
_F(
"expected a mapping for merging, but found {subnode_id!s}",
subnode_id=subnode.id,
),
subnode.start_mark,
)
merge_map_list.append((index, constructed(subnode)))
# self.flatten_mapping(subnode)
# submerge.append(subnode.value)
# submerge.reverse()
# for value in submerge:
# merge.extend(value)
else:
raise ConstructorError(
"while constructing a mapping",
node.start_mark,
_F(
"expected a mapping or list of mappings for merging, "
"but found {value_node_id!s}",
value_node_id=value_node.id,
),
value_node.start_mark,
)
elif key_node.tag == "tag:yaml.org,2002:value":
key_node.tag = "tag:yaml.org,2002:str"
index += 1
else:
index += 1
return merge_map_list
# if merge:
# node.value = merge + node.value
def _sentinel(self):
# type: () -> None
pass
def construct_mapping(self, node, maptyp, deep=False): # type: ignore
# type: (Any, Any, bool) -> Any
if not isinstance(node, MappingNode):
raise ConstructorError(
None,
None,
_F("expected a mapping node, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
merge_map = self.flatten_mapping(node)
# mapping = {}
if self.loader and self.loader.comment_handling is None:
if node.comment:
maptyp._yaml_add_comment(node.comment[:2])
if len(node.comment) > 2:
maptyp.yaml_end_comment_extend(node.comment[2], clear=True)
else:
# NEWCMNT
if node.comment:
# nprintf('nc4', node.comment, node.start_mark)
if maptyp.ca.pre is None:
maptyp.ca.pre = []
for cmnt in self.comments(node.comment, 0):
maptyp.ca.pre.append(cmnt)
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
maptyp.yaml_set_anchor(node.anchor)
last_key, last_value = None, self._sentinel
for key_node, value_node in node.value:
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if isinstance(key, MutableSequence):
key_s = CommentedKeySeq(key)
if key_node.flow_style is True:
key_s.fa.set_flow_style()
elif key_node.flow_style is False:
key_s.fa.set_block_style()
key = key_s
elif isinstance(key, MutableMapping):
key_m = CommentedKeyMap(key)
if key_node.flow_style is True:
key_m.fa.set_flow_style()
elif key_node.flow_style is False:
key_m.fa.set_block_style()
key = key_m
if not isinstance(key, Hashable):
raise ConstructorError(
"while constructing a mapping",
node.start_mark,
"found unhashable key",
key_node.start_mark,
)
value = self.construct_object(value_node, deep=deep)
if self.check_mapping_key(node, key_node, maptyp, key, value):
if self.loader and self.loader.comment_handling is None:
if (
key_node.comment
and len(key_node.comment) > 4
and key_node.comment[4]
):
if last_value is None:
key_node.comment[0] = key_node.comment.pop(4)
maptyp._yaml_add_comment(key_node.comment, value=last_key)
else:
key_node.comment[2] = key_node.comment.pop(4)
maptyp._yaml_add_comment(key_node.comment, key=key)
key_node.comment = None
if key_node.comment:
maptyp._yaml_add_comment(key_node.comment, key=key)
if value_node.comment:
maptyp._yaml_add_comment(value_node.comment, value=key)
else:
# NEWCMNT
if key_node.comment:
nprintf("nc5a", key, key_node.comment)
if key_node.comment[0]:
maptyp.ca.set(key, C_KEY_PRE, key_node.comment[0])
if key_node.comment[1]:
maptyp.ca.set(key, C_KEY_EOL, key_node.comment[1])
if key_node.comment[2]:
maptyp.ca.set(key, C_KEY_POST, key_node.comment[2])
if value_node.comment:
nprintf("nc5b", key, value_node.comment)
if value_node.comment[0]:
maptyp.ca.set(key, C_VALUE_PRE, value_node.comment[0])
if value_node.comment[1]:
maptyp.ca.set(key, C_VALUE_EOL, value_node.comment[1])
if value_node.comment[2]:
maptyp.ca.set(key, C_VALUE_POST, value_node.comment[2])
maptyp._yaml_set_kv_line_col(
key,
[
key_node.start_mark.line,
key_node.start_mark.column,
value_node.start_mark.line,
value_node.start_mark.column,
],
)
maptyp[key] = value
last_key, last_value = key, value # could use indexing
# do this last, or <<: before a key will prevent insertion in instances
# of collections.OrderedDict (as they have no __contains__)
if merge_map:
maptyp.add_yaml_merge(merge_map)
def construct_setting(self, node, typ, deep=False):
# type: (Any, Any, bool) -> Any
if not isinstance(node, MappingNode):
raise ConstructorError(
None,
None,
_F("expected a mapping node, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
if self.loader and self.loader.comment_handling is None:
if node.comment:
typ._yaml_add_comment(node.comment[:2])
if len(node.comment) > 2:
typ.yaml_end_comment_extend(node.comment[2], clear=True)
else:
# NEWCMNT
if node.comment:
nprintf("nc6", node.comment)
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
typ.yaml_set_anchor(node.anchor)
for key_node, value_node in node.value:
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if isinstance(key, list):
key = tuple(key)
if not isinstance(key, Hashable):
raise ConstructorError(
"while constructing a mapping",
node.start_mark,
"found unhashable key",
key_node.start_mark,
)
# construct but should be null
value = self.construct_object(value_node, deep=deep) # NOQA
self.check_set_key(node, key_node, typ, key)
if self.loader and self.loader.comment_handling is None:
if key_node.comment:
typ._yaml_add_comment(key_node.comment, key=key)
if value_node.comment:
typ._yaml_add_comment(value_node.comment, value=key)
else:
# NEWCMNT
if key_node.comment:
nprintf("nc7a", key_node.comment)
if value_node.comment:
nprintf("nc7b", value_node.comment)
typ.add(key)
def construct_yaml_seq(self, node):
# type: (Any) -> Any
data = CommentedSeq()
data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
# if node.comment:
# data._yaml_add_comment(node.comment)
yield data
data.extend(self.construct_rt_sequence(node, data))
self.set_collection_style(data, node)
def construct_yaml_map(self, node):
# type: (Any) -> Any
data = CommentedMap()
data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
yield data
self.construct_mapping(node, data, deep=True)
self.set_collection_style(data, node)
def set_collection_style(self, data, node):
# type: (Any, Any) -> None
if len(data) == 0:
return
if node.flow_style is True:
data.fa.set_flow_style()
elif node.flow_style is False:
data.fa.set_block_style()
def construct_yaml_object(self, node, cls):
# type: (Any, Any) -> Any
data = cls.__new__(cls)
yield data
if hasattr(data, "__setstate__"):
state = SafeConstructor.construct_mapping(self, node, deep=True)
data.__setstate__(state)
else:
state = SafeConstructor.construct_mapping(self, node)
if hasattr(data, "__attrs_attrs__"): # issue 394
data.__init__(**state)
else:
data.__dict__.update(state)
if node.anchor:
from .serializer import templated_id
from .anchor import Anchor
if not templated_id(node.anchor):
if not hasattr(data, Anchor.attrib):
a = Anchor()
setattr(data, Anchor.attrib, a)
else:
a = getattr(data, Anchor.attrib)
a.value = node.anchor
def construct_yaml_omap(self, node):
# type: (Any) -> Any
# Note: we do now check for duplicate keys
omap = CommentedOrderedMap()
omap._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
if node.flow_style is True:
omap.fa.set_flow_style()
elif node.flow_style is False:
omap.fa.set_block_style()
yield omap
if self.loader and self.loader.comment_handling is None:
if node.comment:
omap._yaml_add_comment(node.comment[:2])
if len(node.comment) > 2:
omap.yaml_end_comment_extend(node.comment[2], clear=True)
else:
# NEWCMNT
if node.comment:
nprintf("nc8", node.comment)
if not isinstance(node, SequenceNode):
raise ConstructorError(
"while constructing an ordered map",
node.start_mark,
_F("expected a sequence, but found {node_id!s}", node_id=node.id),
node.start_mark,
)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError(
"while constructing an ordered map",
node.start_mark,
_F(
"expected a mapping of length 1, but found {subnode_id!s}",
subnode_id=subnode.id,
),
subnode.start_mark,
)
if len(subnode.value) != 1:
raise ConstructorError(
"while constructing an ordered map",
node.start_mark,
_F(
"expected a single mapping item, but found {len_subnode_val:d} items",
len_subnode_val=len(subnode.value),
),
subnode.start_mark,
)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
assert key not in omap
value = self.construct_object(value_node)
if self.loader and self.loader.comment_handling is None:
if key_node.comment:
omap._yaml_add_comment(key_node.comment, key=key)
if subnode.comment:
omap._yaml_add_comment(subnode.comment, key=key)
if value_node.comment:
omap._yaml_add_comment(value_node.comment, value=key)
else:
# NEWCMNT
if key_node.comment:
nprintf("nc9a", key_node.comment)
if subnode.comment:
nprintf("nc9b", subnode.comment)
if value_node.comment:
nprintf("nc9c", value_node.comment)
omap[key] = value
def construct_yaml_set(self, node):
# type: (Any) -> Any
data = CommentedSet()
data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
yield data
self.construct_setting(node, data)
def construct_undefined(self, node):
# type: (Any) -> Any
try:
if isinstance(node, MappingNode):
data = CommentedMap()
data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
if node.flow_style is True:
data.fa.set_flow_style()
elif node.flow_style is False:
data.fa.set_block_style()
data.yaml_set_tag(node.tag)
yield data
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
data.yaml_set_anchor(node.anchor)
self.construct_mapping(node, data)
return
elif isinstance(node, ScalarNode):
data2 = TaggedScalar()
data2.value = self.construct_scalar(node)
data2.style = node.style
data2.yaml_set_tag(node.tag)
yield data2
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
data2.yaml_set_anchor(node.anchor, always_dump=True)
return
elif isinstance(node, SequenceNode):
data3 = CommentedSeq()
data3._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
if node.flow_style is True:
data3.fa.set_flow_style()
elif node.flow_style is False:
data3.fa.set_block_style()
data3.yaml_set_tag(node.tag)
yield data3
if node.anchor:
from .serializer import templated_id
if not templated_id(node.anchor):
data3.yaml_set_anchor(node.anchor)
data3.extend(self.construct_sequence(node))
return
except: # NOQA
pass
raise ConstructorError(
None,
None,
_F(
"could not determine a constructor for the tag {node_tag!r}",
node_tag=node.tag,
),
node.start_mark,
)
def construct_yaml_timestamp(self, node, values=None):
# type: (Any, Any) -> Any
try:
match = self.timestamp_regexp.match(node.value)
except TypeError:
match = None
if match is None:
raise ConstructorError(
None,
None,
'failed to construct timestamp from "{}"'.format(node.value),
node.start_mark,
)
values = match.groupdict()
if not values["hour"]:
return create_timestamp(**values)
# return SafeConstructor.construct_yaml_timestamp(self, node, values)
for part in ["t", "tz_sign", "tz_hour", "tz_minute"]:
if values[part]:
break
else:
return create_timestamp(**values)
# return SafeConstructor.construct_yaml_timestamp(self, node, values)
dd = create_timestamp(**values) # this has delta applied
delta = None
if values["tz_sign"]:
tz_hour = int(values["tz_hour"])
minutes = values["tz_minute"]
tz_minute = int(minutes) if minutes else 0
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
if values["tz_sign"] == "-":
delta = -delta
# should check for None and solve issue 366 should be tzinfo=delta)
data = TimeStamp(
dd.year, dd.month, dd.day, dd.hour, dd.minute, dd.second, dd.microsecond
)
if delta:
data._yaml["delta"] = delta
tz = values["tz_sign"] + values["tz_hour"]
if values["tz_minute"]:
tz += ":" + values["tz_minute"]
data._yaml["tz"] = tz
else:
if values["tz"]: # no delta
data._yaml["tz"] = values["tz"]
if values["t"]:
data._yaml["t"] = True
return data
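# Illustrative inputs for the round-trip timestamp handling above:
#   "2001-12-14 21:59:43.10 -5" -> TimeStamp with _yaml["delta"] = -5 hours and
#                                  _yaml["tz"] = "-5", so the offset is re-emitted on dump
#   "2001-12-14"                -> no "hour" group, falls back to create_timestamp()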
def construct_yaml_bool(self, node):
# type: (Any) -> Any
b = SafeConstructor.construct_yaml_bool(self, node)
if node.anchor:
return ScalarBoolean(b, anchor=node.anchor)
return b
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:null", RoundTripConstructor.construct_yaml_null
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:bool", RoundTripConstructor.construct_yaml_bool
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:int", RoundTripConstructor.construct_yaml_int
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:float", RoundTripConstructor.construct_yaml_float
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:binary", RoundTripConstructor.construct_yaml_binary
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:timestamp", RoundTripConstructor.construct_yaml_timestamp
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:omap", RoundTripConstructor.construct_yaml_omap
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:pairs", RoundTripConstructor.construct_yaml_pairs
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:set", RoundTripConstructor.construct_yaml_set
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:str", RoundTripConstructor.construct_yaml_str
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:seq", RoundTripConstructor.construct_yaml_seq
)
RoundTripConstructor.add_constructor(
"tag:yaml.org,2002:map", RoundTripConstructor.construct_yaml_map
)
RoundTripConstructor.add_constructor(None, RoundTripConstructor.construct_undefined)
|
PypiClean
|
/Atom_avatar-0.0.3.tar.gz/Atom_avatar-0.0.3/avatar/index.py
|
from io import BytesIO, StringIO
from random import randrange
from cairosvg import svg2png
from jinja2 import Template, Environment, FileSystemLoader
import json
import os
# a= open('male_back.json')
# b= a.read()
# print(type(json.loads(b)))
path=os.path.dirname(os.path.realpath(__file__))
load_base = FileSystemLoader(path + '/templates')  # templates directory shipped next to this file
# open("./male_back.json", )
env = Environment(loader=load_base)
env.trim_blocks = True
env.lstrip_blocks = True
template = env.get_template('base.svg')
male_back = json.load(open(path+'/male_back.json'))
male_face = json.load(open(path+'/male_face.json'))
male_eyes = json.load(open(path+'/male_eyes.json'))
male_ears = json.load(open(path+'/male_ears.json'))
male_iris = json.load(open(path+'/male_iris.json'))
male_nose = json.load(open(path+'/male_nose.json'))
male_mouth = json.load(open(path+'/male_mouth.json'))
male_brows = json.load(open(path+'/male_brows.json'))
male_mustache = json.load(open(path+'/male_mustache.json'))
male_beard = json.load(open(path+'/male_beard.json'))
male_hair = json.load(open(path+'/male_hair.json'))
male_clothes = json.load(open(path+'/male_clothes.json'))
# print(male_back)
# peyes = male_eyes['eyesback']['shapes'][0][0]['left']
# pback=male_back['backs']['shapes'][0]['single']
# print(template.render(back=peyes))
FACELOLORS = [
"#f6e4e2",
"#fbd5c0",
"#ffd0bc",
"#f4baa3",
"#ebaa82",
"#d79468",
"#cb8d60",
"#b2713b",
"#8c5537",
"#875732",
"#73512d",
"#582812"
]
HAIRCOLORS = [
"#2a232b",
"#080806",
"#3b3128",
"#4e4341",
"#504543",
"#6b4e40",
"#a68469",
"#b79675",
"#decfbc",
"#ddbc9b",
"#a46c47",
"#543c32",
"#73625b",
"#b84131",
"#d6c4c4",
"#fef6e1",
"#cac1b2",
"#b7513b",
"#caa478",
]
MATERIALCOLOR = [
"#386e77",
"#6a3a47",
"#591956",
"#864025",
"#dcc96b",
"#638e2f",
"#3f82a4",
"#335529",
"#82cbe2",
"#39557e",
"#1e78a2",
"#a44974",
"#152c5e",
"#9d69bc",
"#601090",
"#d46fbb",
"#cbe9ee",
"#4b2824",
"#653220",
"#1d282e"
]
FRONTEYESCOLORS = [
"#000000",
"#191c29",
"#0f190c",
"#09152e",
"#302040",
"#1b2a40",
"#2c1630",
"#2a150e",
"#131111",
"#1b1929",
"#09112e",
"#092e0c",
"#2e0914",
"#582311",
"#210d34",
"#153a4d",
"#d6f7f4",
"#5fa2a5",
"#782c76",
"#587d90"
]
IRISCOLORS = [
"#4e60a3",
"#7085b3",
"#b0b9d9",
"#3c8d8e",
"#3e4442",
"#66724e",
"#7b5c33",
"#ddb332",
"#8ab42d",
"#681711",
"#282978",
"#9b1d1b",
"#4d3623",
"#9fae70",
"#724f7c",
"#fdd70e",
"#00f0f1",
"#4faaab",
"#ea02f5",
"#bd1c1b"
]
BACKCOLOR = [
"#c4c7f3",
"#F1D4AF",
"#774F38",
"#ECE5CE",
"#C5E0DC",
"#594F4F",
"#547980",
"#45ADA8",
"#9DE0AD",
"#E5FCC2",
"#00A8C6",
"#40C0CB",
"#F9F2E7",
"#AEE239",
"#14305c",
"#5E8C6A",
"#88A65E",
"#036564",
"#CDB380",
"#ce6130"
]
MOUTHCOLORS = [
"#DA7C87",
"#F18F77",
"#e0a4a0",
"#9D6D5F",
"#A06B59",
"#904539",
"#e28c7c",
"#9B565F",
"#ff5027",
"#e66638",
"#fe856a",
"#E2929B",
"#a96a47",
"#335529",
"#1e78a2",
"#39557e",
"#6f147c",
"#43194b",
"#98a2a2",
"#161925"
]
class Canvas:
def __init__(self, back, face, eyes_back, eyes_front,ears, iris, nose, mouth, brows, mustache, beard, hair, cloth,
haircolor,backcolor,faceColor, materialcolor, fronteyescolor,iriscolor,mouthcolors, type=0,) -> None:
self.back = back
self.face = face
self.eyes_back = eyes_back
self.eyes_front = eyes_front
self.ear = ears
self.iris = iris
self.nose = nose
self.mouth = mouth
self.brows = brows
self.mustache = mustache
self.beard = beard
self.hair = hair
self.cloth = cloth
self.type=type
self.haircolor=haircolor
self.backcolor= backcolor
self.facecolor=faceColor
self.materialcolor=materialcolor
self.fronteyescolor=fronteyescolor
self.iriscolor=iriscolor
self.mouthcolor=mouthcolors
def canvas(self, obj=None):
context = self.make()
return template.render(context)
def toPng(temp):
# Render the SVG markup to a PNG file (note: writes to a fixed "ade.png" filename).
arr = bytes(temp, 'utf-8')
svg2png(arr, write_to="ade.png")
def make(self):
type=self.type
pback = male_back['backs']['shapes'][0]['single']
peyesback = male_eyes['eyesback']['shapes'][type][self.eyes_back] # not the same as the front eyes
peyesfront = male_eyes['eyesfront']['shapes'][type][self.eyes_front]
pears = male_ears['ears']['shapes'][type][self.ear]
piris = male_iris['eyesiris']['shapes'][type][self.iris]
pnose = male_nose['nose']['shapes'][type][self.nose]['single']
pmouth = male_mouth['mouth']['shapes'][type][self.mouth]['single']
pbrows = male_brows['eyebrows']['shapes'][type][self.brows]
pmustache = male_mustache['mustache']['shapes'][type][self.mustache]['single']
pbeard = male_beard['beard']['shapes'][type][self.beard]['single']
phair = male_hair['hair']['shapes'][type][self.hair]
pclothes = male_clothes['clothes']['shapes'][type][self.cloth]['single']
faceshape = male_face['faceshape']['shapes'][type][self.face]['single']
chinshadow = male_face['chinshadow']['shapes'][type][randrange(3)]['single']
haircolor=self.haircolor
backcolor=self.backcolor
facecolor=self.facecolor
materialcolor=self.materialcolor
fronteyescolor=self.fronteyescolor
iriscolor=self.iriscolor
mouthcolor=self.mouthcolor
# humanbody=male_face['humanbody']['shapes'][0][0]['single']
return {
'back': pback,
'eyesback': peyesback,
'eyesfront': peyesfront,
'ears': pears,
'iris': piris,
'nose': pnose,
'brows': pbrows,
'mouth': pmouth,
'mustache': pmustache,
'beard': pbeard,
'hair': phair,
'cloth': pclothes,
'faceshape': faceshape,
'chinshadow': chinshadow,
# 'humanbody':humanbody,
'haircolor':haircolor,
'backcolor':backcolor,
'facecolor':facecolor,
'materialcolor':materialcolor,
'fronteyescolor':fronteyescolor,
'iriscolors':iriscolor,
'mouthcolor':mouthcolor
}
def toPng(temp):
# arr = bytes(temp, 'utf-8')
# byte=BytesIO(arr)
return svg2png(temp, output_height=200,output_width=200)
def toPngfile(temp:str, outfile:str):
svg2png(temp, write_to=outfile, output_height=200,output_width=200)
# a = Canvas(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,2,"#582311",
# "#210d34",
# "#153a4d",
# "#d6f7f4",
# "#5fa2a5",
# "#782c76",
# "#587d90").canvas()
# b=toPng(a)
# print(b)
# with open("./sample.svg", 'w') as f:
# f.write(a)
# f.close()
|
PypiClean
|
/hbmqtt-master2.2-0.9.7.tar.gz/hbmqtt-master2.2-0.9.7/docs/references/broker.rst
|
Broker API reference
====================
The :class:`~hbmqtt.broker.Broker` class provides a complete MQTT 3.1.1 broker implementation. This class allows Python developers to embed an MQTT broker in their own applications.
Usage example
-------------
The following example shows how to start a broker using the default configuration:
.. code-block:: python
import logging
import asyncio
import os
from hbmqtt.broker import Broker
@asyncio.coroutine
def broker_coro():
broker = Broker()
yield from broker.start()
if __name__ == '__main__':
formatter = "[%(asctime)s] :: %(levelname)s :: %(name)s :: %(message)s"
logging.basicConfig(level=logging.INFO, format=formatter)
asyncio.get_event_loop().run_until_complete(broker_coro())
asyncio.get_event_loop().run_forever()
When executed, this script gets the default event loop and asks it to run the ``broker_coro`` coroutine until it completes.
``broker_coro`` creates a :class:`~hbmqtt.broker.Broker` instance and then calls :meth:`~hbmqtt.broker.Broker.start` to start serving.
Once ``broker_coro`` completes, the loop is run forever, so this script never stops ...
Reference
---------
Broker API
..........
.. automodule:: hbmqtt.broker
.. autoclass:: Broker
.. automethod:: start
.. automethod:: shutdown
Broker configuration
....................
The :class:`~hbmqtt.broker.Broker` ``__init__`` method accepts a ``config`` parameter which allows setting up some behaviour and default settings. This argument must be a Python dict object. For convenience, it is presented below as a YAML file [1]_. A minimal Python example is given after the parameter descriptions below.
.. code-block:: yaml
listeners:
default:
max-connections: 50000
type: tcp
my-tcp-1:
bind: 127.0.0.1:1883
my-tcp-2:
bind: 1.2.3.4:1884
max-connections: 1000
my-tcp-ssl-1:
bind: 127.0.0.1:8885
ssl: on
cafile: /some/cafile
capath: /some/folder
cadata: certificate data
certfile: /some/certfile
keyfile: /some/key
my-ws-1:
bind: 0.0.0.0:8080
type: ws
timeout-disconnect-delay: 2
auth:
plugins: ['auth.anonymous'] #List of plugins to activate for authentication among all registered plugins
allow-anonymous: true / false
password-file: /some/passwd_file
topic-check:
enabled: true / false # Set to False if topic filtering is not needed
plugins: ['topic_acl'] #List of plugins to activate for topic filtering among all registered plugins
acl:
# username: [list of allowed topics]
username1: ['repositories/+/master', 'calendar/#', 'data/memes'] # List of topics on which client1 can publish and subscribe
username2: ...
anonymous: [] # List of topics on which an anonymous client can publish and subscribe
The ``listeners`` section allows defining the network listeners which must be started by the :class:`~hbmqtt.broker.Broker`. Several listeners can be set up. The ``default`` subsection defines common attributes for all listeners. Each listener can have the following settings:
* ``bind``: IP address and port binding.
* ``max-connections``: Sets the maximum number of active connections for the listener. ``0`` means no limit.
* ``type``: transport protocol type; can be ``tcp`` for classic TCP listener or ``ws`` for MQTT over websocket.
* ``ssl``: enables (``on``) or disables secured connections over the transport protocol.
* ``cafile``, ``cadata``, ``certfile`` and ``keyfile`` : mandatory parameters for SSL secured connections.
The ``auth`` section sets up authentication behaviour:
* ``plugins``: defines the list of activated plugins. Note the plugins must be defined in the ``hbmqtt.broker.plugins`` `entry point <https://pythonhosted.org/setuptools/setuptools.html#dynamic-discovery-of-services-and-plugins>`_.
* ``allow-anonymous`` : used by the internal :class:`hbmqtt.plugins.authentication.AnonymousAuthPlugin` plugin. This parameter enables (``on``) or disables anonymous connections, i.e. connections without a username.
* ``password-file`` : used by the internal :class:`hbmqtt.plugins.authentication.FileAuthPlugin` plugin. This parameter gives the path of the password file to load for authenticating users.
The ``topic-check`` section sets up access control policies for publishing and subscribing to topics:
* ``enabled``: set to true if you want to impose an access control policy. Otherwise, set it to false.
* ``plugins``: defines the list of activated plugins. Note the plugins must be defined in the ``hbmqtt.broker.plugins`` `entry point <https://pythonhosted.org/setuptools/setuptools.html#dynamic-discovery-of-services-and-plugins>`_.
* additional parameters: depending on the plugin used for access control, additional parameters should be added.
* In case of ``topic_acl`` plugin, the Access Control List (ACL) must be defined in the parameter ``acl``.
* For each username, a list with the allowed topics must be defined.
* If the client logs in anonymously, the ``anonymous`` entry within the ACL is used in order to grant/deny subscriptions.
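As a minimal sketch, the settings above can be expressed as a Python dict and passed directly to the broker (the listener address, port and plugin list below are placeholder values, not defaults):

.. code-block:: python

    config = {
        'listeners': {
            'default': {
                'type': 'tcp',
                'bind': '0.0.0.0:1883',
                'max-connections': 1000,
            },
        },
        'auth': {
            'plugins': ['auth.anonymous'],
            'allow-anonymous': True,
        },
        'topic-check': {
            'enabled': False,
        },
    }
    broker = Broker(config)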
.. [1] See `PyYAML <http://pyyaml.org/wiki/PyYAMLDocumentation>`_ for loading YAML files as Python dict.
|
PypiClean
|
/c65ryu-5.0.0.tar.gz/c65ryu-5.0.0/ryu/app/rest_qos.py
|
import logging
import json
import re
from ryu.app import conf_switch_key as cs_key
from ryu.app.wsgi import ControllerBase
from ryu.app.wsgi import Response
from ryu.app.wsgi import route
from ryu.app.wsgi import WSGIApplication
from ryu.base import app_manager
from ryu.controller import conf_switch
from ryu.controller import ofp_event
from ryu.controller import dpset
from ryu.controller.handler import set_ev_cls
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.exception import OFPUnknownVersion
from ryu.lib import dpid as dpid_lib
from ryu.lib import mac
from ryu.lib import ofctl_v1_0
from ryu.lib import ofctl_v1_2
from ryu.lib import ofctl_v1_3
from ryu.lib.ovs import bridge
from ryu.ofproto import ofproto_v1_0
from ryu.ofproto import ofproto_v1_2
from ryu.ofproto import ofproto_v1_3
from ryu.ofproto import ofproto_v1_3_parser
from ryu.ofproto import ether
from ryu.ofproto import inet
# =============================
# REST API
# =============================
#
# Note: specify switch and vlan group, as follows.
# {switch-id} : 'all' or switchID
# {vlan-id} : 'all' or vlanID
#
# about queue status
#
# get status of queue
# GET /qos/queue/status/{switch-id}
#
# about queues
# get queue configurations
# GET /qos/queue/{switch-id}
#
# set a queue to the switches
# POST /qos/queue/{switch-id}
#
# request body format:
# {"port_name":"<name of port>",
# "type": "<linux-htb or linux-other>",
# "max-rate": "<int>",
# "queues":[{"max_rate": "<int>", "min_rate": "<int>"},...]}
#
# Note: This operation overrides
# previous configurations.
# Note: Queue configurations are available for
# Open vSwitch.
# Note: port_name is an optional argument.
# If the port_name argument is not passed,
# all ports are targets for configuration.
#
# delete queue
# DELETE /qos/queue/{switch-id}
#
# Note: This operation deletes the relation of the qos record from
# the qos column in the Port table. Therefore,
# QoS records and Queue records will remain.
#
# about qos rules
#
# get rules of qos
# * for no vlan
# GET /qos/rules/{switch-id}
#
# * for specific vlan group
# GET /qos/rules/{switch-id}/{vlan-id}
#
# set a qos rule
#
# QoS rules form a processing pipeline:
# entries are registered in the first table (by default table id 0),
# their actions are applied, and processing continues in the next table.
#
# * for no vlan
# POST /qos/rules/{switch-id}
#
# * for specific vlan group
# POST /qos/rules/{switch-id}/{vlan-id}
#
# request body format:
# {"priority": "<value>",
# "match": {"<field1>": "<value1>", "<field2>": "<value2>",...},
# "actions": {"<action1>": "<value1>", "<action2>": "<value2>",...}
# }
#
# Description
# * priority field
# <value>
# "0 to 65533"
#
# Note: When "priority" has not been set up,
# "priority: 1" is set to "priority".
#
# * match field
# <field> : <value>
# "in_port" : "<int>"
# "dl_src" : "<xx:xx:xx:xx:xx:xx>"
# "dl_dst" : "<xx:xx:xx:xx:xx:xx>"
# "dl_type" : "<ARP or IPv4 or IPv6>"
# "nw_src" : "<A.B.C.D/M>"
# "nw_dst" : "<A.B.C.D/M>"
# "ipv6_src": "<xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/M>"
# "ipv6_dst": "<xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/M>"
# "nw_proto": "<TCP or UDP or ICMP or ICMPv6>"
# "tp_src" : "<int>"
# "tp_dst" : "<int>"
# "ip_dscp" : "<int>"
#
# * actions field
# <field> : <value>
# "mark": <dscp-value>
# sets the IPv4 ToS/DSCP field to tos.
# "meter": <meter-id>
# apply meter entry
# "queue": <queue-id>
# register queue specified by queue-id
#
# Note: When "actions" has not been set up,
# "queue: 0" is set to "actions".
#
# delete a qos rule
# * for no vlan
# DELETE /qos/rules/{switch-id}
#
# * for specific vlan group
# DELETE /qos/rules/{switch-id}/{vlan-id}
#
# request body format:
# {"<field>":"<value>"}
#
# <field> : <value>
# "qos_id" : "<int>" or "all"
#
# about meter entries
#
# set a meter entry
# POST /qos/meter/{switch-id}
#
# request body format:
# {"meter_id": <int>,
# "bands":[{"action": "<DROP or DSCP_REMARK>",
# "flag": "<KBPS or PKTPS or BURST or STATS"
# "burst_size": <int>,
# "rate": <int>,
# "prec_level": <int>},...]}
#
# delete a meter entry
# DELETE /qos/meter/{switch-id}
#
# request body format:
# {"<field>":"<value>"}
#
# <field> : <value>
# "meter_id" : "<int>"
#
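# Illustrative usage (not part of the original spec): assuming the REST server
# listens on 127.0.0.1:8080 and the datapath id is 0000000000000001, a queue
# and a rule could be configured roughly as follows:
#
#   curl -X POST -d '{"port_name": "s1-eth1", "type": "linux-htb",
#                     "max_rate": "1000000",
#                     "queues": [{"max_rate": "500000"}, {"min_rate": "800000"}]}' \
#     http://127.0.0.1:8080/qos/queue/0000000000000001
#
#   curl -X POST -d '{"match": {"nw_dst": "10.0.0.1", "nw_proto": "UDP",
#                               "tp_dst": "5002"},
#                     "actions": {"queue": "1"}}' \
#     http://127.0.0.1:8080/qos/rules/0000000000000001
#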
SWITCHID_PATTERN = dpid_lib.DPID_PATTERN + r'|all'
VLANID_PATTERN = r'[0-9]{1,4}|all'
QOS_TABLE_ID = 0
REST_ALL = 'all'
REST_SWITCHID = 'switch_id'
REST_COMMAND_RESULT = 'command_result'
REST_PRIORITY = 'priority'
REST_VLANID = 'vlan_id'
REST_PORT_NAME = 'port_name'
REST_QUEUE_TYPE = 'type'
REST_QUEUE_MAX_RATE = 'max_rate'
REST_QUEUE_MIN_RATE = 'min_rate'
REST_QUEUES = 'queues'
REST_QOS = 'qos'
REST_QOS_ID = 'qos_id'
REST_COOKIE = 'cookie'
REST_MATCH = 'match'
REST_IN_PORT = 'in_port'
REST_SRC_MAC = 'dl_src'
REST_DST_MAC = 'dl_dst'
REST_DL_TYPE = 'dl_type'
REST_DL_TYPE_ARP = 'ARP'
REST_DL_TYPE_IPV4 = 'IPv4'
REST_DL_TYPE_IPV6 = 'IPv6'
REST_DL_VLAN = 'dl_vlan'
REST_SRC_IP = 'nw_src'
REST_DST_IP = 'nw_dst'
REST_SRC_IPV6 = 'ipv6_src'
REST_DST_IPV6 = 'ipv6_dst'
REST_NW_PROTO = 'nw_proto'
REST_NW_PROTO_TCP = 'TCP'
REST_NW_PROTO_UDP = 'UDP'
REST_NW_PROTO_ICMP = 'ICMP'
REST_NW_PROTO_ICMPV6 = 'ICMPv6'
REST_TP_SRC = 'tp_src'
REST_TP_DST = 'tp_dst'
REST_DSCP = 'ip_dscp'
REST_ACTION = 'actions'
REST_ACTION_QUEUE = 'queue'
REST_ACTION_MARK = 'mark'
REST_ACTION_METER = 'meter'
REST_METER_ID = 'meter_id'
REST_METER_BURST_SIZE = 'burst_size'
REST_METER_RATE = 'rate'
REST_METER_PREC_LEVEL = 'prec_level'
REST_METER_BANDS = 'bands'
REST_METER_ACTION_DROP = 'drop'
REST_METER_ACTION_REMARK = 'remark'
DEFAULT_FLOW_PRIORITY = 0
QOS_PRIORITY_MAX = ofproto_v1_3_parser.UINT16_MAX - 1
QOS_PRIORITY_MIN = 1
VLANID_NONE = 0
VLANID_MIN = 2
VLANID_MAX = 4094
COOKIE_SHIFT_VLANID = 32
BASE_URL = '/qos'
REQUIREMENTS = {'switchid': SWITCHID_PATTERN,
'vlanid': VLANID_PATTERN}
LOG = logging.getLogger(__name__)
class RestQoSAPI(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION,
ofproto_v1_2.OFP_VERSION,
ofproto_v1_3.OFP_VERSION]
_CONTEXTS = {
'dpset': dpset.DPSet,
'conf_switch': conf_switch.ConfSwitchSet,
'wsgi': WSGIApplication}
def __init__(self, *args, **kwargs):
super(RestQoSAPI, self).__init__(*args, **kwargs)
# logger configure
QoSController.set_logger(self.logger)
self.cs = kwargs['conf_switch']
self.dpset = kwargs['dpset']
wsgi = kwargs['wsgi']
self.waiters = {}
self.data = {}
self.data['dpset'] = self.dpset
self.data['waiters'] = self.waiters
wsgi.registory['QoSController'] = self.data
wsgi.register(QoSController, self.data)
def stats_reply_handler(self, ev):
msg = ev.msg
dp = msg.datapath
if dp.id not in self.waiters:
return
if msg.xid not in self.waiters[dp.id]:
return
lock, msgs = self.waiters[dp.id][msg.xid]
msgs.append(msg)
flags = 0
if dp.ofproto.OFP_VERSION == ofproto_v1_0.OFP_VERSION or \
dp.ofproto.OFP_VERSION == ofproto_v1_2.OFP_VERSION:
flags = dp.ofproto.OFPSF_REPLY_MORE
elif dp.ofproto.OFP_VERSION == ofproto_v1_3.OFP_VERSION:
flags = dp.ofproto.OFPMPF_REPLY_MORE
if msg.flags & flags:
return
del self.waiters[dp.id][msg.xid]
lock.set()
@set_ev_cls(conf_switch.EventConfSwitchSet)
def conf_switch_set_handler(self, ev):
if ev.key == cs_key.OVSDB_ADDR:
QoSController.set_ovsdb_addr(ev.dpid, ev.value)
else:
QoSController._LOGGER.debug("unknown event: %s", ev)
@set_ev_cls(conf_switch.EventConfSwitchDel)
def conf_switch_del_handler(self, ev):
if ev.key == cs_key.OVSDB_ADDR:
QoSController.delete_ovsdb_addr(ev.dpid)
else:
QoSController._LOGGER.debug("unknown event: %s", ev)
@set_ev_cls(dpset.EventDP, dpset.DPSET_EV_DISPATCHER)
def handler_datapath(self, ev):
if ev.enter:
QoSController.regist_ofs(ev.dp, self.CONF)
else:
QoSController.unregist_ofs(ev.dp)
# for OpenFlow version1.0
@set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
def stats_reply_handler_v1_0(self, ev):
self.stats_reply_handler(ev)
# for OpenFlow version1.2 or later
@set_ev_cls(ofp_event.EventOFPStatsReply, MAIN_DISPATCHER)
def stats_reply_handler_v1_2(self, ev):
self.stats_reply_handler(ev)
# for OpenFlow version1.2 or later
@set_ev_cls(ofp_event.EventOFPQueueStatsReply, MAIN_DISPATCHER)
def queue_stats_reply_handler_v1_2(self, ev):
self.stats_reply_handler(ev)
# for OpenFlow version1.2 or later
@set_ev_cls(ofp_event.EventOFPMeterStatsReply, MAIN_DISPATCHER)
def meter_stats_reply_handler_v1_2(self, ev):
self.stats_reply_handler(ev)
class QoSOfsList(dict):
def __init__(self):
super(QoSOfsList, self).__init__()
def get_ofs(self, dp_id):
if len(self) == 0:
raise ValueError('qos sw is not connected.')
dps = {}
if dp_id == REST_ALL:
dps = self
else:
try:
dpid = dpid_lib.str_to_dpid(dp_id)
except:
raise ValueError('Invalid switchID.')
if dpid in self:
dps = {dpid: self[dpid]}
else:
msg = 'qos sw is not connected. : switchID=%s' % dp_id
raise ValueError(msg)
return dps
class QoSController(ControllerBase):
_OFS_LIST = QoSOfsList()
_LOGGER = None
def __init__(self, req, link, data, **config):
super(QoSController, self).__init__(req, link, data, **config)
self.dpset = data['dpset']
self.waiters = data['waiters']
@classmethod
def set_logger(cls, logger):
cls._LOGGER = logger
cls._LOGGER.propagate = False
hdlr = logging.StreamHandler()
fmt_str = '[QoS][%(levelname)s] %(message)s'
hdlr.setFormatter(logging.Formatter(fmt_str))
cls._LOGGER.addHandler(hdlr)
@staticmethod
def regist_ofs(dp, CONF):
if dp.id in QoSController._OFS_LIST:
return
dpid_str = dpid_lib.dpid_to_str(dp.id)
try:
f_ofs = QoS(dp, CONF)
f_ofs.set_default_flow()
except OFPUnknownVersion as message:
QoSController._LOGGER.info('dpid=%s: %s',
dpid_str, message)
return
QoSController._OFS_LIST.setdefault(dp.id, f_ofs)
QoSController._LOGGER.info('dpid=%s: Join qos switch.',
dpid_str)
@staticmethod
def unregist_ofs(dp):
if dp.id in QoSController._OFS_LIST:
del QoSController._OFS_LIST[dp.id]
QoSController._LOGGER.info('dpid=%s: Leave qos switch.',
dpid_lib.dpid_to_str(dp.id))
@staticmethod
def set_ovsdb_addr(dpid, value):
ofs = QoSController._OFS_LIST.get(dpid, None)
if ofs is not None:
ofs.set_ovsdb_addr(dpid, value)
@staticmethod
def delete_ovsdb_addr(dpid):
ofs = QoSController._OFS_LIST.get(dpid, None)
if ofs is not None:
ofs.set_ovsdb_addr(dpid, None)
@route('qos_switch', BASE_URL + '/queue/{switchid}',
methods=['GET'], requirements=REQUIREMENTS)
def get_queue(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'get_queue', None)
@route('qos_switch', BASE_URL + '/queue/{switchid}',
methods=['POST'], requirements=REQUIREMENTS)
def set_queue(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'set_queue', None)
@route('qos_switch', BASE_URL + '/queue/{switchid}',
methods=['DELETE'], requirements=REQUIREMENTS)
def delete_queue(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'delete_queue', None)
@route('qos_switch', BASE_URL + '/queue/status/{switchid}',
methods=['GET'], requirements=REQUIREMENTS)
def get_status(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'get_status', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}',
methods=['GET'], requirements=REQUIREMENTS)
def get_qos(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'get_qos', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}/{vlanid}',
methods=['GET'], requirements=REQUIREMENTS)
def get_vlan_qos(self, req, switchid, vlanid, **_kwargs):
return self._access_switch(req, switchid, vlanid,
'get_qos', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}',
methods=['POST'], requirements=REQUIREMENTS)
def set_qos(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'set_qos', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}/{vlanid}',
methods=['POST'], requirements=REQUIREMENTS)
def set_vlan_qos(self, req, switchid, vlanid, **_kwargs):
return self._access_switch(req, switchid, vlanid,
'set_qos', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}',
methods=['DELETE'], requirements=REQUIREMENTS)
def delete_qos(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'delete_qos', self.waiters)
@route('qos_switch', BASE_URL + '/rules/{switchid}/{vlanid}',
methods=['DELETE'], requirements=REQUIREMENTS)
def delete_vlan_qos(self, req, switchid, vlanid, **_kwargs):
return self._access_switch(req, switchid, vlanid,
'delete_qos', self.waiters)
@route('qos_switch', BASE_URL + '/meter/{switchid}',
methods=['GET'], requirements=REQUIREMENTS)
def get_meter(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'get_meter', self.waiters)
@route('qos_switch', BASE_URL + '/meter/{switchid}',
methods=['POST'], requirements=REQUIREMENTS)
def set_meter(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'set_meter', self.waiters)
@route('qos_switch', BASE_URL + '/meter/{switchid}',
methods=['DELETE'], requirements=REQUIREMENTS)
def delete_meter(self, req, switchid, **_kwargs):
return self._access_switch(req, switchid, VLANID_NONE,
'delete_meter', self.waiters)
def _access_switch(self, req, switchid, vlan_id, func, waiters):
try:
rest = req.json if req.body else {}
except ValueError:
QoSController._LOGGER.debug('invalid syntax %s', req.body)
return Response(status=400)
try:
dps = self._OFS_LIST.get_ofs(switchid)
vid = QoSController._conv_toint_vlanid(vlan_id)
except ValueError as message:
return Response(status=400, body=str(message))
msgs = []
for f_ofs in dps.values():
function = getattr(f_ofs, func)
try:
if waiters is not None:
msg = function(rest, vid, waiters)
else:
msg = function(rest, vid)
except ValueError as message:
return Response(status=400, body=str(message))
msgs.append(msg)
body = json.dumps(msgs)
return Response(content_type='application/json', body=body)
@staticmethod
def _conv_toint_vlanid(vlan_id):
if vlan_id != REST_ALL:
vlan_id = int(vlan_id)
if (vlan_id != VLANID_NONE and
(vlan_id < VLANID_MIN or VLANID_MAX < vlan_id)):
msg = 'Invalid {vlan_id} value. Set [%d-%d]' % (VLANID_MIN,
VLANID_MAX)
raise ValueError(msg)
return vlan_id
class QoS(object):
_OFCTL = {ofproto_v1_0.OFP_VERSION: ofctl_v1_0,
ofproto_v1_2.OFP_VERSION: ofctl_v1_2,
ofproto_v1_3.OFP_VERSION: ofctl_v1_3}
def __init__(self, dp, CONF):
super(QoS, self).__init__()
self.vlan_list = {}
self.vlan_list[VLANID_NONE] = 0 # for VLAN=None
self.dp = dp
self.version = dp.ofproto.OFP_VERSION
# Dictionary of port name to Queue config.
# e.g.)
# self.queue_list = {
# "s1-eth1": {
# "0": {
# "config": {
# "max-rate": "600000"
# }
# },
# "1": {
# "config": {
# "min-rate": "900000"
# }
# }
# }
# }
self.queue_list = {}
self.CONF = CONF
self.ovsdb_addr = None
self.ovs_bridge = None
if self.version not in self._OFCTL:
raise OFPUnknownVersion(version=self.version)
self.ofctl = self._OFCTL[self.version]
def set_default_flow(self):
if self.version == ofproto_v1_0.OFP_VERSION:
return
cookie = 0
priority = DEFAULT_FLOW_PRIORITY
actions = [{'type': 'GOTO_TABLE',
'table_id': QOS_TABLE_ID + 1}]
flow = self._to_of_flow(cookie=cookie,
priority=priority,
match={},
actions=actions)
cmd = self.dp.ofproto.OFPFC_ADD
self.ofctl.mod_flow_entry(self.dp, flow, cmd)
def set_ovsdb_addr(self, dpid, ovsdb_addr):
old_address = self.ovsdb_addr
if old_address == ovsdb_addr:
return
elif ovsdb_addr is None:
# Deleting the OVSDB address was requested; drop the bridge reference.
if self.ovs_bridge:
self.ovs_bridge = None
return
ovs_bridge = bridge.OVSBridge(self.CONF, dpid, ovsdb_addr)
try:
ovs_bridge.init()
except:
raise ValueError('ovsdb addr is not available.')
self.ovsdb_addr = ovsdb_addr
self.ovs_bridge = ovs_bridge
def _update_vlan_list(self, vlan_list):
for vlan_id in self.vlan_list.keys():
if vlan_id is not VLANID_NONE and vlan_id not in vlan_list:
del self.vlan_list[vlan_id]
def _get_cookie(self, vlan_id):
if vlan_id == REST_ALL:
vlan_ids = self.vlan_list.keys()
else:
vlan_ids = [vlan_id]
cookie_list = []
for vlan_id in vlan_ids:
self.vlan_list.setdefault(vlan_id, 0)
self.vlan_list[vlan_id] += 1
self.vlan_list[vlan_id] &= ofproto_v1_3_parser.UINT32_MAX
cookie = (vlan_id << COOKIE_SHIFT_VLANID) + \
self.vlan_list[vlan_id]
cookie_list.append([cookie, vlan_id])
return cookie_list
@staticmethod
def _cookie_to_qosid(cookie):
return cookie & ofproto_v1_3_parser.UINT32_MAX
# REST command template
def rest_command(func):
def _rest_command(*args, **kwargs):
key, value = func(*args, **kwargs)
switch_id = dpid_lib.dpid_to_str(args[0].dp.id)
return {REST_SWITCHID: switch_id,
key: value}
return _rest_command
@rest_command
def get_status(self, req, vlan_id, waiters):
if self.version == ofproto_v1_0.OFP_VERSION:
raise ValueError('get_status operation is not supported')
msgs = self.ofctl.get_queue_stats(self.dp, waiters)
return REST_COMMAND_RESULT, msgs
@rest_command
def get_queue(self, rest, vlan_id):
if len(self.queue_list):
msg = {'result': 'success',
'details': self.queue_list}
else:
msg = {'result': 'failure',
'details': 'Queue does not exist.'}
return REST_COMMAND_RESULT, msg
@rest_command
def set_queue(self, rest, vlan_id):
if self.ovs_bridge is None:
msg = {'result': 'failure',
'details': 'ovs_bridge does not exist'}
return REST_COMMAND_RESULT, msg
port_name = rest.get(REST_PORT_NAME, None)
vif_ports = self.ovs_bridge.get_port_name_list()
if port_name is not None:
if port_name not in vif_ports:
raise ValueError('%s port does not exist' % port_name)
vif_ports = [port_name]
queue_list = {}
queue_type = rest.get(REST_QUEUE_TYPE, 'linux-htb')
parent_max_rate = rest.get(REST_QUEUE_MAX_RATE, None)
queues = rest.get(REST_QUEUES, [])
queue_id = 0
queue_config = []
for queue in queues:
max_rate = queue.get(REST_QUEUE_MAX_RATE, None)
min_rate = queue.get(REST_QUEUE_MIN_RATE, None)
if max_rate is None and min_rate is None:
raise ValueError('max_rate or min_rate must be specified')
config = {}
if max_rate is not None:
config['max-rate'] = max_rate
if min_rate is not None:
config['min-rate'] = min_rate
if len(config):
queue_config.append(config)
queue_list[queue_id] = {'config': config}
queue_id += 1
for port_name in vif_ports:
try:
self.ovs_bridge.set_qos(port_name, type=queue_type,
max_rate=parent_max_rate,
queues=queue_config)
except Exception as msg:
raise ValueError(msg)
self.queue_list[port_name] = queue_list
msg = {'result': 'success',
'details': queue_list}
return REST_COMMAND_RESULT, msg
def _delete_queue(self):
if self.ovs_bridge is None:
return False
vif_ports = self.ovs_bridge.get_external_ports()
for port in vif_ports:
self.ovs_bridge.del_qos(port.port_name)
return True
@rest_command
def delete_queue(self, rest, vlan_id):
if self._delete_queue():
msg = 'success'
self.queue_list.clear()
else:
msg = 'failure'
return REST_COMMAND_RESULT, msg
@rest_command
def set_qos(self, rest, vlan_id, waiters):
msgs = []
cookie_list = self._get_cookie(vlan_id)
for cookie, vid in cookie_list:
msg = self._set_qos(cookie, rest, waiters, vid)
msgs.append(msg)
return REST_COMMAND_RESULT, msgs
def _set_qos(self, cookie, rest, waiters, vlan_id):
match_value = rest[REST_MATCH]
if vlan_id:
match_value[REST_DL_VLAN] = vlan_id
priority = int(rest.get(REST_PRIORITY, QOS_PRIORITY_MIN))
if (QOS_PRIORITY_MAX < priority):
raise ValueError('Invalid priority value. Set [%d-%d]'
% (QOS_PRIORITY_MIN, QOS_PRIORITY_MAX))
match = Match.to_openflow(match_value)
actions = []
action = rest.get(REST_ACTION, None)
if action is not None:
if REST_ACTION_MARK in action:
actions.append({'type': 'SET_FIELD',
'field': REST_DSCP,
'value': int(action[REST_ACTION_MARK])})
if REST_ACTION_METER in action:
actions.append({'type': 'METER',
'meter_id': action[REST_ACTION_METER]})
if REST_ACTION_QUEUE in action:
actions.append({'type': 'SET_QUEUE',
'queue_id': action[REST_ACTION_QUEUE]})
else:
actions.append({'type': 'SET_QUEUE',
'queue_id': 0})
actions.append({'type': 'GOTO_TABLE',
'table_id': QOS_TABLE_ID + 1})
flow = self._to_of_flow(cookie=cookie, priority=priority,
match=match, actions=actions)
cmd = self.dp.ofproto.OFPFC_ADD
try:
self.ofctl.mod_flow_entry(self.dp, flow, cmd)
except:
raise ValueError('Invalid rule parameter.')
qos_id = QoS._cookie_to_qosid(cookie)
msg = {'result': 'success',
'details': 'QoS added. : qos_id=%d' % qos_id}
if vlan_id != VLANID_NONE:
msg.setdefault(REST_VLANID, vlan_id)
return msg
@rest_command
def get_qos(self, rest, vlan_id, waiters):
rules = {}
msgs = self.ofctl.get_flow_stats(self.dp, waiters)
if str(self.dp.id) in msgs:
flow_stats = msgs[str(self.dp.id)]
for flow_stat in flow_stats:
if flow_stat['table_id'] != QOS_TABLE_ID:
continue
priority = flow_stat[REST_PRIORITY]
if priority != DEFAULT_FLOW_PRIORITY:
vid = flow_stat[REST_MATCH].get(REST_DL_VLAN, VLANID_NONE)
if vlan_id == REST_ALL or vlan_id == vid:
rule = self._to_rest_rule(flow_stat)
rules.setdefault(vid, [])
rules[vid].append(rule)
get_data = []
for vid, rule in rules.items():
if vid == VLANID_NONE:
vid_data = {REST_QOS: rule}
else:
vid_data = {REST_VLANID: vid, REST_QOS: rule}
get_data.append(vid_data)
return REST_COMMAND_RESULT, get_data
@rest_command
def delete_qos(self, rest, vlan_id, waiters):
try:
if rest[REST_QOS_ID] == REST_ALL:
qos_id = REST_ALL
else:
qos_id = int(rest[REST_QOS_ID])
except:
raise ValueError('Invalid qos id.')
vlan_list = []
delete_list = []
msgs = self.ofctl.get_flow_stats(self.dp, waiters)
if str(self.dp.id) in msgs:
flow_stats = msgs[str(self.dp.id)]
for flow_stat in flow_stats:
cookie = flow_stat[REST_COOKIE]
ruleid = QoS._cookie_to_qosid(cookie)
priority = flow_stat[REST_PRIORITY]
dl_vlan = flow_stat[REST_MATCH].get(REST_DL_VLAN, VLANID_NONE)
if priority != DEFAULT_FLOW_PRIORITY:
if ((qos_id == REST_ALL or qos_id == ruleid) and
(vlan_id == dl_vlan or vlan_id == REST_ALL)):
match = Match.to_mod_openflow(flow_stat[REST_MATCH])
delete_list.append([cookie, priority, match])
else:
if dl_vlan not in vlan_list:
vlan_list.append(dl_vlan)
self._update_vlan_list(vlan_list)
if len(delete_list) == 0:
msg_details = 'QoS rule does not exist.'
if qos_id != REST_ALL:
msg_details += ' : QoS ID=%d' % qos_id
msg = {'result': 'failure',
'details': msg_details}
else:
cmd = self.dp.ofproto.OFPFC_DELETE_STRICT
actions = []
delete_ids = {}
for cookie, priority, match in delete_list:
flow = self._to_of_flow(cookie=cookie, priority=priority,
match=match, actions=actions)
self.ofctl.mod_flow_entry(self.dp, flow, cmd)
vid = match.get(REST_DL_VLAN, VLANID_NONE)
rule_id = QoS._cookie_to_qosid(cookie)
delete_ids.setdefault(vid, '')
delete_ids[vid] += (('%d' if delete_ids[vid] == ''
else ',%d') % rule_id)
msg = []
for vid, rule_ids in delete_ids.items():
del_msg = {'result': 'success',
'details': ' deleted. : QoS ID=%s' % rule_ids}
if vid != VLANID_NONE:
del_msg.setdefault(REST_VLANID, vid)
msg.append(del_msg)
return REST_COMMAND_RESULT, msg
@rest_command
def set_meter(self, rest, vlan_id, waiters):
if self.version == ofproto_v1_0.OFP_VERSION:
raise ValueError('set_meter operation is not supported')
msgs = []
msg = self._set_meter(rest, waiters)
msgs.append(msg)
return REST_COMMAND_RESULT, msgs
def _set_meter(self, rest, waiters):
cmd = self.dp.ofproto.OFPMC_ADD
try:
self.ofctl.mod_meter_entry(self.dp, rest, cmd)
except:
raise ValueError('Invalid meter parameter.')
msg = {'result': 'success',
'details': 'Meter added. : Meter ID=%s' %
rest[REST_METER_ID]}
return msg
@rest_command
def get_meter(self, rest, vlan_id, waiters):
if (self.version == ofproto_v1_0.OFP_VERSION or
self.version == ofproto_v1_2.OFP_VERSION):
raise ValueError('get_meter operation is not supported')
msgs = self.ofctl.get_meter_stats(self.dp, waiters)
return REST_COMMAND_RESULT, msgs
@rest_command
def delete_meter(self, rest, vlan_id, waiters):
if (self.version == ofproto_v1_0.OFP_VERSION or
self.version == ofproto_v1_2.OFP_VERSION):
raise ValueError('delete_meter operation is not supported')
cmd = self.dp.ofproto.OFPMC_DELETE
try:
self.ofctl.mod_meter_entry(self.dp, rest, cmd)
except:
raise ValueError('Invalid meter parameter.')
msg = {'result': 'success',
'details': 'Meter deleted. : Meter ID=%s' %
rest[REST_METER_ID]}
return REST_COMMAND_RESULT, msg
def _to_of_flow(self, cookie, priority, match, actions):
flow = {'cookie': cookie,
'priority': priority,
'flags': 0,
'idle_timeout': 0,
'hard_timeout': 0,
'match': match,
'actions': actions}
return flow
def _to_rest_rule(self, flow):
ruleid = QoS._cookie_to_qosid(flow[REST_COOKIE])
rule = {REST_QOS_ID: ruleid}
rule.update({REST_PRIORITY: flow[REST_PRIORITY]})
rule.update(Match.to_rest(flow))
rule.update(Action.to_rest(flow))
return rule
class Match(object):
_CONVERT = {REST_DL_TYPE:
{REST_DL_TYPE_ARP: ether.ETH_TYPE_ARP,
REST_DL_TYPE_IPV4: ether.ETH_TYPE_IP,
REST_DL_TYPE_IPV6: ether.ETH_TYPE_IPV6},
REST_NW_PROTO:
{REST_NW_PROTO_TCP: inet.IPPROTO_TCP,
REST_NW_PROTO_UDP: inet.IPPROTO_UDP,
REST_NW_PROTO_ICMP: inet.IPPROTO_ICMP,
REST_NW_PROTO_ICMPV6: inet.IPPROTO_ICMPV6}}
@staticmethod
def to_openflow(rest):
def __inv_combi(msg):
raise ValueError('Invalid combination: [%s]' % msg)
def __inv_2and1(*args):
__inv_combi('%s=%s and %s' % (args[0], args[1], args[2]))
def __inv_2and2(*args):
__inv_combi('%s=%s and %s=%s' % (
args[0], args[1], args[2], args[3]))
def __inv_1and1(*args):
__inv_combi('%s and %s' % (args[0], args[1]))
def __inv_1and2(*args):
__inv_combi('%s and %s=%s' % (args[0], args[1], args[2]))
match = {}
# error check
dl_type = rest.get(REST_DL_TYPE)
nw_proto = rest.get(REST_NW_PROTO)
if dl_type is not None:
if dl_type == REST_DL_TYPE_ARP:
if REST_SRC_IPV6 in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_ARP, REST_SRC_IPV6)
if REST_DST_IPV6 in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_ARP, REST_DST_IPV6)
if REST_DSCP in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_ARP, REST_DSCP)
if nw_proto:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_ARP, REST_NW_PROTO)
elif dl_type == REST_DL_TYPE_IPV4:
if REST_SRC_IPV6 in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_IPV4, REST_SRC_IPV6)
if REST_DST_IPV6 in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_IPV4, REST_DST_IPV6)
if nw_proto == REST_NW_PROTO_ICMPV6:
__inv_2and2(
REST_DL_TYPE, REST_DL_TYPE_IPV4,
REST_NW_PROTO, REST_NW_PROTO_ICMPV6)
elif dl_type == REST_DL_TYPE_IPV6:
if REST_SRC_IP in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_IPV6, REST_SRC_IP)
if REST_DST_IP in rest:
__inv_2and1(
REST_DL_TYPE, REST_DL_TYPE_IPV6, REST_DST_IP)
if nw_proto == REST_NW_PROTO_ICMP:
__inv_2and2(
REST_DL_TYPE, REST_DL_TYPE_IPV6,
REST_NW_PROTO, REST_NW_PROTO_ICMP)
else:
raise ValueError('Unknown dl_type : %s' % dl_type)
else:
if REST_SRC_IP in rest:
if REST_SRC_IPV6 in rest:
__inv_1and1(REST_SRC_IP, REST_SRC_IPV6)
if REST_DST_IPV6 in rest:
__inv_1and1(REST_SRC_IP, REST_DST_IPV6)
if nw_proto == REST_NW_PROTO_ICMPV6:
__inv_1and2(
REST_SRC_IP, REST_NW_PROTO, REST_NW_PROTO_ICMPV6)
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV4
elif REST_DST_IP in rest:
if REST_SRC_IPV6 in rest:
__inv_1and1(REST_DST_IP, REST_SRC_IPV6)
if REST_DST_IPV6 in rest:
__inv_1and1(REST_DST_IP, REST_DST_IPV6)
if nw_proto == REST_NW_PROTO_ICMPV6:
__inv_1and2(
REST_DST_IP, REST_NW_PROTO, REST_NW_PROTO_ICMPV6)
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV4
elif REST_SRC_IPV6 in rest:
if nw_proto == REST_NW_PROTO_ICMP:
__inv_1and2(
REST_SRC_IPV6, REST_NW_PROTO, REST_NW_PROTO_ICMP)
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV6
elif REST_DST_IPV6 in rest:
if nw_proto == REST_NW_PROTO_ICMP:
__inv_1and2(
REST_DST_IPV6, REST_NW_PROTO, REST_NW_PROTO_ICMP)
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV6
elif REST_DSCP in rest:
# Apply dl_type IPv4 if dl_type is not specified
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV4
else:
if nw_proto == REST_NW_PROTO_ICMP:
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV4
elif nw_proto == REST_NW_PROTO_ICMPV6:
rest[REST_DL_TYPE] = REST_DL_TYPE_IPV6
elif nw_proto == REST_NW_PROTO_TCP or \
nw_proto == REST_NW_PROTO_UDP:
raise ValueError('no dl_type was specified')
else:
raise ValueError('Unknown nw_proto: %s' % nw_proto)
for key, value in rest.items():
if key in Match._CONVERT:
if value in Match._CONVERT[key]:
match.setdefault(key, Match._CONVERT[key][value])
else:
raise ValueError('Invalid rule parameter. : key=%s' % key)
else:
match.setdefault(key, value)
return match
@staticmethod
def to_rest(openflow):
of_match = openflow[REST_MATCH]
mac_dontcare = mac.haddr_to_str(mac.DONTCARE)
ip_dontcare = '0.0.0.0'
ipv6_dontcare = '::'
match = {}
for key, value in of_match.items():
if key == REST_SRC_MAC or key == REST_DST_MAC:
if value == mac_dontcare:
continue
elif key == REST_SRC_IP or key == REST_DST_IP:
if value == ip_dontcare:
continue
elif key == REST_SRC_IPV6 or key == REST_DST_IPV6:
if value == ipv6_dontcare:
continue
elif value == 0:
continue
if key in Match._CONVERT:
conv = Match._CONVERT[key]
conv = dict((value, key) for key, value in conv.items())
match.setdefault(key, conv[value])
else:
match.setdefault(key, value)
return match
@staticmethod
def to_mod_openflow(of_match):
mac_dontcare = mac.haddr_to_str(mac.DONTCARE)
ip_dontcare = '0.0.0.0'
ipv6_dontcare = '::'
match = {}
for key, value in of_match.items():
if key == REST_SRC_MAC or key == REST_DST_MAC:
if value == mac_dontcare:
continue
elif key == REST_SRC_IP or key == REST_DST_IP:
if value == ip_dontcare:
continue
elif key == REST_SRC_IPV6 or key == REST_DST_IPV6:
if value == ipv6_dontcare:
continue
elif value == 0:
continue
match.setdefault(key, value)
return match
class Action(object):
@staticmethod
def to_rest(flow):
if REST_ACTION in flow:
actions = []
for act in flow[REST_ACTION]:
field_value = re.search(r'SET_FIELD: \{ip_dscp:(\d+)', act)
if field_value:
actions.append({REST_ACTION_MARK: field_value.group(1)})
meter_value = re.search(r'METER:(\d+)', act)
if meter_value:
actions.append({REST_ACTION_METER: meter_value.group(1)})
queue_value = re.search(r'SET_QUEUE:(\d+)', act)
if queue_value:
actions.append({REST_ACTION_QUEUE: queue_value.group(1)})
action = {REST_ACTION: actions}
else:
action = {REST_ACTION: 'Unknown action type.'}
return action
|
PypiClean
|
/levis_pdfparse-0.1.0-py3-none-any.whl/scipdf/features/text_utils.py
|
import numpy as np
import pandas as pd
import textstat
import spacy
from collections import Counter
from itertools import groupby
nlp = spacy.load("en_core_web_sm")
PRESENT_TENSE_VERB_LIST = ["VB", "VBP", "VBZ", "VBG"]
VERB_LIST = ["VB", "VBP", "VBZ", "VBG", "VBN", "VBD"]
NOUN_LIST = ["NNP", "NNPS"]
SECTIONS_MAPS = {
"Authors": "Authors",
"AUTHORS": "AUTHORS",
"Abstract": "Abstract",
"ABSTRACT": "Abstract",
"Date": "Date",
"DATE": "DATE",
"INTRODUCTION": "Introduction",
"MATERIALS AND METHODS": "Methods",
"Materials and methods": "Methods",
"METHODS": "Methods",
"RESULTS": "Results",
"CONCLUSIONS": "Conclusions",
"CONCLUSIONS AND FUTURE APPLICATIONS": "Conclusions",
"DISCUSSION": "Discussion",
"ACKNOWLEDGMENTS": "Acknowledgement",
"TABLES": "Tables",
"Tabnles": "Tables",
"DISCLOSURE": "Disclosure",
"CONFLICT OF INTEREST": "Disclosure",
"Acknowledgement": "Acknowledgements",
}
def compute_readability_stats(text):
"""
Compute reading statistics of the given text
Reference: https://github.com/shivam5992/textstat
Parameters
==========
text: str, input section or abstract text
"""
try:
readability_dict = {
"flesch_reading_ease": textstat.flesch_reading_ease(text),
"smog": textstat.smog_index(text),
"flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
"coleman_liau_index": textstat.coleman_liau_index(text),
"automated_readability_index": textstat.automated_readability_index(text),
"dale_chall": textstat.dale_chall_readability_score(text),
"difficult_words": textstat.difficult_words(text),
"linsear_write": textstat.linsear_write_formula(text),
"gunning_fog": textstat.gunning_fog(text),
"text_standard": textstat.text_standard(text),
"n_syllable": textstat.syllable_count(text),
"avg_letter_per_word": textstat.avg_letter_per_word(text),
"avg_sentence_length": textstat.avg_sentence_length(text),
}
except:
readability_dict = {
"flesch_reading_ease": None,
"smog": None,
"flesch_kincaid_grade": None,
"coleman_liau_index": None,
"automated_readability_index": None,
"dale_chall": None,
"difficult_words": None,
"linsear_write": None,
"gunning_fog": None,
"text_standard": None,
"n_syllable": None,
"avg_letter_per_word": None,
"avg_sentence_length": None,
}
return readability_dict
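# Illustrative usage (the example text is made up):
#   stats = compute_readability_stats("Deep learning has changed scientific text mining.")
#   print(stats["flesch_reading_ease"], stats["n_syllable"])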
def compute_text_stats(text):
"""
Compute part of speech features from a given spacy wrapper of text
Parameters
==========
text: spacy.tokens.doc.Doc, spacy wrapper of the section or abstract text
Output
======
text_stat: dict, part of speech and text features extracted from the given text
"""
try:
pos = dict(Counter([token.pos_ for token in text]))
pos_tag = dict(
Counter([token.tag_ for token in text])
) # detailed part-of-speech
n_present_verb = sum(
[v for k, v in pos_tag.items() if k in PRESENT_TENSE_VERB_LIST]
)
n_verb = sum([v for k, v in pos_tag.items() if k in VERB_LIST])
word_shape = dict(Counter([token.shape_ for token in text])) # word shape
n_word_per_sents = [len([token for token in sent]) for sent in text.sents]
n_digits = sum([token.is_digit or token.like_num for token in text])
n_word = sum(n_word_per_sents)
n_sents = len(n_word_per_sents)
text_stats_dict = {
"pos": pos,
"pos_tag": pos_tag,
"word_shape": word_shape,
"n_word": n_word,
"n_sents": n_sents,
"n_present_verb": n_present_verb,
"n_verb": n_verb,
"n_digits": n_digits,
"percent_digits": n_digits / n_word,
"n_word_per_sents": n_word_per_sents,
"avg_word_per_sents": np.mean(n_word_per_sents),
}
except:
text_stats_dict = {
"pos": None,
"pos_tag": None,
"word_shape": None,
"n_word": None,
"n_sents": None,
"n_present_verb": None,
"n_verb": None,
"n_digits": None,
"percent_digits": None,
"n_word_per_sents": None,
"avg_word_per_sents": None,
}
return text_stats_dict
def compute_journal_features(article):
"""
Parse features about journal references from a given dictionary of parsed article e.g.
number of reference made, number of unique journal refered, minimum year of references,
maximum year of references, ...
Parameters
==========
article: dict, article dictionary parsed from GROBID and converted to dictionary
see ``pdf/parse_pdf.py`` for details of the output dictionary
Output
======
journal_features_dict: dict, dictionary of journal reference features (reference counts and reference year statistics)
"""
try:
n_reference = len(article["references"])
n_unique_journals = len(
pd.unique([a["journal"] for a in article["references"]])
)
reference_years = []
for reference in article["references"]:
year = reference["year"]
if year.isdigit():
# filter outliers
if int(year) in range(1800, 2100):
reference_years.append(int(year))
avg_ref_year = np.mean(reference_years)
median_ref_year = np.median(reference_years)
min_ref_year = np.min(reference_years)
max_ref_year = np.max(reference_years)
journal_features_dict = {
"n_reference": n_reference,
"n_unique_journals": n_unique_journals,
"avg_ref_year": avg_ref_year,
"median_ref_year": median_ref_year,
"min_ref_year": min_ref_year,
"max_ref_year": max_ref_year,
}
except:
journal_features_dict = {
"n_reference": None,
"n_unique_journals": None,
"avg_ref_year": None,
"median_ref_year": None,
"min_ref_year": None,
"max_ref_year": None,
}
return journal_features_dict
def merge_section_list(section_list, section_maps=SECTIONS_MAPS, section_start=""):
"""
Merge a list of sections into a normalized list of sections,
you can get the list of sections from parsed article JSON in ``parse_pdf.py`` e.g.
>> section_list = [s['heading'] for s in article_json['sections']]
>> section_list_merged = merge_section_list(section_list)
Parameters
==========
section_list: list, list of sections
Output
======
section_list_merged: list, normalized section names
"""
sect_map = section_start # text for starting section e.g. ``Introduction``
section_list_merged = []
for section in section_list:
if any([(s.lower() in section.lower()) for s in section_maps.keys()]):
sect = [s for s in section_maps.keys() if s.lower() in section.lower()][0]
sect_map = section_maps.get(sect, "") #
section_list_merged.append(sect_map)
else:
section_list_merged.append(sect_map)
return section_list_merged
|
PypiClean
|
/custom-awscli-1.27.51.tar.gz/custom-awscli-1.27.51/awscli/examples/ram/get-resource-share-invitations.rst
|
**To list your resource share invitations**
The following ``get-resource-share-invitations`` example lists your current resource share invitations. ::
aws ram get-resource-share-invitations
Output::
{
"resourceShareInvitations": [
{
"resourceShareInvitationArn": "arn:aws:ram:us-west2-1:111111111111:resource-share-invitation/32b639f0-14b8-7e8f-55ea-e6117EXAMPLE",
"resourceShareName": "project-resource-share",
"resourceShareArn": "arn:aws:ram:us-west-2:111111111111:resource-share/fcb639f0-1449-4744-35bc-a983fEXAMPLE",
"senderAccountId": "111111111111",
"receiverAccountId": "222222222222",
"invitationTimestamp": 1565312166.258,
"status": "PENDING"
}
]
}
|
PypiClean
|
/FOFA_py-2.0.3-py3-none-any.whl/fofa/__main__.py
|
import os
import click
import fofa
import json
import logging
from tqdm import tqdm
from .helper import XLSWriter, CSVWriter
COLORIZE_FIELDS = {
'ip': 'green',
'port': 'yellow',
'domain': 'magenta',
'as_organization': 'cyan',
}
def escape_data(args):
# Make sure the string is unicode so the terminal can properly display it
# We do it using format() so it works across Python 2 and 3
args = u'{}'.format(args)
return args.replace('\n', '\\n').replace('\r', '\\r').replace('\t', '\\t')
# Define the main entry point for all of our commands
@click.group(context_settings={'help_option_names': ['-h', '--help']})
def main():
pass
def get_user_key():
return {
'email': os.environ.get('FOFA_EMAIL', ''),
'key': os.environ.get('FOFA_KEY', ''),
}
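# Illustrative credential setup (values are placeholders): the CLI reads the
# FOFA account from environment variables, e.g.
#   export FOFA_EMAIL="user@example.com"
#   export FOFA_KEY="your_api_key"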
def print_data(data):
click.echo(json.dumps(data, ensure_ascii=False, sort_keys=True, indent=4))
@main.command()
def info():
"""Shows general information about your account"""
para = get_user_key()
api = fofa.Client(**para)
try:
r = api.get_userinfo()
except fofa.FofaError as e:
raise click.ClickException(e.message)
print_data(r)
@main.command()
@click.option('--detail/--no-detail', '-D', help='show host detail info', default=False, flag_value=True)
@click.argument('host', metavar='<domain or ip>', nargs=-1)
def host(detail, host):
"""Aggregated information for the specified host. """
para = get_user_key()
api = fofa.Client(**para)
try:
r = api.search_host(host, detail=detail)
except fofa.FofaError as e:
raise click.ClickException(e.message)
print_data(r)
def fofa_count(client, query):
"""Returns the number of results for a fofa query.
"""
try:
r = client.search(query, size=1, fields='ip')
except fofa.FofaError as e:
raise click.ClickException(e.message)
click.echo(r['size'])
def fofa_stats(client, query, fields='ip,port,protocol', size=5):
"""Returns the number of results for a fofa query.
"""
try:
r = client.search_stats(query, size=size, fields=fields)
except fofa.FofaError as e:
raise click.ClickException(e.message)
print_data(r)
def fofa_search_all(client, query, fields, num):
size = 10000
page = 1
result = {
'size': 0,
'results': [],
'consumed_fpoint': 0,
}
total = 0
while True:
try:
remain_num = num - total
if remain_num < size:
size = remain_num
r = client.search(query, fields=fields, page=page, size=size)
data = r['results']
total += len(data)
result['results'] += data
result['size'] += r['size']
result['consumed_fpoint'] += r['consumed_fpoint']
result['query'] = r['query']
if len(data) < size or total >= num:
break
page+=1
except fofa.FofaError as e:
raise click.ClickException(u'search page {}, error: {}'.format(page, e.message))
return result
def fofa_paged_search_save(writer, client, query, fields, num):
""" Perform paged search using the search API and save the results to a writer.
Args:
writer: Writer object (e.g., CSVWriter or XLSWriter) for saving the results.
client: FOFA API client.
query: FOFA query string.
fields: Comma-separated string of fields to include in the search results.
num: Number of results to save.
"""
size = 10000
page = 1
result = {
'size': 0,
'writed': 0,
'consumed_fpoint': 0,
}
total = 0
progress_bar = tqdm(total=num, desc='Downloading Fofa Data', leave=True, unit='item', unit_scale=True)
try:
while True:
remain_num = num - total
if remain_num < size:
size = remain_num
r = client.search(query, fields=fields, page=page, size=size)
data = r['results']
total += len(data)
for d1 in data:
progress_bar.update(1)
writer.write_data(d1)
if num > r['size']:
progress_bar.total = r['size']
progress_bar.refresh()
result['size'] = r['size']
result['consumed_fpoint'] += r['consumed_fpoint']
result['query'] = r['query']
result['writed'] = total
if len(data) < size or total >= num:
break
page+=1
progress_bar.set_postfix({'completed': True})
except fofa.FofaError as e:
raise click.ClickException(u'search page {}, error: {}'.format(page, e.message))
except Exception as e:
raise click.ClickException(u'search page {}, error: {}'.format(page, e))
return result
def fofa_next_search_save(writer, client, query, fields, num):
""" Perform next search using the search next API and save the results to a writer.
Args:
writer: Writer object (e.g., CSVWriter or XLSWriter) for saving the results.
client: FOFA API client.
query: FOFA query string.
fields: Comma-separated string of fields to include in the search results.
num: Number of results to save.
"""
size = 10000
page = 1
result = {
'size': 0,
'writed': 0,
'consumed_fpoint': 0,
}
total = 0
next = ''
progress_bar = tqdm(total=num, desc='Downloading Fofa Data', leave=True, unit='item', unit_scale=True)
try:
while True:
remain_num = num - total
if remain_num < size:
size = remain_num
r = client.search_next(query, fields=fields, next=next, size=size)
data = r['results']
total += len(data)
for d1 in data:
progress_bar.update(1)
writer.write_data(d1)
if num > r['size']:
progress_bar.total = r['size']
progress_bar.refresh()
next = r['next']
result['size'] = r['size']
result['consumed_fpoint'] += r['consumed_fpoint']
result['query'] = r['query']
result['writed'] = total
if len(data) < size or total >= num:
break
page+=1
progress_bar.set_postfix({'completed': True})
except fofa.FofaError as e:
raise click.ClickException(u'search next {}, error: {}'.format(next, e.message))
except Exception as e:
raise click.ClickException(u'search next {}, error: {}'.format(next, e))
return result
def fofa_download(client, query, fields, num, save_file, filetype='xls'):
header = fields.split(',')
if filetype == 'xls':
writer = XLSWriter(save_file)
else:
writer = CSVWriter(save_file)
writer.write_data(header)
result = None
try:
if client.can_use_next():
result = fofa_next_search_save(writer, client, query, fields, num)
else:
result = fofa_paged_search_save(writer, client, query, fields, num)
finally:
writer.close_writer()
if result:
click.echo("Query: '{}', saved to file: '{}', total: {:,}, written: {:,}, consumed fpoints: {:,}\n".format(
result['query'],
save_file,
result['size'],
result['writed'],
result['consumed_fpoint']
))
else:
raise click.ClickException('No result')
from tabulate import tabulate
@main.command()
@click.option('--count', '-c', default=False, flag_value=True, help='Count the number of results.')
@click.option('--stats', default=False, flag_value=True, help='Query statistics information.')
@click.option('--save', metavar='<filename>', help='Save the results to a file, supports csv and xls formats.')
@click.option('--color/--no-color', default=True, help='Enable/disable colorized output. Default: True')
@click.option('--fields', '-f', help='List of properties to show in the search results.', default='ip,port,protocol,domain')
@click.option('--size', help='The number of search results that should be returned. Default: 100', default=100, type=int)
@click.option('-v', '--verbose', count=True, help='Increase verbosity level. Use -v for INFO level, -vv for DEBUG level.')
@click.option('--retry', help='The number of times to retry the HTTP request in case of failure. Default: 10', default=10, type=int)
@click.argument('query', metavar='<fofa query>')
def search(count, stats, save, color, fields, size, verbose, retry, query):
""" Returns the results for a fofa query.
If the query contains special characters like && or ||, please enclose it in single quotes ('').
Example:
# Show results in the console
fofa search 'title="fofa" && cert.is_match=true'
# Count the number of results
fofa search --count 'title="fofa" && cert.is_match=true'
# Query statistics information
fofa search --stats 'title="fofa" && cert.is_match=true'
# Save the results to a csv file
fofa search --save results.csv 'title="fofa" && cert.is_match=true'
# Save the results to an Excel file
fofa search --save results.xlsx 'title="fofa" && cert.is_match=true'
"""
if query == '':
raise click.ClickException('Empty fofa query')
fields = fields.strip()
default_log_level = logging.WARN
# Increase the verbosity level according to the -v flag
if verbose == 1:
default_log_level = logging.INFO
elif verbose >= 2:
default_log_level = logging.DEBUG
logging.basicConfig(level=default_log_level)
para = get_user_key()
api = fofa.Client(**para)
api.tries = retry
# count mode
if count:
fofa_count(api, query)
return
# stat mode
if stats:
fofa_stats(api, query, fields, size)
return
# download mode
if save:
filetype = ''
if save.endswith('.csv'):
filetype = 'csv'
elif save.endswith('.xls') or save.endswith('.xlsx'):
filetype = 'xls'
else:
raise click.ClickException('save only supports .csv, .xls or .xlsx files')
fofa_download(api, query, fields, size, save, filetype)
return
# search mode
r = fofa_search_all(api, query, fields, size)
if r['size'] == 0:
raise click.ClickException('No result')
flds = fields.split(',')
# stats line
stats = "#stat query:'{}' total:{:,} size:{:,} consumed fpoints:{:,}".format(
r['query'],
r['size'],
len(r['results']),
r['consumed_fpoint']
)
click.echo(stats)
datas = []
for line in r['results']:
row = []
for index, field in enumerate(flds):
tmp = u''
value = line[index]
if value:
tmp = escape_data(value)
if color:
tmp = click.style(tmp, fg=COLORIZE_FIELDS.get(field, 'white'))
row.append(tmp)
if len(row) != len(flds):
logging.error("row mismatch: %s", row)
datas.append(row)
table = tabulate(datas, headers=flds, tablefmt="simple", colalign=("left",))
click.echo(table)
if __name__ == "__main__":
main()
|
PypiClean
|
/PyUpdater-SCP-Plugin-4.0.3.tar.gz/PyUpdater-SCP-Plugin-4.0.3/versioneer.py
|
from __future__ import print_function
try:
import configparser
except ImportError:
import ConfigParser as configparser
import errno
import json
import os
import re
import subprocess
import sys
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_root():
"""Get the project root directory.
We require that all commands are run from the project root, i.e. the
directory that contains setup.py, setup.cfg, and versioneer.py .
"""
root = os.path.realpath(os.path.abspath(os.getcwd()))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
# allow 'python path/to/setup.py COMMAND'
root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
err = ("Versioneer was unable to run the project root directory. "
"Versioneer requires setup.py to be executed from "
"its immediate directory (like 'python setup.py COMMAND'), "
"or in a way that lets it use sys.argv[0] to find the root "
"(like 'python path/to/setup.py COMMAND').")
raise VersioneerBadRootError(err)
try:
# Certain runtime workflows (setup.py install/develop in a setuptools
# tree) execute all dependencies in a single python process, so
# "versioneer" may be imported multiple times, and python's shared
# module-import table will cache the first one. So we can't use
# os.path.dirname(__file__), as that will find whichever
# versioneer.py was first imported, even in later projects.
me = os.path.realpath(os.path.abspath(__file__))
me_dir = os.path.normcase(os.path.splitext(me)[0])
vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
if me_dir != vsr_dir:
print("Warning: build in %s is using versioneer.py from %s"
% (os.path.dirname(me), versioneer_py))
except NameError:
pass
return root
def get_config_from_root(root):
"""Read the project setup.cfg file to determine Versioneer config."""
# This might raise EnvironmentError (if setup.cfg is missing), or
# configparser.NoSectionError (if it lacks a [versioneer] section), or
# configparser.NoOptionError (if it lacks "VCS="). See the docstring at
# the top of versioneer.py for instructions on writing your setup.cfg .
setup_cfg = os.path.join(root, "setup.cfg")
parser = configparser.SafeConfigParser()
with open(setup_cfg, "r") as f:
parser.readfp(f)
VCS = parser.get("versioneer", "VCS") # mandatory
def get(parser, name):
if parser.has_option("versioneer", name):
return parser.get("versioneer", name)
return None
cfg = VersioneerConfig()
cfg.VCS = VCS
cfg.style = get(parser, "style") or ""
cfg.versionfile_source = get(parser, "versionfile_source")
cfg.versionfile_build = get(parser, "versionfile_build")
cfg.tag_prefix = get(parser, "tag_prefix")
if cfg.tag_prefix in ("''", '""'):
cfg.tag_prefix = ""
cfg.parentdir_prefix = get(parser, "parentdir_prefix")
cfg.verbose = get(parser, "verbose")
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
# these dictionaries contain VCS-specific tools
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
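# Illustrative example (not part of upstream versioneer): run_command tries each
# candidate executable in turn and returns (stdout, returncode), e.g.
#
#   out, rc = run_command(["git"], ["rev-parse", "HEAD"], cwd=".", hide_stderr=True)
#   # out is the commit hash string on success; it is None if git is missing or
#   # the command fails (rc then carries the non-zero return code, or None).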
LONG_VERSION_PY['git'] = '''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.18 (https://github.com/warner/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "%(STYLE)s"
cfg.tag_prefix = "%(TAG_PREFIX)s"
cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %%s" %% dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None, None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
print("stdout was %%s" %% stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %%s but none started with prefix %%s" %%
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %%d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
if verbose:
print("likely tags: %%s" %% ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %%s not under git control" %% root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%%s*" %% tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%%s' doesn't start with prefix '%%s'"
print(fmt %% (full_tag, tag_prefix))
pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
%% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"],
cwd=root)[0].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%%d" %% pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%%d" %% pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%%s'" %% style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
'''
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%s*" % tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
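# Worked example (illustrative): with tag_prefix "" and a 'git describe' output
# of "1.2-3-gabcdef0-dirty", the parsing above yields
#   pieces = {"closest-tag": "1.2", "distance": 3, "short": "abcdef0",
#             "dirty": True, "long": <full hash>, "error": None, "date": ...}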
def do_vcs_install(manifest_in, versionfile_source, ipy):
"""Git-specific installation logic for Versioneer.
For Git, this means creating/changing .gitattributes to mark _version.py
for export-subst keyword substitution.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
files = [manifest_in, versionfile_source]
if ipy:
files.append(ipy)
try:
me = __file__
if me.endswith(".pyc") or me.endswith(".pyo"):
me = os.path.splitext(me)[0] + ".py"
versioneer_file = os.path.relpath(me)
except NameError:
versioneer_file = "versioneer.py"
files.append(versioneer_file)
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
files.append(".gitattributes")
run_command(GITS, ["add", "--"] + files)
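# Illustrative effect (not in the upstream source): for
# versionfile_source = "src/myproject/_version.py" this appends the line
#   src/myproject/_version.py export-subst
# to .gitattributes (if not already present) and 'git add's the touched files.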
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.18) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
%s
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
"""
def versions_from_file(filename):
"""Try to determine the version from _version.py if present."""
try:
with open(filename) as f:
contents = f.read()
except EnvironmentError:
raise NotThisMethod("unable to read _version.py")
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
raise NotThisMethod("no version_json in _version.py")
return json.loads(mo.group(1))
def write_to_version_file(filename, versions):
"""Write the given version number to the given _version.py file."""
os.unlink(filename)
contents = json.dumps(versions, sort_keys=True,
indent=1, separators=(",", ": "))
with open(filename, "w") as f:
f.write(SHORT_VERSION_PY % contents)
print("set %s to '%s'" % (filename, versions["version"]))
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
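# Illustrative: for the example pieces above (closest-tag "1.2", distance 3,
# short "abcdef0", dirty True), render_pep440 produces "1.2+3.gabcdef0.dirty";
# a clean build made exactly on the tag would render as just "1.2".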
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
class VersioneerBadRootError(Exception):
"""The project root directory is unknown or missing key files."""
def get_versions(verbose=False):
"""Get the project version from whatever source is available.
Returns dict with two keys: 'version' and 'full'.
"""
if "versioneer" in sys.modules:
# see the discussion in cmdclass.py:get_cmdclass()
del sys.modules["versioneer"]
root = get_root()
cfg = get_config_from_root(root)
assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
handlers = HANDLERS.get(cfg.VCS)
assert handlers, "unrecognized VCS '%s'" % cfg.VCS
verbose = verbose or cfg.verbose
assert cfg.versionfile_source is not None, \
"please set versioneer.versionfile_source"
assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
versionfile_abs = os.path.join(root, cfg.versionfile_source)
# extract version from first of: _version.py, VCS command (e.g. 'git
# describe'), parentdir. This is meant to work for developers using a
# source checkout, for users of a tarball created by 'setup.py sdist',
# and for users of a tarball/zipball created by 'git archive' or github's
# download-from-tag feature or the equivalent in other VCSes.
get_keywords_f = handlers.get("get_keywords")
from_keywords_f = handlers.get("keywords")
if get_keywords_f and from_keywords_f:
try:
keywords = get_keywords_f(versionfile_abs)
ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
if verbose:
print("got version from expanded keyword %s" % ver)
return ver
except NotThisMethod:
pass
try:
ver = versions_from_file(versionfile_abs)
if verbose:
print("got version from file %s %s" % (versionfile_abs, ver))
return ver
except NotThisMethod:
pass
from_vcs_f = handlers.get("pieces_from_vcs")
if from_vcs_f:
try:
pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
ver = render(pieces, cfg.style)
if verbose:
print("got version from VCS %s" % ver)
return ver
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
if verbose:
print("got version from parentdir %s" % ver)
return ver
except NotThisMethod:
pass
if verbose:
print("unable to compute version")
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None, "error": "unable to compute version",
"date": None}
def get_version():
"""Get the short version string for this project."""
return get_versions()["version"]
def get_cmdclass():
"""Get the custom setuptools/distutils subclasses used by Versioneer."""
if "versioneer" in sys.modules:
del sys.modules["versioneer"]
# this fixes the "python setup.py develop" case (also 'install' and
# 'easy_install .'), in which subdependencies of the main project are
# built (using setup.py bdist_egg) in the same python process. Assume
# a main project A and a dependency B, which use different versions
# of Versioneer. A's setup.py imports A's Versioneer, leaving it in
# sys.modules by the time B's setup.py is executed, causing B to run
# with the wrong versioneer. Setuptools wraps the sub-dep builds in a
# sandbox that restores sys.modules to its pre-build state, so the
# parent is protected against the child's "import versioneer". By
# removing ourselves from sys.modules here, before the child build
# happens, we protect the child from the parent's versioneer too.
# Also see https://github.com/warner/python-versioneer/issues/52
cmds = {}
# we add "version" to both distutils and setuptools
from distutils.core import Command
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
vers = get_versions(verbose=True)
print("Version: %s" % vers["version"])
print(" full-revisionid: %s" % vers.get("full-revisionid"))
print(" dirty: %s" % vers.get("dirty"))
print(" date: %s" % vers.get("date"))
if vers["error"]:
print(" error: %s" % vers["error"])
cmds["version"] = cmd_version
# we override "build_py" in both distutils and setuptools
#
# most invocation pathways end up running build_py:
# distutils/build -> build_py
# distutils/install -> distutils/build ->..
# setuptools/bdist_wheel -> distutils/install ->..
# setuptools/bdist_egg -> distutils/install_lib -> build_py
# setuptools/install -> bdist_egg ->..
# setuptools/develop -> ?
# pip install:
# copies source tree to a tempdir before running egg_info/etc
# if .git isn't copied too, 'git describe' will fail
# then does setup.py bdist_wheel, or sometimes setup.py install
# setup.py egg_info -> ?
# we override different "build_py" commands for both environments
if "setuptools" in sys.modules:
from setuptools.command.build_py import build_py as _build_py
else:
from distutils.command.build_py import build_py as _build_py
class cmd_build_py(_build_py):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace
# it with an updated value
if cfg.versionfile_build:
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_py"] = cmd_build_py
if "cx_Freeze" in sys.modules: # cx_freeze enabled?
from cx_Freeze.dist import build_exe as _build_exe
# nczeczulin reports that py2exe won't like the pep440-style string
# as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
# setup(console=[{
# "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
# "product_version": versioneer.get_version(),
# ...
class cmd_build_exe(_build_exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_build_exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["build_exe"] = cmd_build_exe
del cmds["build_py"]
if 'py2exe' in sys.modules: # py2exe enabled?
try:
from py2exe.distutils_buildexe import py2exe as _py2exe # py3
except ImportError:
from py2exe.build_exe import py2exe as _py2exe # py2
class cmd_py2exe(_py2exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_py2exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["py2exe"] = cmd_py2exe
# we override different "sdist" commands for both environments
if "setuptools" in sys.modules:
from setuptools.command.sdist import sdist as _sdist
else:
from distutils.command.sdist import sdist as _sdist
class cmd_sdist(_sdist):
def run(self):
versions = get_versions()
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old
# version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
root = get_root()
cfg = get_config_from_root(root)
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory
# (remembering that it may be a hardlink) and replace it with an
# updated value
target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile,
self._versioneer_generated_versions)
cmds["sdist"] = cmd_sdist
return cmds
CONFIG_ERROR = """
setup.cfg is missing the necessary Versioneer configuration. You need
a section like:
[versioneer]
VCS = git
style = pep440
versionfile_source = src/myproject/_version.py
versionfile_build = myproject/_version.py
tag_prefix =
parentdir_prefix = myproject-
You will also need to edit your setup.py to use the results:
import versioneer
setup(version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(), ...)
Please read the docstring in ./versioneer.py for configuration instructions,
edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
"""
SAMPLE_CONFIG = """
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
#VCS = git
#style = pep440
#versionfile_source =
#versionfile_build =
#tag_prefix =
#parentdir_prefix =
"""
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
def do_setup():
"""Main VCS-independent setup function for installing Versioneer."""
root = get_root()
try:
cfg = get_config_from_root(root)
except (EnvironmentError, configparser.NoSectionError,
configparser.NoOptionError) as e:
if isinstance(e, (EnvironmentError, configparser.NoSectionError)):
print("Adding sample versioneer config to setup.cfg",
file=sys.stderr)
with open(os.path.join(root, "setup.cfg"), "a") as f:
f.write(SAMPLE_CONFIG)
print(CONFIG_ERROR, file=sys.stderr)
return 1
print(" creating %s" % cfg.versionfile_source)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG % {"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
ipy = os.path.join(os.path.dirname(cfg.versionfile_source),
"__init__.py")
if os.path.exists(ipy):
try:
with open(ipy, "r") as f:
old = f.read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
with open(ipy, "a") as f:
f.write(INIT_PY_SNIPPET)
else:
print(" %s unmodified" % ipy)
else:
print(" %s doesn't exist, ok" % ipy)
ipy = None
# Make sure both the top-level "versioneer.py" and versionfile_source
# (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
# they'll be copied into source distributions. Pip won't be able to
# install the package without this.
manifest_in = os.path.join(root, "MANIFEST.in")
simple_includes = set()
try:
with open(manifest_in, "r") as f:
for line in f:
if line.startswith("include "):
for include in line.split()[1:]:
simple_includes.add(include)
except EnvironmentError:
pass
# That doesn't cover everything MANIFEST.in can do
# (http://docs.python.org/2/distutils/sourcedist.html#commands), so
# it might give some false negatives. Appending redundant 'include'
# lines is safe, though.
if "versioneer.py" not in simple_includes:
print(" appending 'versioneer.py' to MANIFEST.in")
with open(manifest_in, "a") as f:
f.write("include versioneer.py\n")
else:
print(" 'versioneer.py' already in MANIFEST.in")
if cfg.versionfile_source not in simple_includes:
print(" appending versionfile_source ('%s') to MANIFEST.in" %
cfg.versionfile_source)
with open(manifest_in, "a") as f:
f.write("include %s\n" % cfg.versionfile_source)
else:
print(" versionfile_source already in MANIFEST.in")
# Make VCS-specific changes. For git, this means creating/changing
# .gitattributes to mark _version.py for export-subst keyword
# substitution.
do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
return 0
def scan_setup_py():
"""Validate the contents of setup.py against Versioneer's expectations."""
found = set()
setters = False
errors = 0
with open("setup.py", "r") as f:
for line in f.readlines():
if "import versioneer" in line:
found.add("import")
if "versioneer.get_cmdclass()" in line:
found.add("cmdclass")
if "versioneer.get_version()" in line:
found.add("get_version")
if "versioneer.VCS" in line:
setters = True
if "versioneer.versionfile_source" in line:
setters = True
if len(found) != 3:
print("")
print("Your setup.py appears to be missing some important items")
print("(but I might be wrong). Please make sure it has something")
print("roughly like the following:")
print("")
print(" import versioneer")
print(" setup( version=versioneer.get_version(),")
print(" cmdclass=versioneer.get_cmdclass(), ...)")
print("")
errors += 1
if setters:
print("You should remove lines like 'versioneer.VCS = ' and")
print("'versioneer.versionfile_source = ' . This configuration")
print("now lives in setup.cfg, and should be removed from setup.py")
print("")
errors += 1
return errors
if __name__ == "__main__":
cmd = sys.argv[1]
if cmd == "setup":
errors = do_setup()
errors += scan_setup_py()
if errors:
sys.exit(1)
|
PypiClean
|
/nnisgf-0.4-py3-none-manylinux1_x86_64.whl/nnisgf-0.4.data/data/nni/node_modules/browserify-mime/browserify-mime.js
|
"use strict"
var mime = module.exports = {
lookup: function (path, fallback) {
var ext = path.replace(/.*[\.\/]/, '').toLowerCase();
return this.types[ext] || fallback || this.default_type;
}
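// Illustrative (not part of the original module): lookup() keeps only the text
// after the last '.' or '/', lowercases it, and falls back when unknown, e.g.
//   mime.lookup('report.PDF')             -> "application/pdf"
//   mime.lookup('archive.unknown', 'x/y') -> "x/y"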
, default_type: "application/octet-stream"
, types: {
"123": "application/vnd.lotus-1-2-3",
"ez": "application/andrew-inset",
"aw": "application/applixware",
"atom": "application/atom+xml",
"atomcat": "application/atomcat+xml",
"atomsvc": "application/atomsvc+xml",
"ccxml": "application/ccxml+xml",
"cdmia": "application/cdmi-capability",
"cdmic": "application/cdmi-container",
"cdmid": "application/cdmi-domain",
"cdmio": "application/cdmi-object",
"cdmiq": "application/cdmi-queue",
"cu": "application/cu-seeme",
"davmount": "application/davmount+xml",
"dbk": "application/docbook+xml",
"dssc": "application/dssc+der",
"xdssc": "application/dssc+xml",
"ecma": "application/ecmascript",
"emma": "application/emma+xml",
"epub": "application/epub+zip",
"exi": "application/exi",
"pfr": "application/font-tdpfr",
"gml": "application/gml+xml",
"gpx": "application/gpx+xml",
"gxf": "application/gxf",
"stk": "application/hyperstudio",
"ink": "application/inkml+xml",
"inkml": "application/inkml+xml",
"ipfix": "application/ipfix",
"jar": "application/java-archive",
"ser": "application/java-serialized-object",
"class": "application/java-vm",
"js": "application/javascript",
"json": "application/json",
"jsonml": "application/jsonml+json",
"lostxml": "application/lost+xml",
"hqx": "application/mac-binhex40",
"cpt": "application/mac-compactpro",
"mads": "application/mads+xml",
"mrc": "application/marc",
"mrcx": "application/marcxml+xml",
"ma": "application/mathematica",
"nb": "application/mathematica",
"mb": "application/mathematica",
"mathml": "application/mathml+xml",
"mbox": "application/mbox",
"mscml": "application/mediaservercontrol+xml",
"metalink": "application/metalink+xml",
"meta4": "application/metalink4+xml",
"mets": "application/mets+xml",
"mods": "application/mods+xml",
"m21": "application/mp21",
"mp21": "application/mp21",
"mp4s": "application/mp4",
"doc": "application/msword",
"dot": "application/msword",
"mxf": "application/mxf",
"bin": "application/octet-stream",
"dms": "application/octet-stream",
"lrf": "application/octet-stream",
"mar": "application/octet-stream",
"so": "application/octet-stream",
"dist": "application/octet-stream",
"distz": "application/octet-stream",
"pkg": "application/octet-stream",
"bpk": "application/octet-stream",
"dump": "application/octet-stream",
"elc": "application/octet-stream",
"deploy": "application/octet-stream",
"oda": "application/oda",
"opf": "application/oebps-package+xml",
"ogx": "application/ogg",
"omdoc": "application/omdoc+xml",
"onetoc": "application/onenote",
"onetoc2": "application/onenote",
"onetmp": "application/onenote",
"onepkg": "application/onenote",
"oxps": "application/oxps",
"xer": "application/patch-ops-error+xml",
"pdf": "application/pdf",
"pgp": "application/pgp-encrypted",
"asc": "application/pgp-signature",
"sig": "application/pgp-signature",
"prf": "application/pics-rules",
"p10": "application/pkcs10",
"p7m": "application/pkcs7-mime",
"p7c": "application/pkcs7-mime",
"p7s": "application/pkcs7-signature",
"p8": "application/pkcs8",
"ac": "application/pkix-attr-cert",
"cer": "application/pkix-cert",
"crl": "application/pkix-crl",
"pkipath": "application/pkix-pkipath",
"pki": "application/pkixcmp",
"pls": "application/pls+xml",
"ai": "application/postscript",
"eps": "application/postscript",
"ps": "application/postscript",
"cww": "application/prs.cww",
"pskcxml": "application/pskc+xml",
"rdf": "application/rdf+xml",
"rif": "application/reginfo+xml",
"rnc": "application/relax-ng-compact-syntax",
"rl": "application/resource-lists+xml",
"rld": "application/resource-lists-diff+xml",
"rs": "application/rls-services+xml",
"gbr": "application/rpki-ghostbusters",
"mft": "application/rpki-manifest",
"roa": "application/rpki-roa",
"rsd": "application/rsd+xml",
"rss": "application/rss+xml",
"rtf": "application/rtf",
"sbml": "application/sbml+xml",
"scq": "application/scvp-cv-request",
"scs": "application/scvp-cv-response",
"spq": "application/scvp-vp-request",
"spp": "application/scvp-vp-response",
"sdp": "application/sdp",
"setpay": "application/set-payment-initiation",
"setreg": "application/set-registration-initiation",
"shf": "application/shf+xml",
"smi": "application/smil+xml",
"smil": "application/smil+xml",
"rq": "application/sparql-query",
"srx": "application/sparql-results+xml",
"gram": "application/srgs",
"grxml": "application/srgs+xml",
"sru": "application/sru+xml",
"ssdl": "application/ssdl+xml",
"ssml": "application/ssml+xml",
"tei": "application/tei+xml",
"teicorpus": "application/tei+xml",
"tfi": "application/thraud+xml",
"tsd": "application/timestamped-data",
"plb": "application/vnd.3gpp.pic-bw-large",
"psb": "application/vnd.3gpp.pic-bw-small",
"pvb": "application/vnd.3gpp.pic-bw-var",
"tcap": "application/vnd.3gpp2.tcap",
"pwn": "application/vnd.3m.post-it-notes",
"aso": "application/vnd.accpac.simply.aso",
"imp": "application/vnd.accpac.simply.imp",
"acu": "application/vnd.acucobol",
"atc": "application/vnd.acucorp",
"acutc": "application/vnd.acucorp",
"air": "application/vnd.adobe.air-application-installer-package+zip",
"fcdt": "application/vnd.adobe.formscentral.fcdt",
"fxp": "application/vnd.adobe.fxp",
"fxpl": "application/vnd.adobe.fxp",
"xdp": "application/vnd.adobe.xdp+xml",
"xfdf": "application/vnd.adobe.xfdf",
"ahead": "application/vnd.ahead.space",
"azf": "application/vnd.airzip.filesecure.azf",
"azs": "application/vnd.airzip.filesecure.azs",
"azw": "application/vnd.amazon.ebook",
"acc": "application/vnd.americandynamics.acc",
"ami": "application/vnd.amiga.ami",
"apk": "application/vnd.android.package-archive",
"cii": "application/vnd.anser-web-certificate-issue-initiation",
"fti": "application/vnd.anser-web-funds-transfer-initiation",
"atx": "application/vnd.antix.game-component",
"mpkg": "application/vnd.apple.installer+xml",
"m3u8": "application/vnd.apple.mpegurl",
"swi": "application/vnd.aristanetworks.swi",
"iota": "application/vnd.astraea-software.iota",
"aep": "application/vnd.audiograph",
"mpm": "application/vnd.blueice.multipass",
"bmi": "application/vnd.bmi",
"rep": "application/vnd.businessobjects",
"cdxml": "application/vnd.chemdraw+xml",
"mmd": "application/vnd.chipnuts.karaoke-mmd",
"cdy": "application/vnd.cinderella",
"cla": "application/vnd.claymore",
"rp9": "application/vnd.cloanto.rp9",
"c4g": "application/vnd.clonk.c4group",
"c4d": "application/vnd.clonk.c4group",
"c4f": "application/vnd.clonk.c4group",
"c4p": "application/vnd.clonk.c4group",
"c4u": "application/vnd.clonk.c4group",
"c11amc": "application/vnd.cluetrust.cartomobile-config",
"c11amz": "application/vnd.cluetrust.cartomobile-config-pkg",
"csp": "application/vnd.commonspace",
"cdbcmsg": "application/vnd.contact.cmsg",
"cmc": "application/vnd.cosmocaller",
"clkx": "application/vnd.crick.clicker",
"clkk": "application/vnd.crick.clicker.keyboard",
"clkp": "application/vnd.crick.clicker.palette",
"clkt": "application/vnd.crick.clicker.template",
"clkw": "application/vnd.crick.clicker.wordbank",
"wbs": "application/vnd.criticaltools.wbs+xml",
"pml": "application/vnd.ctc-posml",
"ppd": "application/vnd.cups-ppd",
"car": "application/vnd.curl.car",
"pcurl": "application/vnd.curl.pcurl",
"dart": "application/vnd.dart",
"rdz": "application/vnd.data-vision.rdz",
"uvf": "application/vnd.dece.data",
"uvvf": "application/vnd.dece.data",
"uvd": "application/vnd.dece.data",
"uvvd": "application/vnd.dece.data",
"uvt": "application/vnd.dece.ttml+xml",
"uvvt": "application/vnd.dece.ttml+xml",
"uvx": "application/vnd.dece.unspecified",
"uvvx": "application/vnd.dece.unspecified",
"uvz": "application/vnd.dece.zip",
"uvvz": "application/vnd.dece.zip",
"fe_launch": "application/vnd.denovo.fcselayout-link",
"dna": "application/vnd.dna",
"mlp": "application/vnd.dolby.mlp",
"dpg": "application/vnd.dpgraph",
"dfac": "application/vnd.dreamfactory",
"kpxx": "application/vnd.ds-keypoint",
"ait": "application/vnd.dvb.ait",
"svc": "application/vnd.dvb.service",
"geo": "application/vnd.dynageo",
"mag": "application/vnd.ecowin.chart",
"nml": "application/vnd.enliven",
"esf": "application/vnd.epson.esf",
"msf": "application/vnd.epson.msf",
"qam": "application/vnd.epson.quickanime",
"slt": "application/vnd.epson.salt",
"ssf": "application/vnd.epson.ssf",
"es3": "application/vnd.eszigno3+xml",
"et3": "application/vnd.eszigno3+xml",
"ez2": "application/vnd.ezpix-album",
"ez3": "application/vnd.ezpix-package",
"fdf": "application/vnd.fdf",
"mseed": "application/vnd.fdsn.mseed",
"seed": "application/vnd.fdsn.seed",
"dataless": "application/vnd.fdsn.seed",
"gph": "application/vnd.flographit",
"ftc": "application/vnd.fluxtime.clip",
"fm": "application/vnd.framemaker",
"frame": "application/vnd.framemaker",
"maker": "application/vnd.framemaker",
"book": "application/vnd.framemaker",
"fnc": "application/vnd.frogans.fnc",
"ltf": "application/vnd.frogans.ltf",
"fsc": "application/vnd.fsc.weblaunch",
"oas": "application/vnd.fujitsu.oasys",
"oa2": "application/vnd.fujitsu.oasys2",
"oa3": "application/vnd.fujitsu.oasys3",
"fg5": "application/vnd.fujitsu.oasysgp",
"bh2": "application/vnd.fujitsu.oasysprs",
"ddd": "application/vnd.fujixerox.ddd",
"xdw": "application/vnd.fujixerox.docuworks",
"xbd": "application/vnd.fujixerox.docuworks.binder",
"fzs": "application/vnd.fuzzysheet",
"txd": "application/vnd.genomatix.tuxedo",
"ggb": "application/vnd.geogebra.file",
"ggt": "application/vnd.geogebra.tool",
"gex": "application/vnd.geometry-explorer",
"gre": "application/vnd.geometry-explorer",
"gxt": "application/vnd.geonext",
"g2w": "application/vnd.geoplan",
"g3w": "application/vnd.geospace",
"gmx": "application/vnd.gmx",
"kml": "application/vnd.google-earth.kml+xml",
"kmz": "application/vnd.google-earth.kmz",
"gqf": "application/vnd.grafeq",
"gqs": "application/vnd.grafeq",
"gac": "application/vnd.groove-account",
"ghf": "application/vnd.groove-help",
"gim": "application/vnd.groove-identity-message",
"grv": "application/vnd.groove-injector",
"gtm": "application/vnd.groove-tool-message",
"tpl": "application/vnd.groove-tool-template",
"vcg": "application/vnd.groove-vcard",
"hal": "application/vnd.hal+xml",
"zmm": "application/vnd.handheld-entertainment+xml",
"hbci": "application/vnd.hbci",
"les": "application/vnd.hhe.lesson-player",
"hpgl": "application/vnd.hp-hpgl",
"hpid": "application/vnd.hp-hpid",
"hps": "application/vnd.hp-hps",
"jlt": "application/vnd.hp-jlyt",
"pcl": "application/vnd.hp-pcl",
"pclxl": "application/vnd.hp-pclxl",
"sfd-hdstx": "application/vnd.hydrostatix.sof-data",
"mpy": "application/vnd.ibm.minipay",
"afp": "application/vnd.ibm.modcap",
"listafp": "application/vnd.ibm.modcap",
"list3820": "application/vnd.ibm.modcap",
"irm": "application/vnd.ibm.rights-management",
"sc": "application/vnd.ibm.secure-container",
"icc": "application/vnd.iccprofile",
"icm": "application/vnd.iccprofile",
"igl": "application/vnd.igloader",
"ivp": "application/vnd.immervision-ivp",
"ivu": "application/vnd.immervision-ivu",
"igm": "application/vnd.insors.igm",
"xpw": "application/vnd.intercon.formnet",
"xpx": "application/vnd.intercon.formnet",
"i2g": "application/vnd.intergeo",
"qbo": "application/vnd.intu.qbo",
"qfx": "application/vnd.intu.qfx",
"rcprofile": "application/vnd.ipunplugged.rcprofile",
"irp": "application/vnd.irepository.package+xml",
"xpr": "application/vnd.is-xpr",
"fcs": "application/vnd.isac.fcs",
"jam": "application/vnd.jam",
"rms": "application/vnd.jcp.javame.midlet-rms",
"jisp": "application/vnd.jisp",
"joda": "application/vnd.joost.joda-archive",
"ktz": "application/vnd.kahootz",
"ktr": "application/vnd.kahootz",
"karbon": "application/vnd.kde.karbon",
"chrt": "application/vnd.kde.kchart",
"kfo": "application/vnd.kde.kformula",
"flw": "application/vnd.kde.kivio",
"kon": "application/vnd.kde.kontour",
"kpr": "application/vnd.kde.kpresenter",
"kpt": "application/vnd.kde.kpresenter",
"ksp": "application/vnd.kde.kspread",
"kwd": "application/vnd.kde.kword",
"kwt": "application/vnd.kde.kword",
"htke": "application/vnd.kenameaapp",
"kia": "application/vnd.kidspiration",
"kne": "application/vnd.kinar",
"knp": "application/vnd.kinar",
"skp": "application/vnd.koan",
"skd": "application/vnd.koan",
"skt": "application/vnd.koan",
"skm": "application/vnd.koan",
"sse": "application/vnd.kodak-descriptor",
"lasxml": "application/vnd.las.las+xml",
"lbd": "application/vnd.llamagraphics.life-balance.desktop",
"lbe": "application/vnd.llamagraphics.life-balance.exchange+xml",
"apr": "application/vnd.lotus-approach",
"pre": "application/vnd.lotus-freelance",
"nsf": "application/vnd.lotus-notes",
"org": "application/vnd.lotus-organizer",
"scm": "application/vnd.lotus-screencam",
"lwp": "application/vnd.lotus-wordpro",
"portpkg": "application/vnd.macports.portpkg",
"mcd": "application/vnd.mcd",
"mc1": "application/vnd.medcalcdata",
"cdkey": "application/vnd.mediastation.cdkey",
"mwf": "application/vnd.mfer",
"mfm": "application/vnd.mfmp",
"flo": "application/vnd.micrografx.flo",
"igx": "application/vnd.micrografx.igx",
"mif": "application/vnd.mif",
"daf": "application/vnd.mobius.daf",
"dis": "application/vnd.mobius.dis",
"mbk": "application/vnd.mobius.mbk",
"mqy": "application/vnd.mobius.mqy",
"msl": "application/vnd.mobius.msl",
"plc": "application/vnd.mobius.plc",
"txf": "application/vnd.mobius.txf",
"mpn": "application/vnd.mophun.application",
"mpc": "application/vnd.mophun.certificate",
"xul": "application/vnd.mozilla.xul+xml",
"cil": "application/vnd.ms-artgalry",
"cab": "application/vnd.ms-cab-compressed",
"xls": "application/vnd.ms-excel",
"xlm": "application/vnd.ms-excel",
"xla": "application/vnd.ms-excel",
"xlc": "application/vnd.ms-excel",
"xlt": "application/vnd.ms-excel",
"xlw": "application/vnd.ms-excel",
"xlam": "application/vnd.ms-excel.addin.macroenabled.12",
"xlsb": "application/vnd.ms-excel.sheet.binary.macroenabled.12",
"xlsm": "application/vnd.ms-excel.sheet.macroenabled.12",
"xltm": "application/vnd.ms-excel.template.macroenabled.12",
"eot": "application/vnd.ms-fontobject",
"chm": "application/vnd.ms-htmlhelp",
"ims": "application/vnd.ms-ims",
"lrm": "application/vnd.ms-lrm",
"thmx": "application/vnd.ms-officetheme",
"cat": "application/vnd.ms-pki.seccat",
"stl": "application/vnd.ms-pki.stl",
"ppt": "application/vnd.ms-powerpoint",
"pps": "application/vnd.ms-powerpoint",
"pot": "application/vnd.ms-powerpoint",
"ppam": "application/vnd.ms-powerpoint.addin.macroenabled.12",
"pptm": "application/vnd.ms-powerpoint.presentation.macroenabled.12",
"sldm": "application/vnd.ms-powerpoint.slide.macroenabled.12",
"ppsm": "application/vnd.ms-powerpoint.slideshow.macroenabled.12",
"potm": "application/vnd.ms-powerpoint.template.macroenabled.12",
"mpp": "application/vnd.ms-project",
"mpt": "application/vnd.ms-project",
"docm": "application/vnd.ms-word.document.macroenabled.12",
"dotm": "application/vnd.ms-word.template.macroenabled.12",
"wps": "application/vnd.ms-works",
"wks": "application/vnd.ms-works",
"wcm": "application/vnd.ms-works",
"wdb": "application/vnd.ms-works",
"wpl": "application/vnd.ms-wpl",
"xps": "application/vnd.ms-xpsdocument",
"mseq": "application/vnd.mseq",
"mus": "application/vnd.musician",
"msty": "application/vnd.muvee.style",
"taglet": "application/vnd.mynfc",
"nlu": "application/vnd.neurolanguage.nlu",
"ntf": "application/vnd.nitf",
"nitf": "application/vnd.nitf",
"nnd": "application/vnd.noblenet-directory",
"nns": "application/vnd.noblenet-sealer",
"nnw": "application/vnd.noblenet-web",
"ngdat": "application/vnd.nokia.n-gage.data",
"n-gage": "application/vnd.nokia.n-gage.symbian.install",
"rpst": "application/vnd.nokia.radio-preset",
"rpss": "application/vnd.nokia.radio-presets",
"edm": "application/vnd.novadigm.edm",
"edx": "application/vnd.novadigm.edx",
"ext": "application/vnd.novadigm.ext",
"odc": "application/vnd.oasis.opendocument.chart",
"otc": "application/vnd.oasis.opendocument.chart-template",
"odb": "application/vnd.oasis.opendocument.database",
"odf": "application/vnd.oasis.opendocument.formula",
"odft": "application/vnd.oasis.opendocument.formula-template",
"odg": "application/vnd.oasis.opendocument.graphics",
"otg": "application/vnd.oasis.opendocument.graphics-template",
"odi": "application/vnd.oasis.opendocument.image",
"oti": "application/vnd.oasis.opendocument.image-template",
"odp": "application/vnd.oasis.opendocument.presentation",
"otp": "application/vnd.oasis.opendocument.presentation-template",
"ods": "application/vnd.oasis.opendocument.spreadsheet",
"ots": "application/vnd.oasis.opendocument.spreadsheet-template",
"odt": "application/vnd.oasis.opendocument.text",
"odm": "application/vnd.oasis.opendocument.text-master",
"ott": "application/vnd.oasis.opendocument.text-template",
"oth": "application/vnd.oasis.opendocument.text-web",
"xo": "application/vnd.olpc-sugar",
"dd2": "application/vnd.oma.dd2+xml",
"oxt": "application/vnd.openofficeorg.extension",
"pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
"sldx": "application/vnd.openxmlformats-officedocument.presentationml.slide",
"ppsx": "application/vnd.openxmlformats-officedocument.presentationml.slideshow",
"potx": "application/vnd.openxmlformats-officedocument.presentationml.template",
"xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"xltx": "application/vnd.openxmlformats-officedocument.spreadsheetml.template",
"docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"dotx": "application/vnd.openxmlformats-officedocument.wordprocessingml.template",
"mgp": "application/vnd.osgeo.mapguide.package",
"dp": "application/vnd.osgi.dp",
"esa": "application/vnd.osgi.subsystem",
"pdb": "application/vnd.palm",
"pqa": "application/vnd.palm",
"oprc": "application/vnd.palm",
"paw": "application/vnd.pawaafile",
"str": "application/vnd.pg.format",
"ei6": "application/vnd.pg.osasli",
"efif": "application/vnd.picsel",
"wg": "application/vnd.pmi.widget",
"plf": "application/vnd.pocketlearn",
"pbd": "application/vnd.powerbuilder6",
"box": "application/vnd.previewsystems.box",
"mgz": "application/vnd.proteus.magazine",
"qps": "application/vnd.publishare-delta-tree",
"ptid": "application/vnd.pvi.ptid1",
"qxd": "application/vnd.quark.quarkxpress",
"qxt": "application/vnd.quark.quarkxpress",
"qwd": "application/vnd.quark.quarkxpress",
"qwt": "application/vnd.quark.quarkxpress",
"qxl": "application/vnd.quark.quarkxpress",
"qxb": "application/vnd.quark.quarkxpress",
"bed": "application/vnd.realvnc.bed",
"mxl": "application/vnd.recordare.musicxml",
"musicxml": "application/vnd.recordare.musicxml+xml",
"cryptonote": "application/vnd.rig.cryptonote",
"cod": "application/vnd.rim.cod",
"rm": "application/vnd.rn-realmedia",
"rmvb": "application/vnd.rn-realmedia-vbr",
"link66": "application/vnd.route66.link66+xml",
"st": "application/vnd.sailingtracker.track",
"see": "application/vnd.seemail",
"sema": "application/vnd.sema",
"semd": "application/vnd.semd",
"semf": "application/vnd.semf",
"ifm": "application/vnd.shana.informed.formdata",
"itp": "application/vnd.shana.informed.formtemplate",
"iif": "application/vnd.shana.informed.interchange",
"ipk": "application/vnd.shana.informed.package",
"twd": "application/vnd.simtech-mindmapper",
"twds": "application/vnd.simtech-mindmapper",
"mmf": "application/vnd.smaf",
"teacher": "application/vnd.smart.teacher",
"sdkm": "application/vnd.solent.sdkm+xml",
"sdkd": "application/vnd.solent.sdkm+xml",
"dxp": "application/vnd.spotfire.dxp",
"sfs": "application/vnd.spotfire.sfs",
"sdc": "application/vnd.stardivision.calc",
"sda": "application/vnd.stardivision.draw",
"sdd": "application/vnd.stardivision.impress",
"smf": "application/vnd.stardivision.math",
"sdw": "application/vnd.stardivision.writer",
"vor": "application/vnd.stardivision.writer",
"sgl": "application/vnd.stardivision.writer-global",
"smzip": "application/vnd.stepmania.package",
"sm": "application/vnd.stepmania.stepchart",
"sxc": "application/vnd.sun.xml.calc",
"stc": "application/vnd.sun.xml.calc.template",
"sxd": "application/vnd.sun.xml.draw",
"std": "application/vnd.sun.xml.draw.template",
"sxi": "application/vnd.sun.xml.impress",
"sti": "application/vnd.sun.xml.impress.template",
"sxm": "application/vnd.sun.xml.math",
"sxw": "application/vnd.sun.xml.writer",
"sxg": "application/vnd.sun.xml.writer.global",
"stw": "application/vnd.sun.xml.writer.template",
"sus": "application/vnd.sus-calendar",
"susp": "application/vnd.sus-calendar",
"svd": "application/vnd.svd",
"sis": "application/vnd.symbian.install",
"sisx": "application/vnd.symbian.install",
"xsm": "application/vnd.syncml+xml",
"bdm": "application/vnd.syncml.dm+wbxml",
"xdm": "application/vnd.syncml.dm+xml",
"tao": "application/vnd.tao.intent-module-archive",
"pcap": "application/vnd.tcpdump.pcap",
"cap": "application/vnd.tcpdump.pcap",
"dmp": "application/vnd.tcpdump.pcap",
"tmo": "application/vnd.tmobile-livetv",
"tpt": "application/vnd.trid.tpt",
"mxs": "application/vnd.triscape.mxs",
"tra": "application/vnd.trueapp",
"ufd": "application/vnd.ufdl",
"ufdl": "application/vnd.ufdl",
"utz": "application/vnd.uiq.theme",
"umj": "application/vnd.umajin",
"unityweb": "application/vnd.unity",
"uoml": "application/vnd.uoml+xml",
"vcx": "application/vnd.vcx",
"vsd": "application/vnd.visio",
"vst": "application/vnd.visio",
"vss": "application/vnd.visio",
"vsw": "application/vnd.visio",
"vis": "application/vnd.visionary",
"vsf": "application/vnd.vsf",
"wbxml": "application/vnd.wap.wbxml",
"wmlc": "application/vnd.wap.wmlc",
"wmlsc": "application/vnd.wap.wmlscriptc",
"wtb": "application/vnd.webturbo",
"nbp": "application/vnd.wolfram.player",
"wpd": "application/vnd.wordperfect",
"wqd": "application/vnd.wqd",
"stf": "application/vnd.wt.stf",
"xar": "application/vnd.xara",
"xfdl": "application/vnd.xfdl",
"hvd": "application/vnd.yamaha.hv-dic",
"hvs": "application/vnd.yamaha.hv-script",
"hvp": "application/vnd.yamaha.hv-voice",
"osf": "application/vnd.yamaha.openscoreformat",
"osfpvg": "application/vnd.yamaha.openscoreformat.osfpvg+xml",
"saf": "application/vnd.yamaha.smaf-audio",
"spf": "application/vnd.yamaha.smaf-phrase",
"cmp": "application/vnd.yellowriver-custom-menu",
"zir": "application/vnd.zul",
"zirz": "application/vnd.zul",
"zaz": "application/vnd.zzazz.deck+xml",
"vxml": "application/voicexml+xml",
"wgt": "application/widget",
"hlp": "application/winhlp",
"wsdl": "application/wsdl+xml",
"wspolicy": "application/wspolicy+xml",
"7z": "application/x-7z-compressed",
"abw": "application/x-abiword",
"ace": "application/x-ace-compressed",
"dmg": "application/x-apple-diskimage",
"aab": "application/x-authorware-bin",
"x32": "application/x-authorware-bin",
"u32": "application/x-authorware-bin",
"vox": "application/x-authorware-bin",
"aam": "application/x-authorware-map",
"aas": "application/x-authorware-seg",
"bcpio": "application/x-bcpio",
"torrent": "application/x-bittorrent",
"blb": "application/x-blorb",
"blorb": "application/x-blorb",
"bz": "application/x-bzip",
"bz2": "application/x-bzip2",
"boz": "application/x-bzip2",
"cbr": "application/x-cbr",
"cba": "application/x-cbr",
"cbt": "application/x-cbr",
"cbz": "application/x-cbr",
"cb7": "application/x-cbr",
"vcd": "application/x-cdlink",
"cfs": "application/x-cfs-compressed",
"chat": "application/x-chat",
"pgn": "application/x-chess-pgn",
"nsc": "application/x-conference",
"cpio": "application/x-cpio",
"csh": "application/x-csh",
"deb": "application/x-debian-package",
"udeb": "application/x-debian-package",
"dgc": "application/x-dgc-compressed",
"dir": "application/x-director",
"dcr": "application/x-director",
"dxr": "application/x-director",
"cst": "application/x-director",
"cct": "application/x-director",
"cxt": "application/x-director",
"w3d": "application/x-director",
"fgd": "application/x-director",
"swa": "application/x-director",
"wad": "application/x-doom",
"ncx": "application/x-dtbncx+xml",
"dtb": "application/x-dtbook+xml",
"res": "application/x-dtbresource+xml",
"dvi": "application/x-dvi",
"evy": "application/x-envoy",
"eva": "application/x-eva",
"bdf": "application/x-font-bdf",
"gsf": "application/x-font-ghostscript",
"psf": "application/x-font-linux-psf",
"otf": "application/x-font-otf",
"pcf": "application/x-font-pcf",
"snf": "application/x-font-snf",
"ttf": "application/x-font-ttf",
"ttc": "application/x-font-ttf",
"pfa": "application/x-font-type1",
"pfb": "application/x-font-type1",
"pfm": "application/x-font-type1",
"afm": "application/x-font-type1",
"woff": "application/x-font-woff",
"arc": "application/x-freearc",
"spl": "application/x-futuresplash",
"gca": "application/x-gca-compressed",
"ulx": "application/x-glulx",
"gnumeric": "application/x-gnumeric",
"gramps": "application/x-gramps-xml",
"gtar": "application/x-gtar",
"hdf": "application/x-hdf",
"install": "application/x-install-instructions",
"iso": "application/x-iso9660-image",
"jnlp": "application/x-java-jnlp-file",
"latex": "application/x-latex",
"lzh": "application/x-lzh-compressed",
"lha": "application/x-lzh-compressed",
"mie": "application/x-mie",
"prc": "application/x-mobipocket-ebook",
"mobi": "application/x-mobipocket-ebook",
"application": "application/x-ms-application",
"lnk": "application/x-ms-shortcut",
"wmd": "application/x-ms-wmd",
"wmz": "application/x-msmetafile",
"xbap": "application/x-ms-xbap",
"mdb": "application/x-msaccess",
"obd": "application/x-msbinder",
"crd": "application/x-mscardfile",
"clp": "application/x-msclip",
"exe": "application/x-msdownload",
"dll": "application/x-msdownload",
"com": "application/x-msdownload",
"bat": "application/x-msdownload",
"msi": "application/x-msdownload",
"mvb": "application/x-msmediaview",
"m13": "application/x-msmediaview",
"m14": "application/x-msmediaview",
"wmf": "application/x-msmetafile",
"emf": "application/x-msmetafile",
"emz": "application/x-msmetafile",
"mny": "application/x-msmoney",
"pub": "application/x-mspublisher",
"scd": "application/x-msschedule",
"trm": "application/x-msterminal",
"wri": "application/x-mswrite",
"nc": "application/x-netcdf",
"cdf": "application/x-netcdf",
"nzb": "application/x-nzb",
"p12": "application/x-pkcs12",
"pfx": "application/x-pkcs12",
"p7b": "application/x-pkcs7-certificates",
"spc": "application/x-pkcs7-certificates",
"p7r": "application/x-pkcs7-certreqresp",
"rar": "application/x-rar-compressed",
"ris": "application/x-research-info-systems",
"sh": "application/x-sh",
"shar": "application/x-shar",
"swf": "application/x-shockwave-flash",
"xap": "application/x-silverlight-app",
"sql": "application/x-sql",
"sit": "application/x-stuffit",
"sitx": "application/x-stuffitx",
"srt": "application/x-subrip",
"sv4cpio": "application/x-sv4cpio",
"sv4crc": "application/x-sv4crc",
"t3": "application/x-t3vm-image",
"gam": "application/x-tads",
"tar": "application/x-tar",
"tcl": "application/x-tcl",
"tex": "application/x-tex",
"tfm": "application/x-tex-tfm",
"texinfo": "application/x-texinfo",
"texi": "application/x-texinfo",
"obj": "application/x-tgif",
"ustar": "application/x-ustar",
"src": "application/x-wais-source",
"der": "application/x-x509-ca-cert",
"crt": "application/x-x509-ca-cert",
"fig": "application/x-xfig",
"xlf": "application/x-xliff+xml",
"xpi": "application/x-xpinstall",
"xz": "application/x-xz",
"z1": "application/x-zmachine",
"z2": "application/x-zmachine",
"z3": "application/x-zmachine",
"z4": "application/x-zmachine",
"z5": "application/x-zmachine",
"z6": "application/x-zmachine",
"z7": "application/x-zmachine",
"z8": "application/x-zmachine",
"xaml": "application/xaml+xml",
"xdf": "application/xcap-diff+xml",
"xenc": "application/xenc+xml",
"xhtml": "application/xhtml+xml",
"xht": "application/xhtml+xml",
"xml": "application/xml",
"xsl": "application/xml",
"dtd": "application/xml-dtd",
"xop": "application/xop+xml",
"xpl": "application/xproc+xml",
"xslt": "application/xslt+xml",
"xspf": "application/xspf+xml",
"mxml": "application/xv+xml",
"xhvml": "application/xv+xml",
"xvml": "application/xv+xml",
"xvm": "application/xv+xml",
"yang": "application/yang",
"yin": "application/yin+xml",
"zip": "application/zip",
"adp": "audio/adpcm",
"au": "audio/basic",
"snd": "audio/basic",
"mid": "audio/midi",
"midi": "audio/midi",
"kar": "audio/midi",
"rmi": "audio/midi",
"mp4a": "audio/mp4",
"mpga": "audio/mpeg",
"mp2": "audio/mpeg",
"mp2a": "audio/mpeg",
"mp3": "audio/mpeg",
"m2a": "audio/mpeg",
"m3a": "audio/mpeg",
"oga": "audio/ogg",
"ogg": "audio/ogg",
"spx": "audio/ogg",
"s3m": "audio/s3m",
"sil": "audio/silk",
"uva": "audio/vnd.dece.audio",
"uvva": "audio/vnd.dece.audio",
"eol": "audio/vnd.digital-winds",
"dra": "audio/vnd.dra",
"dts": "audio/vnd.dts",
"dtshd": "audio/vnd.dts.hd",
"lvp": "audio/vnd.lucent.voice",
"pya": "audio/vnd.ms-playready.media.pya",
"ecelp4800": "audio/vnd.nuera.ecelp4800",
"ecelp7470": "audio/vnd.nuera.ecelp7470",
"ecelp9600": "audio/vnd.nuera.ecelp9600",
"rip": "audio/vnd.rip",
"weba": "audio/webm",
"aac": "audio/x-aac",
"aif": "audio/x-aiff",
"aiff": "audio/x-aiff",
"aifc": "audio/x-aiff",
"caf": "audio/x-caf",
"flac": "audio/x-flac",
"mka": "audio/x-matroska",
"m3u": "audio/x-mpegurl",
"wax": "audio/x-ms-wax",
"wma": "audio/x-ms-wma",
"ram": "audio/x-pn-realaudio",
"ra": "audio/x-pn-realaudio",
"rmp": "audio/x-pn-realaudio-plugin",
"wav": "audio/x-wav",
"xm": "audio/xm",
"cdx": "chemical/x-cdx",
"cif": "chemical/x-cif",
"cmdf": "chemical/x-cmdf",
"cml": "chemical/x-cml",
"csml": "chemical/x-csml",
"xyz": "chemical/x-xyz",
"bmp": "image/bmp",
"cgm": "image/cgm",
"g3": "image/g3fax",
"gif": "image/gif",
"ief": "image/ief",
"jpeg": "image/jpeg",
"jpg": "image/jpeg",
"jpe": "image/jpeg",
"ktx": "image/ktx",
"png": "image/png",
"btif": "image/prs.btif",
"sgi": "image/sgi",
"svg": "image/svg+xml",
"svgz": "image/svg+xml",
"tiff": "image/tiff",
"tif": "image/tiff",
"psd": "image/vnd.adobe.photoshop",
"uvi": "image/vnd.dece.graphic",
"uvvi": "image/vnd.dece.graphic",
"uvg": "image/vnd.dece.graphic",
"uvvg": "image/vnd.dece.graphic",
"sub": "text/vnd.dvb.subtitle",
"djvu": "image/vnd.djvu",
"djv": "image/vnd.djvu",
"dwg": "image/vnd.dwg",
"dxf": "image/vnd.dxf",
"fbs": "image/vnd.fastbidsheet",
"fpx": "image/vnd.fpx",
"fst": "image/vnd.fst",
"mmr": "image/vnd.fujixerox.edmics-mmr",
"rlc": "image/vnd.fujixerox.edmics-rlc",
"mdi": "image/vnd.ms-modi",
"wdp": "image/vnd.ms-photo",
"npx": "image/vnd.net-fpx",
"wbmp": "image/vnd.wap.wbmp",
"xif": "image/vnd.xiff",
"webp": "image/webp",
"3ds": "image/x-3ds",
"ras": "image/x-cmu-raster",
"cmx": "image/x-cmx",
"fh": "image/x-freehand",
"fhc": "image/x-freehand",
"fh4": "image/x-freehand",
"fh5": "image/x-freehand",
"fh7": "image/x-freehand",
"ico": "image/x-icon",
"sid": "image/x-mrsid-image",
"pcx": "image/x-pcx",
"pic": "image/x-pict",
"pct": "image/x-pict",
"pnm": "image/x-portable-anymap",
"pbm": "image/x-portable-bitmap",
"pgm": "image/x-portable-graymap",
"ppm": "image/x-portable-pixmap",
"rgb": "image/x-rgb",
"tga": "image/x-tga",
"xbm": "image/x-xbitmap",
"xpm": "image/x-xpixmap",
"xwd": "image/x-xwindowdump",
"eml": "message/rfc822",
"mime": "message/rfc822",
"igs": "model/iges",
"iges": "model/iges",
"msh": "model/mesh",
"mesh": "model/mesh",
"silo": "model/mesh",
"dae": "model/vnd.collada+xml",
"dwf": "model/vnd.dwf",
"gdl": "model/vnd.gdl",
"gtw": "model/vnd.gtw",
"mts": "model/vnd.mts",
"vtu": "model/vnd.vtu",
"wrl": "model/vrml",
"vrml": "model/vrml",
"x3db": "model/x3d+binary",
"x3dbz": "model/x3d+binary",
"x3dv": "model/x3d+vrml",
"x3dvz": "model/x3d+vrml",
"x3d": "model/x3d+xml",
"x3dz": "model/x3d+xml",
"appcache": "text/cache-manifest",
"ics": "text/calendar",
"ifb": "text/calendar",
"css": "text/css",
"csv": "text/csv",
"html": "text/html",
"htm": "text/html",
"n3": "text/n3",
"txt": "text/plain",
"text": "text/plain",
"conf": "text/plain",
"def": "text/plain",
"list": "text/plain",
"log": "text/plain",
"in": "text/plain",
"dsc": "text/prs.lines.tag",
"rtx": "text/richtext",
"sgml": "text/sgml",
"sgm": "text/sgml",
"tsv": "text/tab-separated-values",
"t": "text/troff",
"tr": "text/troff",
"roff": "text/troff",
"man": "text/troff",
"me": "text/troff",
"ms": "text/troff",
"ttl": "text/turtle",
"uri": "text/uri-list",
"uris": "text/uri-list",
"urls": "text/uri-list",
"vcard": "text/vcard",
"curl": "text/vnd.curl",
"dcurl": "text/vnd.curl.dcurl",
"scurl": "text/vnd.curl.scurl",
"mcurl": "text/vnd.curl.mcurl",
"fly": "text/vnd.fly",
"flx": "text/vnd.fmi.flexstor",
"gv": "text/vnd.graphviz",
"3dml": "text/vnd.in3d.3dml",
"spot": "text/vnd.in3d.spot",
"jad": "text/vnd.sun.j2me.app-descriptor",
"wml": "text/vnd.wap.wml",
"wmls": "text/vnd.wap.wmlscript",
"s": "text/x-asm",
"asm": "text/x-asm",
"c": "text/x-c",
"cc": "text/x-c",
"cxx": "text/x-c",
"cpp": "text/x-c",
"h": "text/x-c",
"hh": "text/x-c",
"dic": "text/x-c",
"f": "text/x-fortran",
"for": "text/x-fortran",
"f77": "text/x-fortran",
"f90": "text/x-fortran",
"java": "text/x-java-source",
"opml": "text/x-opml",
"p": "text/x-pascal",
"pas": "text/x-pascal",
"nfo": "text/x-nfo",
"etx": "text/x-setext",
"sfv": "text/x-sfv",
"uu": "text/x-uuencode",
"vcs": "text/x-vcalendar",
"vcf": "text/x-vcard",
"3gp": "video/3gpp",
"3g2": "video/3gpp2",
"h261": "video/h261",
"h263": "video/h263",
"h264": "video/h264",
"jpgv": "video/jpeg",
"jpm": "video/jpm",
"jpgm": "video/jpm",
"mj2": "video/mj2",
"mjp2": "video/mj2",
"mp4": "video/mp4",
"mp4v": "video/mp4",
"mpg4": "video/mp4",
"mpeg": "video/mpeg",
"mpg": "video/mpeg",
"mpe": "video/mpeg",
"m1v": "video/mpeg",
"m2v": "video/mpeg",
"ogv": "video/ogg",
"qt": "video/quicktime",
"mov": "video/quicktime",
"uvh": "video/vnd.dece.hd",
"uvvh": "video/vnd.dece.hd",
"uvm": "video/vnd.dece.mobile",
"uvvm": "video/vnd.dece.mobile",
"uvp": "video/vnd.dece.pd",
"uvvp": "video/vnd.dece.pd",
"uvs": "video/vnd.dece.sd",
"uvvs": "video/vnd.dece.sd",
"uvv": "video/vnd.dece.video",
"uvvv": "video/vnd.dece.video",
"dvb": "video/vnd.dvb.file",
"fvt": "video/vnd.fvt",
"mxu": "video/vnd.mpegurl",
"m4u": "video/vnd.mpegurl",
"pyv": "video/vnd.ms-playready.media.pyv",
"uvu": "video/vnd.uvvu.mp4",
"uvvu": "video/vnd.uvvu.mp4",
"viv": "video/vnd.vivo",
"webm": "video/webm",
"f4v": "video/x-f4v",
"fli": "video/x-fli",
"flv": "video/x-flv",
"m4v": "video/x-m4v",
"mkv": "video/x-matroska",
"mk3d": "video/x-matroska",
"mks": "video/x-matroska",
"mng": "video/x-mng",
"asf": "video/x-ms-asf",
"asx": "video/x-ms-asf",
"vob": "video/x-ms-vob",
"wm": "video/x-ms-wm",
"wmv": "video/x-ms-wmv",
"wmx": "video/x-ms-wmx",
"wvx": "video/x-ms-wvx",
"avi": "video/x-msvideo",
"movie": "video/x-sgi-movie",
"smv": "video/x-smv",
"ice": "x-conference/x-cooltalk",
"vtt": "text/vtt",
"crx": "application/x-chrome-extension",
"htc": "text/x-component",
"manifest": "text/cache-manifest",
"buffer": "application/octet-stream",
"m4p": "application/mp4",
"m4a": "audio/mp4",
"ts": "video/MP2T",
"event-stream": "text/event-stream",
"webapp": "application/x-web-app-manifest+json",
"lua": "text/x-lua",
"luac": "application/x-lua-bytecode",
"markdown": "text/x-markdown",
"md": "text/x-markdown",
"mkd": "text/x-markdown"
}
, extensions: {
"application/andrew-inset": "ez",
"application/applixware": "aw",
"application/atom+xml": "atom",
"application/atomcat+xml": "atomcat",
"application/atomsvc+xml": "atomsvc",
"application/ccxml+xml": "ccxml",
"application/cdmi-capability": "cdmia",
"application/cdmi-container": "cdmic",
"application/cdmi-domain": "cdmid",
"application/cdmi-object": "cdmio",
"application/cdmi-queue": "cdmiq",
"application/cu-seeme": "cu",
"application/davmount+xml": "davmount",
"application/docbook+xml": "dbk",
"application/dssc+der": "dssc",
"application/dssc+xml": "xdssc",
"application/ecmascript": "ecma",
"application/emma+xml": "emma",
"application/epub+zip": "epub",
"application/exi": "exi",
"application/font-tdpfr": "pfr",
"application/gml+xml": "gml",
"application/gpx+xml": "gpx",
"application/gxf": "gxf",
"application/hyperstudio": "stk",
"application/inkml+xml": "ink",
"application/ipfix": "ipfix",
"application/java-archive": "jar",
"application/java-serialized-object": "ser",
"application/java-vm": "class",
"application/javascript": "js",
"application/json": "json",
"application/jsonml+json": "jsonml",
"application/lost+xml": "lostxml",
"application/mac-binhex40": "hqx",
"application/mac-compactpro": "cpt",
"application/mads+xml": "mads",
"application/marc": "mrc",
"application/marcxml+xml": "mrcx",
"application/mathematica": "ma",
"application/mathml+xml": "mathml",
"application/mbox": "mbox",
"application/mediaservercontrol+xml": "mscml",
"application/metalink+xml": "metalink",
"application/metalink4+xml": "meta4",
"application/mets+xml": "mets",
"application/mods+xml": "mods",
"application/mp21": "m21",
"application/mp4": "mp4s",
"application/msword": "doc",
"application/mxf": "mxf",
"application/octet-stream": "bin",
"application/oda": "oda",
"application/oebps-package+xml": "opf",
"application/ogg": "ogx",
"application/omdoc+xml": "omdoc",
"application/onenote": "onetoc",
"application/oxps": "oxps",
"application/patch-ops-error+xml": "xer",
"application/pdf": "pdf",
"application/pgp-encrypted": "pgp",
"application/pgp-signature": "asc",
"application/pics-rules": "prf",
"application/pkcs10": "p10",
"application/pkcs7-mime": "p7m",
"application/pkcs7-signature": "p7s",
"application/pkcs8": "p8",
"application/pkix-attr-cert": "ac",
"application/pkix-cert": "cer",
"application/pkix-crl": "crl",
"application/pkix-pkipath": "pkipath",
"application/pkixcmp": "pki",
"application/pls+xml": "pls",
"application/postscript": "ai",
"application/prs.cww": "cww",
"application/pskc+xml": "pskcxml",
"application/rdf+xml": "rdf",
"application/reginfo+xml": "rif",
"application/relax-ng-compact-syntax": "rnc",
"application/resource-lists+xml": "rl",
"application/resource-lists-diff+xml": "rld",
"application/rls-services+xml": "rs",
"application/rpki-ghostbusters": "gbr",
"application/rpki-manifest": "mft",
"application/rpki-roa": "roa",
"application/rsd+xml": "rsd",
"application/rss+xml": "rss",
"application/rtf": "rtf",
"application/sbml+xml": "sbml",
"application/scvp-cv-request": "scq",
"application/scvp-cv-response": "scs",
"application/scvp-vp-request": "spq",
"application/scvp-vp-response": "spp",
"application/sdp": "sdp",
"application/set-payment-initiation": "setpay",
"application/set-registration-initiation": "setreg",
"application/shf+xml": "shf",
"application/smil+xml": "smi",
"application/sparql-query": "rq",
"application/sparql-results+xml": "srx",
"application/srgs": "gram",
"application/srgs+xml": "grxml",
"application/sru+xml": "sru",
"application/ssdl+xml": "ssdl",
"application/ssml+xml": "ssml",
"application/tei+xml": "tei",
"application/thraud+xml": "tfi",
"application/timestamped-data": "tsd",
"application/vnd.3gpp.pic-bw-large": "plb",
"application/vnd.3gpp.pic-bw-small": "psb",
"application/vnd.3gpp.pic-bw-var": "pvb",
"application/vnd.3gpp2.tcap": "tcap",
"application/vnd.3m.post-it-notes": "pwn",
"application/vnd.accpac.simply.aso": "aso",
"application/vnd.accpac.simply.imp": "imp",
"application/vnd.acucobol": "acu",
"application/vnd.acucorp": "atc",
"application/vnd.adobe.air-application-installer-package+zip": "air",
"application/vnd.adobe.formscentral.fcdt": "fcdt",
"application/vnd.adobe.fxp": "fxp",
"application/vnd.adobe.xdp+xml": "xdp",
"application/vnd.adobe.xfdf": "xfdf",
"application/vnd.ahead.space": "ahead",
"application/vnd.airzip.filesecure.azf": "azf",
"application/vnd.airzip.filesecure.azs": "azs",
"application/vnd.amazon.ebook": "azw",
"application/vnd.americandynamics.acc": "acc",
"application/vnd.amiga.ami": "ami",
"application/vnd.android.package-archive": "apk",
"application/vnd.anser-web-certificate-issue-initiation": "cii",
"application/vnd.anser-web-funds-transfer-initiation": "fti",
"application/vnd.antix.game-component": "atx",
"application/vnd.apple.installer+xml": "mpkg",
"application/vnd.apple.mpegurl": "m3u8",
"application/vnd.aristanetworks.swi": "swi",
"application/vnd.astraea-software.iota": "iota",
"application/vnd.audiograph": "aep",
"application/vnd.blueice.multipass": "mpm",
"application/vnd.bmi": "bmi",
"application/vnd.businessobjects": "rep",
"application/vnd.chemdraw+xml": "cdxml",
"application/vnd.chipnuts.karaoke-mmd": "mmd",
"application/vnd.cinderella": "cdy",
"application/vnd.claymore": "cla",
"application/vnd.cloanto.rp9": "rp9",
"application/vnd.clonk.c4group": "c4g",
"application/vnd.cluetrust.cartomobile-config": "c11amc",
"application/vnd.cluetrust.cartomobile-config-pkg": "c11amz",
"application/vnd.commonspace": "csp",
"application/vnd.contact.cmsg": "cdbcmsg",
"application/vnd.cosmocaller": "cmc",
"application/vnd.crick.clicker": "clkx",
"application/vnd.crick.clicker.keyboard": "clkk",
"application/vnd.crick.clicker.palette": "clkp",
"application/vnd.crick.clicker.template": "clkt",
"application/vnd.crick.clicker.wordbank": "clkw",
"application/vnd.criticaltools.wbs+xml": "wbs",
"application/vnd.ctc-posml": "pml",
"application/vnd.cups-ppd": "ppd",
"application/vnd.curl.car": "car",
"application/vnd.curl.pcurl": "pcurl",
"application/vnd.dart": "dart",
"application/vnd.data-vision.rdz": "rdz",
"application/vnd.dece.data": "uvf",
"application/vnd.dece.ttml+xml": "uvt",
"application/vnd.dece.unspecified": "uvx",
"application/vnd.dece.zip": "uvz",
"application/vnd.denovo.fcselayout-link": "fe_launch",
"application/vnd.dna": "dna",
"application/vnd.dolby.mlp": "mlp",
"application/vnd.dpgraph": "dpg",
"application/vnd.dreamfactory": "dfac",
"application/vnd.ds-keypoint": "kpxx",
"application/vnd.dvb.ait": "ait",
"application/vnd.dvb.service": "svc",
"application/vnd.dynageo": "geo",
"application/vnd.ecowin.chart": "mag",
"application/vnd.enliven": "nml",
"application/vnd.epson.esf": "esf",
"application/vnd.epson.msf": "msf",
"application/vnd.epson.quickanime": "qam",
"application/vnd.epson.salt": "slt",
"application/vnd.epson.ssf": "ssf",
"application/vnd.eszigno3+xml": "es3",
"application/vnd.ezpix-album": "ez2",
"application/vnd.ezpix-package": "ez3",
"application/vnd.fdf": "fdf",
"application/vnd.fdsn.mseed": "mseed",
"application/vnd.fdsn.seed": "seed",
"application/vnd.flographit": "gph",
"application/vnd.fluxtime.clip": "ftc",
"application/vnd.framemaker": "fm",
"application/vnd.frogans.fnc": "fnc",
"application/vnd.frogans.ltf": "ltf",
"application/vnd.fsc.weblaunch": "fsc",
"application/vnd.fujitsu.oasys": "oas",
"application/vnd.fujitsu.oasys2": "oa2",
"application/vnd.fujitsu.oasys3": "oa3",
"application/vnd.fujitsu.oasysgp": "fg5",
"application/vnd.fujitsu.oasysprs": "bh2",
"application/vnd.fujixerox.ddd": "ddd",
"application/vnd.fujixerox.docuworks": "xdw",
"application/vnd.fujixerox.docuworks.binder": "xbd",
"application/vnd.fuzzysheet": "fzs",
"application/vnd.genomatix.tuxedo": "txd",
"application/vnd.geogebra.file": "ggb",
"application/vnd.geogebra.tool": "ggt",
"application/vnd.geometry-explorer": "gex",
"application/vnd.geonext": "gxt",
"application/vnd.geoplan": "g2w",
"application/vnd.geospace": "g3w",
"application/vnd.gmx": "gmx",
"application/vnd.google-earth.kml+xml": "kml",
"application/vnd.google-earth.kmz": "kmz",
"application/vnd.grafeq": "gqf",
"application/vnd.groove-account": "gac",
"application/vnd.groove-help": "ghf",
"application/vnd.groove-identity-message": "gim",
"application/vnd.groove-injector": "grv",
"application/vnd.groove-tool-message": "gtm",
"application/vnd.groove-tool-template": "tpl",
"application/vnd.groove-vcard": "vcg",
"application/vnd.hal+xml": "hal",
"application/vnd.handheld-entertainment+xml": "zmm",
"application/vnd.hbci": "hbci",
"application/vnd.hhe.lesson-player": "les",
"application/vnd.hp-hpgl": "hpgl",
"application/vnd.hp-hpid": "hpid",
"application/vnd.hp-hps": "hps",
"application/vnd.hp-jlyt": "jlt",
"application/vnd.hp-pcl": "pcl",
"application/vnd.hp-pclxl": "pclxl",
"application/vnd.hydrostatix.sof-data": "sfd-hdstx",
"application/vnd.ibm.minipay": "mpy",
"application/vnd.ibm.modcap": "afp",
"application/vnd.ibm.rights-management": "irm",
"application/vnd.ibm.secure-container": "sc",
"application/vnd.iccprofile": "icc",
"application/vnd.igloader": "igl",
"application/vnd.immervision-ivp": "ivp",
"application/vnd.immervision-ivu": "ivu",
"application/vnd.insors.igm": "igm",
"application/vnd.intercon.formnet": "xpw",
"application/vnd.intergeo": "i2g",
"application/vnd.intu.qbo": "qbo",
"application/vnd.intu.qfx": "qfx",
"application/vnd.ipunplugged.rcprofile": "rcprofile",
"application/vnd.irepository.package+xml": "irp",
"application/vnd.is-xpr": "xpr",
"application/vnd.isac.fcs": "fcs",
"application/vnd.jam": "jam",
"application/vnd.jcp.javame.midlet-rms": "rms",
"application/vnd.jisp": "jisp",
"application/vnd.joost.joda-archive": "joda",
"application/vnd.kahootz": "ktz",
"application/vnd.kde.karbon": "karbon",
"application/vnd.kde.kchart": "chrt",
"application/vnd.kde.kformula": "kfo",
"application/vnd.kde.kivio": "flw",
"application/vnd.kde.kontour": "kon",
"application/vnd.kde.kpresenter": "kpr",
"application/vnd.kde.kspread": "ksp",
"application/vnd.kde.kword": "kwd",
"application/vnd.kenameaapp": "htke",
"application/vnd.kidspiration": "kia",
"application/vnd.kinar": "kne",
"application/vnd.koan": "skp",
"application/vnd.kodak-descriptor": "sse",
"application/vnd.las.las+xml": "lasxml",
"application/vnd.llamagraphics.life-balance.desktop": "lbd",
"application/vnd.llamagraphics.life-balance.exchange+xml": "lbe",
"application/vnd.lotus-1-2-3": "123",
"application/vnd.lotus-approach": "apr",
"application/vnd.lotus-freelance": "pre",
"application/vnd.lotus-notes": "nsf",
"application/vnd.lotus-organizer": "org",
"application/vnd.lotus-screencam": "scm",
"application/vnd.lotus-wordpro": "lwp",
"application/vnd.macports.portpkg": "portpkg",
"application/vnd.mcd": "mcd",
"application/vnd.medcalcdata": "mc1",
"application/vnd.mediastation.cdkey": "cdkey",
"application/vnd.mfer": "mwf",
"application/vnd.mfmp": "mfm",
"application/vnd.micrografx.flo": "flo",
"application/vnd.micrografx.igx": "igx",
"application/vnd.mif": "mif",
"application/vnd.mobius.daf": "daf",
"application/vnd.mobius.dis": "dis",
"application/vnd.mobius.mbk": "mbk",
"application/vnd.mobius.mqy": "mqy",
"application/vnd.mobius.msl": "msl",
"application/vnd.mobius.plc": "plc",
"application/vnd.mobius.txf": "txf",
"application/vnd.mophun.application": "mpn",
"application/vnd.mophun.certificate": "mpc",
"application/vnd.mozilla.xul+xml": "xul",
"application/vnd.ms-artgalry": "cil",
"application/vnd.ms-cab-compressed": "cab",
"application/vnd.ms-excel": "xls",
"application/vnd.ms-excel.addin.macroenabled.12": "xlam",
"application/vnd.ms-excel.sheet.binary.macroenabled.12": "xlsb",
"application/vnd.ms-excel.sheet.macroenabled.12": "xlsm",
"application/vnd.ms-excel.template.macroenabled.12": "xltm",
"application/vnd.ms-fontobject": "eot",
"application/vnd.ms-htmlhelp": "chm",
"application/vnd.ms-ims": "ims",
"application/vnd.ms-lrm": "lrm",
"application/vnd.ms-officetheme": "thmx",
"application/vnd.ms-pki.seccat": "cat",
"application/vnd.ms-pki.stl": "stl",
"application/vnd.ms-powerpoint": "ppt",
"application/vnd.ms-powerpoint.addin.macroenabled.12": "ppam",
"application/vnd.ms-powerpoint.presentation.macroenabled.12": "pptm",
"application/vnd.ms-powerpoint.slide.macroenabled.12": "sldm",
"application/vnd.ms-powerpoint.slideshow.macroenabled.12": "ppsm",
"application/vnd.ms-powerpoint.template.macroenabled.12": "potm",
"application/vnd.ms-project": "mpp",
"application/vnd.ms-word.document.macroenabled.12": "docm",
"application/vnd.ms-word.template.macroenabled.12": "dotm",
"application/vnd.ms-works": "wps",
"application/vnd.ms-wpl": "wpl",
"application/vnd.ms-xpsdocument": "xps",
"application/vnd.mseq": "mseq",
"application/vnd.musician": "mus",
"application/vnd.muvee.style": "msty",
"application/vnd.mynfc": "taglet",
"application/vnd.neurolanguage.nlu": "nlu",
"application/vnd.nitf": "ntf",
"application/vnd.noblenet-directory": "nnd",
"application/vnd.noblenet-sealer": "nns",
"application/vnd.noblenet-web": "nnw",
"application/vnd.nokia.n-gage.data": "ngdat",
"application/vnd.nokia.n-gage.symbian.install": "n-gage",
"application/vnd.nokia.radio-preset": "rpst",
"application/vnd.nokia.radio-presets": "rpss",
"application/vnd.novadigm.edm": "edm",
"application/vnd.novadigm.edx": "edx",
"application/vnd.novadigm.ext": "ext",
"application/vnd.oasis.opendocument.chart": "odc",
"application/vnd.oasis.opendocument.chart-template": "otc",
"application/vnd.oasis.opendocument.database": "odb",
"application/vnd.oasis.opendocument.formula": "odf",
"application/vnd.oasis.opendocument.formula-template": "odft",
"application/vnd.oasis.opendocument.graphics": "odg",
"application/vnd.oasis.opendocument.graphics-template": "otg",
"application/vnd.oasis.opendocument.image": "odi",
"application/vnd.oasis.opendocument.image-template": "oti",
"application/vnd.oasis.opendocument.presentation": "odp",
"application/vnd.oasis.opendocument.presentation-template": "otp",
"application/vnd.oasis.opendocument.spreadsheet": "ods",
"application/vnd.oasis.opendocument.spreadsheet-template": "ots",
"application/vnd.oasis.opendocument.text": "odt",
"application/vnd.oasis.opendocument.text-master": "odm",
"application/vnd.oasis.opendocument.text-template": "ott",
"application/vnd.oasis.opendocument.text-web": "oth",
"application/vnd.olpc-sugar": "xo",
"application/vnd.oma.dd2+xml": "dd2",
"application/vnd.openofficeorg.extension": "oxt",
"application/vnd.openxmlformats-officedocument.presentationml.presentation": "pptx",
"application/vnd.openxmlformats-officedocument.presentationml.slide": "sldx",
"application/vnd.openxmlformats-officedocument.presentationml.slideshow": "ppsx",
"application/vnd.openxmlformats-officedocument.presentationml.template": "potx",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": "xlsx",
"application/vnd.openxmlformats-officedocument.spreadsheetml.template": "xltx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": "docx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.template": "dotx",
"application/vnd.osgeo.mapguide.package": "mgp",
"application/vnd.osgi.dp": "dp",
"application/vnd.osgi.subsystem": "esa",
"application/vnd.palm": "pdb",
"application/vnd.pawaafile": "paw",
"application/vnd.pg.format": "str",
"application/vnd.pg.osasli": "ei6",
"application/vnd.picsel": "efif",
"application/vnd.pmi.widget": "wg",
"application/vnd.pocketlearn": "plf",
"application/vnd.powerbuilder6": "pbd",
"application/vnd.previewsystems.box": "box",
"application/vnd.proteus.magazine": "mgz",
"application/vnd.publishare-delta-tree": "qps",
"application/vnd.pvi.ptid1": "ptid",
"application/vnd.quark.quarkxpress": "qxd",
"application/vnd.realvnc.bed": "bed",
"application/vnd.recordare.musicxml": "mxl",
"application/vnd.recordare.musicxml+xml": "musicxml",
"application/vnd.rig.cryptonote": "cryptonote",
"application/vnd.rim.cod": "cod",
"application/vnd.rn-realmedia": "rm",
"application/vnd.rn-realmedia-vbr": "rmvb",
"application/vnd.route66.link66+xml": "link66",
"application/vnd.sailingtracker.track": "st",
"application/vnd.seemail": "see",
"application/vnd.sema": "sema",
"application/vnd.semd": "semd",
"application/vnd.semf": "semf",
"application/vnd.shana.informed.formdata": "ifm",
"application/vnd.shana.informed.formtemplate": "itp",
"application/vnd.shana.informed.interchange": "iif",
"application/vnd.shana.informed.package": "ipk",
"application/vnd.simtech-mindmapper": "twd",
"application/vnd.smaf": "mmf",
"application/vnd.smart.teacher": "teacher",
"application/vnd.solent.sdkm+xml": "sdkm",
"application/vnd.spotfire.dxp": "dxp",
"application/vnd.spotfire.sfs": "sfs",
"application/vnd.stardivision.calc": "sdc",
"application/vnd.stardivision.draw": "sda",
"application/vnd.stardivision.impress": "sdd",
"application/vnd.stardivision.math": "smf",
"application/vnd.stardivision.writer": "sdw",
"application/vnd.stardivision.writer-global": "sgl",
"application/vnd.stepmania.package": "smzip",
"application/vnd.stepmania.stepchart": "sm",
"application/vnd.sun.xml.calc": "sxc",
"application/vnd.sun.xml.calc.template": "stc",
"application/vnd.sun.xml.draw": "sxd",
"application/vnd.sun.xml.draw.template": "std",
"application/vnd.sun.xml.impress": "sxi",
"application/vnd.sun.xml.impress.template": "sti",
"application/vnd.sun.xml.math": "sxm",
"application/vnd.sun.xml.writer": "sxw",
"application/vnd.sun.xml.writer.global": "sxg",
"application/vnd.sun.xml.writer.template": "stw",
"application/vnd.sus-calendar": "sus",
"application/vnd.svd": "svd",
"application/vnd.symbian.install": "sis",
"application/vnd.syncml+xml": "xsm",
"application/vnd.syncml.dm+wbxml": "bdm",
"application/vnd.syncml.dm+xml": "xdm",
"application/vnd.tao.intent-module-archive": "tao",
"application/vnd.tcpdump.pcap": "pcap",
"application/vnd.tmobile-livetv": "tmo",
"application/vnd.trid.tpt": "tpt",
"application/vnd.triscape.mxs": "mxs",
"application/vnd.trueapp": "tra",
"application/vnd.ufdl": "ufd",
"application/vnd.uiq.theme": "utz",
"application/vnd.umajin": "umj",
"application/vnd.unity": "unityweb",
"application/vnd.uoml+xml": "uoml",
"application/vnd.vcx": "vcx",
"application/vnd.visio": "vsd",
"application/vnd.visionary": "vis",
"application/vnd.vsf": "vsf",
"application/vnd.wap.wbxml": "wbxml",
"application/vnd.wap.wmlc": "wmlc",
"application/vnd.wap.wmlscriptc": "wmlsc",
"application/vnd.webturbo": "wtb",
"application/vnd.wolfram.player": "nbp",
"application/vnd.wordperfect": "wpd",
"application/vnd.wqd": "wqd",
"application/vnd.wt.stf": "stf",
"application/vnd.xara": "xar",
"application/vnd.xfdl": "xfdl",
"application/vnd.yamaha.hv-dic": "hvd",
"application/vnd.yamaha.hv-script": "hvs",
"application/vnd.yamaha.hv-voice": "hvp",
"application/vnd.yamaha.openscoreformat": "osf",
"application/vnd.yamaha.openscoreformat.osfpvg+xml": "osfpvg",
"application/vnd.yamaha.smaf-audio": "saf",
"application/vnd.yamaha.smaf-phrase": "spf",
"application/vnd.yellowriver-custom-menu": "cmp",
"application/vnd.zul": "zir",
"application/vnd.zzazz.deck+xml": "zaz",
"application/voicexml+xml": "vxml",
"application/widget": "wgt",
"application/winhlp": "hlp",
"application/wsdl+xml": "wsdl",
"application/wspolicy+xml": "wspolicy",
"application/x-7z-compressed": "7z",
"application/x-abiword": "abw",
"application/x-ace-compressed": "ace",
"application/x-apple-diskimage": "dmg",
"application/x-authorware-bin": "aab",
"application/x-authorware-map": "aam",
"application/x-authorware-seg": "aas",
"application/x-bcpio": "bcpio",
"application/x-bittorrent": "torrent",
"application/x-blorb": "blb",
"application/x-bzip": "bz",
"application/x-bzip2": "bz2",
"application/x-cbr": "cbr",
"application/x-cdlink": "vcd",
"application/x-cfs-compressed": "cfs",
"application/x-chat": "chat",
"application/x-chess-pgn": "pgn",
"application/x-conference": "nsc",
"application/x-cpio": "cpio",
"application/x-csh": "csh",
"application/x-debian-package": "deb",
"application/x-dgc-compressed": "dgc",
"application/x-director": "dir",
"application/x-doom": "wad",
"application/x-dtbncx+xml": "ncx",
"application/x-dtbook+xml": "dtb",
"application/x-dtbresource+xml": "res",
"application/x-dvi": "dvi",
"application/x-envoy": "evy",
"application/x-eva": "eva",
"application/x-font-bdf": "bdf",
"application/x-font-ghostscript": "gsf",
"application/x-font-linux-psf": "psf",
"application/x-font-otf": "otf",
"application/x-font-pcf": "pcf",
"application/x-font-snf": "snf",
"application/x-font-ttf": "ttf",
"application/x-font-type1": "pfa",
"application/x-font-woff": "woff",
"application/x-freearc": "arc",
"application/x-futuresplash": "spl",
"application/x-gca-compressed": "gca",
"application/x-glulx": "ulx",
"application/x-gnumeric": "gnumeric",
"application/x-gramps-xml": "gramps",
"application/x-gtar": "gtar",
"application/x-hdf": "hdf",
"application/x-install-instructions": "install",
"application/x-iso9660-image": "iso",
"application/x-java-jnlp-file": "jnlp",
"application/x-latex": "latex",
"application/x-lzh-compressed": "lzh",
"application/x-mie": "mie",
"application/x-mobipocket-ebook": "prc",
"application/x-ms-application": "application",
"application/x-ms-shortcut": "lnk",
"application/x-ms-wmd": "wmd",
"application/x-ms-wmz": "wmz",
"application/x-ms-xbap": "xbap",
"application/x-msaccess": "mdb",
"application/x-msbinder": "obd",
"application/x-mscardfile": "crd",
"application/x-msclip": "clp",
"application/x-msdownload": "exe",
"application/x-msmediaview": "mvb",
"application/x-msmetafile": "wmf",
"application/x-msmoney": "mny",
"application/x-mspublisher": "pub",
"application/x-msschedule": "scd",
"application/x-msterminal": "trm",
"application/x-mswrite": "wri",
"application/x-netcdf": "nc",
"application/x-nzb": "nzb",
"application/x-pkcs12": "p12",
"application/x-pkcs7-certificates": "p7b",
"application/x-pkcs7-certreqresp": "p7r",
"application/x-rar-compressed": "rar",
"application/x-research-info-systems": "ris",
"application/x-sh": "sh",
"application/x-shar": "shar",
"application/x-shockwave-flash": "swf",
"application/x-silverlight-app": "xap",
"application/x-sql": "sql",
"application/x-stuffit": "sit",
"application/x-stuffitx": "sitx",
"application/x-subrip": "srt",
"application/x-sv4cpio": "sv4cpio",
"application/x-sv4crc": "sv4crc",
"application/x-t3vm-image": "t3",
"application/x-tads": "gam",
"application/x-tar": "tar",
"application/x-tcl": "tcl",
"application/x-tex": "tex",
"application/x-tex-tfm": "tfm",
"application/x-texinfo": "texinfo",
"application/x-tgif": "obj",
"application/x-ustar": "ustar",
"application/x-wais-source": "src",
"application/x-x509-ca-cert": "der",
"application/x-xfig": "fig",
"application/x-xliff+xml": "xlf",
"application/x-xpinstall": "xpi",
"application/x-xz": "xz",
"application/x-zmachine": "z1",
"application/xaml+xml": "xaml",
"application/xcap-diff+xml": "xdf",
"application/xenc+xml": "xenc",
"application/xhtml+xml": "xhtml",
"application/xml": "xml",
"application/xml-dtd": "dtd",
"application/xop+xml": "xop",
"application/xproc+xml": "xpl",
"application/xslt+xml": "xslt",
"application/xspf+xml": "xspf",
"application/xv+xml": "mxml",
"application/yang": "yang",
"application/yin+xml": "yin",
"application/zip": "zip",
"audio/adpcm": "adp",
"audio/basic": "au",
"audio/midi": "mid",
"audio/mp4": "mp4a",
"audio/mpeg": "mpga",
"audio/ogg": "oga",
"audio/s3m": "s3m",
"audio/silk": "sil",
"audio/vnd.dece.audio": "uva",
"audio/vnd.digital-winds": "eol",
"audio/vnd.dra": "dra",
"audio/vnd.dts": "dts",
"audio/vnd.dts.hd": "dtshd",
"audio/vnd.lucent.voice": "lvp",
"audio/vnd.ms-playready.media.pya": "pya",
"audio/vnd.nuera.ecelp4800": "ecelp4800",
"audio/vnd.nuera.ecelp7470": "ecelp7470",
"audio/vnd.nuera.ecelp9600": "ecelp9600",
"audio/vnd.rip": "rip",
"audio/webm": "weba",
"audio/x-aac": "aac",
"audio/x-aiff": "aif",
"audio/x-caf": "caf",
"audio/x-flac": "flac",
"audio/x-matroska": "mka",
"audio/x-mpegurl": "m3u",
"audio/x-ms-wax": "wax",
"audio/x-ms-wma": "wma",
"audio/x-pn-realaudio": "ram",
"audio/x-pn-realaudio-plugin": "rmp",
"audio/x-wav": "wav",
"audio/xm": "xm",
"chemical/x-cdx": "cdx",
"chemical/x-cif": "cif",
"chemical/x-cmdf": "cmdf",
"chemical/x-cml": "cml",
"chemical/x-csml": "csml",
"chemical/x-xyz": "xyz",
"image/bmp": "bmp",
"image/cgm": "cgm",
"image/g3fax": "g3",
"image/gif": "gif",
"image/ief": "ief",
"image/jpeg": "jpeg",
"image/ktx": "ktx",
"image/png": "png",
"image/prs.btif": "btif",
"image/sgi": "sgi",
"image/svg+xml": "svg",
"image/tiff": "tiff",
"image/vnd.adobe.photoshop": "psd",
"image/vnd.dece.graphic": "uvi",
"image/vnd.dvb.subtitle": "sub",
"image/vnd.djvu": "djvu",
"image/vnd.dwg": "dwg",
"image/vnd.dxf": "dxf",
"image/vnd.fastbidsheet": "fbs",
"image/vnd.fpx": "fpx",
"image/vnd.fst": "fst",
"image/vnd.fujixerox.edmics-mmr": "mmr",
"image/vnd.fujixerox.edmics-rlc": "rlc",
"image/vnd.ms-modi": "mdi",
"image/vnd.ms-photo": "wdp",
"image/vnd.net-fpx": "npx",
"image/vnd.wap.wbmp": "wbmp",
"image/vnd.xiff": "xif",
"image/webp": "webp",
"image/x-3ds": "3ds",
"image/x-cmu-raster": "ras",
"image/x-cmx": "cmx",
"image/x-freehand": "fh",
"image/x-icon": "ico",
"image/x-mrsid-image": "sid",
"image/x-pcx": "pcx",
"image/x-pict": "pic",
"image/x-portable-anymap": "pnm",
"image/x-portable-bitmap": "pbm",
"image/x-portable-graymap": "pgm",
"image/x-portable-pixmap": "ppm",
"image/x-rgb": "rgb",
"image/x-tga": "tga",
"image/x-xbitmap": "xbm",
"image/x-xpixmap": "xpm",
"image/x-xwindowdump": "xwd",
"message/rfc822": "eml",
"model/iges": "igs",
"model/mesh": "msh",
"model/vnd.collada+xml": "dae",
"model/vnd.dwf": "dwf",
"model/vnd.gdl": "gdl",
"model/vnd.gtw": "gtw",
"model/vnd.mts": "mts",
"model/vnd.vtu": "vtu",
"model/vrml": "wrl",
"model/x3d+binary": "x3db",
"model/x3d+vrml": "x3dv",
"model/x3d+xml": "x3d",
"text/cache-manifest": "appcache",
"text/calendar": "ics",
"text/css": "css",
"text/csv": "csv",
"text/html": "html",
"text/n3": "n3",
"text/plain": "txt",
"text/prs.lines.tag": "dsc",
"text/richtext": "rtx",
"text/sgml": "sgml",
"text/tab-separated-values": "tsv",
"text/troff": "t",
"text/turtle": "ttl",
"text/uri-list": "uri",
"text/vcard": "vcard",
"text/vnd.curl": "curl",
"text/vnd.curl.dcurl": "dcurl",
"text/vnd.curl.scurl": "scurl",
"text/vnd.curl.mcurl": "mcurl",
"text/vnd.dvb.subtitle": "sub",
"text/vnd.fly": "fly",
"text/vnd.fmi.flexstor": "flx",
"text/vnd.graphviz": "gv",
"text/vnd.in3d.3dml": "3dml",
"text/vnd.in3d.spot": "spot",
"text/vnd.sun.j2me.app-descriptor": "jad",
"text/vnd.wap.wml": "wml",
"text/vnd.wap.wmlscript": "wmls",
"text/x-asm": "s",
"text/x-c": "c",
"text/x-fortran": "f",
"text/x-java-source": "java",
"text/x-opml": "opml",
"text/x-pascal": "p",
"text/x-nfo": "nfo",
"text/x-setext": "etx",
"text/x-sfv": "sfv",
"text/x-uuencode": "uu",
"text/x-vcalendar": "vcs",
"text/x-vcard": "vcf",
"video/3gpp": "3gp",
"video/3gpp2": "3g2",
"video/h261": "h261",
"video/h263": "h263",
"video/h264": "h264",
"video/jpeg": "jpgv",
"video/jpm": "jpm",
"video/mj2": "mj2",
"video/mp4": "mp4",
"video/mpeg": "mpeg",
"video/ogg": "ogv",
"video/quicktime": "qt",
"video/vnd.dece.hd": "uvh",
"video/vnd.dece.mobile": "uvm",
"video/vnd.dece.pd": "uvp",
"video/vnd.dece.sd": "uvs",
"video/vnd.dece.video": "uvv",
"video/vnd.dvb.file": "dvb",
"video/vnd.fvt": "fvt",
"video/vnd.mpegurl": "mxu",
"video/vnd.ms-playready.media.pyv": "pyv",
"video/vnd.uvvu.mp4": "uvu",
"video/vnd.vivo": "viv",
"video/webm": "webm",
"video/x-f4v": "f4v",
"video/x-fli": "fli",
"video/x-flv": "flv",
"video/x-m4v": "m4v",
"video/x-matroska": "mkv",
"video/x-mng": "mng",
"video/x-ms-asf": "asf",
"video/x-ms-vob": "vob",
"video/x-ms-wm": "wm",
"video/x-ms-wmv": "wmv",
"video/x-ms-wmx": "wmx",
"video/x-ms-wvx": "wvx",
"video/x-msvideo": "avi",
"video/x-sgi-movie": "movie",
"video/x-smv": "smv",
"x-conference/x-cooltalk": "ice",
"text/vtt": "vtt",
"application/x-chrome-extension": "crx",
"text/x-component": "htc",
"video/MP2T": "ts",
"text/event-stream": "event-stream",
"application/x-web-app-manifest+json": "webapp",
"text/x-lua": "lua",
"application/x-lua-bytecode": "luac",
"text/x-markdown": "markdown"
}
, extension: function (mimeType) {
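    // Strip any parameters (e.g. "; charset=utf-8") and lower-case the bare media type before the table lookup.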
var type = mimeType.match(/^\s*([^;\s]*)(?:;|\s|$)/)[1].toLowerCase();
return this.extensions[type];
}
, define: function (map) {
for (var type in map) {
var exts = map[type];
for (var i = 0; i < exts.length; i++) {
      if (false && this.types[exts[i]]) {
        // Debug-only warning (disabled in this build): reports when an extension is re-mapped to a new type.
        console.warn(this._loading.replace(/.*\//, ''), 'changes "' + exts[i] + '" extension type from ' +
          this.types[exts[i]] + ' to ' + type);
}
this.types[exts[i]] = type;
}
// Default extension is the first one we encounter
if (!this.extensions[type]) {
this.extensions[type] = exts[0];
}
}
}
, charsets: {lookup: function (mimeType, fallback) {
// Assume text types are utf8
return (/^text\//).test(mimeType) ? 'UTF-8' : fallback;
}}
}
mime.types.constructor = undefined
mime.extensions.constructor = undefined
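
// Minimal usage sketch (comments only, not part of the original shim above); the object is
// exposed as `mime`, as the two assignments above already show:
//
//   mime.types['json']                          // -> 'application/json'
//   mime.extension('text/html; charset=utf-8')  // -> 'html' (parameters and case are stripped first)
//   mime.charsets.lookup('text/plain', null)    // -> 'UTF-8' (anything under text/ defaults to UTF-8)
//   mime.define({'text/x-example': ['exmpl']})  // hypothetical type; its first extension becomes the default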
|
PypiClean
|
/variants-0.2.0.tar.gz/variants-0.2.0/docs/installation.rst
|
.. highlight:: shell

============
Installation
============

Stable release
--------------

To install variants, run this command in your terminal:

.. code-block:: console

    $ pip install variants

This is the preferred method to install variants, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.

.. _pip: https://pip.pypa.io/en/stable/
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------

The sources for variants can be downloaded from the `Github repo`_.

You can either clone the public repository:

.. code-block:: console

    $ git clone git://github.com/python-variants/variants

Or download the `tarball`_:

.. code-block:: console

    $ curl -OL https://github.com/python-variants/variants/tarball/master

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install

.. _Github repo: https://github.com/python-variants/variants
.. _tarball: https://github.com/python-variants/variants/tarball/master
|
PypiClean
|
/alipay-sdk-python-pycryptodome-3.3.202.tar.gz/alipay-sdk-python-pycryptodome-3.3.202/alipay/aop/api/request/AlipayMarketingCampaignDiscountStatusUpdateRequest.py
|
import json
from alipay.aop.api.FileItem import FileItem
from alipay.aop.api.constant.ParamConstants import *
from alipay.aop.api.domain.AlipayMarketingCampaignDiscountStatusUpdateModel import AlipayMarketingCampaignDiscountStatusUpdateModel
class AlipayMarketingCampaignDiscountStatusUpdateRequest(object):
def __init__(self, biz_model=None):
self._biz_model = biz_model
self._biz_content = None
self._version = "1.0"
self._terminal_type = None
self._terminal_info = None
self._prod_code = None
self._notify_url = None
self._return_url = None
self._udf_params = None
self._need_encrypt = False
@property
def biz_model(self):
return self._biz_model
@biz_model.setter
def biz_model(self, value):
self._biz_model = value
@property
def biz_content(self):
return self._biz_content
@biz_content.setter
def biz_content(self, value):
if isinstance(value, AlipayMarketingCampaignDiscountStatusUpdateModel):
self._biz_content = value
else:
self._biz_content = AlipayMarketingCampaignDiscountStatusUpdateModel.from_alipay_dict(value)
@property
def version(self):
return self._version
@version.setter
def version(self, value):
self._version = value
@property
def terminal_type(self):
return self._terminal_type
@terminal_type.setter
def terminal_type(self, value):
self._terminal_type = value
@property
def terminal_info(self):
return self._terminal_info
@terminal_info.setter
def terminal_info(self, value):
self._terminal_info = value
@property
def prod_code(self):
return self._prod_code
@prod_code.setter
def prod_code(self, value):
self._prod_code = value
@property
def notify_url(self):
return self._notify_url
@notify_url.setter
def notify_url(self, value):
self._notify_url = value
@property
def return_url(self):
return self._return_url
@return_url.setter
def return_url(self, value):
self._return_url = value
@property
def udf_params(self):
return self._udf_params
@udf_params.setter
def udf_params(self, value):
if not isinstance(value, dict):
return
self._udf_params = value
@property
def need_encrypt(self):
return self._need_encrypt
@need_encrypt.setter
def need_encrypt(self, value):
self._need_encrypt = value
def add_other_text_param(self, key, value):
if not self.udf_params:
self.udf_params = dict()
self.udf_params[key] = value
def get_params(self):
params = dict()
params[P_METHOD] = 'alipay.marketing.campaign.discount.status.update'
params[P_VERSION] = self.version
if self.biz_model:
params[P_BIZ_CONTENT] = json.dumps(obj=self.biz_model.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
if self.biz_content:
if hasattr(self.biz_content, 'to_alipay_dict'):
params['biz_content'] = json.dumps(obj=self.biz_content.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
else:
params['biz_content'] = self.biz_content
if self.terminal_type:
params['terminal_type'] = self.terminal_type
if self.terminal_info:
params['terminal_info'] = self.terminal_info
if self.prod_code:
params['prod_code'] = self.prod_code
if self.notify_url:
params['notify_url'] = self.notify_url
if self.return_url:
params['return_url'] = self.return_url
if self.udf_params:
params.update(self.udf_params)
return params
def get_multipart_params(self):
multipart_params = dict()
return multipart_params
|
PypiClean
|
/pg_seldump-0.3-py3-none-any.whl/seldump/dbreader.py
|
import logging
from functools import lru_cache
import psycopg
from psycopg import sql
from psycopg.rows import namedtuple_row
from .consts import DUMPABLE_KINDS, KIND_TABLE, KIND_PART_TABLE, REVKINDS
from .reader import Reader
from .dbobjects import DbObject, Column, ForeignKey
from .exceptions import DumpError
logger = logging.getLogger("seldump.dbreader")
class DbReader(Reader):
def __init__(self, dsn):
super().__init__()
self.dsn = dsn
@property
@lru_cache(maxsize=1)
def connection(self):
logger.debug("connecting to '%s'", self.dsn)
try:
cnn = psycopg.connect(self.dsn, row_factory=namedtuple_row)
except Exception as e:
raise DumpError("error connecting to the database: %s" % e)
cnn.autocommit = True
return cnn
def cursor(self):
return self.connection.cursor()
def obj_as_string(self, obj):
"""
Convert a `psycopg.sql.Composable` object to string
"""
return obj.as_string(self.connection)
def load_schema(self):
for rec in self._fetch_objects():
obj = DbObject.from_kind(
rec.kind,
oid=rec.oid,
schema=rec.schema,
name=rec.name,
extension=rec.extension,
extcondition=rec.extcondition,
)
self.db.add_object(obj)
for rec in self._fetch_columns():
table = self.db.get(oid=rec.table_oid)
assert table, "no table with oid %s for column %s found" % (
rec.table_oid,
rec.name,
)
col = Column(name=rec.name, type=rec.type)
table.add_column(col)
for rec in self._fetch_fkeys():
table = self.db.get(oid=rec.table_oid)
assert table, "no table with oid %s for foreign key %s found" % (
rec.table_oid,
rec.name,
)
ftable = self.db.get(oid=rec.ftable_oid)
assert ftable, "no table with oid %s for foreign key %s found" % (
rec.ftable_oid,
rec.name,
)
fkey = ForeignKey(
name=rec.name,
table_oid=rec.table_oid,
table_cols=rec.table_cols,
ftable_oid=rec.ftable_oid,
ftable_cols=rec.ftable_cols,
)
table.add_fkey(fkey)
ftable.add_ref_fkey(fkey)
for rec in self._fetch_sequences_deps():
table = self.db.get(oid=rec.table_oid)
assert table, "no table with oid %s for sequence %s found" % (
rec.table_oid,
rec.seq_oid,
)
seq = self.db.get(oid=rec.seq_oid)
assert seq, "no sequence %s found" % rec.seq_oid
self.db.add_sequence_user(seq, table, rec.column)
def _fetch_objects(self):
logger.debug("fetching database objects")
with self.cursor() as cur:
cur.execute(
"""
select
r.oid as oid,
s.nspname as schema,
r.relname as name,
r.relkind as kind,
e.extname as extension,
-- equivalent of
-- extcondition[array_position(extconfig, r.oid)]
-- but array_position not available < PG 9.5
(
select extcondition[row_number]
from (
select unnest, row_number() over ()
from (select unnest(extconfig)) t0
) t1
where unnest = r.oid
) as extcondition
from pg_class r
join pg_namespace s on s.oid = r.relnamespace
left join pg_depend d on d.objid = r.oid and d.deptype = 'e'
left join pg_extension e on d.refobjid = e.oid
where r.relkind = any(%(stateless)s)
and s.nspname != 'information_schema'
and s.nspname !~ '^pg_'
order by s.nspname, r.relname
""",
{"stateless": list(DUMPABLE_KINDS)},
)
return cur.fetchall()
def _fetch_sequences_deps(self):
logger.debug("fetching sequences dependencies")
with self.cursor() as cur:
cur.execute(
"""
select tbl.oid as table_oid, att.attname as column, seq.oid as seq_oid
from pg_depend dep
join pg_attrdef def
on dep.classid = 'pg_attrdef'::regclass and dep.objid = def.oid
join pg_attribute att on (def.adrelid, def.adnum) = (att.attrelid, att.attnum)
join pg_class tbl on tbl.oid = att.attrelid
join pg_class seq
on dep.refclassid = 'pg_class'::regclass
and seq.oid = dep.refobjid
and seq.relkind = 'S'
"""
)
return cur.fetchall()
def _fetch_columns(self):
logger.debug("fetching columns")
with self.cursor() as cur:
# attnum gives their order; attnum < 0 are system columns
# attisdropped flags a dropped column.
cur.execute(
"""
select
attrelid as table_oid,
attname as name,
atttypid::regtype as type
from pg_attribute a
join pg_class r on r.oid = a.attrelid
join pg_namespace s on s.oid = r.relnamespace
where r.relkind = any(%(kinds)s)
and a.attnum > 0
and not attisdropped
and s.nspname != 'information_schema'
and s.nspname !~ '^pg_'
order by a.attrelid, a.attnum
""",
{"kinds": [REVKINDS[KIND_TABLE], REVKINDS[KIND_PART_TABLE]]},
)
return cur.fetchall()
def _fetch_fkeys(self):
logger.debug("fetching foreign keys")
with self.cursor() as cur:
cur.execute(
"""
select
c.conname as name,
c.conrelid as table_oid,
array_agg(ra.attname) as table_cols,
c.confrelid as ftable_oid,
array_agg(fa.attname) as ftable_cols
from pg_constraint c
join (
select oid, generate_series(1, array_length(conkey,1)) as attidx
from pg_constraint
where contype = 'f') exp on c.oid = exp.oid
join pg_attribute ra
on (ra.attrelid, ra.attnum) = (c.conrelid, c.conkey[exp.attidx])
join pg_attribute fa
on (fa.attrelid, fa.attnum) = (c.confrelid, c.confkey[exp.attidx])
join pg_class r on c.conrelid = r.oid
join pg_namespace rs on rs.oid = r.relnamespace
join pg_class fr on c.confrelid = fr.oid
join pg_namespace fs on fs.oid = fr.relnamespace
where rs.nspname != 'information_schema' and rs.nspname !~ '^pg_'
and fs.nspname != 'information_schema' and fs.nspname !~ '^pg_'
group by 1, 2, 4
order by name
"""
)
return cur.fetchall()
def get_sequence_value(self, seq):
"""
Return the last value of a sequence.
"""
with self.cursor() as cur:
cur.execute(sql.SQL("select last_value from {}").format(seq.ident))
val = cur.fetchone()[0]
return val
def copy(self, stmt, file):
"""
Run a COPY ... TO STDOUT statement.
"""
with self.cursor() as cur:
with cur.copy(stmt) as copy:
for data in copy:
file.write(data)
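# Illustrative usage sketch (not part of the original module; the DSN, table name and
# output path are assumed):
#
#     reader = DbReader("dbname=mydb user=postgres")
#     reader.load_schema()
#     with open("out.copy", "wb") as f:
#         reader.copy(sql.SQL("copy {} to stdout").format(sql.Identifier("public", "mytable")), f)
#
# DbReader.copy() streams the COPY output chunk by chunk into the given file object,
# so the dump never has to be held in memory all at once.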
|
PypiClean
|
/maskrcnn-modanet-1.0.3.tar.gz/maskrcnn-modanet-1.0.3/maskrcnn_modanet/train/generator.py
|
import numpy as np
import random
import warnings
import keras
from keras_retinanet.utils.anchors import (
anchor_targets_bbox,
bbox_transform,
anchors_for_shape,
guess_shapes
)
from keras_retinanet.utils.config import parse_anchor_parameters
from keras_retinanet.utils.image import (
TransformParameters,
adjust_transform_for_image,
apply_transform,
preprocess_image,
resize_image,
)
from keras_retinanet.utils.transform import transform_aabb
class Generator(keras.utils.Sequence):
def __init__(
self,
transform_generator = None,
batch_size=1,
group_method='ratio', # one of 'none', 'random', 'ratio'
shuffle_groups=True,
image_min_side=800, #800
image_max_side=1333, #1333
transform_parameters=None,
compute_shapes=guess_shapes,
compute_anchor_targets=anchor_targets_bbox,
config=None
):
self.transform_generator = transform_generator
self.batch_size = int(batch_size)
self.group_method = group_method
self.shuffle_groups = shuffle_groups
self.image_min_side = image_min_side
self.image_max_side = image_max_side
self.transform_parameters = transform_parameters or TransformParameters()
self.compute_shapes = compute_shapes
self.compute_anchor_targets = compute_anchor_targets
self.config = config
# Define groups
self.group_images()
# Shuffle when initializing
if self.shuffle_groups:
self.on_epoch_end()
def on_epoch_end(self):
random.shuffle(self.groups)
def size(self):
raise NotImplementedError('size method not implemented')
def num_classes(self):
raise NotImplementedError('num_classes method not implemented')
def name_to_label(self, name):
raise NotImplementedError('name_to_label method not implemented')
def label_to_name(self, label):
raise NotImplementedError('label_to_name method not implemented')
def image_aspect_ratio(self, image_index):
raise NotImplementedError('image_aspect_ratio method not implemented')
def load_image(self, image_index):
raise NotImplementedError('load_image method not implemented')
def load_annotations(self, image_index):
raise NotImplementedError('load_annotations method not implemented')
def load_annotations_group(self, group):
return [self.load_annotations(image_index) for image_index in group]
def filter_annotations(self, image_group, annotations_group, group):
""" Filter annotations by removing those that are outside of the image bounds or whose width/height < 0.
"""
# test all annotations
for index, (image, annotations) in enumerate(zip(image_group, annotations_group)):
# test x2 < x1 | y2 < y1 | x1 < 0 | y1 < 0 | x2 <= 0 | y2 <= 0 | x2 >= image.shape[1] | y2 >= image.shape[0]
invalid_indices = np.where(
(annotations['bboxes'][:, 2] <= annotations['bboxes'][:, 0]) |
(annotations['bboxes'][:, 3] <= annotations['bboxes'][:, 1]) |
(annotations['bboxes'][:, 0] < 0) |
(annotations['bboxes'][:, 1] < 0) |
(annotations['bboxes'][:, 2] > image.shape[1]) |
(annotations['bboxes'][:, 3] > image.shape[0])
)[0]
# delete invalid indices
if len(invalid_indices):
image_index = group[index]
image_info = self.coco.loadImgs(self.image_ids[image_index])[0]
warnings.warn('Image with file_name {} (shape {}) contains the following invalid boxes: {}.'.format(
image_info['file_name'],
image.shape,
annotations['bboxes'][invalid_indices, :]
))
for k in annotations_group[index].keys():
if type(annotations_group[index][k]) == list:
for i in invalid_indices[::-1]:
del annotations_group[index][k][i]
else:
annotations_group[index][k] = np.delete(annotations[k], invalid_indices, axis=0)
return image_group, annotations_group
def load_image_group(self, group):
return [self.load_image(image_index) for image_index in group]
def random_transform_group_entry(self, image, annotations, transform=None):
""" Randomly transforms image and annotation.
"""
# randomly transform both image and annotations
if transform or self.transform_generator:
if transform is None:
transform = adjust_transform_for_image(next(self.transform_generator), image, self.transform_parameters.relative_translation)
# apply transformation to image
image = apply_transform(transform, image, self.transform_parameters)
# randomly transform the masks and expand them so as to have a fake channel dimension
for i, mask in enumerate(annotations['masks']):
annotations['masks'][i] = apply_transform(transform, mask, self.transform_parameters)
annotations['masks'][i] = np.expand_dims(annotations['masks'][i], axis=2)
# Transform the bounding boxes in the annotations.
annotations['bboxes'] = annotations['bboxes'].copy()
for index in range(annotations['bboxes'].shape[0]):
annotations['bboxes'][index, :] = transform_aabb(transform, annotations['bboxes'][index, :])
return image, annotations
def resize_image(self, image):
return resize_image(image, min_side=self.image_min_side, max_side=self.image_max_side)
def preprocess_image(self, image):
return preprocess_image(image)
def preprocess_group_entry(self, image, annotations):
""" Preprocess image and its annotations.
"""
# preprocess the image
image = self.preprocess_image(image)
# randomly transform image and annotations
image, annotations = self.random_transform_group_entry(image, annotations)
# resize image
image, image_scale = self.resize_image(image)
# resize masks
for i in range(len(annotations['masks'])):
annotations['masks'][i], _ = self.resize_image(annotations['masks'][i])
# apply resizing to annotations too
annotations['bboxes'] *= image_scale
# convert to the wanted keras floatx
image = keras.backend.cast_to_floatx(image)
return image, annotations
def preprocess_group(self, image_group, annotations_group):
for index, (image, annotations) in enumerate(zip(image_group, annotations_group)):
# preprocess a single group entry
image, annotations = self.preprocess_group_entry(image, annotations)
# copy processed data back to group
image_group[index] = image
annotations_group[index] = annotations
return image_group, annotations_group
def group_images(self):
# determine the order of the images
order = list(range(self.size()))
if self.group_method == 'random':
random.shuffle(order)
elif self.group_method == 'ratio':
order.sort(key=lambda x: self.image_aspect_ratio(x))
# divide into groups, one group = one batch
self.groups = [[order[x % len(order)] for x in range(i, i + self.batch_size)] for i in range(0, len(order), self.batch_size)]
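# Illustrative note (added for clarity, not in the original source): because the group
# comprehension indexes with order[x % len(order)], the final batch is padded by wrapping
# around to the start of the ordering, e.g. 5 images with batch_size=2 give the groups
# [[0, 1], [2, 3], [4, 0]] (assuming 'none' grouping, i.e. the identity order).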
def compute_inputs(self, image_group):
# get the max image shape
max_shape = tuple(max(image.shape[x] for image in image_group) for x in range(3))
# construct an image batch object
image_batch = np.zeros((self.batch_size,) + max_shape, dtype=keras.backend.floatx())
# copy all images to the upper left part of the image batch object
for image_index, image in enumerate(image_group):
image_batch[image_index, :image.shape[0], :image.shape[1], :image.shape[2]] = image
return image_batch
def generate_anchors(self, image_shape):
anchor_params = None
if self.config and 'anchor_parameters' in self.config:
anchor_params = parse_anchor_parameters(self.config)
return anchors_for_shape(image_shape, anchor_params=anchor_params, shapes_callback=self.compute_shapes)
def compute_targets(self, image_group, annotations_group):
""" Compute target outputs for the network using images and their annotations.
"""
# get the max image shape
max_shape = tuple(max(image.shape[x] for image in image_group) for x in range(3))
anchors = self.generate_anchors(max_shape)
batches = self.compute_anchor_targets(
anchors,
image_group,
annotations_group,
self.num_classes()
)
# copy all annotations / masks to the batch
max_annotations = max(len(a['masks']) for a in annotations_group)
# masks_batch has shape: (batch size, max_annotations, bbox_x1 + bbox_y1 + bbox_x2 + bbox_y2 + label + width + height + max_image_dimension)
masks_batch = np.zeros((self.batch_size, max_annotations, 5 + 2 + max_shape[0] * max_shape[1]), dtype=keras.backend.floatx())
for index, annotations in enumerate(annotations_group):
masks_batch[index, :annotations['bboxes'].shape[0], :4] = annotations['bboxes']
masks_batch[index, :annotations['labels'].shape[0], 4] = annotations['labels']
masks_batch[index, :, 5] = max_shape[1] # width
masks_batch[index, :, 6] = max_shape[0] # height
# add flattened mask
for mask_index, mask in enumerate(annotations['masks']):
masks_batch[index, mask_index, 7:7 + (mask.shape[0] * mask.shape[1])] = mask.flatten()
return list(batches) + [masks_batch]
def compute_input_output(self, group):
# load images and annotations
image_group = self.load_image_group(group)
annotations_group = self.load_annotations_group(group)
# check validity of annotations
image_group, annotations_group = self.filter_annotations(image_group, annotations_group, group)
# perform preprocessing steps
image_group, annotations_group = self.preprocess_group(image_group, annotations_group)
# compute network inputs
inputs = self.compute_inputs(image_group)
# compute network targets
targets = self.compute_targets(image_group, annotations_group)
return inputs, targets
def __len__(self):
"""
Number of batches for generator.
"""
return len(self.groups)
def __getitem__(self, index):
"""
Keras sequence method for generating batches.
"""
group = self.groups[index]
inputs, targets = self.compute_input_output(group)
return inputs, targets
|
PypiClean
|
/Gbtestapi0.3-0.1a10-py3-none-any.whl/gailbot/services/organizer/settings/interface/watsonInterface.py
|
from pydantic import BaseModel, ValidationError
from typing import Dict, Union
from .engineSettingInterface import EngineSettingInterface
from gailbot.core.utils.logger import makelogger
from gailbot.core.utils.download import is_internet_connected
from gailbot.core.engines import Watson
logger = makelogger("watson_interface")
class ValidateWatson(BaseModel):
engine: str
apikey: str
region: str
base_model: str
language_customization_id: str = None
acoustic_customization_id: str = None
class InitSetting(BaseModel):
apikey: str
region: str
class TranscribeSetting(BaseModel):
base_model: str
language_customization_id: str = None
acoustic_customization_id: str = None
class WatsonInterface(EngineSettingInterface):
"""
Interface for the Watson speech to text engine
"""
engine: str
init: InitSetting
transcribe: TranscribeSetting
@property
def engine(self):
return "watson"
def load_watson_setting(setting: Dict[str, str]) -> Union[bool, EngineSettingInterface]:
"""given a dictionary, load the dictionary as a watson setting
Args:
setting (Dict[str, str]): the dictionary that contains the setting data
Returns:
Union[bool, EngineSettingInterface]: if the setting dictionary is validated
by the watson setting interface,
return the watson setting interface
as an instance of EngineSettingInterface,
else return False
"""
if (
"engine" not in setting.keys()
or setting["engine"] != "watson"
or not is_internet_connected()
):
return False
try:
logger.info(setting)
setting = setting.copy()
validate = ValidateWatson(**setting)
logger.info(validate)
watson_set = dict()
watson_set["engine"] = setting.pop("engine")
watson_set["init"] = dict()
watson_set["transcribe"] = dict()
watson_set["init"]["apikey"] = setting.pop("apikey")
watson_set["init"]["region"] = setting.pop("region")
watson_set["transcribe"].update(setting)
logger.info(watson_set)
watson_set = WatsonInterface(**watson_set)
assert Watson.valid_init_kwargs(watson_set.init.apikey, watson_set.init.region)
return watson_set
except ValidationError as e:
logger.error(e, exc_info=e)
return False
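# Illustrative example (not part of the original module; the credential values are
# placeholders): load_watson_setting expects a flat dictionary and reshapes it into
# the nested init/transcribe structure defined above.
#
#     setting = {
#         "engine": "watson",
#         "apikey": "<your-api-key>",
#         "region": "us-east",
#         "base_model": "en-US_NarrowbandModel",
#     }
#     interface = load_watson_setting(setting)  # returns a WatsonInterface, or False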
|
PypiClean
|
/bots-ediint-3.3.1.tar.gz/bots-ediint-3.3.1/bots/pluglib.py
|
from __future__ import unicode_literals
import sys
if sys.version_info[0] > 2:
basestring = unicode = str
import os
#~ import time
import zipfile
import zipimport
import codecs
import django
from django.core import serializers
from django.utils.translation import ugettext as _
from . import models
from . import botslib
from . import botsglobal
''' functions for reading and making plugins.
Reading and making functions are separate functions.
'''
#******************************************
#* read a plugin **************************
#******************************************
### See: https://docs.djangoproject.com/en/dev/topics/db/transactions/#managing-transactions
# if no exception raised: commit, else rollback.
@django.db.transaction.non_atomic_requests
def read_index(filename):
"""process index file in default location."""
try:
importedbotsindex,scriptname = botslib.botsimport('index')
pluglist = importedbotsindex.plugins[:]
if importedbotsindex.__name__ in sys.modules:
del sys.modules[importedbotsindex.__name__]
except:
txt = botslib.txtexc()
raise botslib.PluginError(_('Error in configuration index file. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_('Configuration index file is OK.'))
botsglobal.logger.info(_('Start writing to database.'))
#write content of index file to the bots database
try:
read_index2database(pluglist)
except:
txt = botslib.txtexc()
raise botslib.PluginError(_('Error writing configuration index to database. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_('Writing to database is OK.'))
# if no exception raised: commit, else rollback.
@django.db.transaction.non_atomic_requests
def read_plugin(pathzipfile):
"""process uploaded plugin."""
#test if valid zipfile
if not zipfile.is_zipfile(pathzipfile):
raise botslib.PluginError(_('Plugin is not a valid file.'))
#read index file
try:
myzipimport = zipimport.zipimporter(pathzipfile)
importedbotsindex = myzipimport.load_module('botsindex')
pluglist = importedbotsindex.plugins[:]
if 'botsindex' in sys.modules:
del sys.modules['botsindex']
except:
txt = botslib.txtexc()
raise botslib.PluginError(_('Error in plugin. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_('Plugin is OK.'))
botsglobal.logger.info(_('Start writing to database.'))
#write content of index file to the bots database
try:
read_index2database(pluglist)
except:
txt = botslib.txtexc()
raise botslib.PluginError(_('Error writing plugin to database. Nothing is written. Error:\n%(txt)s'),{'txt':txt})
else:
botsglobal.logger.info(_('Writing to database is OK.'))
#write files to the file system.
botsglobal.logger.info(_('Start writing to files'))
try:
warnrenamed = False #to report in GUI files have been overwritten.
myzip = zipfile.ZipFile(pathzipfile, mode='r')
orgtargetpath = botsglobal.ini.get('directories','botspath')
if (orgtargetpath[-1:] in (os.path.sep, os.path.altsep) and len(os.path.splitdrive(orgtargetpath)[1]) > 1):
orgtargetpath = orgtargetpath[:-1]
for zipfileobject in myzip.infolist():
if zipfileobject.filename not in ['botsindex.py','README','botssys/sqlitedb/botsdb','config/bots.ini'] and os.path.splitext(zipfileobject.filename)[1] not in ['.pyo','.pyc']:
#~ botsglobal.logger.info('Filename in zip "%s".',zipfileobject.filename)
if zipfileobject.filename[0] == '/':
targetpath = zipfileobject.filename[1:]
else:
targetpath = zipfileobject.filename
#convert for correct environment: replace botssys, config, usersys in filenames
if targetpath.startswith('usersys'):
targetpath = targetpath.replace('usersys',botsglobal.ini.get('directories','usersysabs'),1)
elif targetpath.startswith('botssys'):
targetpath = targetpath.replace('botssys',botsglobal.ini.get('directories','botssys'),1)
elif targetpath.startswith('config'):
targetpath = targetpath.replace('config',botsglobal.ini.get('directories','config'),1)
targetpath = botslib.join(orgtargetpath, targetpath)
#targetpath is OK now.
botsglobal.logger.info(_(' Start writing file: "%(targetpath)s".'),{'targetpath':targetpath})
if botslib.dirshouldbethere(os.path.dirname(targetpath)):
botsglobal.logger.info(_(' Create directory "%(directory)s".'),{'directory':os.path.dirname(targetpath)})
if zipfileobject.filename[-1] == '/': #check if this is a dir; if so continue
continue
if os.path.isfile(targetpath): #check if file already exists
try: #this ***sometimes*** fails. (python25, for static/help/home.html...only there...)
warnrenamed = True
except:
pass
source = myzip.read(zipfileobject.filename)
target = open(targetpath, 'wb')
target.write(source)
target.close()
botsglobal.logger.info(_(' File written: "%(targetpath)s".'),{'targetpath':targetpath})
except:
txt = botslib.txtexc()
myzip.close()
raise botslib.PluginError(_('Error writing files to system. Nothing is written to database. Error:\n%(txt)s'),{'txt':txt})
else:
myzip.close()
botsglobal.logger.info(_('Writing files to filesystem is OK.'))
return warnrenamed
#PLUGINCOMPARELIST: for filtering and sorting the plugins.
PLUGINCOMPARELIST = ['uniek','persist','mutex','ta','filereport','report','ccodetrigger','ccode', 'channel','partner','chanpar','translate','routes','confirmrule']
def read_index2database(orgpluglist):
#sanity checks on pluglist
if not orgpluglist: #list of plugins is empty: is OK. DO nothing
return
if not isinstance(orgpluglist,list): #has to be a list!!
raise botslib.PluginError(_('Plugins should be list of dicts. Nothing is written.'))
for plug in orgpluglist:
if not isinstance(plug,dict):
raise botslib.PluginError(_('Plugins should be list of dicts. Nothing is written.'))
for key in plug.keys():
if not isinstance(key,basestring):
raise botslib.PluginError(_('Key of dict is not a string: "%(plug)s". Nothing is written.'),{'plug':plug})
if 'plugintype' not in plug:
raise botslib.PluginError(_('"Plugintype" missing in: "%(plug)s". Nothing is written.'),{'plug':plug})
#special case: compatibility with bots 1.* plugins.
#in bots 1.*, partnergroup was in a separate table; in bots 2.* partnergroup is in partner
#later on, partnergroup will get filtered
for plug in orgpluglist[:]:
if plug['plugintype'] == 'partnergroup':
for plugpartner in orgpluglist:
if plugpartner['plugintype'] == 'partner' and plugpartner['idpartner'] == plug['idpartner']:
if 'group' in plugpartner:
plugpartner['group'].append(plug['idpartnergroup'])
else:
plugpartner['group'] = [plug['idpartnergroup']]
break
#copy & filter orgpluglist; do plugtype specific adaptions
pluglist = []
for plug in orgpluglist:
if plug['plugintype'] == 'ccode': #add ccodetrigger. #20101223: this is NOT needed; codetrigger should be in plugin.
for seachccodetriggerplug in pluglist:
if seachccodetriggerplug['plugintype'] == 'ccodetrigger' and seachccodetriggerplug['ccodeid'] == plug['ccodeid']:
break
else:
pluglist.append({'plugintype':'ccodetrigger','ccodeid':plug['ccodeid']})
elif plug['plugintype'] == 'translate': #make some fields None instead of '' (translate frompartner, topartner)
if not plug['frompartner']:
plug['frompartner'] = None
if not plug['topartner']:
plug['topartner'] = None
elif plug['plugintype'] == 'routes':
plug['active'] = False
if 'defer' not in plug:
plug['defer'] = False
else:
if plug['defer'] is None:
plug['defer'] = False
elif plug['plugintype'] == 'channel':
#convert for correct environment: path and mpath in channels
if 'path' in plug and plug['path'].startswith('botssys'):
plug['path'] = plug['path'].replace('botssys',botsglobal.ini.get('directories','botssys_org'),1)
if 'testpath' in plug and plug['testpath'].startswith('botssys'):
plug['testpath'] = plug['testpath'].replace('botssys',botsglobal.ini.get('directories','botssys_org'),1)
elif plug['plugintype'] == 'confirmrule':
plug.pop('id', None) #id is an artificial key, delete,
elif plug['plugintype'] not in PLUGINCOMPARELIST: #if not in PLUGINCOMPARELIST: do not use
continue
pluglist.append(plug)
#sort pluglist: this is needed for relationships
pluglist.sort(key=lambda plug: plug.get('isgroup',False),reverse=True) #sort partners on being partnergroup or not
pluglist.sort(key=lambda plug: PLUGINCOMPARELIST.index(plug['plugintype'])) #sort all plugs on plugintype; as partners/partnergroups are already sorted, this will still be true after this sort (Python's sort is stable)
for plug in pluglist:
botsglobal.logger.info(' Start write to database for: "%(plug)s".',{'plug':plug})
#correction for reading partnergroups
if plug['plugintype'] == 'partner' and plug['isgroup']:
plug['plugintype'] = 'partnergroep'
#remember the plugintype
plugintype = plug['plugintype']
table = django.apps.apps.get_model('bots', plugintype)
#delete fields not in model for compatibility; note that 'plugintype' is also removed.
for key in list(plug.keys()):
try:
table._meta.get_field(key)
except django.db.models.fields.FieldDoesNotExist:
del plug[key]
#get key(s), put in dict 'sleutel'
pk = table._meta.pk.name
if pk == 'id': #'id' is the artificial key django makes, if no key is indicated. Note that django has no 'composite keys'.
sleutel = {}
if table._meta.unique_together:
for key in table._meta.unique_together[0]:
sleutel[key] = plug.pop(key)
else:
sleutel = {pk:plug.pop(pk)}
sleutelorg = sleutel.copy() #make a copy of the original sleutel; this is needed later
#now we have:
#- plugintype (is removed from plug)
#- sleutelorg: original key fields
#- sleutel: unique key fields. mind: translate and confirmrule have empty 'sleutel'
#- plug: rest of database fields
#for sleutel and plug: convert names to real database names
#get real column names for fields in plug
for fieldname in list(plug.keys()):
fieldobject = table._meta.get_field(fieldname)
try:
if fieldobject.column != fieldname: #if name in plug is not the real field name (in database)
plug[fieldobject.column] = plug[fieldname] #add new key in plug
del plug[fieldname] #delete old key in plug
except:
raise botslib.PluginError(_('No field column for: "%(fieldname)s".'),{'fieldname':fieldname})
#get real column names for fields in sleutel; basically the same loop but now for sleutel
for fieldname in list(sleutel.keys()):
fieldobject = table._meta.get_field(fieldname)
try:
if fieldobject.column != fieldname:
sleutel[fieldobject.column] = sleutel[fieldname]
del sleutel[fieldname]
except:
raise botslib.PluginError(_('No field column for: "%(fieldname)s".'),{'fieldname':fieldname})
#find existing entry (if exists)
if sleutelorg: #note that translate and confirmrule have an empty 'sleutel'
listexistingentries = table.objects.filter(**sleutelorg)
elif plugintype == 'translate':
listexistingentries = table.objects.filter(fromeditype=plug['fromeditype'],
frommessagetype=plug['frommessagetype'],
alt=plug['alt'],
frompartner=plug['frompartner_id'],
topartner=plug['topartner_id'])
elif plugintype == 'confirmrule':
listexistingentries = table.objects.filter(confirmtype=plug['confirmtype'],
ruletype=plug['ruletype'],
negativerule=plug['negativerule'],
idroute=plug.get('idroute'),
idchannel=plug.get('idchannel_id'),
messagetype=plug.get('messagetype'),
frompartner=plug.get('frompartner_id'),
topartner=plug.get('topartner_id'))
if listexistingentries:
dbobject = listexistingentries[0] #exists, so use existing db-object
else:
dbobject = table(**sleutel) #create db-object
if plugintype == 'partner': #for partners, first the partner needs to be saved before groups can be made
dbobject.save()
for key,value in plug.items(): #update object with attributes from plugin
setattr(dbobject,key,value)
dbobject.save() #and save the updated object.
botsglobal.logger.info(_(' Write to database is OK.'))
#*********************************************
#* plugout / make a plugin (generate)*********
#*********************************************
def make_index(cleaned_data,filename):
''' generate only the index file of the plugin.
used eg for configuration change management.
'''
plugs = all_database2plug(cleaned_data)
plugsasstring = make_plugs2string(plugs)
filehandler = codecs.open(filename,'w','utf-8')
filehandler.write(plugsasstring)
filehandler.close()
def make_plugin(cleaned_data,filename):
pluginzipfilehandler = zipfile.ZipFile(filename, 'w', zipfile.ZIP_DEFLATED)
plugs = all_database2plug(cleaned_data)
plugsasstring = make_plugs2string(plugs)
pluginzipfilehandler.writestr('botsindex.py',plugsasstring.encode('utf-8')) #write index file to pluginfile
botsglobal.logger.debug(' Write in index:\n %(index)s',{'index':plugsasstring})
files4plugin = plugout_files(cleaned_data)
for dirname, defaultdirname in files4plugin:
pluginzipfilehandler.write(dirname,defaultdirname)
botsglobal.logger.debug(' Write file "%(file)s".',{'file':defaultdirname})
pluginzipfilehandler.close()
def all_database2plug(cleaned_data):
''' get all database objects, serialize these (to dict), adapt.'''
plugs = []
if cleaned_data['databaseconfiguration']:
plugs += \
database2plug(models.channel) + \
database2plug(models.partner) + \
database2plug(models.chanpar) + \
database2plug(models.translate) + \
database2plug(models.routes) + \
database2plug(models.confirmrule)
if cleaned_data['umlists']:
plugs += \
database2plug(models.ccodetrigger) + \
database2plug(models.ccode)
if cleaned_data['databasetransactions']:
plugs += \
database2plug(models.uniek) + \
database2plug(models.mutex) + \
database2plug(models.ta) + \
database2plug(models.filereport) + \
database2plug(models.report)
#~ list(models.persist.objects.all()) + \ #should persist object also be included?
return plugs
def database2plug(db_table):
#serialize database objects
plugs = serializers.serialize('python', db_table.objects.all())
if plugs:
app, tablename = plugs[0]['model'].split('.', 1)
table = django.apps.apps.get_model(app, tablename)
pk = table._meta.pk.name
#adapt plugs
for plug in plugs:
plug['fields']['plugintype'] = tablename
if pk != 'id':
plug['fields'][pk] = plug['pk']
#convert for correct environment: replace botssys in channels[path, mpath]
if tablename == 'channel':
if 'path' in plug['fields'] and plug['fields']['path'].startswith(botsglobal.ini.get('directories','botssys_org')):
plug['fields']['path'] = plug['fields']['path'].replace(botsglobal.ini.get('directories','botssys_org'),'botssys',1)
if 'testpath' in plug['fields'] and plug['fields']['testpath'].startswith(botsglobal.ini.get('directories','botssys_org')):
plug['fields']['testpath'] = plug['fields']['testpath'].replace(botsglobal.ini.get('directories','botssys_org'),'botssys',1)
return plugs
def make_plugs2string(plugs):
''' return plugs (serialized objects) as unicode strings.
'''
lijst = ['# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals','import datetime',"version = '%s'" % (botsglobal.version),'plugins = [']
lijst.extend(plug2string(plug['fields']) for plug in plugs)
lijst.append(']\n')
return '\n'.join(lijst)
def plug2string(plugdict):
''' like repr() for a dict, but:
- starts with 'plugintype'
- other entries are sorted; this because of predictability
- produce unicode by using str().decode(unicode_escape): bytes->unicode; converts escaped unicode-chrs to correct unicode. repr produces these.
str().decode(): bytes->unicode
str().encode(): unicode->bytes
'''
terug = '{' + repr('plugintype') + ': ' + repr(plugdict.pop('plugintype'))
for key in sorted(plugdict.keys()):
terug += ', ' + repr(key) + ': ' + repr(plugdict[key])
terug += '},'
return terug
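# Illustrative example (added for clarity, not in the original source):
# plug2string({'plugintype': 'routes', 'idroute': 'myroute', 'seq': 1}) returns
# "{'plugintype': 'routes', 'idroute': 'myroute', 'seq': 1},"
# i.e. a dict literal that starts with plugintype, lists the remaining keys sorted,
# and ends with a trailing comma so the entries can be joined into the plugins list.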
def plugout_files(cleaned_data):
''' gather list of files for the plugin that is generated.
'''
files2return = []
usersys = unicode(botsglobal.ini.get('directories','usersysabs'))
botssys = unicode(botsglobal.ini.get('directories','botssys'))
if cleaned_data['fileconfiguration']: #gather from usersys
files2return.extend(plugout_files_bydir(usersys,'usersys'))
if not cleaned_data['charset']: #if edifact charsets are not needed: remove them (are included in default bots installation).
charsetdirs = plugout_files_bydir(os.path.join(usersys,'charsets'),'usersys/charsets')
for charset in charsetdirs:
try:
index = files2return.index(charset)
files2return.pop(index)
except ValueError:
pass
else:
if cleaned_data['charset']: #if edifact charsets are needed: include them in the plugin.
files2return.extend(plugout_files_bydir(os.path.join(usersys,'charsets'),'usersys/charsets'))
if cleaned_data['config']:
config = botsglobal.ini.get('directories','config')
files2return.extend(plugout_files_bydir(config,'config'))
if cleaned_data['data']:
data = botsglobal.ini.get('directories','data')
files2return.extend(plugout_files_bydir(data,'botssys/data'))
if cleaned_data['database']:
files2return.extend(plugout_files_bydir(os.path.join(botssys,'sqlitedb'),'botssys/sqlitedb.copy')) #yeah...reading a plugin with a new database will cause a crash...do this manually...
if cleaned_data['infiles']:
files2return.extend(plugout_files_bydir(os.path.join(botssys,'infile'),'botssys/infile'))
if cleaned_data['logfiles']:
log_file = botsglobal.ini.get('directories','logging')
files2return.extend(plugout_files_bydir(log_file,'botssys/logging'))
return files2return
def plugout_files_bydir(dirname,defaultdirname):
''' gather all files from directory dirname'''
files2return = []
for root, dirs, files in os.walk(dirname):
head, tail = os.path.split(root)
#convert for correct environment: replace dirname with the default directory name
rootinplugin = root.replace(dirname,defaultdirname,1)
for bestand in files:
ext = os.path.splitext(bestand)[1]
if ext and (ext in ['.pyc','.pyo'] or bestand in ['__init__.py']):
continue
files2return.append([os.path.join(root,bestand),os.path.join(rootinplugin,bestand)])
return files2return
|
PypiClean
|
/dara_components-1.1.2-py3-none-any.whl/dara/components/common/grid.py
|
from typing import Optional, Union
from pydantic import BaseModel
from dara.components.common.base_component import LayoutComponent
from dara.core.definitions import ComponentInstance, discover
from dara.core.visual.components.types import Direction
class ScreenBreakpoints(BaseModel):
xs: Optional[int] = None
sm: Optional[int] = None
md: Optional[int] = None
lg: Optional[int] = None
xl: Optional[int] = None
class Column(LayoutComponent):
"""
Column can contain any components, similar to Stack. You may align their content as well as define how many columns it occupies by setting span.
Grid works on a 12-columns-per-row system. So if you have a column that spans 8 and the next spans 4, they will be placed side by side, occupying 67% and 33% of the Grid width respectively.
Also note that, given this system, span + offset values should add to no more than 12; exceeding this may result in unwanted behavior.
#### Example of simplest Column with alignment:
```python
Grid.Column(Text('Example'), justify='center', align_items='center')
```
#### Example of a Column with fixed span and offset:
```python
Grid.Column(Text('Example'), span=3, offset=4)
```
#### Example of a Column with span which differs depending on breakpoints:
```python
Grid.Column(Text('Example'), span=Grid.Breakpoints(xs=12, sm=6, md=4, lg=3))
```
#### Example of a Column with offset which differs depending on breakpoints:
```python
Grid.Column(Text('Example'), offset=Grid.Breakpoints(xs=6, md=4, lg=3), span=6)
```
:param span: the number of columns this column should span, it can take an `integer` if unchanged across different screen sizes or take `Grid.Breakpoints` which allows you to define the span across five responsive tiers
:param justify: defines horizontal alignment for that column
:param align_items: defines vertical alignment for that column
:param offset: offset column by x number of columns, it can take an `integer` if unchanged across different screen sizes or take `Grid.Breakpoints`. Note that offset + span should add to no more than 12. Values that add up to more than 12 can result in unwanted behaviour.
:param direction: The direction to Column children, can be 'vertical' or 'horizontal', default is 'horizontal'
:param hug: Whether to hug the content, defaults to False
"""
# TODO: :param order: optional number denoting the order of priority of the columns, with 1 being first to appear, and 12 the last to be added.
span: Optional[Union[int, ScreenBreakpoints]] = None
justify: Optional[str] = None
align_items: Optional[str] = None
offset: Optional[Union[int, ScreenBreakpoints]]
direction: Direction = Direction.HORIZONTAL
def __init__(self, *args: ComponentInstance, **kwargs):
super().__init__(*args, **kwargs)
class Row(LayoutComponent):
"""
Rows will automatically calculate and wrap the columns within them using a 12-column system.
- If columns have an undefined span it will try to fit them equally in a row, occupying all available space.
- If columns have a defined span it will respect it and wrap the other columns into rows such that each row occupies a maximum of 12 columns.
- Columns are kept in the row in the order they are defined.
#### Example of Row with `column_gap`:
```python
Grid.Row(
Grid.Column(Text('Cell 0')),
Grid.Column(Text('Cell 1')),
column_gap=2
)
```
:param column_gap: a number containing the desired percentage gap between columns for that row, e.g. 2 would be a 2% gap between columns
:param hug: Whether to hug the content, defaults to False
"""
column_gap: Optional[int] = None
def __init__(self, *args: ComponentInstance, **kwargs):
super().__init__(*args, **kwargs)
@discover
class Grid(LayoutComponent):
"""

Grid Layout provides a flexbox grid with a twelve column system.
Rows will automatically calculate their widths and wrap on the page as needed.
It also allows for responsive design by defining column span breakpoints.
"""
row_gap: str = '0.75rem'
breakpoints: Optional[ScreenBreakpoints] = ScreenBreakpoints()
Column = Column
Row = Row
Breakpoints = ScreenBreakpoints
# Dummy init that just passes through arguments to superclass, fixes Pylance complaining about types
def __init__(self, *args: ComponentInstance, **kwargs):
"""
Grid Layout provides a flexbox grid with a twelve column system.
Rows will automatically calculate their widths and wrap on the page as needed.
It also allows for responsive design by defining column span breakpoints.
#### Example of a simple Grid component:
```python
from dara.components.common import Grid, Text
Grid(
Grid.Row(
Grid.Column(Text('Cell 0')),
Grid.Column(Text('Cell 1')),
),
Grid.Row(
Grid.Column(Text('Cell 2')),
),
)
```
#### Example of a Grid with fixed span and undefined span columns:
Note how you can let the `Row` wrap itself into different rows with undefined `span` columns filling all available space:
```python
from dara.components.common import Grid, Text
Grid(
Grid.Row(
Grid.Column(Text('Span = 2'), span=2, background='orange'),
Grid.Column(Text('Undefined'), background='cornflowerblue'),
Grid.Column(Text('Span = 6'), span=6, background='coral'),
Grid.Column(Text('Span = 5'), span=5, background='crimson'),
Grid.Column(Text('Undefined'), background='darkmagenta'),
Grid.Column(Text('Undefined'), background='gold'),
Grid.Column(Text('Span = 12'), span=12, background='lightseagreen'),
),
)
```
#### Example of a Responsive Grid:
Here we define how much each column spans for each screen size type. For `xs` screens each column spans the whole 12 columns available.
For larger screens we allow these to sit side by side. For `sm` you have two columns per row, `md` three columns, and finally for `lg` or bigger screens you can have all four columns side by side.
Here we also show `column_gap` which is a `Row` property allowing you to define some spacing between columns, and `row_gap` a `Grid` property to define spacing between rows.
```python
from dara.components.common import Grid, Text
span_layout = Grid.Breakpoints(xs=12, sm=6, md=4, lg=3)
Grid(
Grid.Row(
Grid.Column(Text('Red'), span=span_layout, background='red'),
Grid.Column(Text('Green'), span=span_layout, background='green'),
Grid.Column(Text('Blue'), span=span_layout, background='blue'),
Grid.Column(Text('Yellow'), span=span_layout, background='yellow'),
column_gap=2,
),
row_gap='10px',
)
```
#### Example of a Custom Breakpoints:
You can also define custom points at which the breakpoints occur. This uses the same `Grid.Breakpoints` helper, but now instead of defining the span of each column we define each breakpoint in pixels.
```python
from dara.components.common import Grid, Text
custom_breakpoints = Grid.Breakpoints(xs=0, sm=500, md=600, lg=700, xl=800)
Grid(
Grid.Row(
Grid.Column(Text('Red'), span= Grid.Breakpoints(xs=12, sm=6, md=4, lg=3), background='red'),
Grid.Column(Text('Blue'), span= 4, background='blue'),
),
breakpoints=custom_breakpoints,
)
```
#### Example of a Grid component which only occupies as much space as it needs with the hug property:
```python
from dara.components.common import Grid, Text
Grid(
Grid.Row(
Grid.Column(Text('Cell 0')),
Grid.Column(Text('Cell 1')),
),
Grid.Row(
Grid.Column(Text('Cell 2')),
),
hug=True,
)
```
In the example above each row will only occupy as much space as it needs, that will be the space the text takes. This can be overwritten at a row level.
For example, you could set a specific value for a row height, or even set grow to True/hug to False to allow only one row to grow and the others occupy only the space needed.
:param row_gap: a string containing the desired gap between rows, defaults to 0.75rem
:param breakpoints: optionally pass when the breakpoints should occur in pixels
"""
super().__init__(*args, **kwargs)
|
PypiClean
|
/nonebot_plugin_chatrecorder-0.4.1.tar.gz/nonebot_plugin_chatrecorder-0.4.1/nonebot_plugin_chatrecorder/adapters/kaiheila.py
|
from datetime import datetime
from typing import Any, Dict, Optional, Type
from nonebot.adapters import Bot as BaseBot
from nonebot.message import event_postprocessor
from nonebot_plugin_datastore import create_session
from nonebot_plugin_session import Session, SessionLevel, extract_session
from nonebot_plugin_session.model import get_or_add_session_model
from typing_extensions import override
from ..config import plugin_config
from ..consts import SupportedAdapter, SupportedPlatform
from ..message import (
MessageDeserializer,
MessageSerializer,
register_deserializer,
register_serializer,
serialize_message,
)
from ..model import MessageRecord
try:
from nonebot.adapters.kaiheila import Bot, Message, MessageSegment
from nonebot.adapters.kaiheila.api.model import MessageCreateReturn
from nonebot.adapters.kaiheila.event import MessageEvent
from nonebot.adapters.kaiheila.message import rev_msg_type_map
adapter = SupportedAdapter.kaiheila
@event_postprocessor
async def record_recv_msg(bot: Bot, event: MessageEvent):
session = extract_session(bot, event)
async with create_session() as db_session:
session_model = await get_or_add_session_model(session, db_session)
record = MessageRecord(
session_id=session_model.id,
time=datetime.utcfromtimestamp(event.msg_timestamp / 1000),
type=event.post_type,
message_id=event.msg_id,
message=serialize_message(adapter, event.message),
plain_text=event.message.extract_plain_text(),
)
async with create_session() as db_session:
db_session.add(record)
await db_session.commit()
if plugin_config.chatrecorder_record_send_msg:
@Bot.on_called_api
async def record_send_msg(
bot: BaseBot,
e: Optional[Exception],
api: str,
data: Dict[str, Any],
result: Optional[Dict[str, Any]],
):
if not isinstance(bot, Bot):
return
if e or not result:
return
if not (
isinstance(result, MessageCreateReturn)
and result.msg_id
and result.msg_timestamp
):
return
if api == "message/create":
level = SessionLevel.LEVEL3
channel_id = data["target_id"]
user_id = data.get("temp_target_id")
elif api == "direct-message/create":
level = SessionLevel.LEVEL1
channel_id = None
user_id = data["target_id"]
else:
return
type_code = data["type"]
content = data["content"]
type = rev_msg_type_map.get(type_code, "")
if type == "text":
message = MessageSegment.text(content)
elif type == "image":
message = MessageSegment.image(content)
elif type == "video":
message = MessageSegment.video(content)
elif type == "file":
message = MessageSegment.file(content)
elif type == "audio":
message = MessageSegment.audio(content)
elif type == "kmarkdown":
message = MessageSegment.KMarkdown(content)
elif type == "card":
message = MessageSegment.Card(content)
else:
message = MessageSegment(type, {"content": content})
message = Message(message)
session = Session(
bot_id=bot.self_id,
bot_type=bot.type,
platform=SupportedPlatform.kaiheila,
level=level,
id1=user_id,
id2=None,
id3=channel_id,
)
async with create_session() as db_session:
session_model = await get_or_add_session_model(session, db_session)
record = MessageRecord(
session_id=session_model.id,
time=datetime.utcfromtimestamp(result.msg_timestamp / 1000),
type="message_sent",
message_id=result.msg_id,
message=serialize_message(adapter, message),
plain_text=message.extract_plain_text(),
)
async with create_session() as db_session:
db_session.add(record)
await db_session.commit()
class Serializer(MessageSerializer[Message]):
pass
class Deserializer(MessageDeserializer[Message]):
@classmethod
@override
def get_message_class(cls) -> Type[Message]:
return Message
register_serializer(adapter, Serializer)
register_deserializer(adapter, Deserializer)
except ImportError:
pass
|
PypiClean
|
/Fugue-generator-0.9a1.tar.gz/Fugue-generator-0.9a1/fugue/__init__.py
|
#TODO: Do I actually need all this?
import logging
import os
from sys import argv, executable
from pathlib import Path
import re
import yaml
from lxml import etree as ET
#Used for static file moving/deleting.
from distutils.dir_util import copy_tree
import shutil
# Used for pre-/post-processing.
import subprocess
# Makes this a nice CLI.
import click
from fugue.tools.datasource_handlers import DSHandler_Factory
from fugue.tools import *
HUGE_PARSER = ET.XMLParser(huge_tree=True)
PYTHON_EXEC = executable
def process(commands):
"""Runs `commands`, an array of arrays. Used by preprocess() and postprocess()."""
#TODO: Should be an option to suppress exceptions here.
if commands:
for command in commands:
# Make sure we run outside scripts with the same python as fugue.
cmd = [ PYTHON_EXEC if x == 'python' else x for x in command ]
logging.info("Running %s" % (' '.join(cmd), ))
ret = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if ret.returncode == 0:
logging.debug("Ran '%s'. Result: %s" % (' '.join(ret.args), ret.stdout.decode()))
else:
raise RuntimeError("process() command '%s' failed. Error: %s" % (' '.join(ret.args), ret.stderr.decode()))
def _load_config(ctx, file):
logging.debug("Loading configuration file %s." % file)
if ctx.obj == None: ctx.obj = {}
with Path(file).open('r') as f:
ctx.obj['settings'] = yaml.load(f, Loader=yaml.FullLoader)
ctx.obj['project-output'] = Path(ctx.obj['settings']['site']['root']).resolve()
logging.debug("Loaded configuration file.")
def _output_dir(ctx):
outp = Path(ctx.obj['project_root']) / ctx.obj['settings']['site']['root']
outp = outp.resolve()
logging.debug("Checking for and returning directory at %s" % outp)
if not outp.exists():
outp.mkdir(parents=True)
return outp
HERE = Path().resolve()
@click.group(invoke_without_command=True, chain=True)
@click.option('--log-level', '-L',
type=click.Choice(['CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG']),
default="WARNING", help="Set logging level. Defaults to WARNING.")
@click.option('--project', '-p', default=Path('.', 'fugue.project.yaml'),
type=click.Path(), help=r"Choose the project configuration file. Defaults to ./fugue.project.yaml. Ignored if `fugue build` is called with a repository URL.")
@click.option('--data', '-d', default=Path('.', 'fugue-data.xml'),
type=click.Path(), help=r"Choose the data file fugue will create and use. Defaults to ./fugue-data.xml. Ignored if `fugue build` is called with a repository URL.")
@click.pass_context
def fugue(ctx, log_level, project, data):
"""Static site generator using XSL templates."""
"""By default, looks at fugue.project.yaml in the current directory and completes all tasks
needed to generate a complete site.
"""
#TODO: option to not suppress stdout and stderr in subprocess.run() calls.
#TODO: Make logging more configurable.
logging.basicConfig(level=getattr(logging, log_level))
click.echo("Starting fugue")
#Load configuration file.
ctx.obj = {'data_file': Path(data),
'config_file': Path(project),}
try:
_load_config(ctx, project)
ctx.obj['project_root'] = Path(project).parent.resolve()
os.chdir(ctx.obj['project_root'])
logging.debug('Changed directory to %s' % ctx.obj['project_root'])
except FileNotFoundError as e:
logging.debug(r"Loading config file failed. Hopefully we're giving build() a repository on the command line.")
#Since chain=True, we can't tell which subcommand is being invoked :(.
if ctx.invoked_subcommand == None:
#Fail.
raise RuntimeError("No Fugue configuration file found and we are not building from a git repository.")
if ctx.invoked_subcommand is None:
logging.debug("No subcommand invoked. Calling build().")
ctx.invoke(build)
@fugue.command()
@click.pass_context
def update(ctx):
"""`git pull` the project's repository."""
targ = str(ctx.obj['project_root'])
cmd = "git -C %s pull origin" % (targ, )
logging.info("Running '%s'." % cmd)
ret = subprocess.run(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.debug("Finished 'git pull': %s" % ret.stdout.decode())
if ret.returncode != 0:
raise RuntimeError("Failed to pull repository. Error: %s" % ret.stderr.decode())
_load_config(ctx, ctx.obj['config_file'])
@fugue.command()
@click.argument("repository", required=False)
@click.option('--no-update', '-n', is_flag=True,
help=r"Do not `git pull` this repository.")
@click.option('--no-fetch', '-N', is_flag=True,
help=r"Do not pull or clone any git repositories. Implies -n.")
@click.option('--no-outside-tasks', '-o', is_flag=True,
help=r"Do not execute pre- or post-processing tasks.")
@click.pass_context
def build(ctx, repository, no_update, no_fetch, no_outside_tasks):
"""Build the entire site from scratch.
Completes all other steps; this is done by
default if no other command is specified.
If <repository> is provided, it is assumed to be the URL of a git repository; it
will be cloned into a subdirectory of the current directory, then the fugue project
there will be built. The `project` and `data` arguments provided to `fugue` will be
interpreted relative to the repository's root."""
logging.debug("Beginning build()")
click.echo(r"Running 'fugue build'. (Re)-building entire site.")
if repository != None:
logging.debug("cloning %s." % repository)
localrepo = Path(repository).stem
logging.debug('local repository directory is %s' % localrepo)
logging.debug('localrepo:' + str(Path(localrepo)))
logging.debug('data: ' + str(ctx.obj['data_file']))
logging.debug('project: ' + str(ctx.obj['config_file']))
logging.debug('data_file will be %s' % str(Path(localrepo, ctx.obj['data_file'])))
logging.debug('project config_file will be %s' % str(Path(localrepo, ctx.obj['config_file'])))
cmd = "git clone %s %s" % (repository, localrepo)
logging.info("Running '%s'." % cmd)
ret = subprocess.run(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.info("Finished 'git clone': %s" % ret.stdout.decode())
if ret.returncode != 0:
raise RuntimeError("Failed to clone repository. Error: %s" % ret.stderr.decode())
logging.debug('Changing working directory to %s' % localrepo)
os.chdir(Path(localrepo))
ctx.obj['config_file'] = ctx.obj['config_file'].resolve()
#TODO: Fail more elegantly if we can't find a config file.
_load_config(ctx, ctx.obj['config_file'])
logging.debug("project config_file '%s' loaded." % str(ctx.obj['config_file']))
ctx.obj['project_root'] = Path().resolve()
logging.debug("Working directory changed to %s" % str(Path().resolve()))
# Do some error checking before we spend an hour downloading gigabytes of data.
if not ctx.obj['data_file'].parent.exists():
#TODO: Should I just make it instead?
logging.error("Data file %s's parent directory does not exist" % str(ctx.obj['data_file']))
raise FileNotFoundError("Data file %s's parent directory does not exist.")
#Verify we can touch this file before we go further.
ctx.obj['data_file'].touch(exist_ok=True)
logging.debug("Data file: %s" % str(ctx.obj['data_file'].resolve()))
if not Path(ctx.obj['config_file']).exists():
raise FileNotFoundError("No fugue project found at %s." % str(Path(ctx.obj['config_file'])))
elif not ctx.obj.get('settings', False):
raise FileNotFoundError("No fugue project found.")
logging.debug("Settings: " + str(ctx.obj['settings']))
logging.debug("Building. Project root: %s" % str(ctx.obj['project_root']))
if not (no_update or no_fetch or repository):
ctx.invoke(update)
#ctx.invoke(clear)
if not no_fetch:
ctx.invoke(fetch)
if not no_outside_tasks:
ctx.invoke(preprocess)
ctx.invoke(collect)
ctx.invoke(static)
ctx.invoke(generate)
if not no_outside_tasks:
ctx.invoke(postprocess)
click.echo("Building complete.")
logging.debug("Ending build()")
#TODO: Finish and test.
'''
@fugue.command()
@click.pass_context
def clear(ctx):
"""Deletes all contents of the output directory.
Preserves files matching the patterns in settings.clear.exclude"""
#NOTE: os.walk() is our friend. Maybe also fnmatch.fnmatch().
outdir = _output_dir(ctx)
click.echo("Clearing the output directory.")
excludes = ctx.obj['settings'].get('clear', {}).get('exclude', [])
logging.debug("Excludes: " + str(excludes))
def exclude_path(pth):
"""Do any of the patterns match pth?"""
for pat in excludes:
if pth.match(pat):
return True
return False
for dr in [x for x in outdir.iterdir() if x.is_dir() and not exclude_path(x.resolve())]:
shutil.rmtree(str(dr.resolve()))
for fl in [x for x in outdir.iterdir() if x.is_file()]:
os.unlink(str(fl.resolve()))
'''
@fugue.command()
@click.pass_context
def preprocess(ctx):
"""Runs all preprocessing directives."""
#TODO: Should be an option to supress exceptions here.
outdir = _output_dir(ctx)
logging.debug("Preprocess: Output dir: %s" % outdir)
click.echo("Running preprocess tasks.")
commands = ctx.obj['settings'].get('preprocess', [])
process(commands)
@fugue.command()
@click.pass_context
def fetch(ctx):
"""Fetches git repositories."""
#For now we'll use subprocess.run(). Is there any benefit to dulwich instead?
#TODO: should probably put this logic in separate modules so we can support svn, fossil, SFTP, etc. sources.
#TODO: git should probably support checking out specific branches/tags.
click.echo('Fetching repositories.')
repositories = ctx.obj['settings'].get('repositories', [])
logging.info('Pulling %d repositories.' % len(repositories))
for repo in repositories:
if not Path(repo['target']).exists():
targ = str(Path(repo['target']).resolve())
rootdir = str(Path(repo['target']).resolve().parent)
cmd = "git -C %s clone %s %s" % (rootdir, repo['remote'], targ)
logging.info('%s does not exist; cloning %s into it.' % (repo['target'], repo['remote']))
logging.debug("Running '%s'." % cmd)
ret = subprocess.run(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.debug("Finished 'git clone': %s" % ret.stdout.decode())
if ret.returncode != 0:
raise RuntimeError("Failed to clone repository. Error: %s" % ret.stderr.decode())
else:
targ = str(Path(repo['target']).resolve())
cmd = "git -C %s pull" % (targ, )
logging.info("Running '%s'." % cmd)
ret = subprocess.run(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
logging.debug("Finished 'git pull': %s" % ret.stdout.decode())
if ret.returncode != 0:
raise RuntimeError("Failed to pull repository. Error: %s" % ret.stderr.decode())
@fugue.command()
@click.pass_context
def collect(ctx):
"""Collects all datasources.
Collects all data described in fugue.project.yaml under data-sources
into the xml file specified by --data. Does not imply `fetch`."""
click.echo("Collecting data")
outdir = _output_dir(ctx)
logging.debug("Collecting. Output dir: %s" % outdir)
xmlroot = ET.Element('fugue-data')
projroot = ET.SubElement(xmlroot, 'fugue-config')
#Convert our settings file to XML and add to the XML data document.
dict2xml(ctx.obj['settings'], projroot)
dssroot = ET.SubElement(xmlroot, 'data-sources')
dss = ctx.obj['settings']['data-sources']
for dsname, ds in dss.items():
logging.info("Collecting datasource '%s'." % dsname)
#TODO: Dynamically load modules to deal with different DS types.
dsroot = ET.SubElement(dssroot, dsname)
handler = DSHandler_Factory().build(ds)
handler.write(dsroot)
data_file = ctx.obj['data_file']
logging.info('Writing XML data to %s.' % str(data_file))
data_file.touch(exist_ok=True)
xmlroot.getroottree().write(str(data_file), pretty_print=True, encoding="utf8")
#with data_file.open(mode="wb") as outpfile:
# outpfile.write(ET.tostring(xmlroot, pretty_print=True))
#No need to read this if it's already in memory.
ctx.obj['xmldata'] = xmlroot
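# Hedged illustration (added for this edit, not part of the original module):
# the XML document written above has roughly this shape, shown here with a
# hypothetical data source named 'blog':
#
#   <fugue-data>
#     <fugue-config> ...settings converted by dict2xml()... </fugue-config>
#     <data-sources>
#       <blog> ...content written by the matching DSHandler... </blog>
#     </data-sources>
#   </fugue-data>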
@fugue.command()
@click.pass_context
def static(ctx):
"""Copies static directories into output."""
click.echo("Handling static files.")
outdir = _output_dir(ctx)
logging.debug("Moving static files. Output dir: %s" % outdir)
sss = ctx.obj['settings']['static-sources']
logging.info('Deleting static directories')
for ssname, ss in sss.items():
if ss['target'] != '':
target = Path(outdir, ss['target']).resolve()
logging.debug("Deleting %s." % target)
if target.exists():
#TODO: Why does this sometimes throw errors if I don't ignore_errors?
shutil.rmtree(target, ignore_errors=False)
logging.info('Copying static files.')
for ssname, ss in sss.items():
source = Path(ss['source']).resolve()
target = Path(ctx.obj['project-output'], ss['target']).resolve()
logging.debug("Moving " + str(source) + ' to ' + str(target) + ".")
copy_tree(str(source), str(target))
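# Hedged illustration (added for this edit, not part of the original module):
# the 'static-sources' setting iterated above is a mapping whose entries carry
# 'source' and 'target' keys, as read from the code. The entry name and values
# below are hypothetical:
#
#   static-sources:
#     css:
#       source: assets/css    # copied from here
#       target: css           # into <output dir>/css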
@fugue.command()
@click.pass_context
def generate(ctx):
"""Generates pages from XSL templates. Does not imply `collect` and will fail if the file specified by --data doesn't exist."""
#TODO: Two-step generation (HTML -> XSL -> HTML)
click.echo('Generating pages.')
outdir = _output_dir(ctx)
logging.debug("Generating. Output directory: %s" % str(outdir))
pages = ctx.obj['settings']['pages']
data_file = ctx.obj['data_file']
if 'xmldata' in ctx.obj:
logging.debug("Using previously-loaded data.")
else:
logging.debug("Reading data from %s" % str(data_file))
with data_file.open("rb") as fl:
fdata = fl.read()
ctx.obj['xmldata'] = ET.fromstring(fdata, HUGE_PARSER)
data = ctx.obj['xmldata']
for pagename, page in pages.items():
logging.info("Generating page '%s'." % pagename)
xslt = ET.parse(page['template'])
transform = ET.XSLT(xslt)
#TODO: Pagination should be optional.
params = {
'pagename': "'{}'".format(pagename),
'output_dir': "'{}'".format(outdir.as_posix())
}
for k, v in page.items():
if k not in params.keys():
if type(v) in (int, float):
params[k] = str(v)
if type(v) == str:
if v.startswith('xpath:'):
params[k] = v[len('xpath:'):]
elif 'items' == k: #TODO: Remove. Legacy, for pagination.
params[k] = v
else: #TODO: This will break stuff if v contains a '
params[k] = "'{}'".format(v)
result = transform(data, **params)
#TODO: Make this an option somewhere.
if page['uri']: #If uri is false, just discard the output from this template.
flname = page['uri']
target = Path(outdir, flname)
if not target.parent.exists():
target.parent.mkdir(parents=True)
logging.debug("Outputting "+str(target))
#with target.open('wb') as f:
result.write_output(str(target))
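# Hedged illustration (added for this edit, not part of the original module):
# how a hypothetical page entry in the settings is turned into XSLT parameters
# by the loop above. Key names other than 'template' and 'uri' are made up:
#
#   pages:
#     blog-index:
#       template: templates/blog.xsl
#       uri: blog/index.html
#       title: My Blog                                      # -> "'My Blog'" (quoted string literal)
#       per-page: 10                                        # -> "10" (numeric, unquoted)
#       posts: "xpath:/fugue-data/data-sources/blog/item"   # -> passed through as a raw XPath expression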
@fugue.command()
@click.pass_context
def postprocess(ctx):
"""Runs all postprocessing directives."""
outdir = _output_dir(ctx)
logging.debug("Postprocessing. Output dir: %s" % outdir)
click.echo("Running postprocess tasks.")
commands = ctx.obj['settings'].get('postprocess', [])
process(commands)
if __name__ == '__main__':
STARTED_IN = Path().resolve()
fugue()
os.chdir(STARTED_IN)
|
PypiClean
|
/infoblox-netmri-3.8.0.0.tar.gz/infoblox-netmri-3.8.0.0/infoblox_netmri/api/broker/v3_8_0/auth_privilege_broker.py
|
from ..broker import Broker
class AuthPrivilegeBroker(Broker):
controller = "auth_privileges"
def index(self, **kwargs):
"""Lists the available auth privileges. Any of the inputs listed may be be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier for this user privilege.
:type id: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier for this user privilege.
:type id: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthPrivilege. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_privileges: An array of the AuthPrivilege objects that match the specified input criteria.
:rtype auth_privileges: Array of AuthPrivilege
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
def show(self, **kwargs):
"""Shows the details for the specified auth privilege.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The internal NetMRI identifier for this user privilege.
:type id: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_privilege: The auth privilege identified by the specified id.
:rtype auth_privilege: AuthPrivilege
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
def search(self, **kwargs):
"""Lists the available auth privileges matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.
**Inputs**
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param created_at: The date and time the record was initially created in NetMRI.
:type created_at: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param created_at: The date and time the record was initially created in NetMRI.
:type created_at: Array of DateTime
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param description: The description of this user privilege, as shown in the user interface.
:type description: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param description: The description of this user privilege, as shown in the user interface.
:type description: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier for this user privilege.
:type id: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param id: The internal NetMRI identifier for this user privilege.
:type id: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param privilege_name: The name of this user privilege, as shown in the user interface.
:type privilege_name: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param privilege_name: The name of this user privilege, as shown in the user interface.
:type privilege_name: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param reference: The internal key used to identify this privilege; this is the value shown in the API documentation page for those methods requiring a privilege.
:type reference: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param reference: The internal key used to identify this privilege; this is the value shown in the API documentation page for those methods requiring a privilege.
:type reference: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param sequence: Not used.
:type sequence: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param sequence: Not used.
:type sequence: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param updated_at: The date and time the record was last modified in NetMRI.
:type updated_at: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_at: The date and time the record was last modified in NetMRI.
:type updated_at: Array of DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthPrivilege. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param query: This value will be matched against auth privileges, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: created_at, description, id, privilege_name, reference, sequence, updated_at.
:type query: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Keep in mind that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_privileges: An array of the AuthPrivilege objects that match the specified input criteria.
:rtype auth_privileges: Array of AuthPrivilege
"""
return self.api_list_request(self._get_method_fullname("search"), kwargs)
def find(self, **kwargs):
"""Lists the available auth privileges matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: created_at, description, id, privilege_name, reference, sequence, updated_at.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_created_at: The operator to apply to the field created_at. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. created_at: The date and time the record was initially created in NetMRI. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_created_at: If op_created_at is specified, the field named in this input will be compared to the value in created_at using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_created_at must be specified if op_created_at is specified.
:type val_f_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_created_at: If op_created_at is specified, this value will be compared to the value in created_at using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_created_at must be specified if op_created_at is specified.
:type val_c_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_description: The operator to apply to the field description. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. description: The description of this user privilege, as shown in the user interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_description: If op_description is specified, the field named in this input will be compared to the value in description using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_description must be specified if op_description is specified.
:type val_f_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_description: If op_description is specified, this value will be compared to the value in description using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_description must be specified if op_description is specified.
:type val_c_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_id: The operator to apply to the field id. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. id: The internal NetMRI identifier for this user privilege. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_id: If op_id is specified, the field named in this input will be compared to the value in id using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_id must be specified if op_id is specified.
:type val_f_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_id: If op_id is specified, this value will be compared to the value in id using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_id must be specified if op_id is specified.
:type val_c_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_privilege_name: The operator to apply to the field privilege_name. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. privilege_name: The name of this user privilege, as shown in the user interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_privilege_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_privilege_name: If op_privilege_name is specified, the field named in this input will be compared to the value in privilege_name using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_privilege_name must be specified if op_privilege_name is specified.
:type val_f_privilege_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_privilege_name: If op_privilege_name is specified, this value will be compared to the value in privilege_name using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_privilege_name must be specified if op_privilege_name is specified.
:type val_c_privilege_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_reference: The operator to apply to the field reference. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. reference: The internal key used to identify this privilege; this is the value shown in the API documentation page for those methods requiring a privilege. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_reference: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_reference: If op_reference is specified, the field named in this input will be compared to the value in reference using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_reference must be specified if op_reference is specified.
:type val_f_reference: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_reference: If op_reference is specified, this value will be compared to the value in reference using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_reference must be specified if op_reference is specified.
:type val_c_reference: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_sequence: The operator to apply to the field sequence. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. sequence: Not used. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_sequence: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_sequence: If op_sequence is specified, the field named in this input will be compared to the value in sequence using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_sequence must be specified if op_sequence is specified.
:type val_f_sequence: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_sequence: If op_sequence is specified, this value will be compared to the value in sequence using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_sequence must be specified if op_sequence is specified.
:type val_c_sequence: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_updated_at: The operator to apply to the field updated_at. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. updated_at: The date and time the record was last modified in NetMRI. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_updated_at: If op_updated_at is specified, the field named in this input will be compared to the value in updated_at using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_updated_at must be specified if op_updated_at is specified.
:type val_f_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_updated_at: If op_updated_at is specified, this value will be compared to the value in updated_at using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_updated_at must be specified if op_updated_at is specified.
:type val_c_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthPrivilege. Valid values are id, privilege_name, sequence, description, created_at, updated_at, reference. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Keep in mind that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_privileges: An array of the AuthPrivilege objects that match the specified input criteria.
:rtype auth_privileges: Array of AuthPrivilege
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
|
PypiClean
|
/echarts-china-counties-pypkg-0.0.2.tar.gz/echarts-china-counties-pypkg-0.0.2/echarts_china_counties_pypkg/resources/echarts-china-counties-js/2239791f59c98df5421aea4af70306b9.js
|
(function (root, factory) {if (typeof define === 'function' && define.amd) {define(['exports', 'echarts'], factory);} else if (typeof exports === 'object' && typeof exports.nodeName !== 'string') {factory(exports, require('echarts'));} else {factory({}, root.echarts);}}(this, function (exports, echarts) {var log = function (msg) {if (typeof console !== 'undefined') {console && console.error && console.error(msg);}};if (!echarts) {log('ECharts is not Loaded');return;}if (!echarts.registerMap) {log('ECharts Map is not loaded');return;}echarts.registerMap('魏县', {"type":"FeatureCollection","features":[{"type":"Feature","id":"130434","properties":{"name":"魏县","cp":[114.93892,36.359868],"childNum":1},"geometry":{"type":"Polygon","coordinates":["@@A@A@CAA@@@C@ABA@AB@@A@@B@BFPDR@BDJBHBDDLBF@B@B@DCLCHGDABGD@@CFAFABEF@@BDA@B@A@BF@BDJ@@BDBHBJD@@@CB@@IB@AK@GA@@GBA@BFB@@@@@@@A@@DA@AAA@@@AB@AMB@AAAA@C@@JA@@@ABAEA@@@MBACE@BAA@C@A@E@@CA@AIB@AAD@AQA@@@M@@B@@@@G@@B@BA@@BG@@CA@@@A@@BEB@AAACBC@AEA@C@@ECBACA@GB@BGBABE@C@@@A@@DA@E@BBEB@DAB@AA@@CK@@CEBC@BDE@A@@BA@G@@@G@AIA@I@AGKBA@@@BLEBBFKB@BABIB@@@@BBA@BBC@@BDDCBBDC@@BCBBDA@@BBDHFD@B@JCDDAJ@B@HHA@AAALAHD@@DHBBC@EDCHADAB@B@JBFBBBFBLCHAB@BCFAD@BABGH@@ADCAAAAAKIAA@@C@BBBH@BEB@@B@B@B@BL@B@@B@BFB@@HBFC@@B@D@@@D@@GJGJAD@DCL@L@FDN@@@@@DBDDHFRFLH@LAB@DP@@@@BDBFHA@@F@D@DBFBHFJJBB@DBJ@@J@DJL@BFDLDDBBJJJJDH@BDDA@@BDBBH@@GBBB@HB@B@DNF@BB@F@BBDCBBD@@D@BAACBAJCD@BJHADBBDDBB@FCDFF@@CTA@GF@@EF@DAB@@AD@B@@BD@AGHBLAJBFFHDC@@BFBHF@JBDFFH@FBABE@DT@BBHA@C@E@BFAACACBBFC@ABBPDBDBN@@ALA@CDABFH@DH@LBBD@@BFBFBFAFECAHAD@FBHD@@BBBDBDABJBH@BBFED@BABA@E@@F@@@@EA@@CBADIFEDAFC@A@CD@@@@CC@A@@IR@AGNAB@BAAED@@BBHBA@C@E@@F@HBD@BADEBEDA@C@E@AFMHMFCRIJGJKHIHGBAFA@ADANIBCLMBSBUBMDKDY@EBKBIBU@A@@BCBK@E@I@A@ADEFIBC@ABAFGHCJ@D@BAFABCDCHCHCICGCEECGCI@AAEAM@AAQAC@@CAG@]HCBYFG@C@AAEEECAEIICCACAE@EBGBC@E@EACCKAIAA@E@GDIBAAC@A@A@@@A@AA@ECCACCEECAGGGGGGAAKIAAECCEECCCCCGG@@CACACAA@AAA@A@C@G@C@ABA@A@A@C@CBA@A@@B@@ABA@AB@@A@AAA@A@AAA@AAE@A@C@A@A@AAACACA@A@A@@@C@A@C@AAACAAAAA@CAA@AAAA@C@A@AAAAAC@E@A@"],"encodeOffsets":[[117684,36915]]}}],"UTF8Encoding":true});}));
|
PypiClean
|
/taskcc-alipay-sdk-python-3.3.398.tar.gz/taskcc-alipay-sdk-python-3.3.398/alipay/aop/api/request/KoubeiMallScanpurchaseUserverifyVerifyRequest.py
|
import json
from alipay.aop.api.FileItem import FileItem
from alipay.aop.api.constant.ParamConstants import *
from alipay.aop.api.domain.KoubeiMallScanpurchaseUserverifyVerifyModel import KoubeiMallScanpurchaseUserverifyVerifyModel
class KoubeiMallScanpurchaseUserverifyVerifyRequest(object):
def __init__(self, biz_model=None):
self._biz_model = biz_model
self._biz_content = None
self._version = "1.0"
self._terminal_type = None
self._terminal_info = None
self._prod_code = None
self._notify_url = None
self._return_url = None
self._udf_params = None
self._need_encrypt = False
@property
def biz_model(self):
return self._biz_model
@biz_model.setter
def biz_model(self, value):
self._biz_model = value
@property
def biz_content(self):
return self._biz_content
@biz_content.setter
def biz_content(self, value):
if isinstance(value, KoubeiMallScanpurchaseUserverifyVerifyModel):
self._biz_content = value
else:
self._biz_content = KoubeiMallScanpurchaseUserverifyVerifyModel.from_alipay_dict(value)
@property
def version(self):
return self._version
@version.setter
def version(self, value):
self._version = value
@property
def terminal_type(self):
return self._terminal_type
@terminal_type.setter
def terminal_type(self, value):
self._terminal_type = value
@property
def terminal_info(self):
return self._terminal_info
@terminal_info.setter
def terminal_info(self, value):
self._terminal_info = value
@property
def prod_code(self):
return self._prod_code
@prod_code.setter
def prod_code(self, value):
self._prod_code = value
@property
def notify_url(self):
return self._notify_url
@notify_url.setter
def notify_url(self, value):
self._notify_url = value
@property
def return_url(self):
return self._return_url
@return_url.setter
def return_url(self, value):
self._return_url = value
@property
def udf_params(self):
return self._udf_params
@udf_params.setter
def udf_params(self, value):
if not isinstance(value, dict):
return
self._udf_params = value
@property
def need_encrypt(self):
return self._need_encrypt
@need_encrypt.setter
def need_encrypt(self, value):
self._need_encrypt = value
def add_other_text_param(self, key, value):
if not self.udf_params:
self.udf_params = dict()
self.udf_params[key] = value
def get_params(self):
params = dict()
params[P_METHOD] = 'koubei.mall.scanpurchase.userverify.verify'
params[P_VERSION] = self.version
if self.biz_model:
params[P_BIZ_CONTENT] = json.dumps(obj=self.biz_model.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
if self.biz_content:
if hasattr(self.biz_content, 'to_alipay_dict'):
params['biz_content'] = json.dumps(obj=self.biz_content.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':'))
else:
params['biz_content'] = self.biz_content
if self.terminal_type:
params['terminal_type'] = self.terminal_type
if self.terminal_info:
params['terminal_info'] = self.terminal_info
if self.prod_code:
params['prod_code'] = self.prod_code
if self.notify_url:
params['notify_url'] = self.notify_url
if self.return_url:
params['return_url'] = self.return_url
if self.udf_params:
params.update(self.udf_params)
return params
def get_multipart_params(self):
multipart_params = dict()
return multipart_params
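if __name__ == "__main__":
    # Hedged usage sketch (added for this edit, not part of the original
    # module): build a request around an empty biz model and inspect the flat
    # parameter dict that would be signed and sent to the gateway. The URL and
    # the extra text parameter below are placeholders.
    example_model = KoubeiMallScanpurchaseUserverifyVerifyModel()
    example_request = KoubeiMallScanpurchaseUserverifyVerifyRequest(biz_model=example_model)
    example_request.notify_url = "https://example.com/notify"
    example_request.add_other_text_param("trace_id", "demo-123")
    print(example_request.get_params())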
|
PypiClean
|
/pygac-fdr-0.2.2.tar.gz/pygac-fdr-0.2.2/README.md
|
# pygac-fdr
Python package for creating a Fundamental Data Record (FDR) of AVHRR GAC data using pygac
[](https://github.com/pytroll/pygac-fdr/actions/workflows/ci.yaml)
[](https://codecov.io/gh/pytroll/pygac-fdr)
[](https://badge.fury.io/py/pygac-fdr)
[](https://doi.org/10.5281/zenodo.5762183)
Installation
============
To install the latest release:
```
pip install pygac-fdr
```
To install the latest development version:
```
pip install git+https://github.com/pytroll/pygac-fdr
```
Usage
=====
To read and calibrate AVHRR GAC level 1b data, adapt the config template in `etc/pygac-fdr.yaml`, then
run:
```
pygac-fdr-run --cfg=my_config.yaml /data/avhrr_gac/NSS.GHRR.M1.D20021.S0*
```
Results are written into the specified output directory in netCDF format. Afterwards, collect and
complement metadata of the generated netCDF files:
```
pygac-fdr-mda-collect --dbfile=test.sqlite3 /data/avhrr_gac/output/*
```
This might take some time, so the results are saved into a database. You can specify files from
multiple platforms; the metadata are analyzed for each platform separately. With a large number
of files you might run into limitations on the size of the command line argument ("Argument list
too long"). In this case use the following command to read the list of filenames from a file
(one per line):
```
pygac-fdr-mda-collect --dbfile=test.sqlite3 @myfiles.txt
```
Finally, update the netCDF metadata inplace:
```
pygac-fdr-mda-update --dbfile=test.sqlite3
```
Tips for AVHRR GAC FDR Users
============================
Checking Global Quality Flag
----------------------------
The global quality flag can be checked from the command line as follows:
```
ncks -CH -v global_quality_flag -s "%d" myfile.nc
```
Cropping Overlap
----------------
Due to the data reception mechanism, consecutive AVHRR GAC files often partly contain the same information. This is what
we call overlap. For example, some scanlines at the end of file A also occur at the beginning of file B. The
`overlap_free_start` and `overlap_free_end` attributes in `pygac-fdr` output files indicate that overlap. There are two
ways to remove it (see the sketch after this list):
- Cut overlap with subsequent file: Select scanlines `0:overlap_free_end`
- Cut overlap with preceding file: Select scanlines `overlap_free_start:-1`
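
A minimal sketch of both options with `xarray` (an assumption; any netCDF reader works), further assuming the scanline
dimension in the output files is named `y` and that the two attributes are global attributes. Check your files; both
details may differ:

```
import xarray as xr

ds = xr.open_dataset("AVHRR-GAC_FDR_1C_example.nc")  # hypothetical filename

start = int(ds.attrs["overlap_free_start"])
end = int(ds.attrs["overlap_free_end"])

ds_head = ds.isel(y=slice(0, end))     # cut overlap with the subsequent file
ds_tail = ds.isel(y=slice(start, -1))  # cut overlap with the preceding file
```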
If, in addition, users want to create daily composites, a file containing observations from two days has to be used
twice: once with only the part before UTC 00:00, and once with only the part after UTC 00:00. Cropping overlap and day together
is a little more complex, because the overlap might cover UTC 00:00. That is why the `pygac-fdr-crop` utility is
provided:
```
$ pygac-fdr-crop AVHRR-GAC_FDR_1C_N06_19810330T225108Z_19810331T003506Z_...nc --date 19810330
0 8260
$ pygac-fdr-crop AVHRR-GAC_FDR_1C_N06_19810330T225108Z_19810331T003506Z_...nc --date 19810331
8261 12472
```
The returned numbers are start- and end-scanline (0-based).
|
PypiClean
|