/async_dash-0.1.0a0-py3-none-any.whl/dash/html/U.py
|
from dash.development.base_component import Component, _explicitize_args
class U(Component):
"""An U component.
U is a wrapper for the <u> HTML5 element.
For detailed attribute info see:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/u
Keyword arguments:
- children (a list of or a singular dash component, string or number; optional):
The children of this component.
- id (string; optional):
The ID of this component, used to identify dash components in
callbacks. The ID needs to be unique across all of the components
in an app.
- accessKey (string; optional):
Keyboard shortcut to activate or add focus to the element.
- aria-* (string; optional):
A wildcard aria attribute.
- className (string; optional):
Often used with CSS to style elements with common properties.
- contentEditable (string; optional):
Indicates whether the element's content is editable.
- contextMenu (string; optional):
Defines the ID of a <menu> element which will serve as the
element's context menu.
- data-* (string; optional):
A wildcard data attribute.
- dir (string; optional):
Defines the text direction. Allowed values are ltr (Left-To-Right)
or rtl (Right-To-Left).
- draggable (string; optional):
Defines whether the element can be dragged.
- hidden (a value equal to: 'hidden', 'HIDDEN' | boolean; optional):
Prevents rendering of given element, while keeping child elements,
e.g. script elements, active.
- key (string; optional):
A unique identifier for the component, used to improve performance
        by React.js while rendering components. See
https://reactjs.org/docs/lists-and-keys.html for more info.
- lang (string; optional):
Defines the language used in the element.
- loading_state (dict; optional):
Object that holds the loading state object coming from
dash-renderer.
`loading_state` is a dict with keys:
- component_name (string; optional):
Holds the name of the component that is loading.
- is_loading (boolean; optional):
Determines if the component is loading or not.
- prop_name (string; optional):
Holds which property is loading.
- n_clicks (number; default 0):
An integer that represents the number of times that this element
has been clicked on.
- n_clicks_timestamp (number; default -1):
An integer that represents the time (in ms since 1970) at which
n_clicks changed. This can be used to tell which button was
changed most recently.
- role (string; optional):
The ARIA role attribute.
- spellCheck (string; optional):
Indicates whether spell checking is allowed for the element.
- style (dict; optional):
Defines CSS styles which will override styles previously set.
- tabIndex (string; optional):
Overrides the browser's default tab order and follows the one
specified instead.
- title (string; optional):
Text to be displayed in a tooltip when hovering over the element."""
@_explicitize_args
def __init__(self, children=None, id=Component.UNDEFINED, n_clicks=Component.UNDEFINED, n_clicks_timestamp=Component.UNDEFINED, key=Component.UNDEFINED, role=Component.UNDEFINED, accessKey=Component.UNDEFINED, className=Component.UNDEFINED, contentEditable=Component.UNDEFINED, contextMenu=Component.UNDEFINED, dir=Component.UNDEFINED, draggable=Component.UNDEFINED, hidden=Component.UNDEFINED, lang=Component.UNDEFINED, spellCheck=Component.UNDEFINED, style=Component.UNDEFINED, tabIndex=Component.UNDEFINED, title=Component.UNDEFINED, loading_state=Component.UNDEFINED, **kwargs):
self._prop_names = ['children', 'id', 'accessKey', 'aria-*', 'className', 'contentEditable', 'contextMenu', 'data-*', 'dir', 'draggable', 'hidden', 'key', 'lang', 'loading_state', 'n_clicks', 'n_clicks_timestamp', 'role', 'spellCheck', 'style', 'tabIndex', 'title']
self._type = 'U'
self._namespace = 'dash_html_components'
self._valid_wildcard_attributes = ['data-', 'aria-']
self.available_properties = ['children', 'id', 'accessKey', 'aria-*', 'className', 'contentEditable', 'contextMenu', 'data-*', 'dir', 'draggable', 'hidden', 'key', 'lang', 'loading_state', 'n_clicks', 'n_clicks_timestamp', 'role', 'spellCheck', 'style', 'tabIndex', 'title']
self.available_wildcard_properties = ['data-', 'aria-']
_explicit_args = kwargs.pop('_explicit_args')
_locals = locals()
_locals.update(kwargs) # For wildcard attrs
args = {k: _locals[k] for k in _explicit_args if k != 'children'}
for k in []:
if k not in args:
raise TypeError(
'Required argument `' + k + '` was not specified.')
super(U, self).__init__(children=children, **args)
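# Illustrative usage (hypothetical app and layout names, not part of this module):
# the generated class is exposed as `html.U` and wraps its children in a <u> tag.
#
#     from dash import Dash, html
#
#     app = Dash(__name__)
#     app.layout = html.Div([html.U("underlined text", id="my-u")])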
|
PypiClean
|
/sciapy-0.0.8.tar.gz/sciapy-0.0.8/docs/README.md
|
# SCIAMACHY data tools
[CI build and test](https://github.com/st-bender/sciapy/actions/workflows/ci_build_and_test.yml)
[Documentation status](https://sciapy.rtfd.io/en/latest/?badge=latest)
[Test coverage](https://coveralls.io/github/st-bender/sciapy)
[Code quality](https://scrutinizer-ci.com/g/st-bender/sciapy/?branch=master)
[DOI](https://doi.org/10.5281/zenodo.1401370)
[DOI](https://doi.org/10.5281/zenodo.1342701)
## Overview
These tools are provided for convenient handling of SCIAMACHY level 1c limb
spectra and retrieved level 2 trace-gas densities.
More extensive documentation is provided on [sciapy.rtfd.io](https://sciapy.rtfd.io).
### Level 1c tools
The `sciapy.level1c` submodule provides a few
[conversion tools](sciapy/level1c/README.md) for [SCIAMACHY](http://www.sciamachy.org)
level 1c calibrated spectra, to be used as input for trace gas retrieval with
[scia\_retrieval\_2d](https://github.com/st-bender/scia_retrieval_2d).
**Note that this is *not* a level 1b to level 1c calibration tool.**
For calibrating level 1b spectra (for example SCI\_NL\_\_1P version 8.02
provided by ESA via the
[ESA data browser](https://earth.esa.int/web/guest/data-access/browse-data-products))
to level 1c spectra, use the
[SciaL1C](https://earth.esa.int/web/guest/software-tools/content/-/article/scial1c-command-line-tool-4073)
command line tool or the free software
[nadc\_tools](https://github.com/rmvanhees/nadc_tools).
The first produces `.child` files, the second can output to HDF5 (`.h5`).
**Further note**: `.child` files are currently not supported.
### Level 2 tools
The `sciapy.level2` submodule provides
post-processing tools for trace-gas densities retrieved from SCIAMACHY limb scans.
They support simple operations such as combining files into *netcdf*, calculating and
noting the local solar time at the retrieval grid points, geomagnetic latitudes, etc.
The level 2 tools also include a simple binning algorithm.
### Regression
The `sciapy.regress` submodule can be used for regression analysis of SCIAMACHY
level 2 trace gas density time series, either directly or as daily zonal means.
It uses the [`regressproxy`](https://regressproxy.readthedocs.io) package
for modelling the proxy input with lag and lifetime decay.
The regression tools support various parameter fitting methods using
[`scipy.optimize`](https://docs.scipy.org/doc/scipy/reference/optimize.html)
and uncertainty evaluation using Markov-Chain Monte-Carlo sampling with
[`emcee`](https://emcee.readthedocs.io).
Further supports covariance modelling via
[`celerite`](https://celerite.readthedocs.io)
and [`george`](https://george.readthedocs.io).
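For orientation, the following is a minimal, generic sketch of the MCMC approach
these tools build on: a toy linear model sampled with `emcee`. It is purely
illustrative and does not use the sciapy API or its proxy model.
```python
import numpy as np
import emcee

# Toy "time series": a linear trend plus Gaussian noise (illustrative data only).
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 100)
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.5, t.size)

def log_prob(theta):
    # Gaussian log-likelihood for the model y = a * t + b with sigma = 0.5.
    a, b = theta
    return -0.5 * np.sum((y - (a * t + b)) ** 2 / 0.25)

nwalkers, ndim = 16, 2
p0 = rng.normal(0.0, 1.0, (nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)
samples = sampler.get_chain(discard=200, flat=True)
print(samples.mean(axis=0))  # close to the true values (2.0, 1.0)
```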
## Install
### Prerequisites
Sciapy uses features from a lot of different packages.
All dependencies will be automatically installed when using
`pip install` or `python setup.py`, see below.
However, to speed up the install or for use
within a `conda` environment, it may be advantageous to
install some of the important packages beforehand:
- `numpy` at least version 1.13.0 for general numerics,
- `scipy` at least version 0.17.0 for scientific numerics,
- `matplotlib` at least version 2.2 for plotting,
- `netCDF4` for the low level netcdf4 interfaces,
- `h5py` for the low level hdf5 interfaces,
- `dask`,
- `toolz`,
- `pandas` and
- `xarray` for the higher level data interfaces,
- `astropy` for (astronomical) time conversions,
- `parse` for ASCII text parsing in `level1c`,
- `pybind11` for the C++ interface needed by `celerite`,
- `celerite` at least version 0.3.0 and
- `george` for Gaussian process modelling,
- `emcee` for MCMC sampling and
- `corner` for the sample histogram plots,
- `regressproxy` for the regression proxy modelling.
Out of these packages, `numpy` is probably the most important one
to be installed first because at least `celerite` needs it for setup.
It may also be a good idea to install
[`pybind11`](https://pybind11.readthedocs.io)
because both `celerite` and `george` use its interface,
and both may fail to install without `pybind11`.
Depending on the setup, `numpy` and `pybind11` can be installed
via `pip`:
```sh
pip install numpy pybind11
```
or [`conda`](https://conda.io):
```sh
conda install numpy pybind11
```
### sciapy
Official releases are available as `pip` packages from the main package repository
at <https://pypi.org/project/sciapy/> and can be installed with:
```sh
$ pip install sciapy
```
The latest development version of
sciapy can be installed with [`pip`](https://pip.pypa.io) directly
from github (see <https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support>
and <https://pip.pypa.io/en/stable/reference/pip_install/#git>):
```sh
$ pip install [-e] git+https://github.com/st-bender/sciapy.git
```
The other option is to use a local clone:
```sh
$ git clone https://github.com/st-bender/sciapy.git
$ cd sciapy
```
and then install with `pip` (optionally in editable mode with `-e`, see
<https://pip.pypa.io/en/stable/reference/pip_install/#install-editable>):
```sh
$ pip install [-e] .
```
or using `setup.py`:
```sh
$ python setup.py install
```
## Usage
The whole module as well as the individual submodules can be loaded as usual:
```python
>>> import sciapy
>>> import sciapy.level1c
>>> import sciapy.level2
>>> import sciapy.regress
```
Basic class and method documentation is accessible via `pydoc`:
```sh
$ pydoc sciapy
```
The submodules' documentation can be accessed with `pydoc` as well:
```sh
$ pydoc sciapy.level1c
$ pydoc sciapy.level2
$ pydoc sciapy.regress
```
## License
This python package is free software: you can redistribute it or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, version 2 (GPLv2), see [local copy](./LICENSE)
or [online version](http://www.gnu.org/licenses/gpl-2.0.html).
|
PypiClean
|
/contrast-agent-5.24.0.tar.gz/contrast-agent-5.24.0/src/contrast/utils/object_utils.py
|
from contrast_vendor import wrapt
import copy
import inspect
NOTIMPLEMENTED_MSG = "This method should be implemented by a concrete subclass"
def safe_copy(value):
"""
    Return a safe copy of a value.
    :param value: value to be copied
    :return: the copied value, or the original value if copying raises an exception
"""
try:
return copy.copy(value)
except Exception:
return value
def get_name(obj):
return f"{obj.__module__}.{obj.__name__}" if inspect.isclass(obj) else obj.__name__
class BindingObjectProxy(wrapt.ObjectProxy):
"""
This class changes the default behavior of wrapt's ObjectProxy when accessing
certain bound methods of the proxied object.
Normally, if we access a bound method of a proxied object, the `self` passed to that
method will be the wrapped object, not the proxy itself. This means that inside of
bound methods, we lose the proxy, and the function is allowed to use the un-proxied
version of the object. This is usually not desirable.
This class provides a workaround for this behavior, but only for functions that we
explicitly override here. We haven't come up with a general safe solution to this
problem for all functions (yet).
We've tried overriding __getattr__ to try to rebind bound methods on-the-fly as
they're accessed. This had a bad interaction with BoundFunctionWrapper, which
returns the original (unwrapped) function when accessing `__func__`.
With each of the methods defined here, we're making the following assumptions:
- the underlying object does not have an attribute of the same name OR
- if the underlying object has an attribute of the same name, that attribute is an
instance method
If this doesn't hold, it could lead to very strange / incorrect behavior.
"""
def run(self, *args, **kwargs):
if type(self) is type(self.__wrapped__):
return self.__wrapped__.run(*args, **kwargs)
return self.__wrapped__.run.__func__(self, *args, **kwargs)
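# Illustrative behaviour (assumed class names, not part of this module): with a
# plain wrapt.ObjectProxy, `proxy.run()` receives the *wrapped* object as `self`;
# BindingObjectProxy re-binds `run` so the proxy itself is passed through instead
# whenever the proxy type differs from the wrapped type.
#
#     class Task:
#         def run(self):
#             return type(self).__name__
#
#     proxied = BindingObjectProxy(Task())
#     proxied.run()  # -> "BindingObjectProxy" rather than "Task"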
|
PypiClean
|
/pydango_pip-1.0.2-py3-none-any.whl/pydango/cinephile.py
|
from getpass import getpass
from pprint import pprint
from datetime import datetime
from pydango import state
from pydango.switchlang import switch
from pydango import (
primary_func,
secondary_func
)
from pydango.primary_func import chunks
from pydango.primary_func import (
create_session,
random_number_generator,
)
from pydango.tables import (
Account,
Category,
Movie,
Payment,
Ticket,
Theater,
theater_schedule,
)
from sqlalchemy.sql import (
update,
and_,
)
engine, session = create_session()
def run():
print('****************** Hello Cinephile ******************')
print()
show_commands()
while True:
action = primary_func.get_action()
with switch(action) as s:
s.case('c', create_account)
s.case('l', log_into_account)
s.case('o', logout)
s.case('s', list_movies)
s.case('n', browse_by_location)
s.case('t', browse_by_category)
s.case('r', purchase_ticket)
s.case('v', view_ticket)
s.case('m', lambda: 'change_mode')
s.case(['x', 'bye', 'exit', 'exit()'], secondary_func.exit_app)
s.default(secondary_func.unknown_command)
if action:
print()
if s.result == 'change_mode':
return
def show_commands():
print('What action would you like to take: ')
print('[C]reate an account')
print('[L]ogin to your account')
print('Log[O]ut of your account')
print('[R]eserve a movie ticket')
print('[V]iew your movie ticket')
print('[S]ee list of available movies')
print('Search for [N]earby theaters')
print('Search by ca[T]egory')
print('[M]ain menu')
print('e[X]it app')
print('[?] Help (this info)')
print()
def create_account():
print("****************** REGISTER ******************")
print()
print("Please provide the following information\n")
email = input("Email (required): ").strip().lower()
credit_card = input("Credit-card number (required, i.e. 4444333399993333): ").strip()
credit_card = int(credit_card)
password = getpass().strip()
zip_code = input("Zip-code (required): ").strip()
zip_code = int(zip_code)
first_name = input("What is your first name? ").strip()
last_name = input("What is your last name? ").strip()
old_account = session.query(Account).filter_by(email=email).first()
if old_account:
secondary_func.error_msg(f"ERROR: Account with email {email} already exists.")
return
account = Account(
email=email,
credit_card=credit_card,
password=password,
zip_code=zip_code,
first_name=first_name,
last_name=last_name
# exclude theater_owner attribute
)
session.add(account)
# Flush
my_account = session.query(Account).filter_by(email=email).first()
session.commit()
state.active_account = account
secondary_func.success_msg(f"\nCreated new account with id {state.active_account.id}")
def log_into_account():
print("****************** LOGIN ******************")
email = input("Email: ").strip()
password = getpass().strip()
account = session.query(Account).filter_by(email=email).first()
if not account:
secondary_func.error_msg(f"Could not find account with email ({email})")
return
elif account.password != password:
secondary_func.error_msg(f"Password does not match")
return
state.active_account = account
secondary_func.success_msg(f"\nYou are now logged in.")
# To help with testing in the Python shell
return state.active_account
def logout():
if state.active_account is None:
print("You are already logged-out.")
return
state.active_account = None
print("You are logged-out.")
def list_movies():
print("****************** BROWSE FOR MOVIES ******************")
print()
# Grab all Movie objects
movies = session.query(Movie).filter_by(active=True).all()
movies_list = [
i.__dict__.copy()
for i in movies
]
# movie __dict__ attribute contains _sa_instance_state which isn't useful
# popped = [i.pop('_sa_instance_state') for i in movies_list]
# create a movie_chunks generator out of movie_list
# to generate 3 items at a time
movie_chunks = chunks(movies_list, 5)
while True:
chunked = next(movie_chunks, None)
        if chunked is None:
print("The End")
break
for i in chunked:
print(f"""\nTitle: {i['title']} | Rating: {i['rating']}
Description: {i['description']}""")
more = input("\n--More--<ENTER>\n")
if not more == "":
break
def browse_by_location():
print("****************** BROWSE FOR MOVIES BY LOCATION ******************")
print()
zip_code = input("Enter your zipcode: ").strip()
zip_code = int(zip_code)
theaters = session.query(Theater).filter_by(zip_code=zip_code).all()
if not theaters:
print("There are no theaters in that zip_code.")
by_city = input("Would you like to search by city (Yes or <ENTER to quit>)? ").strip()
if by_city == "":
return
city = input("Enter your city of residence: ").strip()
theaters = session.query(Theater).filter_by(city=city).all()
if not theaters:
print("Sorry, but there are no open theaters in your city.")
return
for i, theater in enumerate(theaters, 1):
movies = theater.movies
print(f"""\n{i}. {theater.name} at {theater.address} {theater.zip_code}
Open: {theater.open_time.strftime('%H:%M:%S')} | Close: {theater.close_time.strftime('%H:%M:%S')}
Prices: {theater.ticket_price}
""")
print(f"\n{theater.name}'s Movies:\n")
if movies:
for movie in movies:
movie = session.query(Movie).filter_by(id=movie.movie_id).first()
print(f"Title: {movie.title} | Rating: {movie.rating}\n")
else:
print("No movies playing currently due to COVID.")
print("Please check back when we get a government that cares about its people.")
def browse_by_category():
print("****************** BROWSE FOR MOVIES BY CATEGORY ******************")
print()
categories = session.query(Category).all()
categories_dict = {
'1': 'Drama',
'2': 'Action',
'3': 'Horror',
'4': 'Scifi',
'5': 'Romance',
'6': 'Comedy'
}
print("Movie categories: \n")
for i, category in enumerate(categories, 1):
print(f"{i}. {category.category_name}")
print()
category = input("Which category are you interested in (Enter a number): ").strip()
category = session.query(Category).filter_by(category_name=categories_dict[category]).first()
movies = category.movies
print(f"Movies for category: {category.category_name}\n")
for i, movie in enumerate(movies, 1):
print(i, movie.title)
def purchase_ticket():
print("****************** PURCHASE TICKETS ******************")
print()
if not state.active_account:
print("You must be logged in to to purchase a ticket.")
return
# Get account credentials that were created on registration
account = state.active_account
# Grab the theater_schedule objects
schedules = session.query(theater_schedule).all()
print("\nMOVIE THEATER SCHEDULES\n")
# List all available movies and theaters and times
# with index loop so they can input a number representing an object
# that will later get mapped to elements of tuples appended to a list
index = 0
for i in schedules:
theater = session.query(Theater).filter_by(id=i.theater_id).first()
movie = session.query(Movie).filter_by(id=i.movie_id).first()
index += 1
print(f"""{index}: {theater.name} {theater.address}, Prices: {theater.ticket_price}
{movie.title}, Schedules: {i.time}, Seats: {i.seats_available}\n""")
ticket_number = input("\nEnter ticket number: ").strip()
ticket_number = int(ticket_number) - 1
quantity = input("How many tickets would you like to purchase: ").strip()
quantity = int(quantity)
category = input("Which category of tickets (i.e. Adult/Child): ").strip()
theaters_list = []
    # Create a tuple of the required information to purchase a ticket
# along with an index so the user can select a tuple
for i, x in enumerate(schedules, 1):
theater = session.query(Theater).filter_by(id=x.theater_id).first()
movie = session.query(Movie).filter_by(id=x.movie_id).first()
payment_id = random_number_generator()
payment_id = int(payment_id)
tup = (i, theater.id, movie.id, x.time, payment_id, account.id)
theaters_list.append(tup)
my_ticket = theaters_list[ticket_number]
    # Figure out the price for the chosen category at this particular theater
    # outside of the loop, since it only needs to be done once
my_theater = session.query(Theater).filter_by(id=my_ticket[1]).first()
my_movie = session.query(Movie).filter_by(id=my_ticket[2]).first()
ticket_price = float(my_theater.ticket_price[category])
total = ticket_price * quantity
ticket = Ticket(
theater_id=my_ticket[1],
movie_id=my_ticket[2],
time=my_ticket[3],
payment_id=my_ticket[4],
account_id=my_ticket[5],
quantity=quantity,
total=total
)
payment = Payment(
id=my_ticket[4],
credit_card=account.credit_card,
paid=True
)
session.add(ticket)
session.add(payment)
session.commit()
    # There's probably a better way to do this, but what it's supposed to do
    # is update the value of seats_available in theater_schedule
    # every time someone purchases a ticket
my_theater_schedule = session.query(theater_schedule).filter_by(
theater_id=my_ticket[1],
movie_id=my_ticket[2],
time=my_ticket[3]
).first()
new_seats_available = my_theater_schedule.seats_available - quantity
engine.execute(update(theater_schedule).where(and_(theater_schedule.c.theater_id==my_ticket[1],
theater_schedule.c.movie_id==my_ticket[2],
theater_schedule.c.time==my_ticket[3])).values(seats_available=new_seats_available))
ticket_receipt = session.query(Ticket).filter_by(id=ticket.id).first()
print("\nYour receipt: \n")
print(f"""Movie: {my_movie.title} | Location: {my_theater.name} at {my_theater.address}
Time: {ticket_receipt.time} | Quantity: {ticket_receipt.quantity} tickets
Total Price: ${total} \n
Payment Id: {payment.id} | Date of Purchase: {ticket_receipt.created.date()}""")
print("\nEnjoy your movie!\n")
def view_ticket():
print("****************** VIEW MY CURRENT TICKETS ******************")
print()
if not state.active_account:
print("You must be logged in to view a purchased ticket.")
return
# Grab account
account = state.active_account
# Get account-related tickets
tickets = session.query(Ticket).filter_by(account_id=account.id).all()
# If account has no tickets return
if not tickets:
return
# Return only valid tickets - tickets that were purchased today
today = datetime.today().date()
print("\nMy Tickets: \n")
for ticket in tickets:
if ticket.created.date() == today:
theater = session.query(Theater).filter_by(id=ticket.theater_id).first()
movie = session.query(Movie).filter_by(id=ticket.movie_id).first()
payment = session.query(Payment).filter_by(id=ticket.payment_id).first()
            status = 'Paid' if payment.paid else 'Unpaid'
print(f"""
Movie: {movie.title} | Location: {theater.name} at {theater.address}
Time: {ticket.time} | Quantity: {ticket.quantity} tickets
Total Price: ${ticket.total} | Status: {status}\n
Payment Id: {ticket.payment_id} | Date of Purchase: {ticket.created.date()}\n
""")
|
PypiClean
|
/mesh_sandbox-1.0.9-py3-none-any.whl/mesh_sandbox/common/mex_headers.py
|
import re
import string
from typing import Any, NamedTuple, Optional
from fastapi import Header, HTTPException, status
from ..models.message import Message
from . import strtobool
from .constants import Headers
def ensure_text(text: str, encoding="utf-8", errors="strict"):
if isinstance(text, str):
return text
if isinstance(text, bytes):
return text.decode(encoding, errors)
raise TypeError(f"not expecting type '{type(text)}'")
_INVALID_CONTROL_CHAR_REGEX = re.compile(r".*[\x00-\x1f].*")
def contains_control_chars(value: str):
return _INVALID_CONTROL_CHAR_REGEX.match(ensure_text(value))
class MexHeaders(NamedTuple):
mex_to: str
mex_workflow_id: str
mex_chunk_range: Optional[str]
mex_subject: Optional[str]
mex_localid: Optional[str]
mex_partnerid: Optional[str]
mex_filename: Optional[str]
mex_content_encrypted: bool
mex_content_compressed: bool
mex_content_checksum: Optional[str]
def update(self, **kwargs):
if not kwargs:
return self
updated = self._asdict() # pylint: disable=no-member
updated.update(kwargs)
return MexHeaders(*[updated[f] for f in self._fields]) # pylint: disable=no-member
@classmethod
def from_message(cls, message: Message, chunk_range: Optional[str], **kwargs):
create: dict[str, Any] = {
"mex_to": message.recipient.mailbox_id,
"mex_workflow_id": message.workflow_id,
"mex_chunk_range": chunk_range,
"mex_subject": message.metadata.subject,
"mex_localid": message.metadata.local_id,
"mex_partnerid": message.metadata.partner_id,
"mex_filename": message.metadata.file_name,
"mex_content_encrypted": message.metadata.encrypted,
"mex_content_compressed": message.metadata.compressed,
"mex_content_checksum": message.metadata.checksum,
}
if kwargs:
create.update(kwargs)
return MexHeaders(*[create[f] for f in cls._fields]) # pylint: disable=no-member
def validate_content_checksum(content_checksum: Optional[str]):
if not content_checksum:
return
content_checksum = content_checksum.strip()
special_chars = ":-/"
chars_allowed = string.ascii_letters + string.digits + string.whitespace + special_chars
if all(char in chars_allowed for char in content_checksum):
return
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid checksum")
_URL_REGEX = re.compile("^https?://", re.IGNORECASE)
def _validate_headers(mex_headers: MexHeaders):
bad_fields = []
for key, value in mex_headers._asdict().items():
if type(value) not in (str, bytes):
continue
if not contains_control_chars(ensure_text(value)):
continue
bad_fields.append(key)
if mex_headers.mex_to and _URL_REGEX.match(mex_headers.mex_to):
bad_fields.append(Headers.Mex_To)
if mex_headers.mex_workflow_id and _URL_REGEX.match(mex_headers.mex_workflow_id):
bad_fields.append(Headers.Mex_WorkflowID)
if mex_headers.mex_content_checksum and _URL_REGEX.match(mex_headers.mex_content_checksum):
bad_fields.append(Headers.Mex_Content_Checksum)
if bad_fields:
err = {
"errorEvent": "TRANSFER",
"errorCode": "06",
"errorDescription": "MalformedControlFile",
"fields": bad_fields,
}
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=err)
validate_content_checksum(mex_headers.mex_content_checksum)
# pylint: disable=too-many-arguments
def send_message_mex_headers(
mex_to: str = Header(
..., title=Headers.Mex_To, description="Recipient mailbox ID", example="MAILBOX01", max_length=100
),
mex_workflowid: str = Header(
...,
title=Headers.Mex_WorkflowID,
description="Identifies the type of message being sent e.g. Pathology, GP Capitation.",
max_length=300,
),
mex_chunk_range: str = Header(title=Headers.Mex_Chunk_Range, default="", example="1:2", max_length=20),
mex_subject: str = Header(title=Headers.Mex_Subject, default="", max_length=500),
mex_localid: str = Header(title=Headers.Mex_LocalID, default="", max_length=300),
mex_partnerid: str = Header(title=Headers.Mex_PartnerID, default="", max_length=500),
mex_filename: str = Header(title=Headers.Mex_FileName, default="", max_length=300),
mex_content_encrypted: str = Header(
title=Headers.Mex_Content_Encrypted,
default="",
description="Flag indicating that the original message is encrypted, "
"this has no affect on the content, but will be flowed to the recipient",
example="Y",
include_in_schema=False,
max_length=20,
),
mex_content_compressed: str = Header(
title=Headers.Mex_Content_Compressed,
default="",
description="""Flag indicating that the original message has been compressed by the mesh client""",
example="Y",
include_in_schema=False,
max_length=20,
),
mex_content_checksum: str = Header(
title=Headers.Mex_Content_Checksum,
default="",
description="Checksum of the original message contents, as provided by the message sender",
example="b10a8db164e0754105b7a99be72e3fe5",
max_length=100,
),
) -> MexHeaders:
mex_headers = MexHeaders(
mex_to=(mex_to or "").upper().strip(),
mex_workflow_id=(mex_workflowid or "").strip(),
mex_chunk_range=(mex_chunk_range or "").strip(),
mex_subject=mex_subject,
mex_localid=mex_localid,
mex_partnerid=mex_partnerid,
mex_filename=mex_filename,
mex_content_encrypted=strtobool(mex_content_encrypted) or False,
mex_content_compressed=strtobool(mex_content_compressed) or False,
mex_content_checksum=mex_content_checksum,
)
_validate_headers(mex_headers=mex_headers)
return mex_headers
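# Illustrative wiring (assumed route path, not part of this module): the function
# above is meant to be used as a FastAPI dependency so that the validated MEX
# headers are injected into the request handler.
#
#     from fastapi import APIRouter, Depends
#
#     router = APIRouter()
#
#     @router.post("/messageexchange/{mailbox_id}/outbox")
#     async def send_message(mex_headers: MexHeaders = Depends(send_message_mex_headers)):
#         ...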
|
PypiClean
|
/pulumi_aws_native-0.75.1a1693503310.tar.gz/pulumi_aws_native-0.75.1a1693503310/pulumi_aws_native/connectcampaigns/_inputs.py
|
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'CampaignAnswerMachineDetectionConfigArgs',
'CampaignDialerConfigArgs',
'CampaignOutboundCallConfigArgs',
'CampaignPredictiveDialerConfigArgs',
'CampaignProgressiveDialerConfigArgs',
'CampaignTagArgs',
]
@pulumi.input_type
class CampaignAnswerMachineDetectionConfigArgs:
def __init__(__self__, *,
enable_answer_machine_detection: pulumi.Input[bool]):
"""
The configuration used for answering machine detection during outbound calls
        :param pulumi.Input[bool] enable_answer_machine_detection: Flag to decide whether outbound calls should have answering machine detection enabled or not
"""
pulumi.set(__self__, "enable_answer_machine_detection", enable_answer_machine_detection)
@property
@pulumi.getter(name="enableAnswerMachineDetection")
def enable_answer_machine_detection(self) -> pulumi.Input[bool]:
"""
        Flag to decide whether outbound calls should have answering machine detection enabled or not
"""
return pulumi.get(self, "enable_answer_machine_detection")
@enable_answer_machine_detection.setter
def enable_answer_machine_detection(self, value: pulumi.Input[bool]):
pulumi.set(self, "enable_answer_machine_detection", value)
@pulumi.input_type
class CampaignDialerConfigArgs:
def __init__(__self__, *,
predictive_dialer_config: Optional[pulumi.Input['CampaignPredictiveDialerConfigArgs']] = None,
progressive_dialer_config: Optional[pulumi.Input['CampaignProgressiveDialerConfigArgs']] = None):
"""
The possible types of dialer config parameters
"""
if predictive_dialer_config is not None:
pulumi.set(__self__, "predictive_dialer_config", predictive_dialer_config)
if progressive_dialer_config is not None:
pulumi.set(__self__, "progressive_dialer_config", progressive_dialer_config)
@property
@pulumi.getter(name="predictiveDialerConfig")
def predictive_dialer_config(self) -> Optional[pulumi.Input['CampaignPredictiveDialerConfigArgs']]:
return pulumi.get(self, "predictive_dialer_config")
@predictive_dialer_config.setter
def predictive_dialer_config(self, value: Optional[pulumi.Input['CampaignPredictiveDialerConfigArgs']]):
pulumi.set(self, "predictive_dialer_config", value)
@property
@pulumi.getter(name="progressiveDialerConfig")
def progressive_dialer_config(self) -> Optional[pulumi.Input['CampaignProgressiveDialerConfigArgs']]:
return pulumi.get(self, "progressive_dialer_config")
@progressive_dialer_config.setter
def progressive_dialer_config(self, value: Optional[pulumi.Input['CampaignProgressiveDialerConfigArgs']]):
pulumi.set(self, "progressive_dialer_config", value)
@pulumi.input_type
class CampaignOutboundCallConfigArgs:
def __init__(__self__, *,
connect_contact_flow_arn: pulumi.Input[str],
connect_queue_arn: pulumi.Input[str],
answer_machine_detection_config: Optional[pulumi.Input['CampaignAnswerMachineDetectionConfigArgs']] = None,
connect_source_phone_number: Optional[pulumi.Input[str]] = None):
"""
The configuration used for outbound calls.
:param pulumi.Input[str] connect_contact_flow_arn: The identifier of the contact flow for the outbound call.
:param pulumi.Input[str] connect_queue_arn: The queue for the call. If you specify a queue, the phone displayed for caller ID is the phone number specified in the queue. If you do not specify a queue, the queue defined in the contact flow is used. If you do not specify a queue, you must specify a source phone number.
:param pulumi.Input[str] connect_source_phone_number: The phone number associated with the Amazon Connect instance, in E.164 format. If you do not specify a source phone number, you must specify a queue.
"""
pulumi.set(__self__, "connect_contact_flow_arn", connect_contact_flow_arn)
pulumi.set(__self__, "connect_queue_arn", connect_queue_arn)
if answer_machine_detection_config is not None:
pulumi.set(__self__, "answer_machine_detection_config", answer_machine_detection_config)
if connect_source_phone_number is not None:
pulumi.set(__self__, "connect_source_phone_number", connect_source_phone_number)
@property
@pulumi.getter(name="connectContactFlowArn")
def connect_contact_flow_arn(self) -> pulumi.Input[str]:
"""
The identifier of the contact flow for the outbound call.
"""
return pulumi.get(self, "connect_contact_flow_arn")
@connect_contact_flow_arn.setter
def connect_contact_flow_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "connect_contact_flow_arn", value)
@property
@pulumi.getter(name="connectQueueArn")
def connect_queue_arn(self) -> pulumi.Input[str]:
"""
The queue for the call. If you specify a queue, the phone displayed for caller ID is the phone number specified in the queue. If you do not specify a queue, the queue defined in the contact flow is used. If you do not specify a queue, you must specify a source phone number.
"""
return pulumi.get(self, "connect_queue_arn")
@connect_queue_arn.setter
def connect_queue_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "connect_queue_arn", value)
@property
@pulumi.getter(name="answerMachineDetectionConfig")
def answer_machine_detection_config(self) -> Optional[pulumi.Input['CampaignAnswerMachineDetectionConfigArgs']]:
return pulumi.get(self, "answer_machine_detection_config")
@answer_machine_detection_config.setter
def answer_machine_detection_config(self, value: Optional[pulumi.Input['CampaignAnswerMachineDetectionConfigArgs']]):
pulumi.set(self, "answer_machine_detection_config", value)
@property
@pulumi.getter(name="connectSourcePhoneNumber")
def connect_source_phone_number(self) -> Optional[pulumi.Input[str]]:
"""
The phone number associated with the Amazon Connect instance, in E.164 format. If you do not specify a source phone number, you must specify a queue.
"""
return pulumi.get(self, "connect_source_phone_number")
@connect_source_phone_number.setter
def connect_source_phone_number(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "connect_source_phone_number", value)
@pulumi.input_type
class CampaignPredictiveDialerConfigArgs:
def __init__(__self__, *,
bandwidth_allocation: pulumi.Input[float]):
"""
Predictive Dialer config
:param pulumi.Input[float] bandwidth_allocation: The bandwidth allocation of a queue resource.
"""
pulumi.set(__self__, "bandwidth_allocation", bandwidth_allocation)
@property
@pulumi.getter(name="bandwidthAllocation")
def bandwidth_allocation(self) -> pulumi.Input[float]:
"""
The bandwidth allocation of a queue resource.
"""
return pulumi.get(self, "bandwidth_allocation")
@bandwidth_allocation.setter
def bandwidth_allocation(self, value: pulumi.Input[float]):
pulumi.set(self, "bandwidth_allocation", value)
@pulumi.input_type
class CampaignProgressiveDialerConfigArgs:
def __init__(__self__, *,
bandwidth_allocation: pulumi.Input[float]):
"""
Progressive Dialer config
:param pulumi.Input[float] bandwidth_allocation: The bandwidth allocation of a queue resource.
"""
pulumi.set(__self__, "bandwidth_allocation", bandwidth_allocation)
@property
@pulumi.getter(name="bandwidthAllocation")
def bandwidth_allocation(self) -> pulumi.Input[float]:
"""
The bandwidth allocation of a queue resource.
"""
return pulumi.get(self, "bandwidth_allocation")
@bandwidth_allocation.setter
def bandwidth_allocation(self, value: pulumi.Input[float]):
pulumi.set(self, "bandwidth_allocation", value)
@pulumi.input_type
class CampaignTagArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
value: pulumi.Input[str]):
"""
A key-value pair to associate with a resource.
:param pulumi.Input[str] key: The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
:param pulumi.Input[str] value: The value for the tag. You can specify a value that's 1 to 256 characters in length.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The value for the tag. You can specify a value that's 1 to 256 characters in length.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
|
PypiClean
|
/distributions_dataset-1.1.tar.gz/distributions_dataset-1.1/distributions_dataset/Gaussiandistribution.py
|
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
|
PypiClean
|
/glqiwiapi-2.18.3.tar.gz/glqiwiapi-2.18.3/glQiwiApi/qiwi/clients/wallet/types/other.py
|
from typing import Union
from pydantic import Field, validator
from glQiwiApi.types.amount import CurrencyModel
from glQiwiApi.types.base import Base, HashableBase
from glQiwiApi.utils.currency_util import Currency
class CrossRate(HashableBase):
"""Курс валюты"""
rate_from: Union[str, CurrencyModel] = Field(..., alias='from')
rate_to: Union[str, CurrencyModel] = Field(..., alias='to')
rate: float
@validator('rate_from', 'rate_to', pre=True)
def humanize_rates(cls, v): # type: ignore
if not isinstance(v, str):
return v
cur = Currency.get(v)
if not cur:
return v
return cur
class PaymentMethod(Base):
payment_type: str
account_id: str
class PaymentDetails(Base):
"""Набор реквизитов платежа"""
name: str
"""Наименование банка получателя"""
extra_to_bik: str
"""БИК банка получателя"""
to_bik: str
""" БИК банка получателя"""
city: str
"""Город местонахождения получателя"""
info: str = 'Коммерческие организации'
"""Константное значение"""
is_commercial: str = '1'
"""Служебная информация"""
to_name: str
"""Наименование организации"""
to_inn: str
"""ИНН организации"""
to_kpp: str
""" КПП организации"""
nds: str
"""
    VAT payment flag.
    If you are paying a bill and it does not specify VAT, use the value
    "НДС не облагается" (not subject to VAT); otherwise use "В т.ч. НДС" (including VAT).
"""
goal: str
"""Назначение платежа"""
urgent: str = '0'
"""
    Urgent payment flag (0 - no, 1 - yes).
    An urgent payment is processed in as little as 10 minutes.
    Available on weekdays from 9:00 to 20:30 Moscow time.
    The service costs 25 rubles.
"""
account: str
"""Номер счета получателя"""
from_name: str
"""Имя плательщика"""
from_name_p: str
"""Отчество плательщика"""
from_name_f: str
""" Фамилия плательщика"""
requestProtocol: str = 'qw1'
"""Служебная информация, константа"""
toServiceId: str = '1717'
"""Служебная информация, константа"""
__all__ = ('CrossRate', 'PaymentDetails', 'PaymentMethod')
|
PypiClean
|
/gufo_loader-1.0.3-py3-none-any.whl/gufo/loader/__init__.py
|
# Python modules
import inspect
from pkgutil import iter_modules
from threading import Lock
from typing import (
Any,
Callable,
Dict,
Generic,
Iterable,
Iterator,
Optional,
Set,
Tuple,
TypeVar,
cast,
get_args,
)
__version__: str = "1.0.3"
T = TypeVar("T")
class Loader(Generic[T]):
"""
Generic loader. Used as singleton instantiated from generic.
Args:
base: Plugins package name.
bases: Iterable of plugin package names.
        strict: If False, ignore missing plugin packages; fail otherwise.
        exclude: Iterable of names to be excluded from the plugin list.
Note:
`base` and `bases` parameters are mutually exclusive.
Either `base` or `bases` must be provided.
"""
def __init__(
self: "Loader[T]",
base: Optional[str] = None,
bases: Optional[Iterable[str]] = None,
strict: bool = False,
exclude: Optional[Iterable[str]] = None,
) -> None:
# Pass to generic
super().__init__()
self.strict = strict
self._validate: Optional[Callable[[Any], bool]] = None
        # Check settings
if base is not None and bases is None:
self._bases = [base]
elif base is None and bases is not None:
self._bases = list(bases)
else:
msg = "Either base or bases should be set"
raise RuntimeError(msg)
# Map bases to physical paths
self._paths = list(self._iter_paths(self._bases))
if not self._paths:
msg = "No valid bases"
raise RuntimeError(msg)
#
self._classes: Dict[str, T] = {} # name -> class
self._lock = Lock()
self._exclude: Set[str] = set(exclude or [])
def _get_item_type(self: "Loader[T]") -> T:
"""
Get type passed to generic.
Returns:
Item type.
Note:
Internal method. Must not be used directly.
"""
return get_args(self.__orig_class__)[0] # type: ignore
def _get_validator(self: "Loader[T]") -> Callable[[Any], bool]:
"""
Get item validator function depending of instance type.
Returns:
Validation callable accepting one argument and returning boolean.
Note:
Internal method. Must not be used directly.
"""
if self._validate is not None:
return self._validate
item_type = self._get_item_type()
if self._is_type(item_type):
# Type[Class]
self._validate = self._is_subclass_validator(
get_args(item_type)[0]
)
else:
self._validate = self._is_instance_validator(item_type)
return self._validate
@staticmethod
def _is_instance_validator(
t: Any, # noqa: ANN401
) -> Callable[[Any], bool]:
"""
Instance validator.
        Check if the item is an instance of the given type.
Used for subclass and protocol plugin schemes. i.e.
``` py
Loader[BaseClass](...)
```
Args:
t: Arbitrary object from module to check.
Returns:
Validation callable accepting one argument and returning boolean.
Note:
Internal method. Must not be used directly.
"""
def inner(x: Any) -> bool: # noqa: ANN401
return isinstance(x, t)
return inner
@staticmethod
def _is_subclass_validator(
t: Any, # noqa: ANN401
) -> Callable[[Any], bool]:
"""
        Subclass validator.
        Check if the item is a subclass of the generic class.
Used for subclass scheme. i.e.
``` py
Loader[Type[BaseClass]](...)
```
Args:
t: Arbitrary object from module to check.
Returns:
Validation callable accepting one argument and returning boolean.
Note:
Internal method. Must not be used directly.
"""
def inner(x: Any) -> bool: # noqa: ANN401
return issubclass(x, t)
return inner
@staticmethod
def _is_type(x: Any) -> bool: # noqa: ANN401
"""
Check if the type is the typing.Type generic.
Args:
            x: Arbitrary object from module to check.
Returns:
true if `x` is the `typing.Type` generic.
Note:
Internal method. Must not be used directly.
"""
return repr(x).startswith("typing.Type[")
def _iter_paths(self: "Loader[T]", bases: Iterable[str]) -> Iterable[str]:
"""
Iterate over all paths.
Iterate all existing and importable paths for each
`bases` item.
Args:
bases: Iterable of python packages name.
Returns:
Iterable of resolved paths.
Note:
Internal method. Must not be used directly.
"""
for b in bases:
try:
m = __import__(b, {}, {}, "*")
paths = getattr(m, "__path__", None)
if paths:
yield paths[0]
except ModuleNotFoundError as e:
if self.strict:
msg = f"Module '{b}' is not found"
raise RuntimeError(msg) from e
def __getitem__(self: "Loader[T]", name: str) -> T:
"""
Get plugin by name.
Returns plugin item depending on generic type.
Args:
name: Name of plugin.
Returns:
Plugin item depending on generic type.
Raises:
KeyError: if plugin is missed.
"""
kls = self.get(name)
if kls is None:
raise KeyError(name)
return kls
def __iter__(self: "Loader[T]") -> Iterator[str]:
"""
Iterate over plugin names.
Iterate over all existing plugin names.
        Shorthand for
``` py
loader.keys()
```
Returns:
Iterable of plugin names.
"""
return iter(self.keys())
def get(
self: "Loader[T]", name: str, default: Optional[T] = None
) -> Optional[T]:
"""
Get plugin by name.
Return `default` value if plugin is missed.
Args:
name: Name of plugin.
default: Default value, if plugin is missed.
Returns:
Plugin item depending on generic type or default value.
"""
kls = self._get_item(name)
if kls is not None:
return kls
if default is not None:
return default
return None
def _get_item(self: "Loader[T]", name: str) -> Optional[T]:
"""
Get plugin by name.
Search all the packages and get plugin named by `name`.
Args:
name: Plugin name
Returns:
Item found or None
Note:
Internal method. Must not be used directly.
"""
if name in self._exclude:
msg = "Trying to import excluded name"
raise RuntimeError(msg)
with self._lock:
kls = self._classes.get(name)
if kls is not None:
return kls
for b in self._bases:
kls = self._find_item(f"{b}.{name}")
if kls is not None:
self._classes[name] = kls
return kls
return None
def _find_item(self: "Loader[T]", name: str) -> Optional[T]:
"""
Get plugin item from module `name`.
Args:
name: Module name.
Returns:
Item found or None
Note:
Internal method. Must not be used directly.
"""
is_valid = self._get_validator()
try:
module = __import__(name, {}, {}, "*")
for _, member in inspect.getmembers(module):
# Check member is originated from same module
if (
hasattr(member, "__module__")
and member.__module__ != module.__name__
):
continue
# Check member is valid
if not is_valid(member):
continue
# Cast member to proper type
return cast(T, member)
except ImportError:
pass
return None
def keys(self: "Loader[T]") -> Iterable[str]:
"""
Iterate over plugin name.
Iterable yielding all existing plugin names.
Returns:
Iterable of strings with all plugin names.
Note:
            `keys()` does not force plugin module loading and instantiation.
"""
seen: Set[str] = set()
for mi in iter_modules(self._paths):
if mi.name not in seen and mi.name not in self._exclude:
seen.add(mi.name)
yield from sorted(seen)
def values(self: "Loader[T]") -> Iterable[T]:
"""
Iterate all found plugin items.
Returns:
Iterable of plugin items.
Note:
            `values()` will force plugin module loading and instantiation.
"""
for name in self:
item = self.get(name)
if item is not None:
yield item
def items(self: "Loader[T]") -> Iterable[Tuple[str, T]]:
"""
Iterate the (`name`, `item`) tuples for all plugin items.
        Returns:
            Iterable of tuples of (`name`, `item`).
        Note:
            `items()` will force plugin module loading and instantiation.
"""
for name in self:
item = self.get(name)
if item is not None:
yield name, item
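# Illustrative usage (assumed package and base class names, not part of this module):
#
#     loader = Loader[Type[BasePlugin]](base="myapp.plugins")
#     names = list(loader.keys())       # plugin names, without importing the modules
#     plugin_cls = loader["my_plugin"]  # imports myapp.plugins.my_plugin, returns the class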
|
PypiClean
|
/reverse-kafka-logger-0.1.2.tar.gz/reverse-kafka-logger-0.1.2/logger/grep_manager.py
|
import re
from collections import defaultdict
from multiprocessing import Process
from kafka import TopicPartition
from logger import kafka_factory
from logger.constant import BATCH_SIZE
def search_messages_in_parallel(topic, brokers, regex):
"""
    Messages will be searched in parallel by spawning a process per partition.
:param topic:
:param brokers:
:param regex:
:return:
"""
n_partition = _get_n_partition(brokers, topic)
kafka_consumer = kafka_factory.generate_kafka_consumer(brokers)
partition_id_to_start_end_offset = _get_partition_info(kafka_consumer, topic, n_partition)
for partition in xrange(n_partition):
p = Process(
target=_reverse_search_log_per_partition,
args=(brokers, topic, partition, partition_id_to_start_end_offset, regex),
)
p.start()
p.join()
def _get_partition_info(kafka_consumer, topic, n_partition):
partition_to_offset_info = defaultdict(dict)
partitions = [TopicPartition(topic, partition) for partition in xrange(n_partition)]
beginning_offsets = kafka_consumer.beginning_offsets(partitions)
for topic_partition, offset in beginning_offsets.items():
partition_to_offset_info[topic_partition.partition].update({'start_offset': offset})
end_offsets = kafka_consumer.end_offsets(partitions)
for topic_partition, offset in end_offsets.items():
partition_to_offset_info[topic_partition.partition].update({'end_offset': offset})
return partition_to_offset_info
def _reverse_search_log_per_partition(
brokers,
topic,
partition,
partition_id_to_start_end_offset,
regex,
):
"""
This works by using a sliding window mechanism
---------------------------
1 2 3 4 5 6 7 8 9 10 11 12
^
Normal reading kafka starts from the beginning offset to the end
we can seek the offset one by one, but there is an overhead of network
to call the kafka broker, so the idea is to batch get the messages
:param list[str] brokers:
:param str topic:
:param int partition:
:param str regex:
:return:
"""
"""
Kafka consumer can only be instantiated when the sub-process is spawned otherwise the socket is closed
"""
kafka_consumer = kafka_factory.generate_kafka_consumer(brokers, is_singleton=False)
start_offset = partition_id_to_start_end_offset[partition]['start_offset']
end_offset = partition_id_to_start_end_offset[partition]['end_offset']
print 'start_offset: {}, end_offset: {}'.format(start_offset, end_offset)
kafka_consumer.assign([TopicPartition(topic, partition)])
for offset in range(end_offset, start_offset - 1, -BATCH_SIZE):
start_read_offset, end_read_offset = _get_start_end_offset(offset, start_offset)
# assign partition and offset to the kafka consumer
print 'start_read_offset: {}, end_read_offset: {}, assigned_offset: {}'.format(start_read_offset, end_read_offset, offset)
kafka_consumer.seek(
partition=TopicPartition(topic, partition),
offset=start_read_offset,
)
grep_messages_in_batch(kafka_consumer, regex, start_read_offset, end_read_offset)
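# Illustrative walk of the loop above (assuming BATCH_SIZE = 5): with
# start_offset = 0 and end_offset = 12, the consumer is seeked to offsets 12, 7
# and 2, reading the windows [7, 12), [2, 7) and [0, 2) in turn, so the most
# recent batch is grepped first while messages inside each batch are read forward.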
def _get_start_end_offset(offset, start_offset):
"""
    The computed start offset might be less than the earliest offset that can be
    read; depending on the configuration, messages are only retained for a
    particular time period.
:param offset:
:param start_offset:
:return:
"""
start_read_offset = offset - BATCH_SIZE
end_read_offset = offset
if start_read_offset < start_offset:
start_read_offset = start_offset
return start_read_offset, end_read_offset
def grep_messages_in_batch(kafka_consumer, regex, start_offset, end_offset):
"""
    Grep messages in the [start_offset, end_offset) window by iterating the KafkaConsumer (which polls internally) and printing any message whose value matches the regex.
:param KafkaConsumer kafka_consumer:
:param str regex:
:param int start_offset:
:param int end_offset:
:return:
"""
for _ in range(start_offset, end_offset):
message = next(kafka_consumer)
if re.match(regex, message.value):
print 'message: {}'.format(message)
def _get_n_partition(brokers, topic):
"""
:param brokers:
:param topic:
:return:
"""
kafka_consumer = kafka_factory.generate_kafka_consumer(brokers, is_singleton=False)
kafka_consumer.subscribe(topics=[topic])
kafka_consumer.topics()
return len(kafka_consumer.partitions_for_topic(unicode(topic)))
|
PypiClean
|
/python-cas-mb-1.5.1.tar.gz/python-cas-mb-1.5.1/cas.py
|
import datetime
import logging
from uuid import uuid4
import requests
from lxml import etree
from six.moves.urllib import parse as urllib_parse
logger = logging.getLogger(__name__)
class CASError(ValueError):
"""CASError type"""
pass
class SingleLogoutMixin(object):
@classmethod
def get_saml_slos(cls, logout_request):
"""returns SAML logout ticket info"""
try:
root = etree.fromstring(logout_request)
return root.xpath(
"//samlp:SessionIndex",
namespaces={'samlp': "urn:oasis:names:tc:SAML:2.0:protocol"})
except etree.XMLSyntaxError:
return None
@classmethod
def verify_logout_request(cls, logout_request, ticket):
"""Verify the single logout request came from the CAS server
Args:
cls (Class)
logout_request (Request)
ticket (str)
Returns:
bool: True if the logout_request is valid, False otherwise
"""
try:
session_index = cls.get_saml_slos(logout_request)
session_index = session_index[0].text
if session_index == ticket:
return True
else:
return False
except (AttributeError, IndexError, TypeError):
return False
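# Illustrative check (placeholder ticket value): given the raw SAML LogoutRequest
# body posted by the CAS server and the service ticket stored for the session,
#
#     CASClientV3.verify_logout_request(logout_request_xml, "ST-1234-abc")
#
# returns True only when the embedded <samlp:SessionIndex> matches the ticket.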
class CASClient(object):
def __new__(self, *args, **kwargs):
version = kwargs.pop('version')
if version in (1, '1'):
return CASClientV1(*args, **kwargs)
elif version in (2, '2'):
return CASClientV2(*args, **kwargs)
elif version in (3, '3'):
return CASClientV3(*args, **kwargs)
elif version == 'CAS_2_SAML_1_0':
return CASClientWithSAMLV1(*args, **kwargs)
raise ValueError('Unsupported CAS_VERSION %r' % version)
class CASClientBase(object):
logout_redirect_param_name = 'service'
def __init__(self, service_url=None, server_url=None,
extra_login_params=None, renew=False,
username_attribute=None, verify_ssl_certificate=True):
self.service_url = service_url
self.server_url = server_url
self.extra_login_params = extra_login_params or {}
self.renew = renew
self.username_attribute = username_attribute
self.verify_ssl_certificate = verify_ssl_certificate
pass
def verify_ticket(self, ticket):
"""Verify ticket.
Sub-class must implement this function.
Must return a triple
Returns:
triple: user, attributes, pgtiou
"""
raise NotImplementedError()
def get_login_url(self):
"""Generates CAS login URL
Returns:
str: Login URL
"""
params = {'service': self.service_url}
if self.renew:
params.update({'renew': 'true'})
params.update(self.extra_login_params)
url = urllib_parse.urljoin(self.server_url, 'login')
query = urllib_parse.urlencode(params)
return url + '?' + query
def get_logout_url(self, redirect_url=None):
"""Generates CAS logout URL
Returns:
str: Logout URL
"""
url = urllib_parse.urljoin(self.server_url, 'logout')
if redirect_url:
params = {self.logout_redirect_param_name: redirect_url}
url += '?' + urllib_parse.urlencode(params)
return url
def get_proxy_url(self, pgt):
"""Returns proxy url, given the proxy granting ticket
Returns:
str: Proxy URL
"""
params = urllib_parse.urlencode({'pgt': pgt, 'targetService': self.service_url})
return "%s/proxy?%s" % (self.server_url, params)
def get_proxy_ticket(self, pgt):
"""Get proxy ticket given the proxy granting ticket
Returns:
str: Proxy ticket.
Raises:
CASError: Non 200 http code or bad XML body.
"""
response = requests.get(self.get_proxy_url(pgt), verify=self.verify_ssl_certificate)
if response.status_code == 200:
from lxml import etree
root = etree.fromstring(response.content)
tickets = root.xpath(
"//cas:proxyTicket",
namespaces={"cas": "http://www.yale.edu/tp/cas"}
)
if len(tickets) == 1:
return tickets[0].text
errors = root.xpath(
"//cas:authenticationFailure",
namespaces={"cas": "http://www.yale.edu/tp/cas"}
)
if len(errors) == 1:
raise CASError(errors[0].attrib['code'], errors[0].text)
raise CASError("Bad http code %s" % response.status_code)
class CASClientV1(CASClientBase):
"""CAS Client Version 1"""
logout_redirect_param_name = 'url'
def verify_ticket(self, ticket):
"""Verifies CAS 1.0 authentication ticket.
Returns a (username, None, None) triple on success and (None, None, None) on failure.
"""
params = [('ticket', ticket), ('service', self.service_url)]
url = (urllib_parse.urljoin(self.server_url, 'validate') + '?' +
urllib_parse.urlencode(params))
page = requests.get(
url,
stream=True,
verify=self.verify_ssl_certificate
)
try:
# decode_unicode=True so the yes/no status line compares as str on Python 3
page_iterator = page.iter_lines(chunk_size=8192, decode_unicode=True)
verified = next(page_iterator).strip()
if verified == 'yes':
return next(page_iterator).strip(), None, None
else:
return None, None, None
finally:
page.close()
class CASClientV2(CASClientBase):
"""CAS Client Version 2"""
url_suffix = 'serviceValidate'
logout_redirect_param_name = 'url'
def __init__(self, proxy_callback=None, *args, **kwargs):
"""proxy_callback is for V2 and V3 so V3 is subclass of V2"""
self.proxy_callback = proxy_callback
super(CASClientV2, self).__init__(*args, **kwargs)
def verify_ticket(self, ticket):
"""Verifies CAS 2.0+/3.0+ XML-based authentication ticket and returns extended attributes"""
response = self.get_verification_response(ticket)
return self.verify_response(response)
def get_verification_response(self, ticket):
params = {
'ticket': ticket,
'service': self.service_url
}
if self.proxy_callback:
params.update({'pgtUrl': self.proxy_callback})
base_url = urllib_parse.urljoin(self.server_url, self.url_suffix)
page = requests.get(
base_url,
params=params,
verify=self.verify_ssl_certificate
)
try:
return page.content
finally:
page.close()
@classmethod
def parse_attributes_xml_element(cls, element):
attributes = {}
for attribute in element:
tag = attribute.tag.split("}").pop()
if tag in attributes:
if isinstance(attributes[tag], list):
attributes[tag].append(attribute.text)
else:
attributes[tag] = [attributes[tag]]
attributes[tag].append(attribute.text)
else:
if tag == 'attraStyle':
pass
else:
attributes[tag] = attribute.text
return attributes
@classmethod
def verify_response(cls, response):
logger.debug('%s response - %s', cls.__name__, response)
user, attributes, pgtiou = cls.parse_response_xml(response)
if len(attributes) == 0:
attributes = None
return user, attributes, pgtiou
@classmethod
def parse_response_xml(cls, response):
try:
from xml.etree import ElementTree
except ImportError:
from elementtree import ElementTree
user = None
attributes = {}
pgtiou = None
tree = ElementTree.fromstring(response)
if tree[0].tag.endswith('authenticationSuccess'):
""" Get namespace for looking for elements by tagname """
namespace = tree.tag[0:tree.tag.index('}')+1]
user = tree[0].find('.//' + namespace + 'user').text
for element in tree[0]:
if element.tag.endswith('proxyGrantingTicket'):
pgtiou = element.text
elif element.tag.endswith('attributes') or element.tag.endswith('norEduPerson'):
attributes = cls.parse_attributes_xml_element(element)
return user, attributes, pgtiou
class CASClientV3(CASClientV2, SingleLogoutMixin):
"""CAS Client Version 3"""
url_suffix = 'p3/serviceValidate'
logout_redirect_param_name = 'service'
@classmethod
def parse_attributes_xml_element(cls, element):
attributes = {}
for attribute in element:
tag = attribute.tag.split("}").pop()
if tag in attributes:
if isinstance(attributes[tag], list):
attributes[tag].append(attribute.text)
else:
attributes[tag] = [attributes[tag]]
attributes[tag].append(attribute.text)
else:
attributes[tag] = attribute.text
return attributes
@classmethod
def verify_response(cls, response):
logger.debug('%s response - %s', cls.__name__, response)
return cls.parse_response_xml(response)
SAML_1_0_NS = 'urn:oasis:names:tc:SAML:1.0:'
SAML_1_0_PROTOCOL_NS = '{' + SAML_1_0_NS + 'protocol' + '}'
SAML_1_0_ASSERTION_NS = '{' + SAML_1_0_NS + 'assertion' + '}'
SAML_ASSERTION_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
MajorVersion="1"
MinorVersion="1"
RequestID="{request_id}"
IssueInstant="{timestamp}">
<samlp:AssertionArtifact>{ticket}</samlp:AssertionArtifact></samlp:Request>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""
class CASClientWithSAMLV1(CASClientV2, SingleLogoutMixin):
"""CASClient 3.0+ with SAML"""
def verify_ticket(self, ticket, **kwargs):
"""Verifies CAS 3.0+ XML-based authentication ticket and returns extended attributes.
@date: 2011-11-30
@author: Carlos Gonzalez Vila <[email protected]>
Returns a (username, attributes, None) triple; the username is None on failure.
"""
try:
from xml.etree import ElementTree
except ImportError:
from elementtree import ElementTree
page = self.fetch_saml_validation(ticket)
try:
user = None
attributes = {}
response = page.content
tree = ElementTree.fromstring(response)
# Find the authentication status
success = tree.find('.//' + SAML_1_0_PROTOCOL_NS + 'StatusCode')
if success is not None and success.attrib['Value'].endswith('Success'):
# User is validated
name_identifier = tree.find('.//' + SAML_1_0_ASSERTION_NS + 'NameIdentifier')
if name_identifier is not None:
user = name_identifier.text
attrs = tree.findall('.//' + SAML_1_0_ASSERTION_NS + 'Attribute')
for at in attrs:
if self.username_attribute in list(at.attrib.values()):
user = at.find(SAML_1_0_ASSERTION_NS + 'AttributeValue').text
attributes['uid'] = user
values = at.findall(SAML_1_0_ASSERTION_NS + 'AttributeValue')
if len(values) > 1:
values_array = []
for v in values:
values_array.append(v.text)
attributes[at.attrib['AttributeName']] = values_array
else:
attributes[at.attrib['AttributeName']] = values[0].text
return user, attributes, None
finally:
page.close()
def fetch_saml_validation(self, ticket):
"""We do the SAML validation"""
headers = {
'soapaction': 'http://www.oasis-open.org/committees/security',
'cache-control': 'no-cache',
'pragma': 'no-cache',
'accept': 'text/xml',
'connection': 'keep-alive',
'content-type': 'text/xml; charset=utf-8',
}
params = {'TARGET': self.service_url}
saml_validate_url = urllib_parse.urljoin(
self.server_url, 'samlValidate',
)
return requests.post(
saml_validate_url,
self.get_saml_assertion(ticket),
params=params,
headers=headers)
@classmethod
def get_saml_assertion(cls, ticket):
"""Get SAML assertion
SAML request values:
- **RequestID** [REQUIRED]: unique identifier for the request
- **IssueInstant** [REQUIRED]: timestamp of the request
- **samlp:AssertionArtifact** [REQUIRED]: the valid CAS Service Ticket
obtained as a response parameter at login.
Example of `/samlValidate` POST request::
POST /cas/samlValidate?TARGET=
Host: cas.example.com
Content-Length: 491
Content-Type: text/xml
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
MajorVersion="1"
MinorVersion="1"
RequestID="_192.168.16.51.1024506224022"
IssueInstant="2002-06-19T17:03:44.022Z">
<samlp:AssertionArtifact>
ST-1-u4hrm3td92cLxpCvrjylcas.example.com
</samlp:AssertionArtifact>
</samlp:Request>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
see https://djangocas.dev/docs/4.0/CAS-Protocol-Specification.html#samlvalidate-cas-3-0
"""
# RequestID [REQUIRED] - unique identifier for the request
request_id = uuid4()
# e.g. 2014-06-02T09:21:03.071189
timestamp = datetime.datetime.now().isoformat()
return SAML_ASSERTION_TEMPLATE.format(
request_id=request_id,
timestamp=timestamp,
ticket=ticket,
).encode('utf8')
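# Minimal usage sketch: the server and service URLs below are placeholders,
# and verify_ticket() is not exercised because it needs a live CAS server.
if __name__ == "__main__":
    demo_client = CASClient(
        version=3,
        service_url="https://app.example.com/accounts/callback",
        server_url="https://cas.example.com/cas/",
    )
    print(demo_client.get_login_url())
    print(demo_client.get_logout_url(redirect_url="https://app.example.com/"))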
|
PypiClean
|
/graph_lib-0.4.8.tar.gz/graph_lib-0.4.8/graph_lib/wrappers/python/sweepcut.py
|
import numpy as np
from numpy.ctypeslib import ndpointer
import ctypes
from sys import platform
from os import path
libloc = path.join(path.abspath(path.dirname(__file__)),"../../lib/graph_lib_test/libgraph")
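# Factory for an ndpointer type whose from_param also accepts None, so the
# optional ``degrees`` argument of sweepcut() can be omitted when calling the
# C functions.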
def wrapped_ndptr(*args, **kwargs):
base = ndpointer(*args, **kwargs)
def from_param(cls, obj):
if obj is None:
return obj
return base.from_param(obj)
return type(base.__name__, (base,), {'from_param': classmethod(from_param)})
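# Sweep-cut over a vertex ordering. A rough reading of the arguments (the CSR
# interpretation is an assumption based on typical graph_lib usage):
#   n          - number of vertices
#   ai, aj, a  - CSR row pointers, column indices and values of the adjacency matrix
#   ids        - candidate vertices; values - their scores (used for sorting when flag == 0)
#   num        - how many entries of ids to sweep over
#   flag       - 0: let the C code sort ids by values first; nonzero: use ids as given
#   degrees    - optional per-vertex degree weights
# Returns (actual_length, actual_results, min_cond), e.g.:
#   length, cluster, cond = sweepcut(n, ai, aj, a, ids, len(ids), values, 0)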
def sweepcut(n,ai,aj,a,ids,num,values,flag,degrees = None):
float_type = ctypes.c_double
dt = np.dtype(ai[0])
(itype, ctypes_itype) = (np.int64, ctypes.c_int64) if dt.name == 'int64' else (np.uint32, ctypes.c_uint32)
dt = np.dtype(aj[0])
(vtype, ctypes_vtype) = (np.int64, ctypes.c_int64) if dt.name == 'int64' else (np.uint32, ctypes.c_uint32)
#load library
if platform == "linux2":
extension = ".so"
elif platform == "darwin":
extension = ".dylib"
elif platform == "win32":
extension = ".dll"
else:
print("Unknown system type!")
return (True,0,0)
lib=ctypes.cdll.LoadLibrary(libloc+extension)
if (vtype, itype) == (np.int64, np.int64):
fun = lib.sweepcut_with_sorting64 if flag == 0 else lib.sweepcut_without_sorting64
elif (vtype, itype) == (np.uint32, np.int64):
fun = lib.sweepcut_with_sorting32_64 if flag == 0 else lib.sweepcut_without_sorting32_64
else:
fun = lib.sweepcut_with_sorting32 if flag == 0 else lib.sweepcut_without_sorting32
#call C function
ids=np.array(ids,dtype=vtype)
values=np.array(values,dtype=float_type)
results=np.zeros(num,dtype=vtype)
fun.restype=ctypes_vtype
min_cond = np.array([0.0],dtype=float_type)
if degrees is not None:
degrees = np.array(degrees,dtype=float_type)
if flag == 0:
fun.argtypes=[ndpointer(float_type, flags="C_CONTIGUOUS"),
ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ctypes_vtype,ctypes_vtype,
ndpointer(ctypes_itype, flags="C_CONTIGUOUS"),
ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ndpointer(float_type, flags="C_CONTIGUOUS"),
ctypes_vtype,
ndpointer(float_type, flags="C_CONTIGUOUS"),
wrapped_ndptr(dtype=float_type,ndim=1,flags="C_CONTIGUOUS")
]
actual_length=fun(values,ids,results,num,n,ai,aj,a,0,min_cond,degrees)
else:
fun.argtypes=[ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ctypes_vtype,ctypes_vtype,
ndpointer(ctypes_itype, flags="C_CONTIGUOUS"),
ndpointer(ctypes_vtype, flags="C_CONTIGUOUS"),
ndpointer(float_type, flags="C_CONTIGUOUS"),
ctypes_vtype,
ndpointer(float_type, flags="C_CONTIGUOUS"),
wrapped_ndptr(dtype=float_type,ndim=1,flags="C_CONTIGUOUS")
]
actual_length=fun(ids,results,num,n,ai,aj,a,0,min_cond,degrees)
actual_results=np.empty(actual_length,dtype=vtype)
actual_results[:]=[results[i] for i in range(actual_length)]
min_cond = min_cond[0]
return (actual_length,actual_results,min_cond)
|
PypiClean
|
/contacthub-sdk-python-0.2.tar.gz/contacthub-sdk-python-0.2/contacthub/models/query/query.py
|
from contacthub._api_manager._api_customer import _CustomerAPIManager
from contacthub.errors.operation_not_permitted import OperationNotPermitted
from contacthub.lib.paginated_list import PaginatedList
from contacthub.lib.read_only_list import ReadOnlyList
from contacthub.models.customer import Customer
from contacthub.models.query.criterion import Criterion
from copy import deepcopy
class Query(object):
"""
Query object for applying the specified query in the APIs.
Use this class to interact with the DeclarativeAPIManager layer or the APIManager layer and return the queried
data as objects or in JSON format.
"""
def __init__(self, node, entity, previous_query=None):
"""
:param node: the node on which to fetch data
:param entity: the entity on which to apply the query
:param previous_query: an optional query used as the base for the new one
"""
self.node = node
self.entity = entity
self.condition = None
self.inner_query = None
if previous_query:
self.inner_query = previous_query
if previous_query['type'] == 'simple':
self.condition = previous_query['are']['condition']
@staticmethod
def _combine_query(query1, query2, operation):
"""
Take two queries and a combining operation and create a combined query.
:param query1: the first query to combine
:param query2: the second query to combine
:param operation: the operation for combining the query
:return: a new dictionary containing a combined query
"""
if query2.inner_query['type'] == 'combined' and query2.inner_query['conjunction'] == operation:
query_ret = deepcopy(query2.inner_query)
query_ret['queries'].append(query1.inner_query)
else:
if query1.inner_query['type'] == 'combined' and query1.inner_query['conjunction'] == operation:
query_ret = deepcopy(query1.inner_query)
query_ret['queries'].append(query2.inner_query)
else:
query_ret = {'type': 'combined', 'name': 'query', 'conjunction': operation,
'queries': [query1.inner_query, query2.inner_query]}
return query_ret
def __and__(self, other):
if not self.inner_query or not other.inner_query:
raise OperationNotPermitted('Cannot combine empty queries.')
return Query(node=self.node, entity=self.entity,
previous_query=self._combine_query(query1=self, query2=other, operation='INTERSECT'))
def __or__(self, other):
if not self.inner_query or not other.inner_query:
raise OperationNotPermitted('Cannot combine empty queries.')
return Query(node=self.node, entity=self.entity,
previous_query=self._combine_query(query1=self, query2=other, operation='UNION'))
def all(self):
"""
Get all queried data of an entity from the API
:return: a read-only, paginated list with all the queried objects
"""
complete_query = {'name': 'query', 'query': self.inner_query} if self.inner_query else None
if self.entity is Customer:
return PaginatedList(node=self.node, function=_CustomerAPIManager(self.node).get_all, entity_class=Customer,
query=complete_query)
def filter(self, criterion):
"""
Create a new API-like query for the Contacthub APIs (JSON format).
:param criterion: the Criterion object specifying the fields to query on
:return: a Query object containing the JSON object representing a query for the APIs
"""
if self.inner_query and self.inner_query['type'] == 'combined':
raise OperationNotPermitted('Cannot apply a filter on a combined query.')
query_ret = {'type': 'simple', 'name': 'query', 'are': {}}
new_query = {}
if self.condition is None:
new_query = self._filter(criterion)
elif self.condition['type'] == 'atomic':
new_query = self._and_query(deepcopy(self.condition), self._filter(criterion=criterion))
else:
if self.condition['conjunction'] == Criterion.COMPLEX_OPERATORS.AND:
new_query = deepcopy(self.condition)
new_query['conditions'].append(self._filter(criterion=criterion))
elif self.condition['conjunction'] == Criterion.COMPLEX_OPERATORS.OR:
new_query = self._and_query(deepcopy(self.condition), self._filter(criterion=criterion))
query_ret['are']['condition'] = new_query
return Query(node=self.node, entity=self.entity, previous_query=query_ret)
@staticmethod
def _and_query(query1, query2):
"""
Take two dictionaries and return a dictionary containing the two queries combined in AND.
:param query1: a dictionary containing a query to put in AND
:param query2: a dictionary containing a query to put in AND
:return: a new dictionary with the two queries in AND
"""
query_ret = {'type': 'composite', 'conditions': []}
query_ret['conditions'].append(query1)
query_ret['conditions'].append(query2)
query_ret['conjunction'] = 'and'
return query_ret
def _filter(self, criterion):
"""
Private method for creating the atomic or composite subqueries that make up the main query.
:param criterion: the Criterion object specifying the fields to query on
:return: a JSON object containing a subquery for creating the query for the APIs
"""
if criterion.operator in Criterion.SIMPLE_OPERATORS.OPERATORS:
atomic_query = {'type': 'atomic'}
entity_field = criterion.first_element
fields = [entity_field.field]
while not type(entity_field.entity) is type(self.entity):
entity_field = entity_field.entity
fields.append(entity_field.field)
attribute = ''
for field in reversed(fields):
attribute += field
attribute += '.'
attribute = attribute[:-1]
atomic_query['attribute'] = attribute
atomic_query['operator'] = criterion.operator
if criterion.second_element:
atomic_query['value'] = criterion.second_element
return atomic_query
else:
if criterion.operator in Criterion.COMPLEX_OPERATORS.OPERATORS:
composite_query = {'type': 'composite', 'conditions': [], 'conjunction': criterion.operator}
first_element = self._filter(criterion.first_element)
second_element = self._filter(criterion.second_element)
composite_query['conditions'].append(first_element)
composite_query['conditions'].append(second_element)
return composite_query
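# Minimal usage sketch (commented out: building a Criterion requires the
# entity-field helpers from contacthub.models.query.criterion, which are not
# shown in this module, and a configured node):
#
#   query = Query(node=node, entity=Customer)
#   young = query.filter(criterion_a)      # criterion_a: a Criterion instance
#   active = query.filter(criterion_b)
#   customers = (young & active).all()     # INTERSECT of the two queries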
|
PypiClean
|
/setuptools_cpp_cuda-0.1.7-py3-none-any.whl/setuptools_cpp_cuda/build_ext.py
|
import collections
import copy
import os
import re
import shlex
import subprocess
import sys
import warnings
from distutils.command.build_ext import build_ext
from pathlib import Path
from typing import List, Optional, Collection
from .extension import CUDA_HOME
from .ninja_build import is_ninja_available, _write_ninja_file_and_compile_objects
from .utils import _is_cuda_file, IS_WINDOWS
COMMON_MSVC_FLAGS = ['/MD', '/wd4819', '/wd4251', '/wd4244', '/wd4267', '/wd4275', '/wd4018', '/wd4190', '/EHsc']
MSVC_IGNORE_CUDAFE_WARNINGS = [
'base_class_has_different_dll_interface',
'field_without_dll_interface',
'dll_interface_conflict_none_assumed',
'dll_interface_conflict_dllexport_assumed'
]
COMMON_NVCC_FLAGS = [
'-D__CUDA_NO_HALF_OPERATORS__',
'-D__CUDA_NO_HALF_CONVERSIONS__',
'-D__CUDA_NO_BFLOAT16_CONVERSIONS__',
'-D__CUDA_NO_HALF2_OPERATORS__',
'--expt-relaxed-constexpr'
]
class BuildExtension(build_ext, object):
r'''
A custom :mod:`setuptools` build extension.
This :class:`setuptools.build_ext` subclass takes care of passing the
minimum required compiler flags (e.g. ``-std=c++14``) as well as mixed
C++/CUDA compilation (and support for CUDA files in general).
When using :class:`build_cuda_ext`, it is allowed to supply a dictionary
for ``extra_compile_args`` (rather than the usual list) that maps from
languages (``cxx`` or ``nvcc``) to a list of additional compiler flags to
supply to the compiler. This makes it possible to supply different flags to
the C++ and CUDA compiler during mixed compilation.
``use_ninja`` (bool): If ``use_ninja`` is ``True``, then we
attempt to build using the Ninja backend. Ninja greatly speeds up
compilation compared to the standard ``setuptools.build_ext``.
Falls back to the standard distutils backend if Ninja is not available.
.. note::
By default, the Ninja backend uses #CPUS + 2 workers to build the
extension. This may use up too many resources on some systems. One
can control the number of workers by setting the `MAX_JOBS` environment
variable to a non-negative number.
'''
@classmethod
def with_options(cls, **options):
r'''
Returns a subclass with alternative constructor that extends any original keyword
arguments to the original constructor with the given options.
'''
class cls_with_options(cls): # type: ignore
def __init__(self, *args, **kwargs):
kwargs.update(options)
super().__init__(*args, **kwargs)
return cls_with_options
def __init__(self, *args, **kwargs) -> None:
super(BuildExtension, self).__init__(*args, **kwargs)
self.no_python_abi_suffix = kwargs.get("no_python_abi_suffix", False)
self.use_ninja = kwargs.get('use_ninja', False)
if self.use_ninja:
# Test if we can use ninja. Fallback otherwise.
if not is_ninja_available():
warnings.warn('Attempted to use ninja as the build_cuda_ext backend but we could not find ninja.'
' Falling back to using the slow distutils backend.')
self.use_ninja = False
def finalize_options(self) -> None:
super().finalize_options()
if self.use_ninja:
self.force = True
def build_extensions(self) -> None:
self.compiler.src_extensions += ['.cu', '.cuh']
# Save the original _compile method for later.
if self.compiler.compiler_type == 'msvc':
self.compiler._cpp_extensions += ['.cu', '.cuh']
original_compile = self.compiler.compile
original_spawn = self.compiler.spawn
else:
original_compile = self.compiler._compile
def append_std14_if_no_std_present(cflags) -> None:
# NVCC does not allow multiple -std to be passed, so we avoid
# overriding the option if the user explicitly passed it.
cpp_flag_prefix = '/std:' if self.compiler.compiler_type == 'msvc' else '-std='
cpp_flag = cpp_flag_prefix + 'c++14'
if not any(flag.startswith(cpp_flag_prefix) for flag in cflags):
cflags.append(cpp_flag)
def unix_cuda_flags(cflags):
cflags = (COMMON_NVCC_FLAGS +
['--compiler-options', "'-fPIC'"] +
cflags + _get_cuda_arch_flags(cflags))
# NVCC does not allow multiple -ccbin/--compiler-bindir to be passed, so we avoid
# overriding the option if the user explicitly passed it.
_ccbin = os.getenv("CC")
if (
_ccbin is not None
and not any([flag.startswith('-ccbin') or flag.startswith('--compiler-bindir') for flag in cflags])
):
cflags.extend(['-ccbin', _ccbin])
return cflags
def convert_to_absolute_paths_inplace(paths):
# Helper function. See Note [Absolute include_dirs]
if paths is not None:
for i in range(len(paths)):
paths[i] = str(Path(paths[i]).absolute())
def unix_wrap_single_compile(obj, src, ext, cc_args, extra_postargs, pp_opts) -> None:
# Copy before we make any modifications.
cflags = copy.deepcopy(extra_postargs)
original_compiler = self.compiler.compiler_so
try:
if _is_cuda_file(src):
nvcc = [str(CUDA_HOME / 'bin' / 'nvcc')]
self.compiler.set_executable('compiler_so', nvcc)
if isinstance(cflags, dict):
cflags = cflags['nvcc']
cflags = unix_cuda_flags(cflags)
elif isinstance(cflags, dict):
cflags = cflags['cxx']
append_std14_if_no_std_present(cflags)
original_compile(obj, src, ext, cc_args, cflags, pp_opts)
finally:
# Put the original compiler back in place.
self.compiler.set_executable('compiler_so', original_compiler)
def unix_wrap_ninja_compile(sources,
output_dir=None,
macros=None,
include_dirs=None,
debug=0,
extra_preargs=None,
extra_postargs=None,
depends=None):
r"""Compiles sources by outputting a ninja file and running it."""
# NB: I copied some lines from self.compiler (which is an instance
# of distutils.UnixCCompiler). See the following link.
# https://github.com/python/cpython/blob/f03a8f8d5001963ad5b5b28dbd95497e9cc15596/Lib/distutils/ccompiler.py#L564-L567
# This can be fragile, but a lot of other repos also do this
# (see https://github.com/search?q=_setup_compile&type=Code)
# so it is probably OK; we'll also get CI signal if/when
# we update our python version (which is when distutils can be
# upgraded)
# Use absolute path for output_dir so that the object file paths
# (`objects`) get generated with absolute paths.
output_dir = Path(output_dir).absolute()
# See Note [Absolute include_dirs]
convert_to_absolute_paths_inplace(self.compiler.include_dirs)
_, objects, extra_postargs, pp_opts, _ = \
self.compiler._setup_compile(output_dir, macros,
include_dirs, sources,
depends, extra_postargs)
common_cflags = self.compiler._get_cc_args(pp_opts, debug, extra_preargs)
extra_cc_cflags = self.compiler.compiler_so[1:]
with_cuda = any(map(_is_cuda_file, sources))
# extra_postargs can be either:
# - a dict mapping cxx/nvcc to extra flags
# - a list of extra flags.
if isinstance(extra_postargs, dict):
post_cflags = extra_postargs['cxx']
else:
post_cflags = list(extra_postargs)
append_std14_if_no_std_present(post_cflags)
cuda_post_cflags = None
cuda_cflags = None
if with_cuda:
cuda_cflags = common_cflags
if isinstance(extra_postargs, dict):
cuda_post_cflags = extra_postargs['nvcc']
else:
cuda_post_cflags = list(extra_postargs)
cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
append_std14_if_no_std_present(cuda_post_cflags)
cuda_cflags = [shlex.quote(f) for f in cuda_cflags]
cuda_post_cflags = [shlex.quote(f) for f in cuda_post_cflags]
if isinstance(extra_postargs, dict) and 'nvcc_dlink' in extra_postargs:
cuda_dlink_post_cflags = unix_cuda_flags(extra_postargs['nvcc_dlink'])
else:
cuda_dlink_post_cflags = None
_write_ninja_file_and_compile_objects(
sources=sources,
objects=objects,
cflags=[shlex.quote(f) for f in extra_cc_cflags + common_cflags],
post_cflags=[shlex.quote(f) for f in post_cflags],
cuda_cflags=cuda_cflags,
cuda_post_cflags=cuda_post_cflags,
cuda_dlink_post_cflags=cuda_dlink_post_cflags,
build_directory=output_dir,
verbose=True,
with_cuda=with_cuda)
# Return *all* object filenames, not just the ones we just built.
return objects
def win_cuda_flags(cflags):
return (COMMON_NVCC_FLAGS +
cflags + _get_cuda_arch_flags(cflags))
def win_wrap_single_compile(sources,
output_dir=None,
macros=None,
include_dirs=None,
debug=0,
extra_preargs=None,
extra_postargs=None,
depends=None):
self.cflags = copy.deepcopy(extra_postargs)
extra_postargs = None
def spawn(cmd):
# Using regex to match src, obj and include files
src_regex = re.compile('/T([pc])(.*)')
src_list = [
m.group(2) for m in (src_regex.match(elem) for elem in cmd)
if m
]
obj_regex = re.compile('/Fo(.*)')
obj_list = [
m.group(1) for m in (obj_regex.match(elem) for elem in cmd)
if m
]
include_regex = re.compile(r'(([-/])I.*)')
include_list = [
m.group(1)
for m in (include_regex.match(elem) for elem in cmd) if m
]
if len(src_list) >= 1 and len(obj_list) >= 1:
src = src_list[0]
obj = obj_list[0]
if _is_cuda_file(src):
nvcc = str(CUDA_HOME / 'bin' / 'nvcc')
if isinstance(self.cflags, dict):
cflags = self.cflags['nvcc']
elif isinstance(self.cflags, list):
cflags = self.cflags
else:
cflags = []
cflags = win_cuda_flags(cflags) + ['--use-local-env']
for flag in COMMON_MSVC_FLAGS:
cflags = ['-Xcompiler', flag] + cflags
for ignore_warning in MSVC_IGNORE_CUDAFE_WARNINGS:
cflags = ['-Xcudafe', '--diag_suppress=' + ignore_warning] + cflags
cmd = [str(nvcc), '-c', src, '-o', obj] + include_list + cflags
elif isinstance(self.cflags, dict):
cflags = COMMON_MSVC_FLAGS + self.cflags['cxx']
cmd += cflags
elif isinstance(self.cflags, list):
cflags = COMMON_MSVC_FLAGS + self.cflags
cmd += cflags
return original_spawn(cmd)
try:
self.compiler.spawn = spawn
return original_compile(sources, output_dir, macros,
include_dirs, debug, extra_preargs,
extra_postargs, depends)
finally:
self.compiler.spawn = original_spawn
def win_wrap_ninja_compile(sources,
output_dir=None,
macros=None,
include_dirs=None,
debug=0,
extra_preargs=None,
extra_postargs=None,
depends=None):
if not self.compiler.initialized:
self.compiler.initialize()
output_dir = Path(output_dir).absolute()
# Note [Absolute include_dirs]
# Convert any relative paths in self.compiler.include_dirs to absolute paths.
# For ninja builds, the build does not happen in the local directory but in a
# build folder created by the script, so relative paths lose their meaning.
# To be consistent with the JIT extension, we allow users to pass relative
# include_dirs in setuptools.setup and convert them to absolute paths here.
convert_to_absolute_paths_inplace(self.compiler.include_dirs)
_, objects, extra_postargs, pp_opts, _ = \
self.compiler._setup_compile(str(output_dir), macros,
include_dirs, sources,
depends, extra_postargs)
common_cflags = extra_preargs or []
cflags = []
if debug:
cflags.extend(self.compiler.compile_options_debug)
else:
cflags.extend(self.compiler.compile_options)
common_cflags.extend(COMMON_MSVC_FLAGS)
cflags = cflags + common_cflags + pp_opts
with_cuda = any(map(_is_cuda_file, sources))
# extra_postargs can be either:
# - a dict mapping cxx/nvcc to extra flags
# - a list of extra flags.
if isinstance(extra_postargs, dict):
post_cflags = extra_postargs['cxx']
else:
post_cflags = list(extra_postargs)
append_std14_if_no_std_present(post_cflags)
cuda_post_cflags = None
cuda_cflags = None
if with_cuda:
cuda_cflags = ['--use-local-env']
for common_cflag in common_cflags:
cuda_cflags.append('-Xcompiler')
cuda_cflags.append(common_cflag)
for ignore_warning in MSVC_IGNORE_CUDAFE_WARNINGS:
cuda_cflags.append('-Xcudafe')
cuda_cflags.append('--diag_suppress=' + ignore_warning)
cuda_cflags.extend(pp_opts)
if isinstance(extra_postargs, dict):
cuda_post_cflags = extra_postargs['nvcc']
else:
cuda_post_cflags = list(extra_postargs)
cuda_post_cflags = win_cuda_flags(cuda_post_cflags)
cflags = _nt_quote_args(cflags)
post_cflags = _nt_quote_args(post_cflags)
if with_cuda:
cuda_cflags = _nt_quote_args(cuda_cflags)
cuda_post_cflags = _nt_quote_args(cuda_post_cflags)
if isinstance(extra_postargs, dict) and 'nvcc_dlink' in extra_postargs:
cuda_dlink_post_cflags = win_cuda_flags(extra_postargs['nvcc_dlink'])
else:
cuda_dlink_post_cflags = None
_write_ninja_file_and_compile_objects(
sources=sources,
objects=objects,
cflags=cflags,
post_cflags=post_cflags,
cuda_cflags=cuda_cflags,
cuda_post_cflags=cuda_post_cflags,
cuda_dlink_post_cflags=cuda_dlink_post_cflags,
build_directory=output_dir,
verbose=True,
with_cuda=with_cuda)
# Return *all* object filenames, not just the ones we just built.
return objects
# Monkey-patch the _compile or compile method.
# https://github.com/python/cpython/blob/dc0284ee8f7a270b6005467f26d8e5773d76e959/Lib/distutils/ccompiler.py#L511
if self.compiler.compiler_type == 'msvc':
if self.use_ninja:
self.compiler.compile = win_wrap_ninja_compile
else:
self.compiler.compile = win_wrap_single_compile
else:
if self.use_ninja:
self.compiler.compile = unix_wrap_ninja_compile
else:
self.compiler._compile = unix_wrap_single_compile
build_ext.build_extensions(self)
def get_ext_filename(self, ext_name):
# Get the original shared library name. For Python 3, this name will be
# suffixed with "<SOABI>.so", where <SOABI> will be something like
# cpython-37m-x86_64-linux-gnu.
ext_filename = super(BuildExtension, self).get_ext_filename(ext_name)
# If `no_python_abi_suffix` is `True`, we omit the Python 3 ABI
# component. This makes building shared libraries with setuptools that
# aren't Python modules nicer.
if self.no_python_abi_suffix:
# The parts will be e.g. ["my_extension", "cpython-37m-x86_64-linux-gnu", "so"].
ext_filename_parts = ext_filename.split('.')
# Omit the second to last element.
without_abi = ext_filename_parts[:-2] + ext_filename_parts[-1:]
ext_filename = '.'.join(without_abi)
return ext_filename
def _add_compile_flag(self, extension, flag):
extension.extra_compile_args = copy.deepcopy(extension.extra_compile_args)
if isinstance(extension.extra_compile_args, dict):
for args in extension.extra_compile_args.values():
args.append(flag)
else:
extension.extra_compile_args.append(flag)
def _get_cuda_arch_flags(cflags: Optional[List[str]] = None) -> List[str]:
r'''
Determine CUDA arch flags to use.
For an arch, say "6.1", the added compile flag will be
``-gencode=arch=compute_61,code=sm_61``.
For an added "+PTX", an additional
``-gencode=arch=compute_xx,code=compute_xx`` is added.
See select_compute_arch.cmake for corresponding named and supported arches
when building with CMake.
'''
# If cflags is given, there may already be user-provided arch flags in it
# (from `extra_compile_args`)
if cflags is not None:
for flag in cflags:
if 'arch' in flag:
return []
return []
# Note: keep combined names ("arch1+arch2") above single names, otherwise
# string replacement may not do the right thing
named_arches = collections.OrderedDict([
('Kepler+Tesla', '3.7'),
('Kepler', '3.0;3.5+PTX'),
('Maxwell+Tegra', '5.3'),
('Maxwell', '5.0;5.2+PTX'),
('Pascal', '6.0;6.1+PTX'),
('Volta', '7.0+PTX'),
('Turing', '7.5+PTX'),
('Ampere', '8.0;8.6+PTX'),
('Ada', '8.9+PTX'),
('Hopper', '9.0+PTX'),
])
supported_arches = ['3.0', '3.5', '3.7', '5.0', '5.2', '5.3', '6.0', '6.1', '6.2',
'7.0', '7.2', '7.5', '8.0', '8.6', '8.9', '9.0']
valid_arch_strings = supported_arches + [s + "+PTX" for s in supported_arches]
# The default is sm_30 for CUDA 9.x and 10.x
# First check for an env var (same as used by the main setup.py)
# Can be one or more architectures, e.g. "6.1" or "3.5;5.2;6.0;6.1;7.0+PTX"
# See cmake/Modules_CUDA_fix/upstream/FindCUDA/select_compute_arch.cmake
# If not given, determine what's best for the GPU / CUDA version that can be found
arch_list = []
try:
# the assumption is that the extension should run on any of the currently visible cards,
# which could be of different types - therefore all archs for visible cards should be included
supported_sm = [int(arch.split('_')[1]) for arch in get_arch_list() if 'sm_' in arch]
max_supported_sm = max((sm // 10, sm % 10) for sm in supported_sm)
for cap in get_device_capability_str():
capability = (int(cap[0]), int(cap[2]))
# Capability of the device may be higher than what's supported by the user's
# NVCC, causing compilation error. User's NVCC is expected to match the one
# used to build pytorch, so we use the maximum supported capability of pytorch
# to clamp the capability.
capability = min(max_supported_sm, capability)
arch = f'{capability[0]}.{capability[1]}'
if arch not in arch_list:
arch_list.append(arch)
arch_list = sorted(arch_list)
arch_list[-1] += '+PTX'
except subprocess.CalledProcessError:
return []
flags = []
for arch in arch_list:
if arch not in valid_arch_strings:
raise ValueError(f"Unknown CUDA arch ({arch}) or GPU not supported")
else:
num = arch[0] + arch[2]
flags.append(f'-gencode=arch=compute_{num},code=sm_{num}')
if arch.endswith('+PTX'):
flags.append(f'-gencode=arch=compute_{num},code=compute_{num}')
return sorted(list(set(flags)))
def _nt_quote_args(args: Optional[List[str]]) -> List[str]:
"""Quote command-line arguments for DOS/Windows conventions.
Just wraps every argument which contains blanks in double quotes, and
returns a new argument list.
"""
# Cover None-type
if not args:
return []
return [f'"{arg}"' if ' ' in arg else arg for arg in args]
def fix_dll(libraries: Collection[str]) -> List[str]:
"""
Fix the Python 3.8+ Windows problem ("ImportError: DLL load failed: The specified module could not be found") by
using the static versions of the included libraries. Alternatively you can use :func:`os.add_dll_directory` in your
module's "__init__.py" to make your software locate the missing DLLs.
:param libraries: List of libraries to be used.
:return: The list of libraries with "_static" appended to each name (only on Windows with Python 3.8+).
"""
if sys.version_info >= (3, 8) and IS_WINDOWS:
libraries = list(libraries) # To drain generators
for i, library in enumerate(libraries):
if not library.endswith('_static'):
libraries[i] += '_static'
return list(libraries)
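# Minimal setup.py sketch (commented out) showing the dict form of
# ``extra_compile_args`` described in the BuildExtension docstring; the
# extension name and source files are placeholders:
#
#   from setuptools import setup, Extension
#   setup(
#       name="my_cuda_ext",
#       ext_modules=[Extension(
#           "my_cuda_ext",
#           sources=["my_cuda_ext.cpp", "kernels.cu"],
#           extra_compile_args={"cxx": ["-O3"], "nvcc": ["-O3"]},
#       )],
#       cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
#   )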
|
PypiClean
|
/com.precisely.apis-16.0.3-py3-none-any.whl/com/precisely/apis/model/grid.py
|
import re # noqa: F401
import sys # noqa: F401
from com.precisely.apis.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from com.precisely.apis.exceptions import ApiAttributeError
def lazy_import():
from com.precisely.apis.model.common_geometry import CommonGeometry
globals()['CommonGeometry'] = CommonGeometry
class Grid(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute;
for a top-level variable ``var_name`` this is ``(var_name,)``. The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute;
for a top-level variable ``var_name`` this is ``(var_name,)``. The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'code': (str,), # noqa: E501
'geometry': (CommonGeometry,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'code': 'code', # noqa: E501
'geometry': 'geometry', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""Grid - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
code (str): [optional] # noqa: E501
geometry (CommonGeometry): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""Grid - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
code (str): [optional] # noqa: E501
geometry (CommonGeometry): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")
|
PypiClean
|
/phiture_soda_core-3.0.14.1-py3-none-any.whl/soda/scan.py
|
from __future__ import annotations
import json
import logging
import os
import textwrap
from datetime import datetime, timezone
from soda.__version__ import SODA_CORE_VERSION
from soda.common.json_helper import JsonHelper
from soda.common.log import Log, LogLevel
from soda.common.logs import Logs
from soda.common.undefined_instance import undefined
from soda.execution.check.check import Check
from soda.execution.check_outcome import CheckOutcome
from soda.execution.data_source_scan import DataSourceScan
from soda.execution.metric.derived_metric import DerivedMetric
from soda.execution.metric.metric import Metric
from soda.profiling.discover_table_result_table import DiscoverTablesResultTable
from soda.profiling.profile_columns_result import ProfileColumnsResultTable
from soda.profiling.sample_tables_result import SampleTablesResultTable
from soda.sampler.default_sampler import DefaultSampler
from soda.sampler.sampler import Sampler
from soda.soda_cloud.historic_descriptor import HistoricDescriptor
from soda.soda_cloud.soda_cloud import SodaCloud
from soda.sodacl.location import Location
from soda.sodacl.sodacl_cfg import SodaCLCfg
from soda.telemetry.soda_telemetry import SodaTelemetry
logger = logging.getLogger(__name__)
verbose = False
soda_telemetry = SodaTelemetry.get_instance()
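# Typical driving sequence for the Scan class defined below (a sketch; the
# data source name and file paths are placeholders):
#
#   scan = Scan()
#   scan.set_data_source_name("my_datasource")
#   scan.set_scan_definition_name("nightly_scan")
#   scan.add_configuration_yaml_file("configuration.yml")
#   scan.add_sodacl_yaml_file("checks.yml")
#   exit_code = scan.execute()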
class Scan:
def __init__(self):
from soda.configuration.configuration import Configuration
from soda.execution.check.check import Check
from soda.execution.data_source_manager import DataSourceManager
from soda.execution.query.query import Query
# Using this instead of utcnow() as that creates tz naive object, this has explicitly utc set. More info https://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow
now = datetime.now(tz=timezone.utc)
self.sampler: Sampler | None = None
self._logs = Logs(logger)
self._scan_definition_name: str | None = None
self._scan_results_file: str | None = None
self._data_source_name: str | None = None
self._variables: dict[str, object] = {"NOW": now.isoformat()}
self._configuration: Configuration = Configuration(scan=self)
self._sodacl_cfg: SodaCLCfg = SodaCLCfg(scan=self)
self._file_paths: set[str] = set()
self._data_timestamp: datetime = now
self._scan_start_timestamp: datetime = now
# FIXME: this attribute cannot be None if typed as `datetime`
self._scan_end_timestamp: datetime | None = None
self._data_source_manager = DataSourceManager(self._logs, self._configuration)
self._data_source_scans: list[DataSourceScan] = []
self._metrics: set[Metric] = set()
self._checks: list[Check] = []
self._queries: list[Query] = []
self._profile_columns_result_tables: list[ProfileColumnsResultTable] = []
self._discover_tables_result_tables: list[DiscoverTablesResultTable] = []
self._sample_tables_result_tables: list[SampleTablesResultTable] = []
self._logs.info(f"Soda Core {SODA_CORE_VERSION}")
self.scan_results: dict = {}
def build_scan_results(self) -> dict:
checks = [check.get_dict() for check in self._checks if check.outcome is not None and check.archetype is None]
automated_monitoring_checks = [
check.get_dict() for check in self._checks if check.outcome is not None and check.archetype is not None
]
# TODO: [SODA-608] separate profile columns and sample tables by aligning with the backend team
profiling = [
profile_table.get_dict()
for profile_table in self._profile_columns_result_tables + self._sample_tables_result_tables
]
return JsonHelper.to_jsonnable( # type: ignore
{
"definitionName": self._scan_definition_name,
"defaultDataSource": self._data_source_name,
"dataTimestamp": self._data_timestamp,
"scanStartTimestamp": self._scan_start_timestamp,
"scanEndTimestamp": self._scan_end_timestamp,
"hasErrors": self.has_error_logs(),
"hasWarnings": self.has_check_warns(),
"hasFailures": self.has_check_fails(),
"metrics": [metric.get_dict() for metric in self._metrics],
# If archetype is not None, it means that check is automated monitoring
"checks": checks,
# TODO Queries are not supported by Soda Cloud yet.
# "queries": [query.get_cloud_dict() for query in scan._queries],
"automatedMonitoringChecks": automated_monitoring_checks,
"profiling": profiling,
"metadata": [
discover_tables_result.get_dict() for discover_tables_result in self._discover_tables_result_tables
],
"logs": [log.get_dict() for log in self._logs.logs],
}
)
def set_data_source_name(self, data_source_name: str):
"""
Specifies which datasource to use for the checks.
"""
self._data_source_name = data_source_name
def set_scan_definition_name(self, scan_definition_name: str):
"""
The scan definition name is required if the scan is connected to Soda Cloud in order to correlate subsequent scans from the same pipeline.
"""
self._scan_definition_name = scan_definition_name
def set_verbose(self, verbose_var: bool = True):
self._logs.verbose = verbose_var
global verbose
verbose = verbose_var
def set_scan_results_file(self, set_scan_results_file: str):
self._scan_results_file = set_scan_results_file
def add_configuration_yaml_file(self, file_path: str):
"""
Adds configurations from a YAML file on the given path.
:param str file_path: path to a configuration file.
~ will be expanded to the user home dir.
"""
try:
configuration_yaml_str = self._read_file("configuration", file_path)
self._parse_configuration_yaml_str(
configuration_yaml_str=configuration_yaml_str,
file_path=file_path,
)
except Exception as e:
self._logs.error(
f"Could not add configuration from file path {file_path}",
exception=e,
)
def add_configuration_yaml_files(self, path: str, recursive: bool | None = True, suffixes: list[str] | None = None):
"""
Adds configurations from all YAML files matching the given file path, or scans the given path as a directory.
:param str path: typically the path to a directory in which to search for configuration files, but it can also
point to a single configuration file. ~ will be expanded to the user home dir.
:param bool recursive: controls if nested directories also will be scanned. Default recursive=True.
:param List[str] suffixes: is optional and is used when recursive scanning directories to only load files
having a given extension or suffix. Default suffixes=[".yml", ".yaml"]
"""
try:
configuration_yaml_file_paths = self._collect_file_paths(path=path, recursive=recursive, suffixes=suffixes)
for configuration_yaml_file_path in configuration_yaml_file_paths:
self.add_configuration_yaml_file(file_path=configuration_yaml_file_path)
except Exception as e:
self._logs.error(f"Could not add configuration files from dir {path}", exception=e)
def add_configuration_yaml_str(self, environment_yaml_str: str, file_path: str = "yaml string"):
"""
Adds configurations from a YAML formatted string.
Parameter file_path is optional and can be used to get the location of the log/error in the logs.
"""
try:
self._parse_configuration_yaml_str(
configuration_yaml_str=environment_yaml_str,
file_path=file_path,
)
except Exception as e:
self._logs.error(
f"Could not add environment configurations from string",
exception=e,
)
def _parse_configuration_yaml_str(self, configuration_yaml_str: str, file_path: str = "yaml string"):
from soda.configuration.configuration_parser import ConfigurationParser
environment_parse = ConfigurationParser(
configuration=self._configuration,
logs=self._logs,
file_path=file_path,
)
environment_parse.parse_environment_yaml_str(configuration_yaml_str)
def add_spark_session(self, spark_session, data_source_name: str = "spark_df"):
"""
Pass a spark_session to the scan. Only required in case of PySpark scans.
"""
try:
self._configuration.add_spark_session(data_source_name=data_source_name, spark_session=spark_session)
except Exception as e:
self._logs.error(
f"Could not add environment spark session for data_source {data_source_name}",
exception=e,
)
def add_sodacl_yaml_files(
self,
path: str,
recursive: bool | None = True,
suffixes: list[str] | None = None,
):
"""
Adds all the files in the given directory to the scan as SodaCL files.
:param str path: typically the path to a directory in which to search for SodaCL files, but it can also point
to a single SodaCL file. ~ will be expanded to the user home dir.
:param bool recursive: controls if nested directories also will be scanned. Default recursive=True.
:param List[str] suffixes: is optional and is used when recursive scanning directories to only load files
having a given extension or suffix. Default suffixes=[".yml", ".yaml"]
"""
try:
sodacl_yaml_file_paths = self._collect_file_paths(path=path, recursive=recursive, suffixes=suffixes)
for sodacl_yaml_file_path in sodacl_yaml_file_paths:
self.add_sodacl_yaml_file(file_path=sodacl_yaml_file_path)
except Exception as e:
self._logs.error(f"Could not add SodaCL files from dir {dir}", exception=e)
def _collect_file_paths(
self,
path: str,
recursive: bool | None,
suffixes: list[str] | None,
) -> list[str]:
if isinstance(path, str):
if path.endswith("/"):
path = path[:-1]
file_system = self._configuration.file_system
path = file_system.expand_user(path)
paths_to_scan = [path]
file_paths = []
is_root = True
while len(paths_to_scan) > 0:
path = paths_to_scan.pop()
if file_system.exists(path):
if file_system.is_file(path) and (
suffixes is None or any(suffix is None or path.endswith(suffix) for suffix in suffixes)
):
file_paths.append(path)
elif file_system.is_dir(path) and (is_root or recursive):
is_root = False
if suffixes is None:
suffixes = [".yml", ".yaml"]
for dir_entry in file_system.scan_dir(path):
paths_to_scan.append(f"{path}/{dir_entry.name}")
else:
self._logs.error(f'Path "{path}" does not exist')
return file_paths
else:
self._logs.error(f"Path is not a string: {type(path).__name__}")
return []
def add_sodacl_yaml_file(self, file_path: str):
"""
Add a SodaCL YAML file to the scan on the given file_path.
"""
try:
sodacl_yaml_str = self._read_file("SodaCL", file_path)
if file_path not in self._file_paths:
self._file_paths.add(file_path)
self._parse_sodacl_yaml_str(sodacl_yaml_str=sodacl_yaml_str, file_path=file_path)
else:
self._logs.debug(f"Skipping duplicate file addition for {file_path}")
except Exception as e:
self._logs.error(f"Could not add SodaCL file {file_path}", exception=e)
def add_sodacl_yaml_str(self, sodacl_yaml_str: str):
"""
Add a SodaCL YAML string to the scan.
"""
try:
unique_name = "sodacl_string"
if unique_name in self._file_paths:
number: int = 2
while f"{unique_name}_{number}" in self._file_paths:
number += 1
unique_name = f"{unique_name}_{number}"
file_path = f"{unique_name}.yml"
self._parse_sodacl_yaml_str(sodacl_yaml_str=sodacl_yaml_str, file_path=file_path)
except Exception as e:
self._logs.error(f"Could not add SodaCL string", exception=e)
def _parse_sodacl_yaml_str(self, sodacl_yaml_str: str, file_path: str = None):
from soda.sodacl.sodacl_parser import SodaCLParser
sodacl_parser = SodaCLParser(
sodacl_cfg=self._sodacl_cfg,
logs=self._logs,
file_path=file_path,
data_source_name=self._data_source_name,
)
sodacl_parser.parse_sodacl_yaml_str(sodacl_yaml_str)
def _read_file(self, file_type: str, file_path: str) -> str:
file_location = Location(file_path)
file_system = self._configuration.file_system
resolved_file_path = file_system.expand_user(file_path)
if not file_system.exists(resolved_file_path):
self._logs.error(
f"File {resolved_file_path} does not exist",
location=file_location,
)
return None
if file_system.is_dir(resolved_file_path):
self._logs.error(
f"File {resolved_file_path} exists, but is a directory",
location=file_location,
)
return None
try:
self._logs.debug(f'Reading {file_type} file "{resolved_file_path}"')
file_content_str = file_system.file_read_as_str(resolved_file_path)
if not isinstance(file_content_str, str):
self._logs.error(
f"Error reading file {resolved_file_path} from the file system",
location=file_location,
)
return file_content_str
except Exception as e:
self._logs.error(
f"Error reading file {resolved_file_path} from the file system",
location=file_location,
exception=e,
)
def add_variables(self, variables: dict[str, str]):
"""
Add variables to the scan. Keys and values must be strings.
"""
try:
self._variables.update(variables)
except Exception as e:
variables_text = json.dumps(variables)
self._logs.error(f"Could not add variables {variables_text}", exception=e)
def disable_telemetry(self):
"""
Disables all telemetry. For more information see Soda's public statements on telemetry. TODO add links.
"""
self._configuration.telemetry = None
def execute(self) -> int:
self._logs.debug("Scan execution starts")
exit_value = 0
try:
from soda.execution.column import Column
from soda.execution.metric.column_metrics import ColumnMetrics
from soda.execution.partition import Partition
from soda.execution.table import Table
# Disable Soda Cloud if it is not properly configured
if self._configuration.soda_cloud:
if not isinstance(self._scan_definition_name, str):
self._logs.error(
"scan.set_scan_definition_name(...) is not set and it is required to make the Soda Cloud integration work. For this scan, Soda Cloud will be disabled."
)
self._configuration.soda_cloud = None
else:
if self._configuration.soda_cloud.is_samples_disabled():
self._configuration.sampler = DefaultSampler()
else:
self._configuration.sampler = DefaultSampler()
# Override the sampler, if it is configured programmatically
if self.sampler is not None:
self._configuration.sampler = self.sampler
if self._configuration.sampler:
# ensure the sampler is configured with the scan logs
self._configuration.sampler.logs = self._logs
# Resolve the for each table checks and add them to the scan_cfg data structures
self.__resolve_for_each_dataset_checks()
# Resolve the for each column checks and add them to the scan_cfg data structures
self.__resolve_for_each_column_checks()
# For each data_source, build up the DataSourceScan data structures
for data_source_scan_cfg in self._sodacl_cfg.data_source_scan_cfgs.values():
# This builds up the data structures that correspond to the cfg model
data_source_scan = self._get_or_create_data_source_scan(data_source_scan_cfg.data_source_name)
if data_source_scan:
for check_cfg in data_source_scan_cfg.check_cfgs:
# Data source checks are created here, i.e. no dataset associated (e.g. failed rows check)
self.__create_check(check_cfg, data_source_scan)
for table_cfg in data_source_scan_cfg.tables_cfgs.values():
table: Table = data_source_scan.get_or_create_table(table_cfg.table_name)
for column_configurations_cfg in table_cfg.column_configurations_cfgs.values():
column: Column = table.get_or_create_column(column_configurations_cfg.column_name)
column.set_column_configuration_cfg(column_configurations_cfg)
for partition_cfg in table_cfg.partition_cfgs:
partition: Partition = table.get_or_create_partition(partition_cfg.partition_name)
partition.set_partition_cfg(partition_cfg)
for check_cfg in partition_cfg.check_cfgs:
self.__create_check(check_cfg, data_source_scan, partition)
if partition_cfg.column_checks_cfgs:
for column_checks_cfg in partition_cfg.column_checks_cfgs.values():
column_metrics: ColumnMetrics = partition.get_or_create_column_metrics(
column_checks_cfg.column_name
)
column_metrics.set_column_check_cfg(column_checks_cfg)
if column_checks_cfg.check_cfgs:
for check_cfg in column_checks_cfg.check_cfgs:
self.__create_check(
check_cfg,
data_source_scan,
partition,
column_metrics.column,
)
# Each data_source is asked to create metric values that are returned as a list of query results
for data_source_scan in self._data_source_scans:
data_source_scan.execute_queries()
# Compute derived metric values
for metric in self._metrics:
if isinstance(metric, DerivedMetric):
metric.compute_derived_metric_values()
# Run profiling, data samples, automated monitoring, sample tables
try:
self.run_data_source_scan()
except Exception as e:
self._logs.error(f"""An error occurred while executing data source scan""", exception=e)
# Evaluates the checks based on all the metric values
for check in self._checks:
# First get the metric values for this check
check_metrics = {}
missing_value_metrics = []
for check_metric_name, metric in check.metrics.items():
if metric.value is not undefined:
check_metrics[check_metric_name] = metric
else:
missing_value_metrics.append(metric)
check_historic_data = {}
# For each check get the historic data
if check.historic_descriptors:
for hd_key, hd in check.historic_descriptors.items():
check_historic_data[hd_key] = self.__get_historic_data_from_soda_cloud_metric_store(hd)
if not missing_value_metrics:
try:
check.evaluate(check_metrics, check_historic_data)
except BaseException as e:
self._logs.error(
f"Evaluation of check {check.check_cfg.source_line} failed: {e}",
location=check.check_cfg.location,
exception=e,
)
else:
missing_metrics_str = ",".join([str(metric) for metric in missing_value_metrics])
self._logs.error(
f"Metrics '{missing_metrics_str}' were not computed for check '{check.check_cfg.source_line}'"
)
self._logs.info("Scan summary:")
self.__log_queries(having_exception=False)
self.__log_queries(having_exception=True)
checks_pass_count = self.__log_checks(CheckOutcome.PASS)
checks_warn_count = self.__log_checks(CheckOutcome.WARN)
warn_text = "warning" if checks_warn_count == 1 else "warnings"
checks_fail_count = self.__log_checks(CheckOutcome.FAIL)
fail_text = "failure" if checks_warn_count == 1 else "failures"
error_count = len(self.get_error_logs())
error_text = "error" if error_count == 1 else "errors"
self.__log_checks(None)
checks_not_evaluated = len(self._checks) - checks_pass_count - checks_warn_count - checks_fail_count
if len(self._checks) == 0:
self._logs.warning("No checks found, 0 checks evaluated.")
if checks_not_evaluated:
self._logs.info(f"{checks_not_evaluated} checks not evaluated.")
if error_count > 0:
self._logs.info(f"{error_count} errors.")
if checks_warn_count + checks_fail_count + error_count == 0 and len(self._checks) > 0:
if checks_not_evaluated:
self._logs.info(
f"Apart from the checks that have not been evaluated, no failures, no warnings and no errors."
)
else:
self._logs.info(f"All is good. No failures. No warnings. No errors.")
elif error_count > 0:
exit_value = 3
self._logs.info(
f"Oops! {error_count} {error_text}. {checks_fail_count} {fail_text}. {checks_warn_count} {warn_text}. {checks_pass_count} pass."
)
elif checks_fail_count > 0:
exit_value = 2
self._logs.info(
f"Oops! {checks_fail_count} {fail_text}. {checks_warn_count} {warn_text}. {error_count} {error_text}. {checks_pass_count} pass."
)
elif checks_warn_count > 0:
exit_value = 1
self._logs.info(
f"Only {checks_warn_count} {warn_text}. {checks_fail_count} {fail_text}. {error_count} {error_text}. {checks_pass_count} pass."
)
if error_count > 0:
Log.log_errors(self.get_error_logs())
# Telemetry data
soda_telemetry.set_attributes(
{
"pass_count": checks_pass_count,
"error_count": error_count,
"failures_count": checks_fail_count,
}
)
except Exception as e:
exit_value = 3
self._logs.error(f"Error occurred while executing scan.", exception=e)
finally:
try:
self._scan_end_timestamp = datetime.now(tz=timezone.utc)
if self._configuration.soda_cloud:
self._logs.info("Sending results to Soda Cloud")
self._configuration.soda_cloud.send_scan_results(self)
if "send_scan_results" in self._configuration.soda_cloud.soda_cloud_trace_ids:
cloud_trace_id = self._configuration.soda_cloud.soda_cloud_trace_ids["send_scan_results"]
self._logs.info(f"Soda Cloud Trace: {cloud_trace_id}")
else:
self._logs.info("Soda Cloud Trace ID not available.")
except Exception as e:
exit_value = 3
self._logs.error(f"Error occurred while sending scan results to soda cloud.", exception=e)
self._close()
self.scan_results = self.build_scan_results()
if self._scan_results_file is not None:
logger.info(f"Saving scan results to {self._scan_results_file}")
with open(self._scan_results_file, "w") as f:
json.dump(SodaCloud.build_scan_results(self), f)
# Telemetry data
soda_telemetry.set_attributes(
{
"scan_exit_code": exit_value,
"checks_count": len(self._checks),
"queries_count": len(self._queries),
"metrics_count": len(self._metrics),
}
)
if self._configuration.soda_cloud:
for (
request_name,
trace_id,
) in self._configuration.soda_cloud.soda_cloud_trace_ids.items():
soda_telemetry.set_attribute(f"soda_cloud_trace_id__{request_name}", trace_id)
return exit_value
def run_data_source_scan(self):
for data_source_scan in self._data_source_scans:
for data_source_cfg in data_source_scan.data_source_scan_cfg.data_source_cfgs:
data_source_name = data_source_scan.data_source_scan_cfg.data_source_name
data_source_scan = self._get_or_create_data_source_scan(data_source_name)
if data_source_scan:
data_source_scan.run(data_source_cfg, self)
else:
data_source_names = ", ".join(self._data_source_manager.data_source_properties_by_name.keys())
self._logs.error(
f"Could not run monitors on data_source {data_source_name} because It is not "
f"configured: {data_source_names}"
)
def __checks_to_text(self, checks: list[Check]):
return "\n".join([str(check) for check in checks])
def _close(self):
self._data_source_manager.close_all_connections()
def __create_check(self, check_cfg, data_source_scan=None, partition=None, column=None):
from soda.execution.check.check import Check
check = Check.create(
check_cfg=check_cfg,
data_source_scan=data_source_scan,
partition=partition,
column=column,
)
self._checks.append(check)
def __resolve_for_each_dataset_checks(self):
data_source_name = self._data_source_name
for index, for_each_dataset_cfg in enumerate(self._sodacl_cfg.for_each_dataset_cfgs):
include_tables = [include.table_name_filter for include in for_each_dataset_cfg.includes]
            exclude_tables = [exclude.table_name_filter for exclude in for_each_dataset_cfg.excludes]
data_source_scan = self._get_or_create_data_source_scan(data_source_name)
if data_source_scan:
query_name = f"for_each_dataset_{for_each_dataset_cfg.table_alias_name}[{index}]"
table_names = data_source_scan.data_source.get_table_names(
include_tables=include_tables,
exclude_tables=exclude_tables,
query_name=query_name,
)
logger.info(f"Instantiating for each for {table_names}")
for table_name in table_names:
data_source_scan_cfg = self._sodacl_cfg.get_or_create_data_source_scan_cfgs(data_source_name)
table_cfg = data_source_scan_cfg.get_or_create_table_cfg(table_name)
partition_cfg = table_cfg.find_partition(None, None)
for check_cfg_template in for_each_dataset_cfg.check_cfgs:
check_cfg = check_cfg_template.instantiate_for_each_dataset(
name=self.jinja_resolve(
check_cfg_template.name,
variables={for_each_dataset_cfg.table_alias_name: table_name},
),
table_alias=for_each_dataset_cfg.table_alias_name,
table_name=table_name,
partition_name=partition_cfg.partition_name,
)
column_name = check_cfg.get_column_name()
if column_name:
column_checks_cfg = partition_cfg.get_or_create_column_checks(column_name)
column_checks_cfg.add_check_cfg(check_cfg)
else:
partition_cfg.add_check_cfg(check_cfg)
def __resolve_for_each_column_checks(self):
if self._sodacl_cfg.for_each_column_cfgs:
raise NotImplementedError("TODO")
def _get_or_create_data_source_scan(self, data_source_name: str) -> DataSourceScan:
from soda.execution.data_source import DataSource
from soda.sodacl.data_source_scan_cfg import DataSourceScanCfg
data_source_scan = next(
(
data_source_scan
for data_source_scan in self._data_source_scans
if data_source_scan.data_source.data_source_name == data_source_name
),
None,
)
if data_source_scan is None:
data_source_scan_cfg = self._sodacl_cfg.data_source_scan_cfgs.get(data_source_name)
if data_source_scan_cfg is None:
data_source_scan_cfg = DataSourceScanCfg(data_source_name)
data_source_name = data_source_scan_cfg.data_source_name
data_source: DataSource = self._data_source_manager.get_data_source(data_source_name)
if data_source:
data_source_scan = data_source.create_data_source_scan(self, data_source_scan_cfg)
self._data_source_scans.append(data_source_scan)
return data_source_scan
def jinja_resolve(
self,
definition: str,
variables: dict[str, object] = None,
location: Location | None = None,
):
if isinstance(definition, str) and "${" in definition:
from soda.common.jinja import Jinja
jinja_variables = self._variables.copy()
if isinstance(variables, dict):
jinja_variables.update(variables)
try:
return Jinja.resolve(definition, jinja_variables)
except BaseException as e:
self._logs.error(
message=f"Error resolving Jinja template {definition}: {e}",
location=location,
exception=e,
)
else:
return definition
def __get_historic_data_from_soda_cloud_metric_store(
self, historic_descriptor: HistoricDescriptor
) -> dict[str, object]:
if self._configuration.soda_cloud:
return self._configuration.soda_cloud.get_historic_data(historic_descriptor)
else:
self._logs.error("Soda Core must be configured to connect to Soda Cloud to use change-over-time checks.")
return {}
def _find_existing_metric(self, metric) -> Metric:
return next(
(existing_metric for existing_metric in self._metrics if existing_metric == metric),
None,
)
def _add_metric(self, metric):
self._metrics.add(metric)
def __log_queries(self, having_exception: bool) -> int:
count = sum((query.exception is None) != having_exception for query in self._queries)
if count > 0:
status_text = "ERROR" if having_exception else "OK"
queries_text = "query" if len(self._queries) == 1 else "queries"
self._logs.debug(f"{count}/{len(self._queries)} {queries_text} {status_text}")
for query in self._queries:
query_text = f"\n{query.sql}" if query.exception else ""
self._logs.debug(f" {query.query_name} [{status_text}] {query.duration}{query_text}")
if query.exception:
exception_str = str(query.exception)
exception_str = textwrap.indent(text=exception_str, prefix=" ")
self._logs.debug(exception_str)
return count
def __log_checks(self, check_outcome: CheckOutcome | None) -> int:
count = sum(check.outcome == check_outcome for check in self._checks)
if count > 0:
outcome_text = "NOT EVALUATED" if check_outcome is None else f"{check_outcome.value.upper()}ED"
checks_text = "check" if len(self._checks) == 1 else "checks"
self._logs.info(f"{count}/{len(self._checks)} {checks_text} {outcome_text}: ")
checks_by_partition = {}
other_checks = []
for check in self._checks:
if check.outcome == check_outcome:
partition = check.partition
if partition:
partition_name = f" [{partition.partition_name}]" if partition.partition_name else ""
partition_title = f"{partition.table.table_name}{partition_name} in {partition.data_source_scan.data_source.data_source_name}"
checks_by_partition.setdefault(partition_title, []).append(check)
else:
other_checks.append(check)
for (
partition_title,
partition_checks,
) in checks_by_partition.items():
if len(partition_checks) > 0:
self._logs.info(f" {partition_title}")
self.__log_check_group(partition_checks, " ", check_outcome, outcome_text)
if len(other_checks) > 0:
self.__log_check_group(other_checks, " ", check_outcome, outcome_text)
return count
def __log_check_group(self, checks, indent, check_outcome, outcome_text):
for check in checks:
location = ""
            if self._logs.verbose:
location = f"[{check.check_cfg.location.file_path}] "
self._logs.info(f"{indent}{check.name} {location}[{outcome_text}]")
if self._logs.verbose or check_outcome != CheckOutcome.PASS:
for diagnostic in check.get_log_diagnostic_lines():
self._logs.info(f"{indent} {diagnostic}")
def get_variable(self, variable_name: str, default_value: str | None = None) -> str | None:
# Note: ordering here must be the same as in Jinja.OsContext.resolve_or_missing: First env vars, then scan vars
if variable_name in os.environ:
return os.environ[variable_name]
elif variable_name in self._variables:
return self._variables[variable_name]
return default_value
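        # Illustration with assumed values (not part of the original module):
        # with os.environ == {"SCHEMA": "prod"} and self._variables == {"SCHEMA": "dev"},
        # get_variable("SCHEMA") returns "prod" because environment variables win over
        # scan variables, while get_variable("MISSING", "fallback") returns "fallback".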
def get_scan_results(self) -> dict:
return self.scan_results
def get_logs_text(self) -> str | None:
return self.__logs_to_text(self._logs.logs)
def has_error_logs(self) -> bool:
return any(log.level == LogLevel.ERROR for log in self._logs.logs)
def get_error_logs(self) -> list[Log]:
return [log for log in self._logs.logs if log.level == LogLevel.ERROR]
def get_error_logs_text(self) -> str | None:
return self.__logs_to_text(self.get_error_logs())
def assert_no_error_logs(self) -> None:
if self.has_error_logs():
raise AssertionError(self.get_error_logs_text())
def has_error_or_warning_logs(self) -> bool:
return any(log.level in [LogLevel.ERROR, LogLevel.WARNING] for log in self._logs.logs)
def get_error_or_warning_logs(self) -> list[Log]:
return [log for log in self._logs.logs if log.level in [LogLevel.ERROR, LogLevel.WARNING]]
def get_error_or_warning_logs_text(self) -> str | None:
return self.__logs_to_text(self.get_error_or_warning_logs())
def assert_no_error_nor_warning_logs(self) -> None:
if self.has_error_or_warning_logs():
raise AssertionError(self.get_logs_text())
def assert_has_error(self, expected_error_message: str):
if all(
[
expected_error_message not in log.message and expected_error_message not in str(log.exception)
for log in self.get_error_logs()
]
):
raise AssertionError(
f'Expected error message "{expected_error_message}" did not occur in the error logs:\n{self.get_logs_text()}'
)
def __logs_to_text(self, logs: list[Log]):
if len(logs) == 0:
return None
return "\n".join([str(log) for log in logs])
def has_check_fails(self) -> bool:
for check in self._checks:
if check.outcome == CheckOutcome.FAIL:
return True
return False
def has_check_warns(self) -> bool:
for check in self._checks:
if check.outcome == CheckOutcome.WARN:
return True
return False
def has_check_warns_or_fails(self) -> bool:
for check in self._checks:
if check.outcome in [CheckOutcome.FAIL, CheckOutcome.WARN]:
return True
return False
def assert_no_checks_fail(self):
if len(self.get_checks_fail()):
raise AssertionError(f"Check results failed: \n{self.get_checks_fail_text()}")
def get_checks_fail(self) -> list[Check]:
return [check for check in self._checks if check.outcome == CheckOutcome.FAIL]
def get_checks_fail_text(self) -> str | None:
return self.__checks_to_text(self.get_checks_fail())
def assert_no_checks_warn_or_fail(self):
if len(self.get_checks_warn_or_fail()):
raise AssertionError(f"Check results having warn or fail outcome: \n{self.get_checks_warn_or_fail_text()}")
def get_checks_warn_or_fail(self) -> list[Check]:
return [check for check in self._checks if check.outcome in [CheckOutcome.WARN, CheckOutcome.FAIL]]
def has_checks_warn_or_fail(self) -> bool:
return len(self.get_checks_warn_or_fail()) > 0
def get_checks_warn_or_fail_text(self) -> str | None:
return self.__checks_to_text(self.get_checks_warn_or_fail())
def get_all_checks_text(self) -> str | None:
return self.__checks_to_text(self._checks)
def has_soda_cloud_connection(self):
return self._configuration.soda_cloud is not None
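# --- Hedged usage sketch (not part of the original module) -------------------
# A minimal illustration of how the Scan API above is typically driven:
# configure a data source, add SodaCL checks, execute, and map the exit value
# (0 = all pass, 1 = warnings only, 2 = check failures, 3 = errors) to an
# outcome. The data source name, file path and YAML snippet are hypothetical;
# set_data_source_name, add_configuration_yaml_file and add_sodacl_yaml_str are
# assumed to be the usual setters defined elsewhere on this class.
if __name__ == "__main__":
    scan = Scan()
    scan.set_data_source_name("adventureworks")            # hypothetical data source
    scan.set_scan_definition_name("nightly_orders_scan")   # required for Soda Cloud
    scan.add_configuration_yaml_file("configuration.yml")  # hypothetical path
    scan.add_variables({"schema": "public"})                # keys and values must be strings
    scan.add_sodacl_yaml_str(
        "checks for orders:\n"
        "  - row_count > 0\n"
    )
    exit_code = scan.execute()
    print(scan.get_logs_text())
    if exit_code == 0:
        print("all checks passed")
    elif exit_code == 1:
        print("warnings only")
    elif exit_code == 2:
        print("check failures")
    else:
        print("scan errors")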
/SOSPy-0.2-py3-none-any.whl/SOSTOOLSPy/inconvhull.py
import numpy as np
import numpy.matlib  # needed so that np.matlib.repmat below is available
import sympy as sym
from sympy import *
from numpy import *
from scipy.linalg import orth, null_space
from scipy import linalg
from SOSTOOLSPy.useconvhulln import useconvhulln
from sympy import Matrix
def inconvhull(Z1,Z2):
#First, find the affine subspace where everything lives
#(for instance, in the homogeneous case)
nmons=Z2.shape[0]
#Translate so it goes through the origin
mr=np.asarray(Z2.mean(0))
Rzero=Z2 - np.matlib.repmat(mr,nmons,1)
#The columns of N generate the subspace
N=null_space(Rzero)
# Z2*N should be constant
cval=np.asarray(np.matmul(Z2,N).mean(0))
#Get only the monomials in the subspace
tol=0.01
sum_ix=np.sum(abs(np.matmul(Z1,N) - np.matlib.repmat(cval,Z1.shape[0],1)),axis=1)
ix=[]
for i in range(len(sum_ix)):
if sum_ix[i]<tol:
ix=np.concatenate((ix,i),axis=None)
nZ1 = Z1[ix.astype(int),:]
# Now, the inequalities:
# Find an orthonormal basis for the subspace
# (I really should do both things at the same time...)
# Project to the lower dimensional space, so convhull works nicely
Q=orth(Rzero.T)
if (matmul(Z2,Q)).shape[1]>1:
A,B=useconvhulln(matmul(Z2,Q))
#Find the ones that satisfy the inequalities, and keep them.
ix_temp= (np.matlib.repmat(B,1,nZ1.shape[0]) -matmul(matmul(A,Q.T),nZ1.T)).min(0)
ix=[]
for i in range(len(ix_temp)):
if ix_temp[i]>-tol:
ix=np.concatenate((ix,i),axis=None)
Z3=nZ1[ix.astype(int),:]
elif (matmul(Z2,Q)).shape[1]==1:
A=np.array([[1],[-1]])
B=np.array([max(matmul(Z2,Q)),-min(matmul(Z2,Q))])
ix_temp= (np.matlib.repmat(B,1,nZ1.shape[0]) -matmul(matmul(A,Q.T),nZ1.T)).min(0)
ix=[]
for i in range(len(ix_temp)):
if ix_temp[i]>-tol:
ix=np.concatenate((ix,i),axis=None)
Z3=nZ1[ix.astype(int),:]
else:
Z3=nZ1
Z3=np.unique(Z3, axis=0)
return Z3
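# --- Hedged usage sketch (not part of the original module) -------------------
# inconvhull(Z1, Z2) keeps the rows of Z1 (monomial exponent vectors) that lie
# in the affine subspace spanned by the rows of Z2 and inside their convex
# hull. The matrices below are illustrative values only; running this requires
# the SOSTOOLSPy helpers imported above.
if __name__ == "__main__":
    # Candidate monomials x^a * y^b encoded as exponent rows [a, b]
    Z1 = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2], [2, 2]])
    # Monomials whose convex hull (a triangle in exponent space) bounds the result
    Z2 = np.array([[2, 0], [0, 2], [0, 0]])
    # Prints the subset of Z1 lying inside that triangle
    print(inconvhull(Z1, Z2))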
/maxcord.py-1.0-py3-none-any.whl/discord/ext/commands/cooldowns.py
from discord.enums import Enum
import time
import asyncio
from collections import deque
from ...abc import PrivateChannel
from .errors import MaxConcurrencyReached
__all__ = (
'BucketType',
'Cooldown',
'CooldownMapping',
'MaxConcurrency',
)
class BucketType(Enum):
default = 0
user = 1
guild = 2
channel = 3
member = 4
category = 5
role = 6
def get_key(self, msg):
if self is BucketType.user:
return msg.author.id
elif self is BucketType.guild:
return (msg.guild or msg.author).id
elif self is BucketType.channel:
return msg.channel.id
elif self is BucketType.member:
return ((msg.guild and msg.guild.id), msg.author.id)
elif self is BucketType.category:
return (msg.channel.category or msg.channel).id
elif self is BucketType.role:
# we return the channel id of a private-channel as there are only roles in guilds
# and that yields the same result as for a guild with only the @everyone role
# NOTE: PrivateChannel doesn't actually have an id attribute but we assume we are
            # receiving a DMChannel or GroupChannel which inherit from PrivateChannel and do have an id attribute
return (msg.channel if isinstance(msg.channel, PrivateChannel) else msg.author.top_role).id
def __call__(self, msg):
return self.get_key(msg)
class Cooldown:
__slots__ = ('rate', 'per', 'type', '_window', '_tokens', '_last')
def __init__(self, rate, per, type):
self.rate = int(rate)
self.per = float(per)
self.type = type
self._window = 0.0
self._tokens = self.rate
self._last = 0.0
if not callable(self.type):
raise TypeError('Cooldown type must be a BucketType or callable')
def get_tokens(self, current=None):
if not current:
current = time.time()
tokens = self._tokens
if current > self._window + self.per:
tokens = self.rate
return tokens
def get_retry_after(self, current=None):
current = current or time.time()
tokens = self.get_tokens(current)
if tokens == 0:
return self.per - (current - self._window)
return 0.0
def update_rate_limit(self, current=None):
current = current or time.time()
self._last = current
self._tokens = self.get_tokens(current)
# first token used means that we start a new rate limit window
if self._tokens == self.rate:
self._window = current
# check if we are rate limited
if self._tokens == 0:
return self.per - (current - self._window)
# we're not so decrement our tokens
self._tokens -= 1
# see if we got rate limited due to this token change, and if
# so update the window to point to our current time frame
if self._tokens == 0:
self._window = current
def reset(self):
self._tokens = self.rate
self._last = 0.0
def copy(self):
return Cooldown(self.rate, self.per, self.type)
def __repr__(self):
return '<Cooldown rate: {0.rate} per: {0.per} window: {0._window} tokens: {0._tokens}>'.format(self)
class CooldownMapping:
def __init__(self, original):
self._cache = {}
self._cooldown = original
def copy(self):
ret = CooldownMapping(self._cooldown)
ret._cache = self._cache.copy()
return ret
@property
def valid(self):
return self._cooldown is not None
@classmethod
def from_cooldown(cls, rate, per, type):
return cls(Cooldown(rate, per, type))
def _bucket_key(self, msg):
return self._cooldown.type(msg)
def _verify_cache_integrity(self, current=None):
# we want to delete all cache objects that haven't been used
# in a cooldown window. e.g. if we have a command that has a
# cooldown of 60s and it has not been used in 60s then that key should be deleted
current = current or time.time()
dead_keys = [k for k, v in self._cache.items() if current > v._last + v.per]
for k in dead_keys:
del self._cache[k]
def get_bucket(self, message, current=None):
if self._cooldown.type is BucketType.default:
return self._cooldown
self._verify_cache_integrity(current)
key = self._bucket_key(message)
if key not in self._cache:
bucket = self._cooldown.copy()
self._cache[key] = bucket
else:
bucket = self._cache[key]
return bucket
def update_rate_limit(self, message, current=None):
bucket = self.get_bucket(message, current)
return bucket.update_rate_limit(current)
class _Semaphore:
"""This class is a version of a semaphore.
If you're wondering why asyncio.Semaphore isn't being used,
it's because it doesn't expose the internal value. This internal
value is necessary because I need to support both `wait=True` and
`wait=False`.
An asyncio.Queue could have been used to do this as well -- but it is
    not as efficient since internally that uses two queues and is a bit
overkill for what is basically a counter.
"""
__slots__ = ('value', 'loop', '_waiters')
def __init__(self, number):
self.value = number
self.loop = asyncio.get_event_loop()
self._waiters = deque()
def __repr__(self):
return '<_Semaphore value={0.value} waiters={1}>'.format(self, len(self._waiters))
def locked(self):
return self.value == 0
def is_active(self):
return len(self._waiters) > 0
def wake_up(self):
while self._waiters:
future = self._waiters.popleft()
if not future.done():
future.set_result(None)
return
async def acquire(self, *, wait=False):
if not wait and self.value <= 0:
# signal that we're not acquiring
return False
while self.value <= 0:
future = self.loop.create_future()
self._waiters.append(future)
try:
await future
except:
future.cancel()
if self.value > 0 and not future.cancelled():
self.wake_up()
raise
self.value -= 1
return True
def release(self):
self.value += 1
self.wake_up()
class MaxConcurrency:
__slots__ = ('number', 'per', 'wait', '_mapping')
def __init__(self, number, *, per, wait):
self._mapping = {}
self.per = per
self.number = number
self.wait = wait
if number <= 0:
raise ValueError('max_concurrency \'number\' cannot be less than 1')
if not isinstance(per, BucketType):
raise TypeError('max_concurrency \'per\' must be of type BucketType not %r' % type(per))
def copy(self):
return self.__class__(self.number, per=self.per, wait=self.wait)
def __repr__(self):
return '<MaxConcurrency per={0.per!r} number={0.number} wait={0.wait}>'.format(self)
def get_key(self, message):
return self.per.get_key(message)
async def acquire(self, message):
key = self.get_key(message)
try:
sem = self._mapping[key]
except KeyError:
self._mapping[key] = sem = _Semaphore(self.number)
acquired = await sem.acquire(wait=self.wait)
if not acquired:
raise MaxConcurrencyReached(self.number, self.per)
async def release(self, message):
# Technically there's no reason for this function to be async
# But it might be more useful in the future
key = self.get_key(message)
try:
sem = self._mapping[key]
except KeyError:
# ...? peculiar
return
else:
sem.release()
if sem.value >= self.number and not sem.is_active():
del self._mapping[key]
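# --- Hedged usage sketch (not part of the original module) -------------------
# Shows how the pieces above fit together: a per-user cooldown of 2 uses per
# 5 seconds. The message object is a stand-in exposing only the attributes that
# BucketType.user needs; inside discord.py the real Message object is used.
if __name__ == "__main__":
    from types import SimpleNamespace

    fake_message = SimpleNamespace(author=SimpleNamespace(id=1234), guild=None)
    mapping = CooldownMapping.from_cooldown(rate=2, per=5.0, type=BucketType.user)

    for attempt in range(3):
        retry_after = mapping.update_rate_limit(fake_message)
        if retry_after:
            # Third call within the window: tokens exhausted, so we get a delay back
            print(f"attempt {attempt}: rate limited, retry in {retry_after:.2f}s")
        else:
            print(f"attempt {attempt}: allowed")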
/jupyterhub_url_sharing-0.1.0.tar.gz/jupyterhub_url_sharing-0.1.0/node_modules/webpack/lib/dependencies/HarmonyExportExpressionDependency.js
"use strict";
const ConcatenationScope = require("../ConcatenationScope");
const RuntimeGlobals = require("../RuntimeGlobals");
const makeSerializable = require("../util/makeSerializable");
const HarmonyExportInitFragment = require("./HarmonyExportInitFragment");
const NullDependency = require("./NullDependency");
/** @typedef {import("webpack-sources").ReplaceSource} ReplaceSource */
/** @typedef {import("../Dependency")} Dependency */
/** @typedef {import("../Dependency").ExportsSpec} ExportsSpec */
/** @typedef {import("../DependencyTemplate").DependencyTemplateContext} DependencyTemplateContext */
/** @typedef {import("../ModuleGraph")} ModuleGraph */
/** @typedef {import("../ModuleGraphConnection").ConnectionState} ConnectionState */
/** @typedef {import("../serialization/ObjectMiddleware").ObjectDeserializerContext} ObjectDeserializerContext */
/** @typedef {import("../serialization/ObjectMiddleware").ObjectSerializerContext} ObjectSerializerContext */
class HarmonyExportExpressionDependency extends NullDependency {
constructor(range, rangeStatement, prefix, declarationId) {
super();
this.range = range;
this.rangeStatement = rangeStatement;
this.prefix = prefix;
this.declarationId = declarationId;
}
get type() {
return "harmony export expression";
}
/**
* Returns the exported names
* @param {ModuleGraph} moduleGraph module graph
* @returns {ExportsSpec | undefined} export names
*/
getExports(moduleGraph) {
return {
exports: ["default"],
priority: 1,
terminalBinding: true,
dependencies: undefined
};
}
/**
* @param {ModuleGraph} moduleGraph the module graph
* @returns {ConnectionState} how this dependency connects the module to referencing modules
*/
getModuleEvaluationSideEffectsState(moduleGraph) {
// The expression/declaration is already covered by SideEffectsFlagPlugin
return false;
}
/**
* @param {ObjectSerializerContext} context context
*/
serialize(context) {
const { write } = context;
write(this.range);
write(this.rangeStatement);
write(this.prefix);
write(this.declarationId);
super.serialize(context);
}
/**
* @param {ObjectDeserializerContext} context context
*/
deserialize(context) {
const { read } = context;
this.range = read();
this.rangeStatement = read();
this.prefix = read();
this.declarationId = read();
super.deserialize(context);
}
}
makeSerializable(
HarmonyExportExpressionDependency,
"webpack/lib/dependencies/HarmonyExportExpressionDependency"
);
HarmonyExportExpressionDependency.Template = class HarmonyExportDependencyTemplate extends (
NullDependency.Template
) {
/**
* @param {Dependency} dependency the dependency for which the template should be applied
* @param {ReplaceSource} source the current replace source which can be modified
* @param {DependencyTemplateContext} templateContext the context object
* @returns {void}
*/
apply(
dependency,
source,
{
module,
moduleGraph,
runtimeTemplate,
runtimeRequirements,
initFragments,
runtime,
concatenationScope
}
) {
const dep = /** @type {HarmonyExportExpressionDependency} */ (dependency);
const { declarationId } = dep;
const exportsName = module.exportsArgument;
if (declarationId) {
let name;
if (typeof declarationId === "string") {
name = declarationId;
} else {
name = ConcatenationScope.DEFAULT_EXPORT;
source.replace(
declarationId.range[0],
declarationId.range[1] - 1,
`${declarationId.prefix}${name}${declarationId.suffix}`
);
}
if (concatenationScope) {
concatenationScope.registerExport("default", name);
} else {
const used = moduleGraph
.getExportsInfo(module)
.getUsedName("default", runtime);
if (used) {
const map = new Map();
map.set(used, `/* export default binding */ ${name}`);
initFragments.push(new HarmonyExportInitFragment(exportsName, map));
}
}
source.replace(
dep.rangeStatement[0],
dep.range[0] - 1,
`/* harmony default export */ ${dep.prefix}`
);
} else {
let content;
const name = ConcatenationScope.DEFAULT_EXPORT;
if (runtimeTemplate.supportsConst()) {
content = `/* harmony default export */ const ${name} = `;
if (concatenationScope) {
concatenationScope.registerExport("default", name);
} else {
const used = moduleGraph
.getExportsInfo(module)
.getUsedName("default", runtime);
if (used) {
runtimeRequirements.add(RuntimeGlobals.exports);
const map = new Map();
map.set(used, name);
initFragments.push(new HarmonyExportInitFragment(exportsName, map));
} else {
content = `/* unused harmony default export */ var ${name} = `;
}
}
} else if (concatenationScope) {
content = `/* harmony default export */ var ${name} = `;
concatenationScope.registerExport("default", name);
} else {
const used = moduleGraph
.getExportsInfo(module)
.getUsedName("default", runtime);
if (used) {
runtimeRequirements.add(RuntimeGlobals.exports);
// This is a little bit incorrect as TDZ is not correct, but we can't use const.
content = `/* harmony default export */ ${exportsName}[${JSON.stringify(
used
)}] = `;
} else {
content = `/* unused harmony default export */ var ${name} = `;
}
}
if (dep.range) {
source.replace(
dep.rangeStatement[0],
dep.range[0] - 1,
content + "(" + dep.prefix
);
source.replace(dep.range[1], dep.rangeStatement[1] - 0.5, ");");
return;
}
source.replace(dep.rangeStatement[0], dep.rangeStatement[1] - 1, content);
}
}
};
module.exports = HarmonyExportExpressionDependency;
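// --- Illustrative sketch (not part of the original file) --------------------
// Rough before/after of what the template above produces for a plain
// `export default <expression>` when `const` is supported and the export is
// used. The generated identifier comes from ConcatenationScope.DEFAULT_EXPORT
// and the runtime template, so treat this as an approximation:
//
//   source:    export default compute(1, 2);
//   rewritten: /* harmony default export */ const __WEBPACK_DEFAULT_EXPORT__ = (compute(1, 2));
//
// When the export is unused, the prefix becomes
// `/* unused harmony default export */ var ... = ` instead, matching the
// branches in apply() above.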
/l-thonny-4.1.2.tar.gz/l-thonny-4.1.2/thonny/vendored_libs/filelock/_api.py
from __future__ import annotations
import contextlib
import logging
import os
import time
import warnings
from abc import ABC, abstractmethod
from threading import Lock
from types import TracebackType
from typing import Any
from ._error import Timeout
_LOGGER = logging.getLogger("filelock")
# This is a helper class which is returned by :meth:`BaseFileLock.acquire` and wraps the lock to make sure __enter__
# is not called twice when entering the with statement. If we would simply return *self*, the lock would be acquired
# again in the *__enter__* method of the BaseFileLock, but not released again automatically. issue #37 (memory leak)
class AcquireReturnProxy:
"""A context aware object that will release the lock file when exiting."""
def __init__(self, lock: BaseFileLock) -> None:
self.lock = lock
def __enter__(self) -> BaseFileLock:
return self.lock
def __exit__(
self,
exc_type: type[BaseException] | None, # noqa: U100
exc_value: BaseException | None, # noqa: U100
traceback: TracebackType | None, # noqa: U100
) -> None:
self.lock.release()
class BaseFileLock(ABC, contextlib.ContextDecorator):
"""Abstract base class for a file lock object."""
def __init__(self, lock_file: str | os.PathLike[Any], timeout: float = -1) -> None:
"""
Create a new lock object.
:param lock_file: path to the file
:param timeout: default timeout when acquiring the lock, in seconds. It will be used as fallback value in
the acquire method, if no timeout value (``None``) is given. If you want to disable the timeout, set it
            to a negative value. A timeout of 0 means that there is exactly one attempt to acquire the file lock.
"""
# The path to the lock file.
self._lock_file: str = os.fspath(lock_file)
# The file descriptor for the *_lock_file* as it is returned by the os.open() function.
# This file lock is only NOT None, if the object currently holds the lock.
self._lock_file_fd: int | None = None
# The default timeout value.
self._timeout: float = timeout
# We use this lock primarily for the lock counter.
self._thread_lock: Lock = Lock()
# The lock counter is used for implementing the nested locking mechanism. Whenever the lock is acquired, the
# counter is increased and the lock is only released, when this value is 0 again.
self._lock_counter: int = 0
@property
def lock_file(self) -> str:
""":return: path to the lock file"""
return self._lock_file
@property
def timeout(self) -> float:
"""
:return: the default timeout value, in seconds
.. versionadded:: 2.0.0
"""
return self._timeout
@timeout.setter
def timeout(self, value: float | str) -> None:
"""
Change the default timeout value.
:param value: the new value, in seconds
"""
self._timeout = float(value)
@abstractmethod
def _acquire(self) -> None:
"""If the file lock could be acquired, self._lock_file_fd holds the file descriptor of the lock file."""
raise NotImplementedError
@abstractmethod
def _release(self) -> None:
"""Releases the lock and sets self._lock_file_fd to None."""
raise NotImplementedError
@property
def is_locked(self) -> bool:
"""
:return: A boolean indicating if the lock file is holding the lock currently.
.. versionchanged:: 2.0.0
This was previously a method and is now a property.
"""
return self._lock_file_fd is not None
def acquire(
self,
timeout: float | None = None,
poll_interval: float = 0.05,
*,
poll_intervall: float | None = None,
blocking: bool = True,
) -> AcquireReturnProxy:
"""
Try to acquire the file lock.
        :param timeout: maximum wait time for acquiring the lock; ``None`` means use the default :attr:`~timeout`, and
            if ``timeout < 0`` there is no timeout and this method will block until the lock can be acquired
:param poll_interval: interval of trying to acquire the lock file
:param poll_intervall: deprecated, kept for backwards compatibility, use ``poll_interval`` instead
:param blocking: defaults to True. If False, function will return immediately if it cannot obtain a lock on the
first attempt. Otherwise this method will block until the timeout expires or the lock is acquired.
:raises Timeout: if fails to acquire lock within the timeout period
:return: a context object that will unlock the file when the context is exited
.. code-block:: python
# You can use this method in the context manager (recommended)
with lock.acquire():
pass
# Or use an equivalent try-finally construct:
lock.acquire()
try:
pass
finally:
lock.release()
.. versionchanged:: 2.0.0
This method returns now a *proxy* object instead of *self*,
so that it can be used in a with statement without side effects.
"""
# Use the default timeout, if no timeout is provided.
if timeout is None:
timeout = self.timeout
if poll_intervall is not None:
msg = "use poll_interval instead of poll_intervall"
warnings.warn(msg, DeprecationWarning, stacklevel=2)
poll_interval = poll_intervall
# Increment the number right at the beginning. We can still undo it, if something fails.
with self._thread_lock:
self._lock_counter += 1
lock_id = id(self)
lock_filename = self._lock_file
start_time = time.monotonic()
try:
while True:
with self._thread_lock:
if not self.is_locked:
_LOGGER.debug("Attempting to acquire lock %s on %s", lock_id, lock_filename)
self._acquire()
if self.is_locked:
_LOGGER.debug("Lock %s acquired on %s", lock_id, lock_filename)
break
elif blocking is False:
_LOGGER.debug("Failed to immediately acquire lock %s on %s", lock_id, lock_filename)
raise Timeout(self._lock_file)
elif 0 <= timeout < time.monotonic() - start_time:
_LOGGER.debug("Timeout on acquiring lock %s on %s", lock_id, lock_filename)
raise Timeout(self._lock_file)
else:
msg = "Lock %s not acquired on %s, waiting %s seconds ..."
_LOGGER.debug(msg, lock_id, lock_filename, poll_interval)
time.sleep(poll_interval)
except BaseException: # Something did go wrong, so decrement the counter.
with self._thread_lock:
self._lock_counter = max(0, self._lock_counter - 1)
raise
return AcquireReturnProxy(lock=self)
def release(self, force: bool = False) -> None:
"""
Releases the file lock. Please note, that the lock is only completely released, if the lock counter is 0. Also
note, that the lock file itself is not automatically deleted.
        :param force: If true, the lock counter is ignored and the lock is released in every case.
"""
with self._thread_lock:
if self.is_locked:
self._lock_counter -= 1
if self._lock_counter == 0 or force:
lock_id, lock_filename = id(self), self._lock_file
_LOGGER.debug("Attempting to release lock %s on %s", lock_id, lock_filename)
self._release()
self._lock_counter = 0
_LOGGER.debug("Lock %s released on %s", lock_id, lock_filename)
def __enter__(self) -> BaseFileLock:
"""
Acquire the lock.
:return: the lock object
"""
self.acquire()
return self
def __exit__(
self,
exc_type: type[BaseException] | None, # noqa: U100
exc_value: BaseException | None, # noqa: U100
traceback: TracebackType | None, # noqa: U100
) -> None:
"""
Release the lock.
:param exc_type: the exception type if raised
:param exc_value: the exception value if raised
:param traceback: the exception traceback if raised
"""
self.release()
def __del__(self) -> None:
"""Called when the lock object is deleted."""
self.release(force=True)
__all__ = [
"BaseFileLock",
"AcquireReturnProxy",
]
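# --- Hedged usage sketch (not part of the original module) -------------------
# BaseFileLock is abstract; the real backends shipped with filelock implement
# _acquire/_release with platform locking primitives. The toy subclass below
# uses O_CREAT | O_EXCL purely to illustrate the contract: set
# self._lock_file_fd on success, leave it None on failure, and reset it in
# _release. (Unlike the real backends, this toy strategy removes the lock file
# on release so it can be reacquired.)
class _SketchFileLock(BaseFileLock):
    def _acquire(self) -> None:
        try:
            # Atomically create the lock file; fails if it already exists.
            fd = os.open(self._lock_file, os.O_CREAT | os.O_EXCL | os.O_RDWR)
        except OSError:
            pass  # lock held elsewhere; acquire() will retry or time out
        else:
            self._lock_file_fd = fd

    def _release(self) -> None:
        fd = self._lock_file_fd
        self._lock_file_fd = None
        os.close(fd)
        try:
            os.remove(self._lock_file)
        except OSError:
            pass


if __name__ == "__main__":
    lock = _SketchFileLock("/tmp/example.lock", timeout=5)
    with lock.acquire(poll_interval=0.1):
        # Critical section; the lock file exists while we are in here.
        assert lock.is_locked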
/love_course_2016_2019-2023.3.1.0-py3-none-any.whl/LoveCourse20162019/docs/ai-shang-qing-gan/爱上情感《魅力男神全套》:21聊如指掌2.0:聊如指掌1.0:06你的8分女神却叫我老公.md
# 爱上情感《魅力男神全套》:21 聊如指掌2.0:聊如指掌1.0:06你的8分女神却叫我老公
OK, the sound check is done, so let's get started with today's livestream. Welcome to the stream. Every day here we share practical knowledge and techniques about attraction and relationships. If this is your first time watching, follow the official account 爱上恋学 or add the customer-service WeChat shown in the stream so that you receive the broadcast notifications. Today's topic is a walkthrough of one of my own recent chat logs: a textbook example of how to lead a girl to confess her feelings to you, and then get her to call you 老公 ("husband").

The screenshots are from January 14. The night before, I had closed the conversation by saying good night. The next morning the girl messaged me first: "good morning" plus an emoji. Pay attention to that detail. When a girl is interested in you, she re-opens the conversation herself the morning after you end it. A self-initiated "good morning" is an interest indicator you can use to gauge how attracted she already is.

I answered casually: "You're up early, I only just woke up." She said it's a habit of hers. I teased her a little, "good for you, getting up this early in the cold," she sent a shy emoji, and then I simply stopped replying. That is the important part. Many guys, once a girl finally starts a conversation, try to drag it from eight in the evening until one or two in the morning. That is exactly wrong. Cutting the thread early makes her think about you and invest emotionally; either she messages you again, or she waits for you to come back. That emotional investment is what later makes everything else easy.

That evening around eight I opened a new topic: "Just finished dinner." Instead of following the food topic she asked, "Why do you only show up at night?" Read the subtext: what she is really asking is why I didn't keep her company during the day. Rather than apologizing or explaining, flip the frame: "So you missed me, didn't you?" She answered "Mm." That single "Mm" tells you the attraction is already strong.

From there I kept the chat simple and everyday: what she did today, where her dorm is; she even sent me her location. There is nothing fancy about this, but it does two jobs: it builds closeness, and it collects the logistics you need for a date later. If girls never agree to meet you, it is almost always because this sense of closeness is missing from your chats.

Since her dorm was close to a place I know, I dropped a vague invitation: "You're near there, we could go some time." She said "OK." That is a passed soft invite. Always test with a vague invitation before making a concrete one; if the soft invite goes through, the real invitation later is far more likely to succeed. This step-by-step process is what we teach in our 恋爱方法 course; the usual price is 1598, during the pre-Spring-Festival promotion it is 1398, and at this point in the stream only five discounted spots were left. Add the customer-service WeChat if you want to sign up.

Later she asked, "What are you doing?" I had already told her earlier, so when she asks again, that is the moment to flirt: "Thinking about you." She played along. I teased her about her weight: "A bit of meat is good, I don't like girls who are too skinny." A little playful, mildly suggestive banter belongs in the chat at this stage, as long as it stays light. I also set a frame: "You seem like the good, obedient type." When she asked "Why do you like obedient girls?", I ignored it; that question is a small test, and you don't answer tests earnestly. Viewers in the chat suggested answers like "No reason, I just do," which also works.

She then sent me a short selfie video. Compliment it: "Your eyes melted me." Then escalate physically, in text: "I want to pinch your cheeks." She answered, "Not while I'm wearing makeup, it would come off." Notice that she did not refuse the touching itself, only the makeup. That is text-based physical escalation: you pre-seed touching, and later kissing ("if your makeup is too heavy I can't kiss you"), so that when you finally meet, none of it feels abrupt and she doesn't pull back. Guys who chat for months and stay "just friends" usually fail exactly here: their chats never escalate, neither verbally nor physically.

At one point she teased me for not understanding some girl thing. The answer is not "then teach me." The answer is: "I don't need to understand it, I have you for that." That relaxed, slightly dominant frame is something the conversation conveys by feel; you never announce that you are dominant, you let the way you talk carry it.

She said good night with a kiss emoji. The next morning she messaged first again, with a photo of herself at her desk at work. I replied late again, and by now the pet names had shifted to 宝宝 ("baby"). She wrote, "You're so lucky." Be careful here: don't answer with something like "being with you is my happiness," because that hands her the frame and turns you back into a pursuer. My actual reply kept the frame while still complimenting her: "Of course. The first thing I saw when I woke up was your message."

She admitted that I make her happy. So push: "Then hurry up and confess to me, and I'll turn down all the other girls for you." She said, "Then I want to give you something big. Do you want it?", meaning herself. One small compliance test: "Only if you're devoted." She agreed, so: "Then you're mine from now on; nobody gets to bully you." She asked, "Did my confession succeed?" One step left: "You should be calling me something nicer now." A little push past her shyness, and she called me 老公. From roughly two or three days of chatting, she confessed first and called me husband. After that, getting her out on a date and escalating in person is easy, because in her head the relationship is already boyfriend and girlfriend.

Hello everyone, I'm 乐天, founder of 爱上情感. I entered the dating-coaching industry in 2004 and founded 爱上情感 in 2015; our video content has passed 210 million plays across platforms and we have helped thousands of students find relationships. Our online course 恋爱方法, now in version 6.0, is delivered as video lessons in six chapters: the correct dating mindset (attraction instead of pursuit and trying to "move" a girl with favors), image and style makeover, profile photo shooting and building a steady stream of new female contacts (dating apps, street approach, WeChat Moments), a chat system broken into more than twenty modules backed by over forty days of real chat logs, and date design through to starting intimacy and defining the relationship, plus bonus and advanced modules, an internal WeChat study group and weekly voice classes. Add the customer-service WeChat given in the stream to sign up. I'm 乐天, see you in the course.
/de_toolkit-0.9.12-py3-none-any.whl/de_toolkit/templates/js/assets/underscore-min.js
!function(){var n="object"==typeof self&&self.self===self&&self||"object"==typeof global&&global.global===global&&global||this||{},r=n._,e=Array.prototype,o=Object.prototype,s="undefined"!=typeof Symbol?Symbol.prototype:null,u=e.push,c=e.slice,p=o.toString,i=o.hasOwnProperty,t=Array.isArray,a=Object.keys,l=Object.create,f=function(){},h=function(n){return n instanceof h?n:this instanceof h?void(this._wrapped=n):new h(n)};"undefined"==typeof exports||exports.nodeType?n._=h:("undefined"!=typeof module&&!module.nodeType&&module.exports&&(exports=module.exports=h),exports._=h),h.VERSION="1.9.1";var v,y=function(u,i,n){if(void 0===i)return u;switch(null==n?3:n){case 1:return function(n){return u.call(i,n)};case 3:return function(n,r,t){return u.call(i,n,r,t)};case 4:return function(n,r,t,e){return u.call(i,n,r,t,e)}}return function(){return u.apply(i,arguments)}},d=function(n,r,t){return h.iteratee!==v?h.iteratee(n,r):null==n?h.identity:h.isFunction(n)?y(n,r,t):h.isObject(n)&&!h.isArray(n)?h.matcher(n):h.property(n)};h.iteratee=v=function(n,r){return d(n,r,1/0)};var g=function(u,i){return i=null==i?u.length-1:+i,function(){for(var n=Math.max(arguments.length-i,0),r=Array(n),t=0;t<n;t++)r[t]=arguments[t+i];switch(i){case 0:return u.call(this,r);case 1:return u.call(this,arguments[0],r);case 2:return u.call(this,arguments[0],arguments[1],r)}var e=Array(i+1);for(t=0;t<i;t++)e[t]=arguments[t];return e[i]=r,u.apply(this,e)}},m=function(n){if(!h.isObject(n))return{};if(l)return l(n);f.prototype=n;var r=new f;return f.prototype=null,r},b=function(r){return function(n){return null==n?void 0:n[r]}},j=function(n,r){return null!=n&&i.call(n,r)},x=function(n,r){for(var t=r.length,e=0;e<t;e++){if(null==n)return;n=n[r[e]]}return t?n:void 0},_=Math.pow(2,53)-1,A=b("length"),w=function(n){var r=A(n);return"number"==typeof r&&0<=r&&r<=_};h.each=h.forEach=function(n,r,t){var e,u;if(r=y(r,t),w(n))for(e=0,u=n.length;e<u;e++)r(n[e],e,n);else{var i=h.keys(n);for(e=0,u=i.length;e<u;e++)r(n[i[e]],i[e],n)}return n},h.map=h.collect=function(n,r,t){r=d(r,t);for(var e=!w(n)&&h.keys(n),u=(e||n).length,i=Array(u),o=0;o<u;o++){var a=e?e[o]:o;i[o]=r(n[a],a,n)}return i};var O=function(c){return function(n,r,t,e){var u=3<=arguments.length;return function(n,r,t,e){var u=!w(n)&&h.keys(n),i=(u||n).length,o=0<c?0:i-1;for(e||(t=n[u?u[o]:o],o+=c);0<=o&&o<i;o+=c){var a=u?u[o]:o;t=r(t,n[a],a,n)}return t}(n,y(r,e,4),t,u)}};h.reduce=h.foldl=h.inject=O(1),h.reduceRight=h.foldr=O(-1),h.find=h.detect=function(n,r,t){var e=(w(n)?h.findIndex:h.findKey)(n,r,t);if(void 0!==e&&-1!==e)return n[e]},h.filter=h.select=function(n,e,r){var u=[];return e=d(e,r),h.each(n,function(n,r,t){e(n,r,t)&&u.push(n)}),u},h.reject=function(n,r,t){return h.filter(n,h.negate(d(r)),t)},h.every=h.all=function(n,r,t){r=d(r,t);for(var e=!w(n)&&h.keys(n),u=(e||n).length,i=0;i<u;i++){var o=e?e[i]:i;if(!r(n[o],o,n))return!1}return!0},h.some=h.any=function(n,r,t){r=d(r,t);for(var e=!w(n)&&h.keys(n),u=(e||n).length,i=0;i<u;i++){var o=e?e[i]:i;if(r(n[o],o,n))return!0}return!1},h.contains=h.includes=h.include=function(n,r,t,e){return w(n)||(n=h.values(n)),("number"!=typeof t||e)&&(t=0),0<=h.indexOf(n,r,t)},h.invoke=g(function(n,t,e){var u,i;return h.isFunction(t)?i=t:h.isArray(t)&&(u=t.slice(0,-1),t=t[t.length-1]),h.map(n,function(n){var r=i;if(!r){if(u&&u.length&&(n=x(n,u)),null==n)return;r=n[t]}return null==r?r:r.apply(n,e)})}),h.pluck=function(n,r){return h.map(n,h.property(r))},h.where=function(n,r){return h.filter(n,h.matcher(r))},h.findWhere=function(n,r){return 
h.find(n,h.matcher(r))},h.max=function(n,e,r){var t,u,i=-1/0,o=-1/0;if(null==e||"number"==typeof e&&"object"!=typeof n[0]&&null!=n)for(var a=0,c=(n=w(n)?n:h.values(n)).length;a<c;a++)null!=(t=n[a])&&i<t&&(i=t);else e=d(e,r),h.each(n,function(n,r,t){u=e(n,r,t),(o<u||u===-1/0&&i===-1/0)&&(i=n,o=u)});return i},h.min=function(n,e,r){var t,u,i=1/0,o=1/0;if(null==e||"number"==typeof e&&"object"!=typeof n[0]&&null!=n)for(var a=0,c=(n=w(n)?n:h.values(n)).length;a<c;a++)null!=(t=n[a])&&t<i&&(i=t);else e=d(e,r),h.each(n,function(n,r,t){((u=e(n,r,t))<o||u===1/0&&i===1/0)&&(i=n,o=u)});return i},h.shuffle=function(n){return h.sample(n,1/0)},h.sample=function(n,r,t){if(null==r||t)return w(n)||(n=h.values(n)),n[h.random(n.length-1)];var e=w(n)?h.clone(n):h.values(n),u=A(e);r=Math.max(Math.min(r,u),0);for(var i=u-1,o=0;o<r;o++){var a=h.random(o,i),c=e[o];e[o]=e[a],e[a]=c}return e.slice(0,r)},h.sortBy=function(n,e,r){var u=0;return e=d(e,r),h.pluck(h.map(n,function(n,r,t){return{value:n,index:u++,criteria:e(n,r,t)}}).sort(function(n,r){var t=n.criteria,e=r.criteria;if(t!==e){if(e<t||void 0===t)return 1;if(t<e||void 0===e)return-1}return n.index-r.index}),"value")};var k=function(o,r){return function(e,u,n){var i=r?[[],[]]:{};return u=d(u,n),h.each(e,function(n,r){var t=u(n,r,e);o(i,n,t)}),i}};h.groupBy=k(function(n,r,t){j(n,t)?n[t].push(r):n[t]=[r]}),h.indexBy=k(function(n,r,t){n[t]=r}),h.countBy=k(function(n,r,t){j(n,t)?n[t]++:n[t]=1});var S=/[^\ud800-\udfff]|[\ud800-\udbff][\udc00-\udfff]|[\ud800-\udfff]/g;h.toArray=function(n){return n?h.isArray(n)?c.call(n):h.isString(n)?n.match(S):w(n)?h.map(n,h.identity):h.values(n):[]},h.size=function(n){return null==n?0:w(n)?n.length:h.keys(n).length},h.partition=k(function(n,r,t){n[t?0:1].push(r)},!0),h.first=h.head=h.take=function(n,r,t){return null==n||n.length<1?null==r?void 0:[]:null==r||t?n[0]:h.initial(n,n.length-r)},h.initial=function(n,r,t){return c.call(n,0,Math.max(0,n.length-(null==r||t?1:r)))},h.last=function(n,r,t){return null==n||n.length<1?null==r?void 0:[]:null==r||t?n[n.length-1]:h.rest(n,Math.max(0,n.length-r))},h.rest=h.tail=h.drop=function(n,r,t){return c.call(n,null==r||t?1:r)},h.compact=function(n){return h.filter(n,Boolean)};var M=function(n,r,t,e){for(var u=(e=e||[]).length,i=0,o=A(n);i<o;i++){var a=n[i];if(w(a)&&(h.isArray(a)||h.isArguments(a)))if(r)for(var c=0,l=a.length;c<l;)e[u++]=a[c++];else M(a,r,t,e),u=e.length;else t||(e[u++]=a)}return e};h.flatten=function(n,r){return M(n,r,!1)},h.without=g(function(n,r){return h.difference(n,r)}),h.uniq=h.unique=function(n,r,t,e){h.isBoolean(r)||(e=t,t=r,r=!1),null!=t&&(t=d(t,e));for(var u=[],i=[],o=0,a=A(n);o<a;o++){var c=n[o],l=t?t(c,o,n):c;r&&!t?(o&&i===l||u.push(c),i=l):t?h.contains(i,l)||(i.push(l),u.push(c)):h.contains(u,c)||u.push(c)}return u},h.union=g(function(n){return h.uniq(M(n,!0,!0))}),h.intersection=function(n){for(var r=[],t=arguments.length,e=0,u=A(n);e<u;e++){var i=n[e];if(!h.contains(r,i)){var o;for(o=1;o<t&&h.contains(arguments[o],i);o++);o===t&&r.push(i)}}return r},h.difference=g(function(n,r){return r=M(r,!0,!0),h.filter(n,function(n){return!h.contains(r,n)})}),h.unzip=function(n){for(var r=n&&h.max(n,A).length||0,t=Array(r),e=0;e<r;e++)t[e]=h.pluck(n,e);return t},h.zip=g(h.unzip),h.object=function(n,r){for(var t={},e=0,u=A(n);e<u;e++)r?t[n[e]]=r[e]:t[n[e][0]]=n[e][1];return t};var F=function(i){return function(n,r,t){r=d(r,t);for(var e=A(n),u=0<i?0:e-1;0<=u&&u<e;u+=i)if(r(n[u],u,n))return 
u;return-1}};h.findIndex=F(1),h.findLastIndex=F(-1),h.sortedIndex=function(n,r,t,e){for(var u=(t=d(t,e,1))(r),i=0,o=A(n);i<o;){var a=Math.floor((i+o)/2);t(n[a])<u?i=a+1:o=a}return i};var E=function(i,o,a){return function(n,r,t){var e=0,u=A(n);if("number"==typeof t)0<i?e=0<=t?t:Math.max(t+u,e):u=0<=t?Math.min(t+1,u):t+u+1;else if(a&&t&&u)return n[t=a(n,r)]===r?t:-1;if(r!=r)return 0<=(t=o(c.call(n,e,u),h.isNaN))?t+e:-1;for(t=0<i?e:u-1;0<=t&&t<u;t+=i)if(n[t]===r)return t;return-1}};h.indexOf=E(1,h.findIndex,h.sortedIndex),h.lastIndexOf=E(-1,h.findLastIndex),h.range=function(n,r,t){null==r&&(r=n||0,n=0),t||(t=r<n?-1:1);for(var e=Math.max(Math.ceil((r-n)/t),0),u=Array(e),i=0;i<e;i++,n+=t)u[i]=n;return u},h.chunk=function(n,r){if(null==r||r<1)return[];for(var t=[],e=0,u=n.length;e<u;)t.push(c.call(n,e,e+=r));return t};var N=function(n,r,t,e,u){if(!(e instanceof r))return n.apply(t,u);var i=m(n.prototype),o=n.apply(i,u);return h.isObject(o)?o:i};h.bind=g(function(r,t,e){if(!h.isFunction(r))throw new TypeError("Bind must be called on a function");var u=g(function(n){return N(r,u,t,this,e.concat(n))});return u}),h.partial=g(function(u,i){var o=h.partial.placeholder,a=function(){for(var n=0,r=i.length,t=Array(r),e=0;e<r;e++)t[e]=i[e]===o?arguments[n++]:i[e];for(;n<arguments.length;)t.push(arguments[n++]);return N(u,a,this,this,t)};return a}),(h.partial.placeholder=h).bindAll=g(function(n,r){var t=(r=M(r,!1,!1)).length;if(t<1)throw new Error("bindAll must be passed function names");for(;t--;){var e=r[t];n[e]=h.bind(n[e],n)}}),h.memoize=function(e,u){var i=function(n){var r=i.cache,t=""+(u?u.apply(this,arguments):n);return j(r,t)||(r[t]=e.apply(this,arguments)),r[t]};return i.cache={},i},h.delay=g(function(n,r,t){return setTimeout(function(){return n.apply(null,t)},r)}),h.defer=h.partial(h.delay,h,1),h.throttle=function(t,e,u){var i,o,a,c,l=0;u||(u={});var f=function(){l=!1===u.leading?0:h.now(),i=null,c=t.apply(o,a),i||(o=a=null)},n=function(){var n=h.now();l||!1!==u.leading||(l=n);var r=e-(n-l);return o=this,a=arguments,r<=0||e<r?(i&&(clearTimeout(i),i=null),l=n,c=t.apply(o,a),i||(o=a=null)):i||!1===u.trailing||(i=setTimeout(f,r)),c};return n.cancel=function(){clearTimeout(i),l=0,i=o=a=null},n},h.debounce=function(t,e,u){var i,o,a=function(n,r){i=null,r&&(o=t.apply(n,r))},n=g(function(n){if(i&&clearTimeout(i),u){var r=!i;i=setTimeout(a,e),r&&(o=t.apply(this,n))}else i=h.delay(a,e,this,n);return o});return n.cancel=function(){clearTimeout(i),i=null},n},h.wrap=function(n,r){return h.partial(r,n)},h.negate=function(n){return function(){return!n.apply(this,arguments)}},h.compose=function(){var t=arguments,e=t.length-1;return function(){for(var n=e,r=t[e].apply(this,arguments);n--;)r=t[n].call(this,r);return r}},h.after=function(n,r){return function(){if(--n<1)return r.apply(this,arguments)}},h.before=function(n,r){var t;return function(){return 0<--n&&(t=r.apply(this,arguments)),n<=1&&(r=null),t}},h.once=h.partial(h.before,2),h.restArguments=g;var I=!{toString:null}.propertyIsEnumerable("toString"),T=["valueOf","isPrototypeOf","toString","propertyIsEnumerable","hasOwnProperty","toLocaleString"],B=function(n,r){var t=T.length,e=n.constructor,u=h.isFunction(e)&&e.prototype||o,i="constructor";for(j(n,i)&&!h.contains(r,i)&&r.push(i);t--;)(i=T[t])in n&&n[i]!==u[i]&&!h.contains(r,i)&&r.push(i)};h.keys=function(n){if(!h.isObject(n))return[];if(a)return a(n);var r=[];for(var t in n)j(n,t)&&r.push(t);return I&&B(n,r),r},h.allKeys=function(n){if(!h.isObject(n))return[];var r=[];for(var t in n)r.push(t);return 
I&&B(n,r),r},h.values=function(n){for(var r=h.keys(n),t=r.length,e=Array(t),u=0;u<t;u++)e[u]=n[r[u]];return e},h.mapObject=function(n,r,t){r=d(r,t);for(var e=h.keys(n),u=e.length,i={},o=0;o<u;o++){var a=e[o];i[a]=r(n[a],a,n)}return i},h.pairs=function(n){for(var r=h.keys(n),t=r.length,e=Array(t),u=0;u<t;u++)e[u]=[r[u],n[r[u]]];return e},h.invert=function(n){for(var r={},t=h.keys(n),e=0,u=t.length;e<u;e++)r[n[t[e]]]=t[e];return r},h.functions=h.methods=function(n){var r=[];for(var t in n)h.isFunction(n[t])&&r.push(t);return r.sort()};var R=function(c,l){return function(n){var r=arguments.length;if(l&&(n=Object(n)),r<2||null==n)return n;for(var t=1;t<r;t++)for(var e=arguments[t],u=c(e),i=u.length,o=0;o<i;o++){var a=u[o];l&&void 0!==n[a]||(n[a]=e[a])}return n}};h.extend=R(h.allKeys),h.extendOwn=h.assign=R(h.keys),h.findKey=function(n,r,t){r=d(r,t);for(var e,u=h.keys(n),i=0,o=u.length;i<o;i++)if(r(n[e=u[i]],e,n))return e};var q,K,z=function(n,r,t){return r in t};h.pick=g(function(n,r){var t={},e=r[0];if(null==n)return t;h.isFunction(e)?(1<r.length&&(e=y(e,r[1])),r=h.allKeys(n)):(e=z,r=M(r,!1,!1),n=Object(n));for(var u=0,i=r.length;u<i;u++){var o=r[u],a=n[o];e(a,o,n)&&(t[o]=a)}return t}),h.omit=g(function(n,t){var r,e=t[0];return h.isFunction(e)?(e=h.negate(e),1<t.length&&(r=t[1])):(t=h.map(M(t,!1,!1),String),e=function(n,r){return!h.contains(t,r)}),h.pick(n,e,r)}),h.defaults=R(h.allKeys,!0),h.create=function(n,r){var t=m(n);return r&&h.extendOwn(t,r),t},h.clone=function(n){return h.isObject(n)?h.isArray(n)?n.slice():h.extend({},n):n},h.tap=function(n,r){return r(n),n},h.isMatch=function(n,r){var t=h.keys(r),e=t.length;if(null==n)return!e;for(var u=Object(n),i=0;i<e;i++){var o=t[i];if(r[o]!==u[o]||!(o in u))return!1}return!0},q=function(n,r,t,e){if(n===r)return 0!==n||1/n==1/r;if(null==n||null==r)return!1;if(n!=n)return r!=r;var u=typeof n;return("function"===u||"object"===u||"object"==typeof r)&&K(n,r,t,e)},K=function(n,r,t,e){n instanceof h&&(n=n._wrapped),r instanceof h&&(r=r._wrapped);var u=p.call(n);if(u!==p.call(r))return!1;switch(u){case"[object RegExp]":case"[object String]":return""+n==""+r;case"[object Number]":return+n!=+n?+r!=+r:0==+n?1/+n==1/r:+n==+r;case"[object Date]":case"[object Boolean]":return+n==+r;case"[object Symbol]":return s.valueOf.call(n)===s.valueOf.call(r)}var i="[object Array]"===u;if(!i){if("object"!=typeof n||"object"!=typeof r)return!1;var o=n.constructor,a=r.constructor;if(o!==a&&!(h.isFunction(o)&&o instanceof o&&h.isFunction(a)&&a instanceof a)&&"constructor"in n&&"constructor"in r)return!1}e=e||[];for(var c=(t=t||[]).length;c--;)if(t[c]===n)return e[c]===r;if(t.push(n),e.push(r),i){if((c=n.length)!==r.length)return!1;for(;c--;)if(!q(n[c],r[c],t,e))return!1}else{var l,f=h.keys(n);if(c=f.length,h.keys(r).length!==c)return!1;for(;c--;)if(l=f[c],!j(r,l)||!q(n[l],r[l],t,e))return!1}return t.pop(),e.pop(),!0},h.isEqual=function(n,r){return q(n,r)},h.isEmpty=function(n){return null==n||(w(n)&&(h.isArray(n)||h.isString(n)||h.isArguments(n))?0===n.length:0===h.keys(n).length)},h.isElement=function(n){return!(!n||1!==n.nodeType)},h.isArray=t||function(n){return"[object Array]"===p.call(n)},h.isObject=function(n){var r=typeof n;return"function"===r||"object"===r&&!!n},h.each(["Arguments","Function","String","Number","Date","RegExp","Error","Symbol","Map","WeakMap","Set","WeakSet"],function(r){h["is"+r]=function(n){return p.call(n)==="[object "+r+"]"}}),h.isArguments(arguments)||(h.isArguments=function(n){return j(n,"callee")});var 
D=n.document&&n.document.childNodes;"function"!=typeof/./&&"object"!=typeof Int8Array&&"function"!=typeof D&&(h.isFunction=function(n){return"function"==typeof n||!1}),h.isFinite=function(n){return!h.isSymbol(n)&&isFinite(n)&&!isNaN(parseFloat(n))},h.isNaN=function(n){return h.isNumber(n)&&isNaN(n)},h.isBoolean=function(n){return!0===n||!1===n||"[object Boolean]"===p.call(n)},h.isNull=function(n){return null===n},h.isUndefined=function(n){return void 0===n},h.has=function(n,r){if(!h.isArray(r))return j(n,r);for(var t=r.length,e=0;e<t;e++){var u=r[e];if(null==n||!i.call(n,u))return!1;n=n[u]}return!!t},h.noConflict=function(){return n._=r,this},h.identity=function(n){return n},h.constant=function(n){return function(){return n}},h.noop=function(){},h.property=function(r){return h.isArray(r)?function(n){return x(n,r)}:b(r)},h.propertyOf=function(r){return null==r?function(){}:function(n){return h.isArray(n)?x(r,n):r[n]}},h.matcher=h.matches=function(r){return r=h.extendOwn({},r),function(n){return h.isMatch(n,r)}},h.times=function(n,r,t){var e=Array(Math.max(0,n));r=y(r,t,1);for(var u=0;u<n;u++)e[u]=r(u);return e},h.random=function(n,r){return null==r&&(r=n,n=0),n+Math.floor(Math.random()*(r-n+1))},h.now=Date.now||function(){return(new Date).getTime()};var L={"&":"&","<":"<",">":">",'"':""","'":"'","`":"`"},P=h.invert(L),W=function(r){var t=function(n){return r[n]},n="(?:"+h.keys(r).join("|")+")",e=RegExp(n),u=RegExp(n,"g");return function(n){return n=null==n?"":""+n,e.test(n)?n.replace(u,t):n}};h.escape=W(L),h.unescape=W(P),h.result=function(n,r,t){h.isArray(r)||(r=[r]);var e=r.length;if(!e)return h.isFunction(t)?t.call(n):t;for(var u=0;u<e;u++){var i=null==n?void 0:n[r[u]];void 0===i&&(i=t,u=e),n=h.isFunction(i)?i.call(n):i}return n};var C=0;h.uniqueId=function(n){var r=++C+"";return n?n+r:r},h.templateSettings={evaluate:/<%([\s\S]+?)%>/g,interpolate:/<%=([\s\S]+?)%>/g,escape:/<%-([\s\S]+?)%>/g};var J=/(.)^/,U={"'":"'","\\":"\\","\r":"r","\n":"n","\u2028":"u2028","\u2029":"u2029"},V=/\\|'|\r|\n|\u2028|\u2029/g,$=function(n){return"\\"+U[n]};h.template=function(i,n,r){!n&&r&&(n=r),n=h.defaults({},n,h.templateSettings);var t,e=RegExp([(n.escape||J).source,(n.interpolate||J).source,(n.evaluate||J).source].join("|")+"|$","g"),o=0,a="__p+='";i.replace(e,function(n,r,t,e,u){return a+=i.slice(o,u).replace(V,$),o=u+n.length,r?a+="'+\n((__t=("+r+"))==null?'':_.escape(__t))+\n'":t?a+="'+\n((__t=("+t+"))==null?'':__t)+\n'":e&&(a+="';\n"+e+"\n__p+='"),n}),a+="';\n",n.variable||(a="with(obj||{}){\n"+a+"}\n"),a="var __t,__p='',__j=Array.prototype.join,"+"print=function(){__p+=__j.call(arguments,'');};\n"+a+"return __p;\n";try{t=new Function(n.variable||"obj","_",a)}catch(n){throw n.source=a,n}var u=function(n){return t.call(this,n,h)},c=n.variable||"obj";return u.source="function("+c+"){\n"+a+"}",u},h.chain=function(n){var r=h(n);return r._chain=!0,r};var G=function(n,r){return n._chain?h(r).chain():r};h.mixin=function(t){return h.each(h.functions(t),function(n){var r=h[n]=t[n];h.prototype[n]=function(){var n=[this._wrapped];return u.apply(n,arguments),G(this,r.apply(h,n))}}),h},h.mixin(h),h.each(["pop","push","reverse","shift","sort","splice","unshift"],function(r){var t=e[r];h.prototype[r]=function(){var n=this._wrapped;return t.apply(n,arguments),"shift"!==r&&"splice"!==r||0!==n.length||delete n[0],G(this,n)}}),h.each(["concat","join","slice"],function(n){var r=e[n];h.prototype[n]=function(){return G(this,r.apply(this._wrapped,arguments))}}),h.prototype.value=function(){return 
this._wrapped},h.prototype.valueOf=h.prototype.toJSON=h.prototype.value,h.prototype.toString=function(){return String(this._wrapped)},"function"==typeof define&&define.amd&&define("underscore",[],function(){return h})}();
|
PypiClean
|
/ensmallen_graph-0.6.0-cp37-cp37m-manylinux2010_x86_64.whl/ensmallen_graph/datasets/string/oscillatorianigroviridis.py
|
from typing import Dict
from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph
from ...ensmallen_graph import EnsmallenGraph # pylint: disable=import-error
def OscillatoriaNigroviridis(
directed: bool = False,
verbose: int = 2,
cache_path: str = "graphs/string",
**additional_graph_kwargs: Dict
) -> EnsmallenGraph:
"""Return new instance of the Oscillatoria nigroviridis graph.
The graph is automatically retrieved from the STRING repository.
Parameters
-------------------
directed: bool = False,
        Whether to load the graph as directed or undirected.
By default false.
verbose: int = 2,
        Whether to show loading bars during the retrieval and building
of the graph.
    cache_path: str = "graphs/string",
Where to store the downloaded graphs.
additional_graph_kwargs: Dict,
Additional graph kwargs.
Returns
-----------------------
    Instance of Oscillatoria nigroviridis graph.
Report
---------------------
At the time of rendering these methods (please see datetime below), the graph
had the following characteristics:
Datetime: 2021-02-02 19:56:14.487647
The undirected graph Oscillatoria nigroviridis has 5717 nodes and 776553
weighted edges, of which none are self-loops. The graph is dense as it
has a density of 0.04753 and has 19 connected components, where the component
with most nodes has 5679 nodes and the component with the least nodes has
2 nodes. The graph median node degree is 248, the mean node degree is 271.66,
and the node degree mode is 2. The top 5 most central nodes are 179408.Osc7112_4292
(degree 1806), 179408.Osc7112_3862 (degree 1794), 179408.Osc7112_4598 (degree
1689), 179408.Osc7112_4909 (degree 1687) and 179408.Osc7112_1306 (degree
1672).
References
---------------------
Please cite the following if you use the data:
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
Usage example
----------------------
The usage of this graph is relatively straightforward:
.. code:: python
# First import the function to retrieve the graph from the datasets
from ensmallen_graph.datasets.string import OscillatoriaNigroviridis
# Then load the graph
graph = OscillatoriaNigroviridis()
# Finally, you can do anything with it, for instance, compute its report:
print(graph)
# If you need to run a link prediction task with validation,
# you can split the graph using a connected holdout as follows:
train_graph, validation_graph = graph.connected_holdout(
        # You can use an 80/20 split for the holdout, for example.
train_size=0.8,
# The random state is used to reproduce the holdout.
random_state=42,
        # Whether to show a loading bar.
verbose=True
)
# Remember that, if you need, you can enable the memory-time trade-offs:
train_graph.enable(
vector_sources=True,
vector_destinations=True,
vector_outbounds=True
)
# Consider using the methods made available in the Embiggen package
# to run graph embedding or link prediction tasks.
"""
return AutomaticallyRetrievedGraph(
graph_name="OscillatoriaNigroviridis",
dataset="string",
directed=directed,
verbose=verbose,
cache_path=cache_path,
additional_graph_kwargs=additional_graph_kwargs
)()
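# A minimal usage sketch (not part of the original module): it only exercises the
# keyword arguments documented in the docstring above. Note that building the graph
# triggers a download from the STRING repository, so treat this as an illustration.
if __name__ == "__main__":
    graph = OscillatoriaNigroviridis(
        directed=False,              # load the graph as undirected (the default)
        verbose=2,                   # show loading bars while retrieving and building
        cache_path="graphs/string"   # where the downloaded data are cached
    )
    # Print the automatically generated textual report of the graph.
    print(graph)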
|
PypiClean
|
/NlvWxPython-4.2.0-cp37-cp37m-win_amd64.whl/wx/py/PyShell.py
|
"""PyShell is a python shell application."""
# The next two lines, and the other code below that makes use of
# ``__main__`` and ``original``, serve the purpose of cleaning up the
# main namespace to look as much as possible like the regular Python
# shell environment.
import __main__
original = list(__main__.__dict__)
__author__ = "Patrick K. O'Brien <[email protected]>"
import wx
import os
class App(wx.App):
"""PyShell standalone application."""
def OnInit(self):
import os
import wx
from wx import py
self.SetAppName("pyshell")
confDir = wx.StandardPaths.Get().GetUserDataDir()
if not os.path.exists(confDir):
os.mkdir(confDir)
fileName = os.path.join(confDir, 'config')
self.config = wx.FileConfig(localFilename=fileName)
self.config.SetRecordDefaults(True)
self.frame = py.shell.ShellFrame(config=self.config, dataDir=confDir)
self.frame.Show()
self.SetTopWindow(self.frame)
return True
'''
The main() function needs to handle being imported, such as with the
pyshell script that wxPython installs:
#!/usr/bin/env python
from wx.py.PyShell import main
main()
'''
def main():
"""The main function for the PyShell program."""
# Cleanup the main namespace, leaving the App class.
import __main__
md = __main__.__dict__
keepers = original
keepers.append('App')
for key in list(md):
if key not in keepers:
del md[key]
# Create an application instance.
app = App(0)
# Cleanup the main namespace some more.
if 'App' in md and md['App'] is App:
del md['App']
if '__main__' in md and md['__main__'] is __main__:
del md['__main__']
# Mimic the contents of the standard Python shell's sys.path.
import sys
if sys.path[0]:
sys.path[0] = ''
# Add the application object to the sys module's namespace.
# This allows a shell user to do:
# >>> import sys
# >>> sys.app.whatever
sys.app = app
del sys
# Start the wxPython event loop.
app.MainLoop()
if __name__ == '__main__':
main()
|
PypiClean
|
/nova-27.1.0.tar.gz/nova-27.1.0/doc/source/admin/common/nova-show-usage-statistics-for-hosts-instances.rst
|
=============================================
Show usage statistics for hosts and instances
=============================================
You can show basic statistics on resource usage for hosts and instances.
.. note::
For more sophisticated monitoring, see the
`Ceilometer <https://docs.openstack.org/ceilometer/latest/>`__ project. You can
also use tools, such as `Ganglia <http://ganglia.info/>`__ or
`Graphite <http://graphite.wikidot.com/>`__, to gather more detailed
data.
Show host usage statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~
The following examples show the host usage statistics for a host called
``devstack``.
* List the hosts and the nova-related services that run on them:
.. code-block:: console
$ openstack host list
+-----------+-------------+----------+
| Host Name | Service | Zone |
+-----------+-------------+----------+
| devstack | conductor | internal |
| devstack | compute | nova |
| devstack | network | internal |
| devstack | scheduler | internal |
+-----------+-------------+----------+
* Get a summary of resource usage of all of the instances running on the host:
.. code-block:: console
$ openstack host show devstack
+----------+----------------------------------+-----+-----------+---------+
| Host | Project | CPU | MEMORY MB | DISK GB |
+----------+----------------------------------+-----+-----------+---------+
| devstack | (total) | 2 | 4003 | 157 |
| devstack | (used_now) | 3 | 5120 | 40 |
| devstack | (used_max) | 3 | 4608 | 40 |
| devstack | b70d90d65e464582b6b2161cf3603ced | 1 | 512 | 0 |
| devstack | 66265572db174a7aa66eba661f58eb9e | 2 | 4096 | 40 |
+----------+----------------------------------+-----+-----------+---------+
The ``CPU`` column shows the sum of the virtual CPUs for instances running on
the host.
The ``MEMORY MB`` column shows the sum of the memory (in MB) allocated to the
instances that run on the host.
The ``DISK GB`` column shows the sum of the root and ephemeral disk sizes (in
GB) of the instances that run on the host.
The row that has the value ``used_now`` in the ``Project`` column shows the
sum of the resources allocated to the instances that run on the host, plus
the resources allocated to the host itself.
The row that has the value ``used_max`` in the ``Project`` column shows the
sum of the resources allocated to the instances that run on the host.
.. note::
These values are computed by using information about the flavors of the
instances that run on the hosts. This command does not query the CPU
usage, memory usage, or hard disk usage of the physical host.
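The arithmetic behind these two rows can be illustrated with a short sketch (the
numbers are copied from the example output above, and the 512 MB host allocation is
inferred from that output rather than queried from the API):

.. code-block:: python

   # Per-project allocations from the example "openstack host show" output:
   # (vCPUs, memory in MB, disk in GB)
   projects = [(1, 512, 0), (2, 4096, 40)]

   used_max = tuple(sum(column) for column in zip(*projects))
   print(used_max)   # (3, 4608, 40) -- matches the used_max row

   # used_now additionally counts the host's own allocation (512 MB here)
   host_overhead = (0, 512, 0)
   used_now = tuple(m + o for m, o in zip(used_max, host_overhead))
   print(used_now)   # (3, 5120, 40) -- matches the used_now row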
Show instance usage statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Get CPU, memory, I/O, and network statistics for an instance.
#. List instances:
.. code-block:: console
$ openstack server list
+----------+----------------------+--------+------------------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+----------+----------------------+--------+------------------+--------+----------+
| 84c6e... | myCirrosServer | ACTIVE | private=10.0.0.3 | cirros | m1.tiny |
| 8a995... | myInstanceFromVolume | ACTIVE | private=10.0.0.4 | ubuntu | m1.small |
+----------+----------------------+--------+------------------+--------+----------+
#. Get diagnostic statistics:
.. note::
As of microversion v2.48, diagnostics information for all virt drivers will
have a standard format as below. Before microversion 2.48, each hypervisor
      had its own format. For more details on the diagnostics response message, see
`server diagnostics api
<https://docs.openstack.org/api-ref/compute/#servers-diagnostics-servers-diagnostics>`__
documentation.
.. code-block:: console
$ nova diagnostics myCirrosServer
+----------------+------------------------------------------------------------------------+
| Property | Value |
+----------------+------------------------------------------------------------------------+
| config_drive | False |
| cpu_details | [] |
| disk_details | [{"read_requests": 887, "errors_count": -1, "read_bytes": 20273152, |
| | "write_requests": 89, "write_bytes": 303104}] |
| driver | libvirt |
| hypervisor | qemu |
| hypervisor_os | linux |
| memory_details | {"used": 0, "maximum": 0} |
| nic_details | [{"rx_packets": 9, "rx_drop": 0, "tx_octets": 1464, "tx_errors": 0, |
| | "mac_address": "fa:16:3e:fa:db:d3", "rx_octets": 958, "rx_rate": null, |
| | "rx_errors": 0, "tx_drop": 0, "tx_packets": 9, "tx_rate": null}] |
| num_cpus | 0 |
| num_disks | 1 |
| num_nics | 1 |
| state | running |
| uptime | 5528 |
+----------------+------------------------------------------------------------------------+
``config_drive`` indicates if the config drive is supported on the
instance.
``cpu_details`` contains a list of details per vCPU.
``disk_details`` contains a list of details per disk.
``driver`` indicates the current driver on which the VM is running.
``hypervisor`` indicates the current hypervisor on which the VM is running.
``nic_details`` contains a list of details per vNIC.
``uptime`` is the amount of time in seconds that the VM has been running.
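   As a small illustration (assuming the post-v2.48 diagnostics payload has already
   been fetched and decoded into a Python dictionary, for instance through an SDK or
   a raw API call), the standardized format makes it easy to aggregate the per-NIC
   and per-disk counters:

   .. code-block:: python

      # Values copied from the example diagnostics output above.
      diagnostics = {
          "nic_details": [{"rx_packets": 9, "tx_packets": 9, "rx_octets": 958,
                           "tx_octets": 1464, "rx_errors": 0, "tx_errors": 0}],
          "disk_details": [{"read_bytes": 20273152, "write_bytes": 303104}],
          "uptime": 5528,
      }

      total_rx_packets = sum(nic["rx_packets"] for nic in diagnostics["nic_details"])
      total_bytes_written = sum(d["write_bytes"] for d in diagnostics["disk_details"])
      print(total_rx_packets, total_bytes_written)  # 9 303104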
|
Diagnostics prior to v2.48:
.. code-block:: console
$ nova diagnostics myCirrosServer
+---------------------------+--------+
| Property | Value |
+---------------------------+--------+
| memory | 524288 |
| memory-actual | 524288 |
| memory-rss | 6444 |
| tap1fec8fb8-7a_rx | 22137 |
| tap1fec8fb8-7a_rx_drop | 0 |
| tap1fec8fb8-7a_rx_errors | 0 |
| tap1fec8fb8-7a_rx_packets | 166 |
| tap1fec8fb8-7a_tx | 18032 |
| tap1fec8fb8-7a_tx_drop | 0 |
| tap1fec8fb8-7a_tx_errors | 0 |
| tap1fec8fb8-7a_tx_packets | 130 |
| vda_errors | -1 |
| vda_read | 2048 |
| vda_read_req | 2 |
| vda_write | 182272 |
| vda_write_req | 74 |
+---------------------------+--------+
* Get summary statistics for each project:
.. code-block:: console
$ openstack usage list
Usage from 2013-06-25 to 2013-07-24:
+---------+---------+--------------+-----------+---------------+
| Project | Servers | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+---------+---------+--------------+-----------+---------------+
| demo | 1 | 344064.44 | 672.00 | 0.00 |
| stack | 3 | 671626.76 | 327.94 | 6558.86 |
+---------+---------+--------------+-----------+---------------+
|
PypiClean
|
/xespiano-0.0.5.tar.gz/xespiano-0.0.5/mkpiano/comm/SerialPort/__init__.py
|
import os
import threading
from time import sleep
import serial.tools.list_ports
import serial
import mkpiano
def connect(port,baudrate=115200):
"""
.. code-block:: python
:linenos:
from mkpiano import SerialPort
from mkpiano import MegaPi
uart = SerialPort.connect("COM3")
board = MegaPi.connect(uart)
"""
uart = SerialPort(port,baudrate)
return uart
create = connect
class SerialPort():
"""
"""
def __init__(self, port="/dev/ttyAMA0", baudrate=115200, timeout=1):
self.exiting = False
self._is_sending = True
self._responses = []
self._queue = []
self._ser = None
try:
self._ser = serial.Serial(port,baudrate)
self._ser.timeout = 0.01
sleep(1)
self._thread = threading.Thread(target=self._on_read,args=(self._callback,))
self._thread.daemon = True
self._thread.start()
self._exit_thread = threading.Thread(target=self._on_exiting,args=())
self._exit_thread.daemon = True
self._exit_thread.start()
mkpiano.add_port(self)
except Exception as ex:
print('$#&*@{"err_code":101,"err_msg":"串口被占用","device":"mkpiano","extra":"'+str(ex)+'"}@*&#$')
def setup(self,callback):
self._responses.append(callback)
@property
def type(self):
return "uart"
def _callback(self,received):
for method in self._responses:
method(received)
def _on_read(self,callback):
while True:
if self.exiting:
break
if self._is_sending:
self.__sending()
if self.is_open():
# if self.in_waiting()>0:
buf = self.read()
for i in range(len(buf)):
callback(buf[i])
sleep(0.001)
def send(self,buffer):
if self.is_open():
self._queue.append(buffer)
# sleep(0.002)
def __sending(self):
if len(self._queue)>0:
if self.is_open():
buf = self._queue[0]
try:
self._ser.write(buf)
except serial.serialutil.SerialException as e:
self.exiting = True
print("\033[1;33m连接失败,未检测到设备\033[0m")
print('$#&*@{"err_code":102,"err_msg":"发送数据失败,未检测到设备","device":"mkpiano","extra":{}}@*&#$')
return None
self._queue.pop(0)
def read(self):
try:
return self._ser.read(self.in_waiting())
except serial.serialutil.SerialException as e:
self.exiting = True
print("\033[1;33m连接失败,未检测到设备\033[0m")
print('$#&*@{"err_code":102,"err_msg":"读取数据失败,未检测到设备","device":"mkpiano","extra":{}}@*&#$')
return []
def enable_sending(self):
self._is_sending = True
def disable_sending(self):
self._is_sending = False
def _on_exiting(self):
while True:
if self.exiting:
self.exit()
break
sleep(0.001)
def is_open(self):
if not self._ser is None:
return self._ser.isOpen()
return False
def in_waiting(self):
return self._ser.inWaiting()
def close(self):
self._ser.close()
def exit(self):
if not self._thread is None:
self._is_sending = False
self.exiting = True
self._thread.join()
self._thread = None
self.close()
os._exit(0)
@staticmethod
def list():
"""
        Get the list of available serial ports
.. code-block:: python
:linenos:
from mkpiano import SerialPort
print(SerialPort.list())
        :param: none
        :return: a list of serial ports
"""
return serial.tools.list_ports.comports()
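# A minimal usage sketch (not part of the original module). The port name "COM3"
# and the payload byte below are hypothetical placeholders; a real device speaking
# the mkpiano protocol must be attached for this to do anything useful.
if __name__ == "__main__":
    def on_byte(received):
        # Called by the reader thread once per byte received from the device.
        print(received)
    uart = connect("COM3", baudrate=115200)
    if uart.is_open():
        uart.setup(on_byte)            # register the byte-received callback
        uart.send(bytearray([0xF0]))   # queue a placeholder frame for sending
        sleep(1)                       # give the reader thread time to deliver replies
        uart.exit()                    # stop the reader thread, close the port and exit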
|
PypiClean
|
/cdumay-http-client-0.0.15.tar.gz/cdumay-http-client-0.0.15/README.rst
|
.. image:: https://img.shields.io/pypi/v/cdumay-http-client.svg
:target: https://pypi.python.org/pypi/cdumay-http-client/
:alt: Latest Version
.. image:: https://travis-ci.org/cdumay/cdumay-http-client.svg?branch=master
:target: https://travis-ci.org/cdumay/cdumay-http-client
:alt: Latest version
.. image:: https://readthedocs.org/projects/cdumay-http-client/badge/?version=latest
:target: http://cdumay-http-client.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/badge/license-BSD3-blue.svg
:target: https://github.com/cdumay/cdumay-http-client/blob/master/LICENSE
cdumay-http-client
==================
This library is a basic HTTP client for non-REST APIs, with exception formatting.
Quickstart
----------
First, install cdumay-http-client using
`pip <https://pip.pypa.io/en/stable/>`_:
$ pip install cdumay-http-client
Next, add a `HttpClient` instance to your code:
.. code-block:: python
from cdumay_http_client.client import HttpClient
client = HttpClient(server="http://warp.myhost.com/api/v0")
print(client.do_request(method="POST", path="/exec", data=[...]))
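As a further illustration (the path below is a placeholder, and only the
``do_request`` signature shown above is assumed), a plain GET request follows the
same pattern:

.. code-block:: python

    from cdumay_http_client.client import HttpClient

    client = HttpClient(server="http://warp.myhost.com/api/v0")
    status = client.do_request(method="GET", path="/status")
    print(status)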
Exception
---------
You can use `marshmallow <https://marshmallow.readthedocs.io/en/latest>`_
to serialize exceptions:
.. code-block:: python
import json, sys
from cdumay_http_client.client import HttpClient
from cdumay_http_client.exceptions import HTTPException, HTTPExceptionValidator
try:
client = HttpClient(server="http://warp.myhost.com/api/v0")
data = client.do_request(method="GET", path="/me")
except HTTPException as exc:
data = HTTPExceptionValidator().dump(exc).data
json.dump(data, sys.stdout, sort_keys=True, indent=4, separators=(',', ': '))
Result:
.. code-block:: python
{
"code": 404,
"extra": {},
"message": "Not Found"
}
License
-------
Licensed under `BSD 3-Clause License <./LICENSE>`_ or https://opensource.org/licenses/BSD-3-Clause.
|
PypiClean
|
/python-docx-2023-0.2.17.tar.gz/python-docx-2023-0.2.17/docx/image/jpeg.py
|
from __future__ import absolute_import, division, print_function
from ..compat import BytesIO
from .constants import JPEG_MARKER_CODE, MIME_TYPE
from .helpers import BIG_ENDIAN, StreamReader
from .image import BaseImageHeader
from .tiff import Tiff
class Jpeg(BaseImageHeader):
"""
Base class for JFIF and EXIF subclasses.
"""
@property
def content_type(self):
"""
MIME content type for this image, unconditionally `image/jpeg` for
JPEG images.
"""
return MIME_TYPE.JPEG
@property
def default_ext(self):
"""
Default filename extension, always 'jpg' for JPG images.
"""
return 'jpg'
class Exif(Jpeg):
"""
Image header parser for Exif image format
"""
@classmethod
def from_stream(cls, stream):
"""
Return |Exif| instance having header properties parsed from Exif
image in *stream*.
"""
markers = _JfifMarkers.from_stream(stream)
# print('\n%s' % markers)
px_width = markers.sof.px_width
px_height = markers.sof.px_height
horz_dpi = markers.app1.horz_dpi
vert_dpi = markers.app1.vert_dpi
return cls(px_width, px_height, horz_dpi, vert_dpi)
class Jfif(Jpeg):
"""
Image header parser for JFIF image format
"""
@classmethod
def from_stream(cls, stream):
"""
Return a |Jfif| instance having header properties parsed from image
in *stream*.
"""
markers = _JfifMarkers.from_stream(stream)
px_width = markers.sof.px_width
px_height = markers.sof.px_height
horz_dpi = markers.app0.horz_dpi
vert_dpi = markers.app0.vert_dpi
return cls(px_width, px_height, horz_dpi, vert_dpi)
class _JfifMarkers(object):
"""
Sequence of markers in a JPEG file, perhaps truncated at first SOS marker
for performance reasons.
"""
def __init__(self, markers):
super(_JfifMarkers, self).__init__()
self._markers = list(markers)
def __str__(self): # pragma: no cover
"""
Returns a tabular listing of the markers in this instance, which can
be handy for debugging and perhaps other uses.
"""
header = ' offset seglen mc name\n======= ====== == ====='
tmpl = '%7d %6d %02X %s'
rows = []
for marker in self._markers:
rows.append(tmpl % (
marker.offset, marker.segment_length,
ord(marker.marker_code), marker.name
))
lines = [header] + rows
return '\n'.join(lines)
@classmethod
def from_stream(cls, stream):
"""
Return a |_JfifMarkers| instance containing a |_JfifMarker| subclass
instance for each marker in *stream*.
"""
marker_parser = _MarkerParser.from_stream(stream)
markers = []
for marker in marker_parser.iter_markers():
markers.append(marker)
if marker.marker_code == JPEG_MARKER_CODE.SOS:
break
return cls(markers)
@property
def app0(self):
"""
First APP0 marker in image markers.
"""
for m in self._markers:
if m.marker_code == JPEG_MARKER_CODE.APP0:
return m
raise KeyError('no APP0 marker in image')
@property
def app1(self):
"""
First APP1 marker in image markers.
"""
for m in self._markers:
if m.marker_code == JPEG_MARKER_CODE.APP1:
return m
raise KeyError('no APP1 marker in image')
@property
def sof(self):
"""
First start of frame (SOFn) marker in this sequence.
"""
for m in self._markers:
if m.marker_code in JPEG_MARKER_CODE.SOF_MARKER_CODES:
return m
raise KeyError('no start of frame (SOFn) marker in image')
class _MarkerParser(object):
"""
Service class that knows how to parse a JFIF stream and iterate over its
markers.
"""
def __init__(self, stream_reader):
super(_MarkerParser, self).__init__()
self._stream = stream_reader
@classmethod
def from_stream(cls, stream):
"""
Return a |_MarkerParser| instance to parse JFIF markers from
*stream*.
"""
stream_reader = StreamReader(stream, BIG_ENDIAN)
return cls(stream_reader)
def iter_markers(self):
"""
Generate a (marker_code, segment_offset) 2-tuple for each marker in
the JPEG *stream*, in the order they occur in the stream.
"""
marker_finder = _MarkerFinder.from_stream(self._stream)
start = 0
marker_code = None
while marker_code != JPEG_MARKER_CODE.EOI:
marker_code, segment_offset = marker_finder.next(start)
marker = _MarkerFactory(
marker_code, self._stream, segment_offset
)
yield marker
start = segment_offset + marker.segment_length
class _MarkerFinder(object):
"""
Service class that knows how to find the next JFIF marker in a stream.
"""
def __init__(self, stream):
super(_MarkerFinder, self).__init__()
self._stream = stream
@classmethod
def from_stream(cls, stream):
"""
Return a |_MarkerFinder| instance to find JFIF markers in *stream*.
"""
return cls(stream)
def next(self, start):
"""
Return a (marker_code, segment_offset) 2-tuple identifying and
        locating the first marker in *stream* occurring after offset *start*.
The returned *segment_offset* points to the position immediately
following the 2-byte marker code, the start of the marker segment,
for those markers that have a segment.
"""
position = start
while True:
# skip over any non-\xFF bytes
position = self._offset_of_next_ff_byte(start=position)
# skip over any \xFF padding bytes
position, byte_ = self._next_non_ff_byte(start=position+1)
# 'FF 00' sequence is not a marker, start over if found
if byte_ == b'\x00':
continue
# this is a marker, gather return values and break out of scan
marker_code, segment_offset = byte_, position+1
break
return marker_code, segment_offset
def _next_non_ff_byte(self, start):
"""
Return an offset, byte 2-tuple for the next byte in *stream* that is
not '\xFF', starting with the byte at offset *start*. If the byte at
offset *start* is not '\xFF', *start* and the returned *offset* will
be the same.
"""
self._stream.seek(start)
byte_ = self._read_byte()
while byte_ == b'\xFF':
byte_ = self._read_byte()
offset_of_non_ff_byte = self._stream.tell() - 1
return offset_of_non_ff_byte, byte_
def _offset_of_next_ff_byte(self, start):
"""
Return the offset of the next '\xFF' byte in *stream* starting with
the byte at offset *start*. Returns *start* if the byte at that
offset is a hex 255; it does not necessarily advance in the stream.
"""
self._stream.seek(start)
byte_ = self._read_byte()
while byte_ != b'\xFF':
byte_ = self._read_byte()
offset_of_ff_byte = self._stream.tell() - 1
return offset_of_ff_byte
def _read_byte(self):
"""
Return the next byte read from stream. Raise Exception if stream is
at end of file.
"""
byte_ = self._stream.read(1)
if not byte_: # pragma: no cover
raise Exception('unexpected end of file')
return byte_
def _MarkerFactory(marker_code, stream, offset):
"""
Return |_Marker| or subclass instance appropriate for marker at *offset*
in *stream* having *marker_code*.
"""
if marker_code == JPEG_MARKER_CODE.APP0:
marker_cls = _App0Marker
elif marker_code == JPEG_MARKER_CODE.APP1:
marker_cls = _App1Marker
elif marker_code in JPEG_MARKER_CODE.SOF_MARKER_CODES:
marker_cls = _SofMarker
else:
marker_cls = _Marker
return marker_cls.from_stream(stream, marker_code, offset)
class _Marker(object):
"""
Base class for JFIF marker classes. Represents a marker and its segment
    occurring in a JPEG byte stream.
"""
def __init__(self, marker_code, offset, segment_length):
super(_Marker, self).__init__()
self._marker_code = marker_code
self._offset = offset
self._segment_length = segment_length
@classmethod
def from_stream(cls, stream, marker_code, offset):
"""
Return a generic |_Marker| instance for the marker at *offset* in
*stream* having *marker_code*.
"""
if JPEG_MARKER_CODE.is_standalone(marker_code):
segment_length = 0
else:
segment_length = stream.read_short(offset)
return cls(marker_code, offset, segment_length)
@property
def marker_code(self):
"""
The single-byte code that identifies the type of this marker, e.g.
        ``'\xD8'`` for start of image (SOI).
"""
return self._marker_code
@property
def name(self): # pragma: no cover
return JPEG_MARKER_CODE.marker_names[self._marker_code]
@property
def offset(self): # pragma: no cover
return self._offset
@property
def segment_length(self):
"""
The length in bytes of this marker's segment
"""
return self._segment_length
class _App0Marker(_Marker):
"""
Represents a JFIF APP0 marker segment.
"""
def __init__(
self, marker_code, offset, length, density_units, x_density,
y_density):
super(_App0Marker, self).__init__(marker_code, offset, length)
self._density_units = density_units
self._x_density = x_density
self._y_density = y_density
@property
def horz_dpi(self):
"""
Horizontal dots per inch specified in this marker, defaults to 72 if
not specified.
"""
return self._dpi(self._x_density)
@property
def vert_dpi(self):
"""
Vertical dots per inch specified in this marker, defaults to 72 if
not specified.
"""
return self._dpi(self._y_density)
def _dpi(self, density):
"""
Return dots per inch corresponding to *density* value.
"""
if self._density_units == 1:
dpi = density
elif self._density_units == 2:
dpi = int(round(density * 2.54))
else:
dpi = 72
return dpi
@classmethod
def from_stream(cls, stream, marker_code, offset):
"""
Return an |_App0Marker| instance for the APP0 marker at *offset* in
*stream*.
"""
# field off type notes
# ------------------ --- ----- -------------------
# segment length 0 short
# JFIF identifier 2 5 chr 'JFIF\x00'
# major JPEG version 7 byte typically 1
# minor JPEG version 8 byte typically 1 or 2
# density units 9 byte 1=inches, 2=cm
# horz dots per unit 10 short
# vert dots per unit 12 short
# ------------------ --- ----- -------------------
segment_length = stream.read_short(offset)
density_units = stream.read_byte(offset, 9)
x_density = stream.read_short(offset, 10)
y_density = stream.read_short(offset, 12)
return cls(
marker_code, offset, segment_length, density_units, x_density,
y_density
)
class _App1Marker(_Marker):
"""
Represents a JFIF APP1 (Exif) marker segment.
"""
def __init__(self, marker_code, offset, length, horz_dpi, vert_dpi):
super(_App1Marker, self).__init__(marker_code, offset, length)
self._horz_dpi = horz_dpi
self._vert_dpi = vert_dpi
@classmethod
def from_stream(cls, stream, marker_code, offset):
"""
Extract the horizontal and vertical dots-per-inch value from the APP1
header at *offset* in *stream*.
"""
# field off len type notes
# -------------------- --- --- ----- ----------------------------
# segment length 0 2 short
# Exif identifier 2 6 6 chr 'Exif\x00\x00'
# TIFF byte order 8 2 2 chr 'II'=little 'MM'=big endian
# meaning of universe 10 2 2 chr '*\x00' or '\x00*' depending
# IFD0 off fr/II or MM 10 16 long relative to ...?
# -------------------- --- --- ----- ----------------------------
segment_length = stream.read_short(offset)
if cls._is_non_Exif_APP1_segment(stream, offset):
return cls(marker_code, offset, segment_length, 72, 72)
tiff = cls._tiff_from_exif_segment(stream, offset, segment_length)
return cls(
marker_code, offset, segment_length, tiff.horz_dpi, tiff.vert_dpi
)
@property
def horz_dpi(self):
"""
Horizontal dots per inch specified in this marker, defaults to 72 if
not specified.
"""
return self._horz_dpi
@property
def vert_dpi(self):
"""
Vertical dots per inch specified in this marker, defaults to 72 if
not specified.
"""
return self._vert_dpi
@classmethod
def _is_non_Exif_APP1_segment(cls, stream, offset):
"""
Return True if the APP1 segment at *offset* in *stream* is NOT an
Exif segment, as determined by the ``'Exif\x00\x00'`` signature at
offset 2 in the segment.
"""
stream.seek(offset+2)
exif_signature = stream.read(6)
return exif_signature != b'Exif\x00\x00'
@classmethod
def _tiff_from_exif_segment(cls, stream, offset, segment_length):
"""
Return a |Tiff| instance parsed from the Exif APP1 segment of
*segment_length* at *offset* in *stream*.
"""
# wrap full segment in its own stream and feed to Tiff()
stream.seek(offset+8)
segment_bytes = stream.read(segment_length-8)
substream = BytesIO(segment_bytes)
return Tiff.from_stream(substream)
class _SofMarker(_Marker):
"""
Represents a JFIF start of frame (SOFx) marker segment.
"""
def __init__(
self, marker_code, offset, segment_length, px_width, px_height):
super(_SofMarker, self).__init__(marker_code, offset, segment_length)
self._px_width = px_width
self._px_height = px_height
@classmethod
def from_stream(cls, stream, marker_code, offset):
"""
Return an |_SofMarker| instance for the SOFn marker at *offset* in
stream.
"""
# field off type notes
# ------------------ --- ----- ----------------------------
# segment length 0 short
# Data precision 2 byte
# Vertical lines 3 short px_height
# Horizontal lines 5 short px_width
# ------------------ --- ----- ----------------------------
segment_length = stream.read_short(offset)
px_height = stream.read_short(offset, 3)
px_width = stream.read_short(offset, 5)
return cls(marker_code, offset, segment_length, px_width, px_height)
@property
def px_height(self):
"""
Image height in pixels
"""
return self._px_height
@property
def px_width(self):
"""
Image width in pixels
"""
return self._px_width
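# A minimal usage sketch (not part of the original module). "sample.jpg" is a
# hypothetical baseline JFIF file; px_width, px_height, horz_dpi and vert_dpi are
# the header properties populated through the parsers defined above.
if __name__ == "__main__":
    with open("sample.jpg", "rb") as stream:
        header = Jfif.from_stream(stream)
    print(header.px_width, header.px_height, header.horz_dpi, header.vert_dpi)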
|
PypiClean
|
/qub-sherlock-2.2.4.tar.gz/qub-sherlock-2.2.4/sherlock/transient_classifier.py
|
from __future__ import print_function
from __future__ import division
from astrocalc.coords import unit_conversion
import copy
from fundamentals.mysql import insert_list_of_dictionaries_into_database_tables
from fundamentals import fmultiprocess
import psutil
from sherlock.commonutils import get_crossmatch_catalogues_column_map
from sherlock.imports import ned
from HMpTy.htm import sets
from HMpTy.mysql import conesearch
from fundamentals.renderer import list_of_dictionaries
from fundamentals.mysql import readquery, directory_script_runner, writequery
from fundamentals import tools
import numpy as np
from operator import itemgetter
from datetime import datetime, date, time, timedelta
from builtins import zip
from builtins import str
from builtins import range
from builtins import object
from past.utils import old_div
import sys
import os
import collections
import codecs
import re
import math
import time
import inspect
import yaml
from random import randint
os.environ['TERM'] = 'vt100'
theseBatches = []
crossmatchArray = []
class transient_classifier(object):
"""
*The Sherlock Transient Classifier*
**Key Arguments**
- ``log`` -- logger
- ``settings`` -- the settings dictionary
- ``update`` -- update the transient database with crossmatch results (boolean)
- ``ra`` -- right ascension of a single transient source. Default *False*
- ``dec`` -- declination of a single transient source. Default *False*
- ``name`` -- the ID of a single transient source. Default *False*
- ``verbose`` -- amount of details to print about crossmatches to stdout. 0|1|2 Default *0*
    - ``updateNed`` -- update the local NED database before running the classifier. Classification will not be as accurate if the NED database is not up-to-date. Default *True*.
- ``daemonMode`` -- run sherlock in daemon mode. In daemon mode sherlock remains live and classifies sources as they come into the database. Default *True*
    - ``updatePeakMags`` -- update peak magnitudes in the human-readable annotations of objects (can take some time - best to run occasionally)
- ``lite`` -- return only a lite version of the results with the topped ranked matches only. Default *False*
    - ``oneRun`` -- only process one batch of transients, useful for unit testing. Default *False*
**Usage**
To setup your logger, settings and database connections, please use the ``fundamentals`` package (`see tutorial here <http://fundamentals.readthedocs.io/en/latest/#tutorial>`_).
To initiate a transient_classifier object, use the following:
.. todo::
- update the package tutorial if needed
    The sherlock classifier can be run in one of two ways. The first is to pass in the coordinates of an object you wish to classify:
```python
from sherlock import transient_classifier
classifier = transient_classifier(
log=log,
settings=settings,
ra="08:57:57.19",
dec="+43:25:44.1",
name="PS17gx",
verbose=0
)
classifications, crossmatches = classifier.classify()
```
    The crossmatches returned are a list of dictionaries giving details of the crossmatched sources. The classifications returned are a list of classifications resulting from these crossmatches. The lists are ordered from most to least likely classification and the indices for the crossmatch and the classification lists are synced.
The second way to run the classifier is to not pass in a coordinate set and therefore cause sherlock to run the classifier on the transient database referenced in the sherlock settings file:
```python
from sherlock import transient_classifier
classifier = transient_classifier(
log=log,
settings=settings,
update=True
)
classifier.classify()
```
Here the transient list is selected out of the database using the ``transient query`` value in the settings file:
```yaml
database settings:
transients:
user: myusername
password: mypassword
db: nice_transients
host: 127.0.0.1
transient table: transientBucket
transient query: "select primaryKeyId as 'id', transientBucketId as 'alt_id', raDeg 'ra', decDeg 'dec', name 'name', sherlockClassification as 'object_classification'
            from transientBucket where object_classification is null"
transient primary id column: primaryKeyId
transient classification column: sherlockClassification
tunnel: False
```
    By setting ``update=True`` the classifier will update the ``sherlockClassification`` column of the ``transient table`` with the new classification and populate the ``sherlock_crossmatches`` table with key details of the crossmatched sources from the catalogues database. By setting ``update=False``, results are printed to stdout but the database is not updated (useful for dry runs and testing new algorithms).
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
        - regenerate the docs and check rendering of this docstring
"""
# INITIALISATION
def __init__(
self,
log,
settings=False,
update=False,
ra=False,
dec=False,
name=False,
verbose=0,
updateNed=True,
daemonMode=False,
updatePeakMags=True,
oneRun=False,
lite=False
):
self.log = log
log.debug("instansiating a new 'classifier' object")
self.settings = settings
self.update = update
self.ra = ra
self.dec = dec
self.name = name
self.cl = False
self.verbose = verbose
self.updateNed = updateNed
self.daemonMode = daemonMode
self.updatePeakMags = updatePeakMags
self.oneRun = oneRun
self.lite = lite
self.filterPreference = [
"R", "_r", "G", "V", "_g", "B", "I", "_i", "_z", "J", "H", "K", "U", "_u", "_y", "W1", "unkMag"
]
# COLLECT ADVANCED SETTINGS IF AVAILABLE
parentDirectory = os.path.dirname(__file__)
advs = parentDirectory + "/advanced_settings.yaml"
level = 0
exists = False
count = 1
while not exists and len(advs) and count < 10:
count += 1
level -= 1
exists = os.path.exists(advs)
if not exists:
advs = "/".join(parentDirectory.split("/")
[:level]) + "/advanced_settings.yaml"
print(advs)
if not exists:
advs = {}
else:
with open(advs, 'r') as stream:
advs = yaml.safe_load(stream)
# MERGE ADVANCED SETTINGS AND USER SETTINGS (USER SETTINGS OVERRIDE)
self.settings = {**advs, **self.settings}
# INITIAL ACTIONS
# SETUP DATABASE CONNECTIONS
# SETUP ALL DATABASE CONNECTIONS
from sherlock import database
db = database(
log=self.log,
settings=self.settings
)
dbConns, dbVersions = db.connect()
self.dbVersions = dbVersions
self.transientsDbConn = dbConns["transients"]
self.cataloguesDbConn = dbConns["catalogues"]
# SIZE OF BATCHES TO SPLIT TRANSIENT INTO BEFORE CLASSIFYING
self.largeBatchSize = self.settings["database-batch-size"]
self.miniBatchSize = 1000
# LITE VERSION CANNOT BE RUN ON A DATABASE QUERY AS YET
if self.ra == False:
self.lite = False
# IS SHERLOCK CLASSIFIER BEING QUERIED FROM THE COMMAND-LINE?
if self.ra and self.dec:
self.cl = True
if not self.name:
self.name = "Transient"
# ASTROCALC UNIT CONVERTER OBJECT
self.converter = unit_conversion(
log=self.log
)
if self.ra and not isinstance(self.ra, float) and ":" in self.ra:
# ASTROCALC UNIT CONVERTER OBJECT
self.ra = self.converter.ra_sexegesimal_to_decimal(
ra=self.ra
)
self.dec = self.converter.dec_sexegesimal_to_decimal(
dec=self.dec
)
        # DATETIME REGEX - EXPENSIVE OPERATION, LET'S JUST DO IT ONCE
self.reDatetime = re.compile('^[0-9]{4}-[0-9]{2}-[0-9]{2}T')
return None
def classify(self):
"""
*classify the transients selected from the transient selection query in the settings file or passed in via the CL or other code*
**Return**
- ``crossmatches`` -- list of dictionaries of crossmatched associated sources
- ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)
See class docstring for usage.
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
        - regenerate the docs and check rendering of this docstring
"""
global theseBatches
global crossmatchArray
self.log.debug('starting the ``classify`` method')
remaining = 1
# THE COLUMN MAPS - WHICH COLUMNS IN THE CATALOGUE TABLES = RA, DEC,
# REDSHIFT, MAG ETC
colMaps = get_crossmatch_catalogues_column_map(
log=self.log,
dbConn=self.cataloguesDbConn
)
if self.transientsDbConn and self.update:
self._create_tables_if_not_exist()
import time
start_time = time.time()
# COUNT SEARCHES
sa = self.settings["search algorithm"]
searchCount = 0
brightnessFilters = ["bright", "faint", "general"]
for search_name, searchPara in list(sa.items()):
for bf in brightnessFilters:
if bf in searchPara:
searchCount += 1
cpuCount = psutil.cpu_count()
if searchCount > cpuCount:
searchCount = cpuCount
miniBatchSize = self.miniBatchSize
while remaining:
# IF A TRANSIENT HAS NOT BEEN PASSED IN VIA THE COMMAND-LINE, THEN
# QUERY THE TRANSIENT DATABASE
if not self.ra and not self.dec:
# COUNT REMAINING TRANSIENTS
from fundamentals.mysql import readquery
sqlQuery = self.settings["database settings"][
"transients"]["transient count"]
thisInt = randint(0, 100)
if "where" in sqlQuery:
sqlQuery = sqlQuery.replace(
"where", "where %(thisInt)s=%(thisInt)s and " % locals())
if remaining == 1 or remaining < self.largeBatchSize:
rows = readquery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
remaining = rows[0]["count(*)"]
else:
remaining = remaining - self.largeBatchSize
print(
"%(remaining)s transient sources requiring a classification remain" % locals())
                # START THE TIMER TO TRACK CLASSIFICATION SPEED
start_time = time.time()
# A LIST OF DICTIONARIES OF TRANSIENT METADATA
transientsMetadataList = self._get_transient_metadata_from_database_list()
count = len(transientsMetadataList)
print(
" now classifying the next %(count)s transient sources" % locals())
# EXAMPLE OF TRANSIENT METADATA
# { 'name': 'PS17gx',
# 'alt_id': 'PS17gx',
# 'object_classification': 'SN',
# 'dec': '+43:25:44.1',
# 'id': 1,
# 'ra': '08:57:57.19'}
# TRANSIENT PASSED VIA COMMAND-LINE
else:
# CONVERT SINGLE TRANSIENTS TO LIST
if not isinstance(self.ra, list):
self.ra = [self.ra]
self.dec = [self.dec]
self.name = [self.name]
# GIVEN TRANSIENTS UNIQUE NAMES IF NOT PROVIDED
if not self.name[0]:
self.name = []
for i, v in enumerate(self.ra):
self.name.append("transient_%(i)05d" % locals())
transientsMetadataList = []
for r, d, n in zip(self.ra, self.dec, self.name):
transient = {
'name': n,
'object_classification': None,
'dec': d,
'id': n,
'ra': r
}
transientsMetadataList.append(transient)
remaining = 0
if self.oneRun:
remaining = 0
if len(transientsMetadataList) == 0:
if self.daemonMode == False:
remaining = 0
print("No transients need classified")
return None, None
else:
                    print(
                        "No remaining transients need to be classified, will try again in 5 mins")
                    time.sleep(300)
# FROM THE LOCATIONS OF THE TRANSIENTS, CHECK IF OUR LOCAL NED DATABASE
# NEEDS UPDATED
if self.updateNed:
self._update_ned_stream(
transientsMetadataList=transientsMetadataList
)
# SOME TESTING SHOWED THAT 25 IS GOOD
total = len(transientsMetadataList)
batches = int((old_div(float(total), float(miniBatchSize))) + 1.)
if batches == 0:
batches = 1
start = 0
end = 0
theseBatches = []
for i in range(batches):
end = end + miniBatchSize
start = i * miniBatchSize
thisBatch = transientsMetadataList[start:end]
theseBatches.append(thisBatch)
if self.verbose > 1:
print("BATCH SIZE = %(total)s" % locals())
print("MINI BATCH SIZE = %(batches)s x %(miniBatchSize)s" % locals())
poolSize = self.settings["cpu-pool-size"]
if poolSize and batches < poolSize:
poolSize = batches
start_time2 = time.time()
if self.verbose > 1:
print("START CROSSMATCH")
crossmatchArray = fmultiprocess(log=self.log, function=_crossmatch_transients_against_catalogues,
inputArray=list(range(len(theseBatches))), poolSize=poolSize, settings=self.settings, colMaps=colMaps)
if self.verbose > 1:
print("FINISH CROSSMATCH/START RANKING: %d" %
(time.time() - start_time2,))
start_time2 = time.time()
classifications = {}
crossmatches = []
for sublist in crossmatchArray:
sublist = sorted(
sublist, key=itemgetter('transient_object_id'))
# REORGANISE INTO INDIVIDUAL TRANSIENTS FOR RANKING AND
# TOP-LEVEL CLASSIFICATION EXTRACTION
batch = []
if len(sublist) != 0:
transientId = sublist[0]['transient_object_id']
for s in sublist:
if s['transient_object_id'] != transientId:
# RANK TRANSIENT CROSSMATCH BATCH
cl, cr = self._rank_classifications(
batch, colMaps)
crossmatches.extend(cr)
classifications = dict(
list(classifications.items()) + list(cl.items()))
transientId = s['transient_object_id']
batch = [s]
else:
batch.append(s)
# RANK FINAL BATCH
cl, cr = self._rank_classifications(
batch, colMaps)
classifications = dict(
list(classifications.items()) + list(cl.items()))
crossmatches.extend(cr)
for t in transientsMetadataList:
if t["id"] not in classifications:
classifications[t["id"]] = ["ORPHAN"]
# UPDATE THE TRANSIENT DATABASE IF UPDATE REQUESTED (ADD DATA TO
# tcs_crossmatch_table AND A CLASSIFICATION TO THE ORIGINAL TRANSIENT
# TABLE)
if self.verbose > 1:
print("FINISH RANKING/START UPDATING TRANSIENT DB: %d" %
(time.time() - start_time2,))
start_time2 = time.time()
if self.update and not self.ra:
self._update_transient_database(
crossmatches=crossmatches,
classifications=classifications,
transientsMetadataList=transientsMetadataList,
colMaps=colMaps
)
if self.verbose > 1:
print("FINISH UPDATING TRANSIENT DB/START ANNOTATING TRANSIENT DB: %d" %
(time.time() - start_time2,))
start_time2 = time.time()
# COMMAND-LINE SINGLE CLASSIFICATION
if self.ra:
classifications = self.update_classification_annotations_and_summaries(
False, True, crossmatches, classifications)
for k, v in classifications.items():
if len(v) == 1 and v[0] == "ORPHAN":
v.append(
"No contexual information is available for this transient")
if self.lite != False:
crossmatches = self._lighten_return(crossmatches)
if self.cl:
self._print_results_to_stdout(
classifications=classifications,
crossmatches=crossmatches
)
return classifications, crossmatches
if self.updatePeakMags and self.settings["database settings"]["transients"]["transient peak magnitude query"]:
self.update_peak_magnitudes()
# BULK RUN -- NOT A COMMAND-LINE SINGLE CLASSIFICATION
self.update_classification_annotations_and_summaries(
self.updatePeakMags)
print("FINISH ANNOTATING TRANSIENT DB: %d" %
(time.time() - start_time2,))
start_time2 = time.time()
classificationRate = old_div(count, (time.time() - start_time))
print(
"Sherlock is classify at a rate of %(classificationRate)2.1f transients/sec" % locals())
self.log.debug('completed the ``classify`` method')
return None, None
def _get_transient_metadata_from_database_list(
self):
"""use the transient query in the settings file to generate a list of transients to corssmatch and classify
**Return**
- ``transientsMetadataList``
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
        - regenerate the docs and check rendering of this docstring
"""
self.log.debug(
'starting the ``_get_transient_metadata_from_database_list`` method')
sqlQuery = self.settings["database settings"][
"transients"]["transient query"] + " limit " + str(self.largeBatchSize)
thisInt = randint(0, 100)
if "where" in sqlQuery:
sqlQuery = sqlQuery.replace(
"where", "where %(thisInt)s=%(thisInt)s and " % locals())
transientsMetadataList = readquery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
quiet=False
)
self.log.debug(
'completed the ``_get_transient_metadata_from_database_list`` method')
return transientsMetadataList
def _update_ned_stream(
self,
transientsMetadataList
):
""" update the NED stream within the catalogues database at the locations of the transients
**Key Arguments**
- ``transientsMetadataList`` -- the list of transient metadata lifted from the database.
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
        - regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_update_ned_stream`` method')
coordinateList = []
for i in transientsMetadataList:
# thisList = str(i["ra"]) + " " + str(i["dec"])
thisList = (i["ra"], i["dec"])
coordinateList.append(thisList)
coordinateList = self._remove_previous_ned_queries(
coordinateList=coordinateList
)
# MINIMISE COORDINATES IN LIST TO REDUCE NUMBER OF REQUIRED NED QUERIES
coordinateList = self._consolidate_coordinateList(
coordinateList=coordinateList
)
stream = ned(
log=self.log,
settings=self.settings,
coordinateList=coordinateList,
radiusArcsec=self.settings["ned stream search radius arcec"]
)
stream.ingest()
sqlQuery = """SET session sql_mode = "";""" % locals(
)
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.cataloguesDbConn
)
sqlQuery = """update tcs_cat_ned_stream set magnitude = CAST(`magnitude_filter` AS DECIMAL(5,2)) where magnitude is null and magnitude_filter is not null;""" % locals(
)
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.cataloguesDbConn
)
self.log.debug('completed the ``_update_ned_stream`` method')
return None
def _remove_previous_ned_queries(
self,
coordinateList):
"""iterate through the transient locations to see if we have recent local NED coverage of that area already
**Key Arguments**
- ``coordinateList`` -- the set of coordinates to check against previous queries
**Return**
- ``updatedCoordinateList`` -- coordinate list with previous queries removed
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_remove_previous_ned_queries`` method')
# 1 DEGREE QUERY RADIUS
radius = 60. * 60.
updatedCoordinateList = []
keepers = []
# CALCULATE THE OLDEST RESULTS LIMIT
now = datetime.now()
td = timedelta(
days=self.settings["ned stream refresh rate in days"])
refreshLimit = now - td
refreshLimit = refreshLimit.strftime("%Y-%m-%d %H:%M:%S")
raList = []
raList[:] = [c[0] for c in coordinateList]
decList = []
decList[:] = [c[1] for c in coordinateList]
# MATCH COORDINATES AGAINST PREVIOUS NED SEARCHES
cs = conesearch(
log=self.log,
dbConn=self.cataloguesDbConn,
tableName="tcs_helper_ned_query_history",
columns="*",
ra=raList,
dec=decList,
radiusArcsec=radius,
separations=True,
distinct=True,
sqlWhere="dateQueried > '%(refreshLimit)s'" % locals(),
closest=False
)
matchIndies, matches = cs.search()
# DETERMINE WHICH COORDINATES REQUIRE A NED QUERY
curatedMatchIndices = []
curatedMatches = []
for i, m in zip(matchIndies, matches.list):
match = False
row = m
row["separationArcsec"] = row["cmSepArcsec"]
raStream = row["raDeg"]
decStream = row["decDeg"]
radiusStream = row["arcsecRadius"]
dateStream = row["dateQueried"]
angularSeparation = row["separationArcsec"]
if angularSeparation + self.settings["first pass ned search radius arcec"] < radiusStream:
curatedMatchIndices.append(i)
curatedMatches.append(m)
# NON MATCHES
for i, v in enumerate(coordinateList):
if i not in curatedMatchIndices:
updatedCoordinateList.append(v)
self.log.debug('completed the ``_remove_previous_ned_queries`` method')
return updatedCoordinateList
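# Editor's note: an illustrative sketch (not from the original source) of the
# freshness cut-off used by ``_remove_previous_ned_queries`` above -- NED
# queries older than the configured refresh rate are considered stale.
def _example_ned_refresh_limit(refreshRateInDays):
    """Illustrative only: return the oldest acceptable NED-query date as a string."""
    from datetime import datetime, timedelta
    limit = datetime.now() - timedelta(days=refreshRateInDays)
    return limit.strftime("%Y-%m-%d %H:%M:%S")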
def _update_transient_database(
self,
crossmatches,
classifications,
transientsMetadataList,
colMaps):
""" update transient database with classifications and crossmatch results
**Key Arguments**
- ``crossmatches`` -- the crossmatches and associations resulting from the catalogue crossmatches
- ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)
- ``transientsMetadataList`` -- the list of transient metadata lifted from the database.
- ``colMaps`` -- maps of the important column names for each table/view in the crossmatch-catalogues database
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_update_transient_database`` method')
import time
start_time = time.time()
print("UPDATING TRANSIENTS DATABASE WITH RESULTS")
print("DELETING OLD RESULTS")
now = datetime.now()
now = now.strftime("%Y-%m-%d_%H-%M-%S-%f")
transientTable = self.settings["database settings"][
"transients"]["transient table"]
transientTableClassCol = self.settings["database settings"][
"transients"]["transient classification column"]
transientTableIdCol = self.settings["database settings"][
"transients"]["transient primary id column"]
# COMBINE ALL CROSSMATCHES INTO A LIST OF DICTIONARIES TO DUMP INTO
# DATABASE TABLE
transientIDs = [str(c)
for c in list(classifications.keys())]
transientIDs = ",".join(transientIDs)
# REMOVE PREVIOUS MATCHES
sqlQuery = """delete from sherlock_crossmatches where transient_object_id in (%(transientIDs)s);""" % locals(
)
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
sqlQuery = """delete from sherlock_classifications where transient_object_id in (%(transientIDs)s);""" % locals(
)
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
print("FINISHED DELETING OLD RESULTS/ADDING TO CROSSMATCHES: %d" %
(time.time() - start_time,))
start_time = time.time()
if len(crossmatches):
insert_list_of_dictionaries_into_database_tables(
dbConn=self.transientsDbConn,
log=self.log,
dictList=crossmatches,
dbTableName="sherlock_crossmatches",
dateModified=True,
batchSize=10000,
replace=True,
dbSettings=self.settings["database settings"][
"transients"]
)
print("FINISHED ADDING TO CROSSMATCHES/UPDATING CLASSIFICATIONS IN TRANSIENT TABLE: %d" %
(time.time() - start_time,))
start_time = time.time()
sqlQuery = ""
inserts = []
for k, v in list(classifications.items()):
thisInsert = {
"transient_object_id": k,
"classification": v[0]
}
inserts.append(thisInsert)
print("FINISHED UPDATING CLASSIFICATIONS IN TRANSIENT TABLE/UPDATING sherlock_classifications TABLE: %d" %
(time.time() - start_time,))
start_time = time.time()
insert_list_of_dictionaries_into_database_tables(
dbConn=self.transientsDbConn,
log=self.log,
dictList=inserts,
dbTableName="sherlock_classifications",
dateModified=True,
batchSize=10000,
replace=True,
dbSettings=self.settings["database settings"][
"transients"]
)
print("FINISHED UPDATING sherlock_classifications TABLE: %d" %
(time.time() - start_time,))
start_time = time.time()
self.log.debug('completed the ``_update_transient_database`` method')
return None
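# Editor's note: a small sketch (added for illustration) of how the ranked
# classification dictionary is flattened into rows for the
# ``sherlock_classifications`` table in ``_update_transient_database`` above --
# only the top-ranked classification of each transient is written.
def _example_classification_rows(classifications):
    """Illustrative only: convert {transient_id: [ranked classifications]} into insert dictionaries."""
    return [
        {"transient_object_id": transientId, "classification": ranked[0]}
        for transientId, ranked in classifications.items()
    ]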
def _rank_classifications(
self,
crossmatchArray,
colMaps):
"""*rank the classifications returned from the catalogue crossmatcher, annotate the results with a classification rank-number (most likely = 1) and a rank-score (weight of classification)*
**Key Arguments**
- ``crossmatchArray`` -- the list of unranked crossmatch classifications
- ``colMaps`` -- dictionary of dictionaries with the name of the database-view (e.g. `tcs_view_agn_milliquas_v4_5`) as the key and the column-name dictionary map as value (`{view_name: {columnMap}}`).
**Return**
- ``classifications`` -- the classifications assigned to the transients post-crossmatches
- ``crossmatches`` -- the crossmatches annotated with rankings and rank-scores
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_rank_classifications`` method')
crossmatches = crossmatchArray
# GROUP CROSSMATCHES INTO DISTINCT SOURCES (DUPLICATE ENTRIES OF THE
# SAME ASTROPHYSICAL SOURCE ACROSS MULTIPLE CATALOGUES)
ra, dec = list(zip(*[(r["raDeg"], r["decDeg"]) for r in crossmatches]))
from HMpTy.htm import sets
xmatcher = sets(
log=self.log,
ra=ra,
dec=dec,
radius=1. / (60. * 60.), # in degrees
sourceList=crossmatches
)
groupedMatches = xmatcher.match
associatationTypeOrder = ["AGN", "CV", "NT", "SN", "VS", "BS"]
# ADD DISTINCT-SOURCE KEY
dupKey = 0
distinctMatches = []
for x in groupedMatches:
dupKey += 1
mergedMatch = copy.deepcopy(x[0])
mergedMatch["merged_rank"] = int(dupKey)
# ADD OTHER ESSENTIAL KEYS
for e in ['z', 'photoZ', 'photoZErr']:
if e not in mergedMatch:
mergedMatch[e] = None
bestQualityCatalogue = colMaps[mergedMatch[
"catalogue_view_name"]]["object_type_accuracy"]
bestDirectDistance = {
"direct_distance": mergedMatch["direct_distance"],
"direct_distance_modulus": mergedMatch["direct_distance_modulus"],
"direct_distance_scale": mergedMatch["direct_distance_scale"],
"qual": colMaps[mergedMatch["catalogue_view_name"]]["object_type_accuracy"]
}
if not mergedMatch["direct_distance"]:
bestDirectDistance["qual"] = 0
bestSpecz = {
"z": mergedMatch["z"],
"distance": mergedMatch["distance"],
"distance_modulus": mergedMatch["distance_modulus"],
"scale": mergedMatch["scale"],
"qual": colMaps[mergedMatch["catalogue_view_name"]]["object_type_accuracy"]
}
if not mergedMatch["distance"]:
bestSpecz["qual"] = 0
bestPhotoz = {
"photoZ": mergedMatch["photoZ"],
"photoZErr": mergedMatch["photoZErr"],
"qual": colMaps[mergedMatch["catalogue_view_name"]]["object_type_accuracy"]
}
if not mergedMatch["photoZ"]:
bestPhotoz["qual"] = 0
# ORDER THESE FIRST IN NAME LISTING
mergedMatch["search_name"] = None
mergedMatch["catalogue_object_id"] = None
primeCats = ["NED", "SDSS", "MILLIQUAS"]
for cat in primeCats:
for i, m in enumerate(x):
# MERGE SEARCH NAMES
snippet = m["search_name"].split(" ")[0].upper()
if cat.upper() in snippet:
if not mergedMatch["search_name"]:
mergedMatch["search_name"] = m["search_name"].split(" ")[
0].upper()
elif "/" not in mergedMatch["search_name"] and snippet not in mergedMatch["search_name"].upper():
mergedMatch["search_name"] = mergedMatch["search_name"].split(
" ")[0].upper() + "/" + m["search_name"].split(" ")[0].upper()
elif snippet not in mergedMatch["search_name"].upper():
mergedMatch[
"search_name"] += "/" + m["search_name"].split(" ")[0].upper()
elif "/" not in mergedMatch["search_name"]:
mergedMatch["search_name"] = mergedMatch["search_name"].split(
" ")[0].upper()
mergedMatch["catalogue_table_name"] = mergedMatch[
"search_name"]
# MERGE CATALOGUE SOURCE NAMES
if not mergedMatch["catalogue_object_id"]:
mergedMatch["catalogue_object_id"] = str(
m["catalogue_object_id"])
# NOW ADD THE REST
for i, m in enumerate(x):
# MERGE SEARCH NAMES
snippet = m["search_name"].split(" ")[0].upper()
if snippet not in primeCats:
if not mergedMatch["search_name"]:
mergedMatch["search_name"] = m["search_name"].split(" ")[
0].upper()
elif "/" not in mergedMatch["search_name"] and snippet not in mergedMatch["search_name"].upper():
mergedMatch["search_name"] = mergedMatch["search_name"].split(
" ")[0].upper() + "/" + m["search_name"].split(" ")[0].upper()
elif snippet not in mergedMatch["search_name"].upper():
mergedMatch[
"search_name"] += "/" + m["search_name"].split(" ")[0].upper()
elif "/" not in mergedMatch["search_name"]:
mergedMatch["search_name"] = mergedMatch["search_name"].split(
" ")[0].upper()
mergedMatch["catalogue_table_name"] = mergedMatch[
"search_name"]
# MERGE CATALOGUE SOURCE NAMES
if not mergedMatch["catalogue_object_id"]:
mergedMatch["catalogue_object_id"] = str(
m["catalogue_object_id"])
# else:
# mergedMatch["catalogue_object_id"] = str(
# mergedMatch["catalogue_object_id"])
# m["catalogue_object_id"] = str(
# m["catalogue_object_id"])
# if m["catalogue_object_id"].replace(" ", "").lower() not in mergedMatch["catalogue_object_id"].replace(" ", "").lower():
# mergedMatch["catalogue_object_id"] += "/" + \
# m["catalogue_object_id"]
for i, m in enumerate(x):
m["merged_rank"] = int(dupKey)
if i > 0:
# MERGE ALL BEST MAGNITUDE MEASUREMENTS
for f in self.filterPreference:
if f in m and m[f] and (f not in mergedMatch or (f + "Err" in mergedMatch and f + "Err" in m and (mergedMatch[f + "Err"] == None or (m[f + "Err"] and mergedMatch[f + "Err"] > m[f + "Err"])))):
mergedMatch[f] = m[f]
try:
mergedMatch[f + "Err"] = m[f + "Err"]
except:
pass
mergedMatch["original_search_radius_arcsec"] = "multiple"
mergedMatch["catalogue_object_subtype"] = "multiple"
mergedMatch["catalogue_view_name"] = "multiple"
# DETERMINE BEST CLASSIFICATION
if mergedMatch["classificationReliability"] == 3 and m["classificationReliability"] < 3:
mergedMatch["association_type"] = m["association_type"]
mergedMatch["catalogue_object_type"] = m[
"catalogue_object_type"]
mergedMatch["classificationReliability"] = m[
"classificationReliability"]
if m["classificationReliability"] != 3 and colMaps[m["catalogue_view_name"]]["object_type_accuracy"] > bestQualityCatalogue:
bestQualityCatalogue = colMaps[
m["catalogue_view_name"]]["object_type_accuracy"]
mergedMatch["association_type"] = m["association_type"]
mergedMatch["catalogue_object_type"] = m[
"catalogue_object_type"]
mergedMatch["classificationReliability"] = m[
"classificationReliability"]
if m["classificationReliability"] != 3 and colMaps[m["catalogue_view_name"]]["object_type_accuracy"] == bestQualityCatalogue and m["association_type"] in associatationTypeOrder and (mergedMatch["association_type"] not in associatationTypeOrder or associatationTypeOrder.index(m["association_type"]) < associatationTypeOrder.index(mergedMatch["association_type"])):
mergedMatch["association_type"] = m["association_type"]
mergedMatch["catalogue_object_type"] = m[
"catalogue_object_type"]
mergedMatch["classificationReliability"] = m[
"classificationReliability"]
# FIND BEST DISTANCES
if "direct_distance" in m and m["direct_distance"] and colMaps[m["catalogue_view_name"]]["object_type_accuracy"] > bestDirectDistance["qual"]:
bestDirectDistance = {
"direct_distance": m["direct_distance"],
"direct_distance_modulus": m["direct_distance_modulus"],
"direct_distance_scale": m["direct_distance_scale"],
"catalogue_object_type": m["catalogue_object_type"],
"qual": colMaps[m["catalogue_view_name"]]["object_type_accuracy"]
}
# FIND BEST SPEC-Z
if "z" in m and m["z"] and colMaps[m["catalogue_view_name"]]["object_type_accuracy"] > bestSpecz["qual"]:
bestSpecz = {
"z": m["z"],
"distance": m["distance"],
"distance_modulus": m["distance_modulus"],
"scale": m["scale"],
"catalogue_object_type": m["catalogue_object_type"],
"qual": colMaps[m["catalogue_view_name"]]["object_type_accuracy"]
}
# FIND BEST PHOT-Z
if "photoZ" in m and m["photoZ"] and colMaps[m["catalogue_view_name"]]["object_type_accuracy"] > bestPhotoz["qual"]:
bestPhotoz = {
"photoZ": m["photoZ"],
"photoZErr": m["photoZErr"],
"distance": m["distance"],
"distance_modulus": m["distance_modulus"],
"scale": m["scale"],
"catalogue_object_type": m["catalogue_object_type"],
"qual": colMaps[m["catalogue_view_name"]]["object_type_accuracy"]
}
# CLOSEST ANGULAR SEP & COORDINATES
if m["separationArcsec"] < mergedMatch["separationArcsec"]:
mergedMatch["separationArcsec"] = m["separationArcsec"]
mergedMatch["raDeg"] = m["raDeg"]
mergedMatch["decDeg"] = m["decDeg"]
# MERGE THE BEST RESULTS
for l in [bestPhotoz, bestSpecz, bestDirectDistance]:
for k, v in list(l.items()):
if k != "qual" and v:
mergedMatch[k] = v
mergedMatch["catalogue_object_id"] = str(mergedMatch[
"catalogue_object_id"]).replace(" ", "")
# RECALCULATE PHYSICAL DISTANCE SEPARATION
if mergedMatch["direct_distance_scale"]:
mergedMatch["physical_separation_kpc"] = mergedMatch[
"direct_distance_scale"] * mergedMatch["separationArcsec"]
elif mergedMatch["scale"]:
mergedMatch["physical_separation_kpc"] = mergedMatch[
"scale"] * mergedMatch["separationArcsec"]
if "/" in mergedMatch["search_name"]:
mergedMatch["search_name"] = "multiple"
distinctMatches.append(mergedMatch)
crossmatches = []
for xm, gm in zip(distinctMatches, groupedMatches):
# SPEC-Z GALAXIES
if (xm["physical_separation_kpc"] is not None and xm["physical_separation_kpc"] != "null" and xm["physical_separation_kpc"] < 20. and (("z" in xm and xm["z"] is not None) or "photoZ" not in xm or xm["photoZ"] is None or xm["photoZ"] < 0.)):
rankScore = xm["classificationReliability"] * 1000 + 2. - \
(50 - old_div(xm["physical_separation_kpc"], 20))
# PHOTO-Z GALAXIES
elif (xm["physical_separation_kpc"] is not None and xm["physical_separation_kpc"] != "null" and xm["physical_separation_kpc"] < 20. and xm["association_type"] == "SN"):
rankScore = xm["classificationReliability"] * 1000 + 5 - \
(50 - old_div(xm["physical_separation_kpc"], 20))
# NOT SPEC-Z, NON PHOTO-Z GALAXIES & PHOTO-Z GALAXIES
elif (xm["association_type"] == "SN"):
rankScore = xm["classificationReliability"] * 1000 + 5.
# VS
elif (xm["association_type"] == "VS"):
rankScore = xm["classificationReliability"] * \
1000 + xm["separationArcsec"] + 2.
# BS
elif (xm["association_type"] == "BS"):
rankScore = xm["classificationReliability"] * \
1000 + xm["separationArcsec"]
else:
rankScore = xm["classificationReliability"] * \
1000 + xm["separationArcsec"] + 10.
xm["rankScore"] = rankScore
crossmatches.append(xm)
if len(gm) > 1:
for g in gm:
g["rankScore"] = rankScore
crossmatches = sorted(
crossmatches, key=itemgetter('rankScore'), reverse=False)
crossmatches = sorted(
crossmatches, key=itemgetter('transient_object_id'))
transient_object_id = None
uniqueIndexCheck = []
classifications = {}
crossmatchesKeep = []
rank = 0
transClass = []
for xm in crossmatches:
rank += 1
if rank == 1:
transClass.append(xm["association_type"])
classifications[xm["transient_object_id"]] = transClass
if rank == 1 or self.lite == False:
xm["rank"] = rank
crossmatchesKeep.append(xm)
crossmatches = crossmatchesKeep
crossmatchesKeep = []
if self.lite == False:
for xm in crossmatches:
group = groupedMatches[xm["merged_rank"] - 1]
xm["merged_rank"] = None
crossmatchesKeep.append(xm)
if len(group) > 1:
groupKeep = []
uniqueIndexCheck = []
for g in group:
g["merged_rank"] = xm["rank"]
g["rankScore"] = xm["rankScore"]
index = "%(catalogue_table_name)s%(catalogue_object_id)s" % g
# IF WE HAVE HIT A NEW SOURCE
if index not in uniqueIndexCheck:
uniqueIndexCheck.append(index)
crossmatchesKeep.append(g)
crossmatches = crossmatchesKeep
self.log.debug('completed the ``_rank_classifications`` method')
return classifications, crossmatches
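# Editor's note: an illustrative sketch (not part of the original module) of
# the final ordering applied in ``_rank_classifications`` above -- matches are
# sorted by ascending rankScore, then stably grouped by transient id so the
# best association of each transient comes first.
def _example_order_crossmatches(crossmatches):
    """Illustrative only: order crossmatch dictionaries as the ranking method does."""
    from operator import itemgetter
    ordered = sorted(crossmatches, key=itemgetter('rankScore'))
    return sorted(ordered, key=itemgetter('transient_object_id'))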
def _print_results_to_stdout(
self,
classifications,
crossmatches):
"""*print the classification and crossmatch results for a single transient object to stdout*
**Key Arguments**
- ``crossmatches`` -- the unranked crossmatch classifications
- ``classifications`` -- the classifications assigned to the transients post-crossmatches (dictionary of rank ordered list of classifications)
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_print_results_to_stdout`` method')
if self.verbose == 0:
return
crossmatchesCopy = copy.deepcopy(crossmatches)
# REPORT ONLY THE MOST PREFERRED MAGNITUDE VALUE
basic = ["association_type", "rank", "rankScore", "catalogue_table_name", "catalogue_object_id", "catalogue_object_type", "catalogue_object_subtype",
"raDeg", "decDeg", "separationArcsec", "physical_separation_kpc", "direct_distance", "distance", "z", "photoZ", "photoZErr", "Mag", "MagFilter", "MagErr", "classificationReliability", "merged_rank"]
verbose = ["search_name", "catalogue_view_name", "original_search_radius_arcsec", "direct_distance_modulus", "distance_modulus", "direct_distance_scale", "major_axis_arcsec", "scale", "U", "UErr",
"B", "BErr", "V", "VErr", "R", "RErr", "I", "IErr", "J", "JErr", "H", "HErr", "K", "KErr", "_u", "_uErr", "_g", "_gErr", "_r", "_rErr", "_i", "_iErr", "_z", "_zErr", "_y", "G", "GErr", "_yErr", "unkMag"]
dontFormat = ["decDeg", "raDeg", "rank",
"catalogue_object_id", "catalogue_object_subtype", "merged_rank"]
if self.verbose == 2:
basic = basic + verbose
for n in self.name:
if n in classifications:
headline = "\n" + n + "'s Predicted Classification: " + \
classifications[n][0]
else:
headline = n + "'s Predicted Classification: ORPHAN"
print(headline)
print("Suggested Associations:")
myCrossmatches = []
myCrossmatches[:] = [c for c in crossmatchesCopy if c[
"transient_object_id"] == n]
for c in myCrossmatches:
for f in self.filterPreference:
if f in c and c[f]:
c["Mag"] = c[f]
c["MagFilter"] = f.replace("_", "").replace("Mag", "")
if f + "Err" in c:
c["MagErr"] = c[f + "Err"]
else:
c["MagErr"] = None
break
allKeys = []
for c in myCrossmatches:
for k, v in list(c.items()):
if k not in allKeys:
allKeys.append(k)
for c in myCrossmatches:
for k in allKeys:
if k not in c:
c[k] = None
printCrossmatches = []
for c in myCrossmatches:
ordDict = collections.OrderedDict()
for k in basic:
if k in c:
if k == "catalogue_table_name":
c[k] = c[k].replace(
"tcs_cat_", "").replace("_", " ")
if k == "classificationReliability":
if c[k] == 1:
c["classification reliability"] = "synonym"
elif c[k] == 2:
c["classification reliability"] = "association"
elif c[k] == 3:
c["classification reliability"] = "annotation"
k = "classification reliability"
if k == "catalogue_object_subtype" and "sdss" in c["catalogue_table_name"]:
if c[k] == 6:
c[k] = "galaxy"
elif c[k] == 3:
c[k] = "star"
columnName = k.replace(
"tcs_cat_", "").replace("_", " ")
value = c[k]
if k not in dontFormat:
try:
ordDict[columnName] = "%(value)0.2f" % locals()
except:
ordDict[columnName] = value
else:
ordDict[columnName] = value
printCrossmatches.append(ordDict)
outputFormat = None
# outputFormat = "csv"
from fundamentals.renderer import list_of_dictionaries
dataSet = list_of_dictionaries(
log=self.log,
listOfDictionaries=printCrossmatches
)
if outputFormat == "csv":
tableData = dataSet.csv(filepath=None)
else:
tableData = dataSet.table(filepath=None)
print(tableData)
self.log.debug('completed the ``_print_results_to_stdout`` method')
return None
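# Editor's note: an illustrative sketch (added by the editor) of the
# magnitude-selection pattern used in ``_print_results_to_stdout`` above and in
# ``_lighten_return`` below -- the first populated magnitude in the filter
# preference list is reported as ``Mag``/``MagFilter``/``MagErr``.
def _example_preferred_magnitude(row, filterPreference):
    """Illustrative only: return (mag, filter, magErr) for the first preferred filter present."""
    for f in filterPreference:
        if f in row and row[f]:
            cleanName = f.replace("_", "").replace("Mag", "")
            return row[f], cleanName, row.get(f + "Err")
    return None, None, None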
def _lighten_return(
self,
crossmatches):
"""*lighten the classification and crossmatch results for smaller database footprint*
**Key Arguments**
- ``crossmatches`` -- the full crossmatch results to be lightened (list of crossmatch dictionaries)
"""
self.log.debug('starting the ``_lighten_return`` method')
# REPORT ONLY THE MOST PREFERRED MAGNITUDE VALUE
basic = ["transient_object_id", "association_type", "catalogue_table_name", "catalogue_object_id", "catalogue_object_type",
"raDeg", "decDeg", "separationArcsec", "northSeparationArcsec", "eastSeparationArcsec", "physical_separation_kpc", "direct_distance", "distance", "z", "photoZ", "photoZErr", "Mag", "MagFilter", "MagErr", "classificationReliability", "major_axis_arcsec"]
verbose = ["search_name", "catalogue_view_name", "original_search_radius_arcsec", "direct_distance_modulus", "distance_modulus", "direct_distance_scale", "scale", "U", "UErr",
"B", "BErr", "V", "VErr", "R", "RErr", "I", "IErr", "J", "JErr", "H", "HErr", "K", "KErr", "_u", "_uErr", "_g", "_gErr", "_r", "_rErr", "_i", "_iErr", "_z", "_zErr", "_y", "G", "GErr", "_yErr", "unkMag"]
dontFormat = ["decDeg", "raDeg", "rank",
"catalogue_object_id", "catalogue_object_subtype", "merged_rank", "classificationReliability"]
if self.verbose == 2:
basic = basic + verbose
for c in crossmatches:
for f in self.filterPreference:
if f in c and c[f]:
c["Mag"] = c[f]
c["MagFilter"] = f.replace("_", "").replace("Mag", "")
if f + "Err" in c:
c["MagErr"] = c[f + "Err"]
else:
c["MagErr"] = None
break
allKeys = []
for c in crossmatches:
for k, v in list(c.items()):
if k not in allKeys:
allKeys.append(k)
for c in crossmatches:
for k in allKeys:
if k not in c:
c[k] = None
liteCrossmatches = []
for c in crossmatches:
ordDict = collections.OrderedDict()
for k in basic:
if k in c:
if k == "catalogue_table_name":
c[k] = c[k].replace(
"tcs_cat_", "").replace("_", " ")
if k == "catalogue_object_subtype" and "sdss" in c["catalogue_table_name"]:
if c[k] == 6:
c[k] = "galaxy"
elif c[k] == 3:
c[k] = "star"
columnName = k.replace(
"tcs_cat_", "")
value = c[k]
if k not in dontFormat:
try:
ordDict[columnName] = float(f'{value:0.2f}')
except:
ordDict[columnName] = value
else:
ordDict[columnName] = value
liteCrossmatches.append(ordDict)
self.log.debug('completed the ``_lighten_return`` method')
return liteCrossmatches
def _consolidate_coordinateList(
self,
coordinateList):
"""*match the coordinate list against itself with the parameters of the NED search queries to minimise duplicated NED queries*
**Key Arguments**
- ``coordinateList`` -- the original coordinateList.
**Return**
- ``updatedCoordinateList`` -- the coordinate list with duplicated search areas removed
**Usage**
.. todo::
- add usage info
- create a sublime snippet for usage
- update package tutorial if needed
```python
usage code
```
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_consolidate_coordinateList`` method')
raList = []
raList[:] = np.array([c[0] for c in coordinateList])
decList = []
decList[:] = np.array([c[1] for c in coordinateList])
nedStreamRadius = old_div(self.settings[
"ned stream search radius arcec"], (60. * 60.))
firstPassNedSearchRadius = old_div(self.settings[
"first pass ned search radius arcec"], (60. * 60.))
radius = nedStreamRadius - firstPassNedSearchRadius
# LET'S BE CONSERVATIVE
# radius = radius * 0.9
xmatcher = sets(
log=self.log,
ra=raList,
dec=decList,
radius=radius, # in degrees
sourceList=coordinateList,
convertToArray=False
)
allMatches = xmatcher.match
updatedCoordinateList = []
for aSet in allMatches:
updatedCoordinateList.append(aSet[0])
self.log.debug('completed the ``_consolidate_coordinateList`` method')
return updatedCoordinateList
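# Editor's note: a minimal sketch (not from the original source) of the
# self-match radius used by ``_consolidate_coordinateList`` above -- transients
# closer together than the NED stream radius minus the first-pass search radius
# can share a single NED query.
def _example_consolidation_radius(settings):
    """Illustrative only: the coordinate-consolidation radius in degrees."""
    nedStreamRadius = settings["ned stream search radius arcec"] / 3600.
    firstPassRadius = settings["first pass ned search radius arcec"] / 3600.
    return nedStreamRadius - firstPassRadius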
def classification_annotations(
self):
"""*add a detialed classification annotation to each classification in the sherlock_classifications table*
**Key Arguments**
# -
**Return**
- None
**Usage**
.. todo::
- add usage info
- create a sublime snippet for usage
- write a command-line tool for this method
- update package tutorial with command-line tool info if needed
```python
usage code
```
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``classification_annotations`` method')
from fundamentals.mysql import readquery
sqlQuery = u"""
select * from sherlock_classifications cl, sherlock_crossmatches xm where cl.transient_object_id=xm.transient_object_id and cl.annotation is null
""" % locals()
topXMs = readquery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn
)
for xm in topXMs:
annotation = []
classType = xm["classificationReliability"]
if classType == 1:
annotation.append("is synonymous with")
elif classType in [2, 3]:
annotation.append("is possibly associated with")
self.log.debug('completed the ``classification_annotations`` method')
return None
def update_classification_annotations_and_summaries(
self,
updatePeakMagnitudes=True,
cl=False,
crossmatches=False,
classifications=False):
"""*update classification annotations and summaries*
**Key Arguments**
- ``updatePeakMagnitudes`` -- update the peak magnitudes in the annotations to give absolute magnitudes. Default *True*
- ``cl`` -- reporting only to the command-line, do not update database. Default *False*
- ``crossmatches`` -- crossmatches will be passed for the single classifications to report annotations from command-line
- ``classifications`` -- classifications will be passed for the single classifications to have annotation appended to the dictionary for stand-alone non-database scripts
**Return**
- None (or the updated ``classifications`` dictionary when reporting a single classification to the command-line)
**Usage**
.. todo::
- add usage info
- create a sublime snippet for usage
- write a command-line tool for this method
- update package tutorial with command-line tool info if needed
```python
usage code
```
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug(
'starting the ``update_classification_annotations_and_summaries`` method')
# import time
# start_time = time.time()
# print "COLLECTING TRANSIENTS WITH NO ANNOTATIONS"
# BULK RUN
if crossmatches == False:
if updatePeakMagnitudes:
sqlQuery = u"""
SELECT * from sherlock_crossmatches cm, sherlock_classifications cl where rank =1 and cl.transient_object_id= cm.transient_object_id and ((cl.classification not in ("AGN","CV","BS","VS") AND cm.dateLastModified > DATE_SUB(NOW(), INTERVAL 1 Day)) or cl.annotation is null)
-- SELECT * from sherlock_crossmatches cm, sherlock_classifications cl where rank =1 and cl.transient_object_id= cm.transient_object_id and (cl.annotation is null or cl.dateLastModified is null or cl.dateLastModified > DATE_SUB(NOW(), INTERVAL 30 DAY)) order by cl.dateLastModified asc limit 100000
""" % locals()
else:
sqlQuery = u"""
SELECT * from sherlock_crossmatches cm, sherlock_classifications cl where rank =1 and cl.transient_object_id=cm.transient_object_id and cl.summary is null
""" % locals()
rows = readquery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn
)
# COMMAND-LINE SINGLE CLASSIFICATION
else:
rows = crossmatches
# print "FINISHED COLLECTING TRANSIENTS WITH NO ANNOTATIONS/GENERATING ANNOTATIONS: %d" % (time.time() - start_time,)
# start_time = time.time()
updates = []
for row in rows:
annotation, summary, sep = self.generate_match_annotation(
match=row, updatePeakMagnitudes=updatePeakMagnitudes)
if cl and "rank" in row and row["rank"] == 1:
if classifications != False:
classifications[
row["transient_object_id"]].append(annotation)
if self.verbose != 0:
print("\n" + annotation)
update = {
"transient_object_id": row["transient_object_id"],
"annotation": annotation,
"summary": summary,
"separationArcsec": sep
}
updates.append(update)
if cl:
return classifications
# print "FINISHED GENERATING ANNOTATIONS/ADDING ANNOTATIONS TO TRANSIENT DATABASE: %d" % (time.time() - start_time,)
# start_time = time.time()
insert_list_of_dictionaries_into_database_tables(
dbConn=self.transientsDbConn,
log=self.log,
dictList=updates,
dbTableName="sherlock_classifications",
dateModified=True,
batchSize=10000,
replace=True,
dbSettings=self.settings["database settings"]["transients"]
)
# print "FINISHED ADDING ANNOTATIONS TO TRANSIENT DATABASE/UPDATING ORPHAN ANNOTATIONS: %d" % (time.time() - start_time,)
# start_time = time.time()
sqlQuery = """update sherlock_classifications set annotation = "The transient location is not matched against any known catalogued source", summary = "No catalogued match" where classification = 'ORPHAN' and summary is null """ % locals()
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
# print "FINISHED UPDATING ORPHAN ANNOTATIONS: %d" % (time.time() - start_time,)
# start_time = time.time()
self.log.debug(
'completed the ``update_classification_annotations_and_summaries`` method')
return None
# use the tab-trigger below for new method
def update_peak_magnitudes(
self):
"""*update peak magnitudes*
**Key Arguments**
# -
**Return**
- None
**Usage**
.. todo::
- add usage info
- create a sublime snippet for usage
- write a command-line tool for this method
- update package tutorial with command-line tool info if needed
```python
usage code
```
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``update_peak_magnitudes`` method')
sqlQuery = self.settings["database settings"][
"transients"]["transient peak magnitude query"]
sqlQuery = """UPDATE sherlock_crossmatches s,
(%(sqlQuery)s) t
SET
s.transientAbsMag = ROUND(t.mag - IFNULL(direct_distance_modulus,
distance_modulus),
2)
WHERE
IFNULL(direct_distance_modulus,
distance_modulus) IS NOT NULL
AND (s.association_type not in ("AGN","CV","BS","VS")
or s.transientAbsMag is null)
AND t.id = s.transient_object_id
AND (s.dateLastModified > DATE_SUB(NOW(), INTERVAL 1 DAY));""" % locals()
writequery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
self.log.debug('completed the ``update_peak_magnitudes`` method')
return None
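# Editor's note: an illustrative sketch (added by the editor) of the arithmetic
# applied by the UPDATE statement in ``update_peak_magnitudes`` above:
# the transient absolute magnitude is M = m - mu, where mu is the direct
# distance modulus when available, otherwise the redshift-derived one.
def _example_absolute_magnitude(apparentMag, distanceModulus):
    """Illustrative only: peak absolute magnitude from apparent magnitude and distance modulus."""
    return round(apparentMag - distanceModulus, 2)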
def _create_tables_if_not_exist(
self):
"""*create the sherlock helper tables if they don't yet exist*
**Key Arguments**
# -
**Return**
- None
**Usage**
.. todo::
- add usage info
- create a sublime snippet for usage
- write a command-line tool for this method
- update package tutorial with command-line tool info if needed
```python
usage code
```
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
self.log.debug('starting the ``_create_tables_if_not_exist`` method')
transientTable = self.settings["database settings"][
"transients"]["transient table"]
transientTableClassCol = self.settings["database settings"][
"transients"]["transient classification column"]
transientTableIdCol = self.settings["database settings"][
"transients"]["transient primary id column"]
crossmatchTable = "sherlock_crossmatches"
createStatement = """
CREATE TABLE IF NOT EXISTS `sherlock_crossmatches` (
`transient_object_id` bigint(20) unsigned DEFAULT NULL,
`catalogue_object_id` varchar(200) COLLATE utf8_unicode_ci DEFAULT NULL,
`catalogue_table_id` smallint(5) unsigned DEFAULT NULL,
`separationArcsec` double DEFAULT NULL,
`northSeparationArcsec` double DEFAULT NULL,
`eastSeparationArcsec` double DEFAULT NULL,
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`z` double DEFAULT NULL,
`scale` double DEFAULT NULL,
`distance` double DEFAULT NULL,
`distance_modulus` double DEFAULT NULL,
`photoZ` double DEFAULT NULL,
`photoZErr` double DEFAULT NULL,
`association_type` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
`dateCreated` datetime DEFAULT NULL,
`physical_separation_kpc` double DEFAULT NULL,
`catalogue_object_type` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
`catalogue_object_subtype` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
`association_rank` int(11) DEFAULT NULL,
`catalogue_table_name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`catalogue_view_name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`rank` int(11) DEFAULT NULL,
`rankScore` double DEFAULT NULL,
`search_name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`major_axis_arcsec` double DEFAULT NULL,
`direct_distance` double DEFAULT NULL,
`direct_distance_scale` double DEFAULT NULL,
`direct_distance_modulus` double DEFAULT NULL,
`raDeg` double DEFAULT NULL,
`decDeg` double DEFAULT NULL,
`original_search_radius_arcsec` double DEFAULT NULL,
`catalogue_view_id` int(11) DEFAULT NULL,
`U` double DEFAULT NULL,
`UErr` double DEFAULT NULL,
`B` double DEFAULT NULL,
`BErr` double DEFAULT NULL,
`V` double DEFAULT NULL,
`VErr` double DEFAULT NULL,
`R` double DEFAULT NULL,
`RErr` double DEFAULT NULL,
`I` double DEFAULT NULL,
`IErr` double DEFAULT NULL,
`J` double DEFAULT NULL,
`JErr` double DEFAULT NULL,
`H` double DEFAULT NULL,
`HErr` double DEFAULT NULL,
`K` double DEFAULT NULL,
`KErr` double DEFAULT NULL,
`_u` double DEFAULT NULL,
`_uErr` double DEFAULT NULL,
`_g` double DEFAULT NULL,
`_gErr` double DEFAULT NULL,
`_r` double DEFAULT NULL,
`_rErr` double DEFAULT NULL,
`_i` double DEFAULT NULL,
`_iErr` double DEFAULT NULL,
`_z` double DEFAULT NULL,
`_zErr` double DEFAULT NULL,
`_y` double DEFAULT NULL,
`_yErr` double DEFAULT NULL,
`G` double DEFAULT NULL,
`GErr` double DEFAULT NULL,
`W1` double DEFAULT NULL,
`W1Err` double DEFAULT NULL,
`unkMag` double DEFAULT NULL,
`unkMagErr` double DEFAULT NULL,
`dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`updated` tinyint(4) DEFAULT '0',
`classificationReliability` tinyint(4) DEFAULT NULL,
`transientAbsMag` double DEFAULT NULL,
`merged_rank` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `key_transient_object_id` (`transient_object_id`),
KEY `key_catalogue_object_id` (`catalogue_object_id`),
KEY `idx_separationArcsec` (`separationArcsec`),
KEY `idx_rank` (`rank`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
CREATE TABLE IF NOT EXISTS `sherlock_classifications` (
`transient_object_id` bigint(20) NOT NULL,
`classification` varchar(45) DEFAULT NULL,
`annotation` TEXT COLLATE utf8_unicode_ci DEFAULT NULL,
`summary` VARCHAR(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`separationArcsec` DOUBLE DEFAULT NULL,
`matchVerified` TINYINT NULL DEFAULT NULL,
`developmentComment` VARCHAR(100) NULL,
`dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,
`dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,
`updated` varchar(45) DEFAULT '0',
PRIMARY KEY (`transient_object_id`),
KEY `key_transient_object_id` (`transient_object_id`),
KEY `idx_summary` (`summary`),
KEY `idx_classification` (`classification`),
KEY `idx_dateLastModified` (`dateLastModified`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
""" % locals()
# A FIX FOR MYSQL VERSIONS < 5.6
triggers = []
if float(self.dbVersions["transients"][:3]) < 5.6:
createStatement = createStatement.replace(
"`dateLastModified` datetime DEFAULT CURRENT_TIMESTAMP,", "`dateLastModified` datetime DEFAULT NULL,")
createStatement = createStatement.replace(
"`dateCreated` datetime DEFAULT CURRENT_TIMESTAMP,", "`dateCreated` datetime DEFAULT NULL,")
triggers.append("""
CREATE TRIGGER dateCreated
BEFORE INSERT ON `%(crossmatchTable)s`
FOR EACH ROW
BEGIN
IF NEW.dateCreated IS NULL THEN
SET NEW.dateCreated = NOW();
SET NEW.dateLastModified = NOW();
END IF;
END""" % locals())
try:
writequery(
log=self.log,
sqlQuery=createStatement,
dbConn=self.transientsDbConn,
Force=True
)
except:
self.log.info(
"Could not create table (`%(crossmatchTable)s`). Probably already exist." % locals())
sqlQuery = u"""
SHOW TRIGGERS;
""" % locals()
rows = readquery(
log=self.log,
sqlQuery=sqlQuery,
dbConn=self.transientsDbConn,
)
# DON'T ADD TRIGGERS IF THEY ALREADY EXIST
for r in rows:
if r["Trigger"] in ("sherlock_classifications_BEFORE_INSERT", "sherlock_classifications_AFTER_INSERT"):
return None
triggers.append("""CREATE TRIGGER `sherlock_classifications_BEFORE_INSERT` BEFORE INSERT ON `sherlock_classifications` FOR EACH ROW
BEGIN
IF new.classification = "ORPHAN" THEN
SET new.annotation = "The transient location is not matched against any known catalogued source", new.summary = "No catalogued match";
END IF;
END""" % locals())
triggers.append("""CREATE TRIGGER `sherlock_classifications_AFTER_INSERT` AFTER INSERT ON `sherlock_classifications` FOR EACH ROW
BEGIN
update `%(transientTable)s` set `%(transientTableClassCol)s` = new.classification
where `%(transientTableIdCol)s` = new.transient_object_id;
END""" % locals())
for t in triggers:
try:
writequery(
log=self.log,
sqlQuery=t,
dbConn=self.transientsDbConn,
Force=True
)
except:
self.log.info(
"Could not create trigger (`%(crossmatchTable)s`). Probably already exist." % locals())
self.log.debug('completed the ``_create_tables_if_not_exist`` method')
return None
# use the tab-trigger below for new method
def generate_match_annotation(
self,
match,
updatePeakMagnitudes=False):
"""*generate a human readale annotation for the transient-catalogue source match*
**Key Arguments**
- ``match`` -- the source crossmatched against the transient
- ``updatePeakMagnitudes`` -- update the peak magnitudes in the annotations to give absolute magnitudes. Default *False*
**Return**
- ``annotation``, ``summary``, ``sep`` -- the annotation text, a one-line summary and the transient-host separation in arcsec
**Usage**
```python
usage code
```
---
```eval_rst
.. todo::
- add usage info
- create a sublime snippet for usage
- write a command-line tool for this method
- update package tutorial with command-line tool info if needed
```
"""
self.log.debug('starting the ``generate_match_annotation`` method')
if "catalogue_object_subtype" not in match:
match["catalogue_object_subtype"] = None
catalogue = match["catalogue_table_name"]
objectId = match["catalogue_object_id"]
objectType = match["catalogue_object_type"]
objectSubtype = match["catalogue_object_subtype"]
catalogueString = catalogue
if catalogueString is None:
badGuy = match["transient_object_id"]
print(f"Issue with object {badGuy}")
raise TypeError(f"Issue with object {badGuy}")
if "catalogue" not in catalogueString.lower():
catalogueString = catalogue + " catalogue"
if "/" in catalogueString:
catalogueString += "s"
if "ned" in catalogue.lower():
objectId = objectId.replace("+", "%2B")
objectId = '''<a href="https://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=%(objectId)s&extend=no&hconst=73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=30000.0&list_limit=5&img_stamp=YES">%(objectId)s</a>''' % locals()
elif "sdss" in catalogue.lower():
objectId = "http://skyserver.sdss.org/dr12/en/tools/explore/Summary.aspx?id=%(objectId)s" % locals(
)
ra = self.converter.ra_decimal_to_sexegesimal(
ra=match["raDeg"],
delimiter=""
)
dec = self.converter.dec_decimal_to_sexegesimal(
dec=match["decDeg"],
delimiter=""
)
betterName = "SDSS J" + ra[0:9] + dec[0:9]
objectId = '''<a href="%(objectId)s">%(betterName)s</a>''' % locals()
elif "milliquas" in catalogue.lower():
thisName = objectId
objectId = objectId.replace(" ", "+")
objectId = '''<a href="https://heasarc.gsfc.nasa.gov/db-perl/W3Browse/w3table.pl?popupFrom=Query+Results&tablehead=name%%3Dheasarc_milliquas%%26description%%3DMillion+Quasars+Catalog+%%28MILLIQUAS%%29%%2C+Version+4.8+%%2822+June+2016%%29%%26url%%3Dhttp%%3A%%2F%%2Fheasarc.gsfc.nasa.gov%%2FW3Browse%%2Fgalaxy-catalog%%2Fmilliquas.html%%26archive%%3DN%%26radius%%3D1%%26mission%%3DGALAXY+CATALOG%%26priority%%3D5%%26tabletype%%3DObject&dummy=Examples+of+query+constraints%%3A&varon=name&bparam_name=%%3D%%22%(objectId)s%%22&bparam_name%%3A%%3Aunit=+&bparam_name%%3A%%3Aformat=char25&varon=ra&bparam_ra=&bparam_ra%%3A%%3Aunit=degree&bparam_ra%%3A%%3Aformat=float8%%3A.5f&varon=dec&bparam_dec=&bparam_dec%%3A%%3Aunit=degree&bparam_dec%%3A%%3Aformat=float8%%3A.5f&varon=bmag&bparam_bmag=&bparam_bmag%%3A%%3Aunit=mag&bparam_bmag%%3A%%3Aformat=float8%%3A4.1f&varon=rmag&bparam_rmag=&bparam_rmag%%3A%%3Aunit=mag&bparam_rmag%%3A%%3Aformat=float8%%3A4.1f&varon=redshift&bparam_redshift=&bparam_redshift%%3A%%3Aunit=+&bparam_redshift%%3A%%3Aformat=float8%%3A6.3f&varon=radio_name&bparam_radio_name=&bparam_radio_name%%3A%%3Aunit=+&bparam_radio_name%%3A%%3Aformat=char22&varon=xray_name&bparam_xray_name=&bparam_xray_name%%3A%%3Aunit=+&bparam_xray_name%%3A%%3Aformat=char22&bparam_lii=&bparam_lii%%3A%%3Aunit=degree&bparam_lii%%3A%%3Aformat=float8%%3A.5f&bparam_bii=&bparam_bii%%3A%%3Aunit=degree&bparam_bii%%3A%%3Aformat=float8%%3A.5f&bparam_broad_type=&bparam_broad_type%%3A%%3Aunit=+&bparam_broad_type%%3A%%3Aformat=char4&bparam_optical_flag=&bparam_optical_flag%%3A%%3Aunit=+&bparam_optical_flag%%3A%%3Aformat=char3&bparam_red_psf_flag=&bparam_red_psf_flag%%3A%%3Aunit=+&bparam_red_psf_flag%%3A%%3Aformat=char1&bparam_blue_psf_flag=&bparam_blue_psf_flag%%3A%%3Aunit=+&bparam_blue_psf_flag%%3A%%3Aformat=char1&bparam_ref_name=&bparam_ref_name%%3A%%3Aunit=+&bparam_ref_name%%3A%%3Aformat=char6&bparam_ref_redshift=&bparam_ref_redshift%%3A%%3Aunit=+&bparam_ref_redshift%%3A%%3Aformat=char6&bparam_qso_prob=&bparam_qso_prob%%3A%%3Aunit=percent&bparam_qso_prob%%3A%%3Aformat=int2%%3A3d&bparam_alt_name_1=&bparam_alt_name_1%%3A%%3Aunit=+&bparam_alt_name_1%%3A%%3Aformat=char22&bparam_alt_name_2=&bparam_alt_name_2%%3A%%3Aunit=+&bparam_alt_name_2%%3A%%3Aformat=char22&Entry=&Coordinates=J2000&Radius=Default&Radius_unit=arcsec&NR=CheckCaches%%2FGRB%%2FSIMBAD%%2BSesame%%2FNED&Time=&ResultMax=1000&displaymode=Display&Action=Start+Search&table=heasarc_milliquas">%(thisName)s</a>''' % locals()
if objectSubtype and str(objectSubtype).lower() in ["uvs", "radios", "xray", "qso", "irs", 'uves', 'viss', 'hii', 'gclstr', 'ggroup', 'gpair', 'gtrpl']:
objectType = objectSubtype
if objectType == "star":
objectType = "stellar source"
elif objectType == "agn":
objectType = "AGN"
elif objectType == "cb":
objectType = "CV"
elif objectType == "unknown":
objectType = "unclassified source"
sep = match["separationArcsec"]
if match["classificationReliability"] == 1:
classificationReliability = "synonymous"
psep = match["physical_separation_kpc"]
if psep:
location = '%(sep)0.1f" (%(psep)0.1f Kpc) from the %(objectType)s core' % locals(
)
else:
location = '%(sep)0.1f" from the %(objectType)s core' % locals(
)
else:
# elif match["classificationReliability"] in (2, 3):
classificationReliability = "possibly associated"
n = float(match["northSeparationArcsec"])
if n > 0:
nd = "S"
else:
nd = "N"
e = float(match["eastSeparationArcsec"])
if e > 0:
ed = "W"
else:
ed = "E"
n = math.fabs(float(n))
e = math.fabs(float(e))
psep = match["physical_separation_kpc"]
if psep:
location = '%(n)0.2f" %(nd)s, %(e)0.2f" %(ed)s (%(psep)0.1f Kpc) from the %(objectType)s centre' % locals(
)
else:
location = '%(n)0.2f" %(nd)s, %(e)0.2f" %(ed)s from the %(objectType)s centre' % locals(
)
location = location.replace("unclassified", "object's")
best_mag = None
best_mag_error = None
best_mag_filter = None
filters = ["R", "V", "B", "I", "J", "G", "H", "K", "U",
"_r", "_g", "_i", "_g", "_z", "_y", "_u", "W1", "unkMag"]
for f in filters:
if f in match and match[f] and not best_mag:
best_mag = match[f]
try:
best_mag_error = match[f + "Err"]
except:
pass
subfilter = f.replace(
"_", "").replace("Mag", "")
best_mag_filter = f.replace(
"_", "").replace("Mag", "") + "="
if "unk" in best_mag_filter:
best_mag_filter = ""
subfilter = ''
if not best_mag_filter:
if str(best_mag).lower() in ("8", "11", "18"):
best_mag_filter = "an "
else:
best_mag_filter = "a "
else:
if str(best_mag_filter)[0].lower() in ("r", "i", "h"):
best_mag_filter = "an " + best_mag_filter
else:
best_mag_filter = "a " + best_mag_filter
if not best_mag:
best_mag = "an unknown-"
best_mag_filter = ""
else:
best_mag = "%(best_mag)0.2f " % locals()
distance = None
if "direct_distance" in match and match["direct_distance"]:
d = match["direct_distance"]
distance = "distance of %(d)0.1f Mpc" % locals()
if match["z"]:
z = match["z"]
distance += "(z=%(z)0.3f)" % locals()
elif "z" in match and match["z"]:
z = match["z"]
distance = "z=%(z)0.3f" % locals()
elif "photoZ" in match and match["photoZ"]:
z = match["photoZ"]
zErr = match["photoZErr"]
if not zErr:
distance = "photoZ=%(z)0.3f" % locals()
else:
distance = "photoZ=%(z)0.3f (±%(zErr)0.3f)" % locals()
if distance:
distance = "%(distance)s" % locals()
distance_modulus = None
if match["direct_distance_modulus"]:
distance_modulus = match["direct_distance_modulus"]
elif match["distance_modulus"]:
distance_modulus = match["distance_modulus"]
if updatePeakMagnitudes:
if distance:
absMag = match["transientAbsMag"]
absMag = """ A host %(distance)s implies a transient <em>M =</em> %(absMag)s mag.""" % locals(
)
else:
absMag = ""
else:
if distance and distance_modulus:
absMag = "%(distance_modulus)0.2f" % locals()
absMag = """ A host %(distance)s implies a <em>m - M =</em> %(absMag)s.""" % locals(
)
else:
absMag = ""
annotation = "The transient is %(classificationReliability)s with <em>%(objectId)s</em>; %(best_mag_filter)s%(best_mag)smag %(objectType)s found in the %(catalogueString)s. It's located %(location)s.%(absMag)s" % locals()
try:
summary = '%(sep)0.1f" from %(objectType)s in %(catalogue)s' % locals()
except:
badGuy = match["transient_object_id"]
print(f"Issue with object {badGuy}")
raise TypeError(f"Issue with object {badGuy}")
self.log.debug('completed the ``generate_match_annotation`` method')
return annotation, summary, sep
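# Editor's note: a hedged usage sketch (not part of the original module) for
# ``generate_match_annotation`` above; ``classifier`` stands for an instance of
# the enclosing classifier class and ``matchRow`` for one crossmatch dictionary.
def _example_generate_match_annotation(classifier, matchRow):
    """Illustrative only: build the human-readable annotation for a single crossmatch."""
    annotation, summary, sep = classifier.generate_match_annotation(
        match=matchRow, updatePeakMagnitudes=False)
    return annotation, summary, sep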
# use the tab-trigger below for new method
# xt-class-method
def _crossmatch_transients_against_catalogues(
transientsMetadataListIndex,
log,
settings,
colMaps):
"""run the transients through the crossmatch algorithm in the settings file
**Key Arguments**
- ``transientsMetadataListIndex`` -- the index of the batch of transient metadata (within the module-level ``theseBatches`` list) to crossmatch.
- ``colMaps`` -- dictionary of dictionaries with the name of the database-view (e.g. `tcs_view_agn_milliquas_v4_5`) as the key and the column-name dictionary map as value (`{view_name: {columnMap}}`).
**Return**
- ``crossmatches`` -- a list of dictionaries of the associated sources crossmatched from the catalogues database
.. todo ::
- update key arguments values and definitions with defaults
- update return values and definitions
- update usage examples and text
- update docstring text
- check sublime snippet exists
- clip any useful text to docs mindmap
- regenerate the docs and check rendering of this docstring
"""
from fundamentals.mysql import database
from sherlock import transient_catalogue_crossmatch
global theseBatches
log.debug(
'starting the ``_crossmatch_transients_against_catalogues`` method')
# SETUP ALL DATABASE CONNECTIONS
transientsMetadataList = theseBatches[transientsMetadataListIndex]
dbConn = database(
log=log,
dbSettings=settings["database settings"]["static catalogues"]
).connect()
cm = transient_catalogue_crossmatch(
log=log,
dbConn=dbConn,
transients=transientsMetadataList,
settings=settings,
colMaps=colMaps
)
crossmatches = cm.match()
log.debug(
'completed the ``_crossmatch_transients_against_catalogues`` method')
return crossmatches
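# Editor's note: an illustrative sketch (added by the editor) of invoking the
# module-level worker above for a single pre-built batch; it assumes the
# module-level ``theseBatches`` list has already been populated by the
# classifier before the workers are dispatched.
def _example_crossmatch_first_batch(log, settings, colMaps):
    """Illustrative only: crossmatch the first batch of transients."""
    return _crossmatch_transients_against_catalogues(0, log, settings, colMaps)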
|
PypiClean
|
/pulumi_azure_native-2.5.1a1693590910.tar.gz/pulumi_azure_native-2.5.1a1693590910/pulumi_azure_native/machinelearningservices/v20230401preview/get_registry_code_container.py
|
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
__all__ = [
'GetRegistryCodeContainerResult',
'AwaitableGetRegistryCodeContainerResult',
'get_registry_code_container',
'get_registry_code_container_output',
]
@pulumi.output_type
class GetRegistryCodeContainerResult:
"""
Azure Resource Manager resource envelope.
"""
def __init__(__self__, code_container_properties=None, id=None, name=None, system_data=None, type=None):
if code_container_properties and not isinstance(code_container_properties, dict):
raise TypeError("Expected argument 'code_container_properties' to be a dict")
pulumi.set(__self__, "code_container_properties", code_container_properties)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if system_data and not isinstance(system_data, dict):
raise TypeError("Expected argument 'system_data' to be a dict")
pulumi.set(__self__, "system_data", system_data)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="codeContainerProperties")
def code_container_properties(self) -> 'outputs.CodeContainerResponse':
"""
[Required] Additional attributes of the entity.
"""
return pulumi.get(self, "code_container_properties")
@property
@pulumi.getter
def id(self) -> str:
"""
Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resource
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="systemData")
def system_data(self) -> 'outputs.SystemDataResponse':
"""
Azure Resource Manager metadata containing createdBy and modifiedBy information.
"""
return pulumi.get(self, "system_data")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
"""
return pulumi.get(self, "type")
class AwaitableGetRegistryCodeContainerResult(GetRegistryCodeContainerResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetRegistryCodeContainerResult(
code_container_properties=self.code_container_properties,
id=self.id,
name=self.name,
system_data=self.system_data,
type=self.type)
def get_registry_code_container(code_name: Optional[str] = None,
registry_name: Optional[str] = None,
resource_group_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetRegistryCodeContainerResult:
"""
Azure Resource Manager resource envelope.
:param str code_name: Container name.
:param str registry_name: Name of Azure Machine Learning registry. This is case-insensitive
:param str resource_group_name: The name of the resource group. The name is case insensitive.
"""
__args__ = dict()
__args__['codeName'] = code_name
__args__['registryName'] = registry_name
__args__['resourceGroupName'] = resource_group_name
opts = pulumi.InvokeOptions.merge(_utilities.get_invoke_opts_defaults(), opts)
__ret__ = pulumi.runtime.invoke('azure-native:machinelearningservices/v20230401preview:getRegistryCodeContainer', __args__, opts=opts, typ=GetRegistryCodeContainerResult).value
return AwaitableGetRegistryCodeContainerResult(
code_container_properties=pulumi.get(__ret__, 'code_container_properties'),
id=pulumi.get(__ret__, 'id'),
name=pulumi.get(__ret__, 'name'),
system_data=pulumi.get(__ret__, 'system_data'),
type=pulumi.get(__ret__, 'type'))
@_utilities.lift_output_func(get_registry_code_container)
def get_registry_code_container_output(code_name: Optional[pulumi.Input[str]] = None,
registry_name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetRegistryCodeContainerResult]:
"""
Azure Resource Manager resource envelope.
:param str code_name: Container name.
:param str registry_name: Name of Azure Machine Learning registry. This is case-insensitive
:param str resource_group_name: The name of the resource group. The name is case insensitive.
"""
...
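# Editor's note: a hedged usage sketch (not part of the generated SDK module);
# the resource names below are placeholders, not real Azure resources.
def _example_lookup_code_container():
    """Illustrative only: fetch a registry code container and return its ARM resource id."""
    result = get_registry_code_container(
        code_name="my-code",
        registry_name="my-registry",
        resource_group_name="my-resource-group")
    return result.id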
|
PypiClean
|
/hassmart_homeassistant-0.65.4.tar.gz/hassmart_homeassistant-0.65.4/homeassistant/components/sensor/knx.py
|
import voluptuous as vol
from homeassistant.components.knx import ATTR_DISCOVER_DEVICES, DATA_KNX
from homeassistant.components.sensor import PLATFORM_SCHEMA
from homeassistant.const import CONF_NAME
from homeassistant.core import callback
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.entity import Entity
CONF_ADDRESS = 'address'
CONF_TYPE = 'type'
DEFAULT_NAME = 'KNX Sensor'
DEPENDENCIES = ['knx']
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Required(CONF_ADDRESS): cv.string,
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
vol.Optional(CONF_TYPE): cv.string,
})
async def async_setup_platform(hass, config, async_add_devices,
discovery_info=None):
"""Set up sensor(s) for KNX platform."""
if discovery_info is not None:
async_add_devices_discovery(hass, discovery_info, async_add_devices)
else:
async_add_devices_config(hass, config, async_add_devices)
@callback
def async_add_devices_discovery(hass, discovery_info, async_add_devices):
"""Set up sensors for KNX platform configured via xknx.yaml."""
entities = []
for device_name in discovery_info[ATTR_DISCOVER_DEVICES]:
device = hass.data[DATA_KNX].xknx.devices[device_name]
entities.append(KNXSensor(hass, device))
async_add_devices(entities)
@callback
def async_add_devices_config(hass, config, async_add_devices):
"""Set up sensor for KNX platform configured within platform."""
import xknx
sensor = xknx.devices.Sensor(
hass.data[DATA_KNX].xknx,
name=config.get(CONF_NAME),
group_address=config.get(CONF_ADDRESS),
value_type=config.get(CONF_TYPE))
hass.data[DATA_KNX].xknx.devices.add(sensor)
async_add_devices([KNXSensor(hass, sensor)])
class KNXSensor(Entity):
"""Representation of a KNX sensor."""
def __init__(self, hass, device):
"""Initialize of a KNX sensor."""
self.device = device
self.hass = hass
self.async_register_callbacks()
@callback
def async_register_callbacks(self):
"""Register callbacks to update hass after device was changed."""
async def after_update_callback(device):
"""Call after device was updated."""
# pylint: disable=unused-argument
await self.async_update_ha_state()
self.device.register_device_updated_cb(after_update_callback)
@property
def name(self):
"""Return the name of the KNX device."""
return self.device.name
@property
def available(self):
"""Return True if entity is available."""
return self.hass.data[DATA_KNX].connected
@property
def should_poll(self):
"""No polling needed within KNX."""
return False
@property
def state(self):
"""Return the state of the sensor."""
return self.device.resolve_state()
@property
def unit_of_measurement(self):
"""Return the unit this state is expressed in."""
return self.device.unit_of_measurement()
@property
def device_state_attributes(self):
"""Return the state attributes."""
return None
|
PypiClean
|
/moody-templates-0.9.1.tar.gz/moody-templates-0.9.1/src/moody/base.py
|
import re
from moody.errors import TemplateRenderError
class Context:
"""The state of a template during render time."""
__slots__ = ("params", "meta", "buffer")
def __init__(self, params, meta, buffer):
"""Initializes the Context."""
self.params = params
self.meta = meta
self.buffer = buffer
def sub_context(self, params=None, meta=None):
"""
Creates a new subcontext that is scoped to a block.
Changes to the sub-context will not affect the parent context, although
the buffer is shared.
"""
sub_params = self.params.copy()
sub_params.update(params or {})
sub_meta = self.meta.copy()
sub_meta.update(meta or {})
return Context(sub_params, sub_meta, self.buffer)
def read(self):
"""Reads the contents of the buffer as a string."""
return "".join(self.buffer)
RE_NAME = re.compile("^[a-zA-Z_][a-zA-Z_0-9]*$")
def name_setter(name):
"""
Returns a function that will assign a value to a name in a given context.
The returned function has a signature of set_name(context, value).
"""
# Parse the names.
if "," in name:
names = [name.strip() for name in name.split(",")]
if not names[-1]:
names.pop()
def setter(context, value):
            # Unpack the iterable value across the comma-separated names.
value = iter(value)
for name_part in names:
try:
context.params[name_part] = next(value)
except StopIteration:
raise ValueError("Not enough values to unpack.")
# Make sure there are no more values.
try:
next(value)
except StopIteration:
pass
else:
raise ValueError("Need more than {} values to unpack.".format(len(names)))
else:
names = (name,)
def setter(context, value):
context.params[name] = value
# Make sure that the names are valid.
for name in names:
if not RE_NAME.match(name):
raise ValueError("{!r} is not a valid variable name. Only letters, numbers and undescores are allowed.".format(name))
# Return the setter.
return setter
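# Hedged doctest-style sketch: setters produced by name_setter() write into
# context.params, unpacking iterables across comma-separated names.
#
#   >>> ctx = Context({}, {}, [])
#   >>> name_setter("a, b")(ctx, (1, 2))
#   >>> ctx.params == {"a": 1, "b": 2}
#   True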
def expression_evaluator(expression):
expression = compile(expression, "<string>", "eval")
def evaluator(context):
return eval(expression, context.meta, context.params)
return evaluator
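# Hedged doctest-style sketch: evaluators resolve names against context.meta
# (globals) and context.params (locals) via eval().
#
#   >>> ctx = Context({"x": 3}, {}, [])
#   >>> expression_evaluator("x * 2")(ctx)
#   6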
class TemplateFragment:
"""A fragment of a template."""
__slots__ = ("_nodes", "_name",)
def __init__(self, nodes, name):
"""Initializes the TemplateFragment."""
self._nodes = nodes
self._name = name
def _render_to_context(self, context):
"""Renders the template to the given context."""
for lineno, node in self._nodes:
try:
node(context)
except TemplateRenderError:
raise
except Exception as ex:
raise TemplateRenderError(str(ex), self._name, lineno) from ex
class Template(TemplateFragment):
"""A compiled template."""
__slots__ = ("_params", "_meta",)
def __init__(self, nodes, name, params, meta):
"""Initializes the template."""
super(Template, self).__init__(nodes, name)
self._params = params
self._meta = meta
def _render_to_sub_context(self, context, meta):
"""Renders the template to the given context."""
# Generate the params.
sub_params = self._params.copy()
sub_params.update(context.params)
# Generate the meta.
sub_meta = self._meta.copy()
sub_meta.update(meta)
# Generate the sub context.
self._render_to_context(context.sub_context(sub_params, sub_meta))
def render(self, **params):
"""Renders the template, returning the string result."""
# Create the params.
context_params = self._params.copy()
context_params.update(params)
# Create the context.
context = Context(context_params, self._meta, [])
# Render the template.
self._render_to_context(context)
return context.read()
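# Hedged sketch: nodes are (lineno, callable) pairs that write to the context
# buffer, so a minimal template can be built from a single literal node.
#
#   >>> t = Template([(1, lambda ctx: ctx.buffer.append("hello"))], "<example>", {}, {})
#   >>> t.render()
#   'hello'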
|
PypiClean
|
/apache-superset-jwi078-0.35.0.tar.gz/apache-superset-jwi078-0.35.0/superset/examples/css_templates.py
|
import textwrap
from superset import db
from superset.models.core import CssTemplate
def load_css_templates():
"""Loads 2 css templates to demonstrate the feature"""
print("Creating default CSS templates")
obj = db.session.query(CssTemplate).filter_by(template_name="Flat").first()
if not obj:
obj = CssTemplate(template_name="Flat")
css = textwrap.dedent(
"""\
.gridster div.widget {
transition: background-color 0.5s ease;
background-color: #FAFAFA;
border: 1px solid #CCC;
box-shadow: none;
border-radius: 0px;
}
.gridster div.widget:hover {
border: 1px solid #000;
background-color: #EAEAEA;
}
.navbar {
transition: opacity 0.5s ease;
opacity: 0.05;
}
.navbar:hover {
opacity: 1;
}
.chart-header .header{
font-weight: normal;
font-size: 12px;
}
/*
var bnbColors = [
//rausch hackb kazan babu lima beach tirol
'#ff5a5f', '#7b0051', '#007A87', '#00d1c1', '#8ce071', '#ffb400', '#b4a76c',
'#ff8083', '#cc0086', '#00a1b3', '#00ffeb', '#bbedab', '#ffd266', '#cbc29a',
'#ff3339', '#ff1ab1', '#005c66', '#00b3a5', '#55d12e', '#b37e00', '#988b4e',
];
*/
"""
)
obj.css = css
db.session.merge(obj)
db.session.commit()
obj = db.session.query(CssTemplate).filter_by(template_name="Courier Black").first()
if not obj:
obj = CssTemplate(template_name="Courier Black")
css = textwrap.dedent(
"""\
.gridster div.widget {
transition: background-color 0.5s ease;
background-color: #EEE;
border: 2px solid #444;
border-radius: 15px;
box-shadow: none;
}
h2 {
color: white;
font-size: 52px;
}
.navbar {
box-shadow: none;
}
.gridster div.widget:hover {
border: 2px solid #000;
background-color: #EAEAEA;
}
.navbar {
transition: opacity 0.5s ease;
opacity: 0.05;
}
.navbar:hover {
opacity: 1;
}
.chart-header .header{
font-weight: normal;
font-size: 12px;
}
.nvd3 text {
font-size: 12px;
font-family: inherit;
}
body{
background: #000;
font-family: Courier, Monaco, monospace;;
}
/*
var bnbColors = [
//rausch hackb kazan babu lima beach tirol
'#ff5a5f', '#7b0051', '#007A87', '#00d1c1', '#8ce071', '#ffb400', '#b4a76c',
'#ff8083', '#cc0086', '#00a1b3', '#00ffeb', '#bbedab', '#ffd266', '#cbc29a',
'#ff3339', '#ff1ab1', '#005c66', '#00b3a5', '#55d12e', '#b37e00', '#988b4e',
];
*/
"""
)
obj.css = css
db.session.merge(obj)
db.session.commit()
|
PypiClean
|
/auto_white_reimu-0.2.2-py3-none-any.whl/mahjong/record/universe/command.py
|
import ast
from collections import namedtuple
import numpy
import pandas
from mahjong.record.universe.property_manager import prop_manager
from mahjong.record.universe.format import View, Update, EventType
def is_empty(x):
return x is None or x != x
def norm_empty(x):
return None if is_empty(x) else x
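# Hedged doctest-style sketch: is_empty() treats both None and NaN as empty,
# relying on the IEEE-754 property that NaN != NaN.
#
#   >>> is_empty(float("nan")), is_empty(None), is_empty(0)
#   (True, True, False)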
def norm_value_str(x: str):  # FIXME: take care with value types when converting from str
return ast.literal_eval(x) if x != "" else None
class GameProperty:
def __init__(self, view_property: View, update_method: Update):
self.view_property = view_property
self.update_method = update_method
# self.default_ctor = default_ctor
@property
def scope(self):
return self.view_property.scope
command_field_names = [
"timestamp",
"event",
"scope",
"sub_scope_id",
"property",
"update_method",
"value",
"state"
]
_Game_command = namedtuple(
"GameCommand_",
field_names=command_field_names
)
command_field_names_set = set(command_field_names)
def try_int(x):
if isinstance(x, str) and x.isdigit():
return int(x)
return x
class GameCommand:
def __init__(self, *, prop: View, update: Update, sub_scope="all", value=None, timestamp=None, state=None, event=None):
self.event = event
self.timestamp = timestamp
self.sub_scope_id = sub_scope
self.value = value
self.property = prop
self.update_method = update
self.state = state
self.prop = GameProperty(self.property, self.update_method)
def __str__(self):
return str(self.to_raw_record())
def __repr__(self):
return "{%s}" % str(self)
@staticmethod
def clean(pandas_dataframe):
return pandas_dataframe.apply(GameCommand.pandas_columns_clean, axis="columns")
@staticmethod
def to_dataframe(command_list, raw=False):
if raw:
return pandas.DataFrame(
(x.to_raw_record() for x in command_list),
)
else:
return pandas.DataFrame(
(x.to_record() for x in command_list),
)
@staticmethod
def read_clean_csv(csv_path):
return GameCommand.clean(pandas.read_csv(csv_path))
@staticmethod
def pandas_columns_clean(row_origin):
# remove index
row = row_origin[command_field_names]
record = _Game_command(**row)
command = GameCommand.from_record(record)
row_return = row_origin.copy()
for name, value in command.to_raw_record()._asdict().items():
row_return[name] = value
return row_return
def to_raw_record(self):
return _Game_command(
timestamp=self.timestamp,
event=self.event,
scope=self.prop.scope,
sub_scope_id=self.sub_scope_id,
property=self.prop.view_property,
update_method=self.prop.update_method,
value=self.value,
state=self.state,
)
@staticmethod
def from_raw_record(record: _Game_command):
return GameCommand(
prop=record.property,
event=record.event,
update=record.update_method,
sub_scope=try_int(record.sub_scope_id),
value=norm_empty(record.value),
timestamp=norm_empty(record.timestamp),
state=norm_empty(record.state),
)
def to_record(self):
return _Game_command(
timestamp=self.timestamp,
event=self.event.name if self.event is not None else None,
scope=self.prop.scope.name,
sub_scope_id=self.sub_scope_id,
property=self.prop.view_property.name,
update_method=self.prop.update_method.name,
value=prop_manager.to_str(self.value, prop=self.prop.view_property),
state=prop_manager.to_str(self.state, prop=self.prop.view_property),
)
@staticmethod
def from_record(record: _Game_command):
view = View.by_name(record.scope)[record.property]
return GameCommand(
prop=view,
event=None if is_empty(record.event) else EventType[record.event],
update=Update[record.update_method],
sub_scope=try_int(record.sub_scope_id),
value=prop_manager.from_str(record.value, prop=view),
timestamp=norm_empty(record.timestamp),
state=prop_manager.from_str(record.state, prop=view),
)
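# Hedged usage sketch: commands round-trip through pandas. View/Update/EventType
# members and the CSV path are package-specific, so no concrete command values
# are invented here; only methods defined above are referenced.
#
#   df = GameCommand.to_dataframe(command_list)            # human-readable names
#   raw_df = GameCommand.to_dataframe(command_list, raw=True)
#   cleaned = GameCommand.read_clean_csv("commands.csv")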
|
PypiClean
|
/LDB_Algebra-0.3.2.tar.gz/LDB_Algebra-0.3.2/ldb/algebra/implicit_differentiation.py
|
from __future__ import absolute_import
from ldb.lapack.lapack import Matrix, Vector, dgesv, LAPACKTypeEnum
import ldb.algebra.expression
from ldb.algebra.expression import bind
from ldb.algebra.distribute import divide_list
def chain_differentiate(expression, differentiation):
"""
>>> from ldb.algebra.expression import Variable
>>> x = Variable('x')
>>> chain_differentiate(x*x, [x, x])
2
>>> y = Variable('y')
>>> chain_differentiate(x*x*y, [x, y])
(x + x)
"""
if len(differentiation) == 0:
return expression
else:
next_diff = ldb.algebra.expression.differentiate(expression,
differentiation[0])
return chain_differentiate(next_diff, differentiation[1:])
def differentiate(equations, dependent_variables, differentiation, binding):
"""
z = 2 y^3 + 3 x^2
z2 = 15 sqrt(y) + 5 x
F1 = z^2 - (4 y^6 + 12 y^3 x^2 + 9 x^4)
F2 = z2 - z - (15 sqrt(y) + 5x) + (2 y^3 + 3 x^2)
>>> from ldb.algebra.expression import Variable, differentiate as diff
>>> from ldb.algebra.function import Function
>>> from ldb.algebra.math import sqrt
>>> x = Variable('x')
>>> y = Variable('y')
>>> z = Function('z')
>>> z2 = Function('z2')
>>> dz_dx = diff(z, x)
>>> dz2_dx = diff(z2, x)
>>> F1 = z*z - (4*y**6 + 12*y**3*x**2 + 9*x**4)
>>> F2 = z2 - z - (15*sqrt(y) + 5*x) + (2*y**3 + 3*x**2)
>>> differentiate([F1, F2], [z, z2], [x], {x: 2, y: 9, z: 1470, z2: 55})
[12.0, 5.0]
>>> differentiate([F1, F2], [z, z2], [x, x], {x: 2, y: 9, z: 1470, z2: 55})
[6.0, 0.0]
"""
assert len(equations) == len(dependent_variables)
dF_diff = [chain_differentiate(equation, differentiation) for
equation in equations]
ddep_diff = [chain_differentiate(dependent_variable, differentiation) for
dependent_variable in dependent_variables]
total_binding = dict(binding)
# TODO: memoize the intermediate values
for i in range(1, len(differentiation)):
this_differentiation = differentiation[0:i]
extra_binding_vars = [chain_differentiate(dependent_variable,
this_differentiation) for
dependent_variable in dependent_variables]
extra_binding_values = differentiate(equations, dependent_variables,
this_differentiation,
total_binding)
these_extra_bindings = {var:value for var, value
in zip(extra_binding_vars,
extra_binding_values)}
total_binding.update(these_extra_bindings)
coefficient_matrix = Matrix(LAPACKTypeEnum.double, len(equations),
len(dependent_variables))
right_hand_side = Vector(LAPACKTypeEnum.double, len(equations))
for row, diff_eq in enumerate(dF_diff):
coeffs, remainder = divide_list(diff_eq, ddep_diff)
try:
for column, coeff in enumerate(coeffs):
coefficient_matrix[row, column] = bind(coeff, total_binding)
except TypeError:
            raise Exception('Unbound value: ' + str(bind(coeff, total_binding)))
try:
right_hand_side[row] = -bind(remainder, total_binding)
except TypeError:
            raise Exception('Unbound value: ' + str(bind(remainder, total_binding))
                            + ' with binding: ' + str(total_binding))
dgesv(coefficient_matrix, right_hand_side)
return list(right_hand_side)
|
PypiClean
|
/dbpedia_ent-0.1.9-py3-none-any.whl/dbpedia_ent/dto/ent/n1/e/trie_el.py
|
d_trie_el = {'_': ['el.wikipedia.org',
'el-elohe-israel',
'el-mansourieh',
'el-mokawloon',
'el-ashmunein',
'el-universal',
'el-azariyeh',
'el-producto',
'el-harairia',
'el-shennawi',
'el-dibbiyeh',
'el-tourbini',
'el-merreikh',
'el-djazzair',
'el-muqadasi',
'el-cassette',
'el-giganten',
'el-mudzahid',
'el-sakakini',
'el-naddaha',
'el-de-haus',
'el-kentour',
'el-balyana',
'el-qantara',
'el-ghuweir',
'el-bahnasa',
'el-creepo!',
'el-abnoudi',
'el-shaddai',
'el-zamalek',
'el-qadhafi',
'el-alamein',
'el-beth-el',
'el-assasif',
'el-fagoumi',
'el-makrizi',
'el-gadarif',
'el-harrach',
'el-jazouli',
'el-aurians',
'el-kantara',
'el-zahrawi',
'el-djazair',
'el-ismaily',
'el-badari',
'el-shedai',
'el-faiyum',
'el-tawhid',
'el/m-2085',
'el/w-2090',
'el-hazard',
'el-khalej',
'el-qusiya',
'el-hammeh',
'el-aurian',
'el-shafei',
'el-hadary',
'el-m-2083',
'el/w-2085',
'el-amarna',
'el-fustat',
'el-torito',
'el-khokha',
'el/m-2106',
'el-mansha',
'el/m-2084',
'el/m-2133',
'el/m-2226',
'el-nakhla',
'el-jezair',
'el-kilani',
'el-abnudi',
'el-olympi',
'el-jadida',
'el-quiada',
'el-qahira',
'el/m-2160',
'el/m-2083',
'el/m-2032',
'el/m-2075',
'el/m-2052',
'el-sheikh',
'el-wihdat',
'el-hobagi',
'el/m-2090',
'el-birweh',
'el-kanemi',
'el-khader',
'el-fagumi',
'el-lejjun',
'el-hachem',
'el-koura',
'el-paran',
'el-jaish',
'el-haria',
'el-gaada',
'el-kaeda',
'el-arian',
'el-gamal',
'el-kaida',
'el-azhar',
'el-bireh',
'el-arish',
'el-qaeda',
'el-gouna',
'el-kamus',
'el-islah',
'el-lahun',
'el-darad',
'el-bizri',
'el-masry',
'el-marsa',
'el-dakka',
'el-gaish',
'el-geish',
'el-limby',
'el-hibah',
'el-a-kru',
'el-ouali',
'el-ahwat',
'el-aryan',
'el-kurru',
'el-elyon',
'el-obeid',
'el-lisht',
'el-menya',
'el-queda',
'el-tarif',
'el-qaida',
'el-watan',
'el-detti',
'el-hibeh',
'el-flaye',
'el-kuds',
'el-bira',
'el-hasa',
'el-idwa',
'el-oued',
'el-ahly',
'el-hajj',
'el-biar',
'el-mina',
'el-fish',
'el-wafd',
'el-quds',
'el-wali',
'el-hawa',
'el-nath',
'el-golu',
'el-hadj',
'el-hiba',
'el-yafi',
"el'-76",
'el-tod',
'el-ain',
'el-tur',
'el-baz',
'el-kab',
'el-tor',
'el33t',
'el-op',
'el-al',
'el34g',
"el'ad",
'el-p',
'el++',
'el84',
'el34',
'el21',
'el-b',
'el/1',
'el32',
'el.p',
'el3'],
'a': ['elayaperumalnallur',
'elasto-capillarity',
'elaphoglossoideae',
'elass-lothringen',
'elateriospermeae',
'elatostematoides',
'elaphostrongylus',
'elaphomycetaceae',
'elamo-dravidian',
'elachiptereicus',
'elachertomorpha',
'elasmonematidae',
'elaphidionopsis',
'elachistosuchus',
'elaphonematidae',
'elamba-mudakkal',
'elateriospermum',
'elaeocarpaceae',
'elachistocleis',
'elaidinization',
'elamkunnapuzha',
'elaidinisation',
'elayirampannai',
'elassomatoidei',
'elasmodactylus',
'elachyophtalma',
'elaphrolaelaps',
'elarappallikal',
'elastodynamics',
'elasmosauridae',
'elasmobranchii',
'elaphantiasis',
'elassomatidae',
'elachothamnos',
'elaphoglossum',
'elasmotherium',
'elaphrosaurus',
'elapegademase',
'elachisinidae',
'elattostachys',
'elaphroconcha',
'elaphrothrips',
'elaphrocnemus',
'elavangargudi',
'elasticsearch',
'elachanthemum',
'elasmobranche',
'elaterodiscus',
'elangakurichy',
'elanthankuzhi',
'elassocanthon',
'elanodactylus',
'elasmostethus',
'elateriformia',
'elaphocephala',
'elasmobranchs',
'elastichosts',
'elaphrodites',
'elaphoidella',
'elangeswaran',
'elah-gabalus',
'elachocharax',
'elakatothrix',
'elavumthitta',
'elasmognatha',
'elaphebolion',
'elachistidae',
'elasmosaurid',
'elasmobranch',
'elavancherry',
'elaiochorion',
'elassoctenus',
'elasmosaurus',
'elassovalene',
'elachypteryx',
'elagaballium',
'elambilakode',
'elachistodon',
'elassodiscus',
'elamipretide',
'elanthangudi',
'elaeagnaceae',
'ela-mana-mou',
'elaan-e-jung',
'elapomorphus',
'elandsgracht',
'elachocroton',
'elaeophorbia',
'elamaldeniya',
'elaborations',
'elapognathus',
'elasmopalpus',
'elachistites',
'elastography',
'elaphanthera',
'elandakuttai',
'elandslaagte',
'elandhanjery',
'elassogaster',
'elaeodendron',
'elatotrypes',
'elandapatti',
'elateropsis',
'elandskraal',
'elaphomyces',
'elaeodopsis',
'elateroidea',
'elaiohorion',
'elattoneura',
'elastically',
'elastomania',
'elatinaceae',
'elagabalium',
'elassomatid',
'elampalloor',
'elakelaiset',
'elangarkudi',
'elasti-girl',
'elateroides',
'elaphebolia',
'elaeocarpus',
'elastomeric',
'elandakudam',
'elasipodida',
'elastolysis',
'elaiochorio',
'elateriodea',
'elachiptera',
'elatosaurus',
'elasmosarus',
'elaeomyrmex',
'elaphriella',
'elachanthus',
'elaphidiini',
'elastration',
'elaphristis',
'elafibranor',
'elavanchery',
'elakatmakhi',
'elacestrant',
'elainabella',
'elastoplast',
'elanthanoor',
'elatocladus',
'elafonissos',
'elaboration',
'elandsdoorn',
'elachyptera',
'elaeosticta',
'elaphromyia',
'elamanchili',
'elastolefin',
'elatophilus',
'elaphopsis',
'elathaalam',
'elastrator',
'elamajarvi',
'elamalwewa',
'elapeedika',
'elacholoma',
'elaeomyces',
'elaeophora',
'elasmopoda',
'elaphidion',
'elampillai',
'elatostema',
'elaphrinae',
'elatochori',
'elaintarha',
'elasiprora',
'elasticity',
'elachorbis',
'elagabalus',
'elaiohorio',
'elathagiri',
'elaphropus',
'elapotinus',
'elaterinae',
'elaphocera',
'elassoptes',
'elachisina',
'elappunkal',
'elagobalus',
'elaioplast',
'elachisoma',
'elasmosoma',
'elastomers',
'elateridae',
'elasmaspis',
'elafonisos',
'elagabulus',
'elaphandra',
'elaeidinae',
'elasmosaur',
'elapsoidea',
'elaiomycin',
'elandsrand',
'elakurichi',
'elacatinus',
'elamanmeno',
'elaiochori',
'elachertus',
'elasterell',
'elawyering',
'elandspark',
'elazigspor',
'elaeococca',
'elangovan',
'elaltitan',
'elaterite',
'elanthoor',
'elantxobe',
'elaiohori',
'elaeodina',
'elachista',
'elasminae',
'elatobium',
'elaterini',
'elanoides',
'elancourt',
'elankadai',
'elamgulam',
'elaeocarp',
'elandurai',
'elaphodus',
'elapoidea',
'elaeocyma',
'elastinen',
'elasippus',
'elassonas',
'elastance',
'elasmidae',
'elahuizen',
'elanjipra',
'elasticin',
'elagamuwa',
'elastolin',
'elaiosome',
'elasmucha',
'elappully',
'elaeolite',
'elathalam',
'elambalur',
'elaeagnus',
'elamkulam',
'elafonisi',
'elakolanu',
'elasmaria',
'elatrolet',
'elapoidis',
'elastomer',
'elappally',
'elaeoluma',
'elamites',
'elampini',
'elassoma',
'elagatis',
'elanthur',
'elahiyeh',
'elasmias',
'elaealis',
'elaprase',
'elamkadu',
'elangadu',
"elaine's",
'elagabal',
'elakhbar',
'elacatis',
'elantris',
'elaionas',
'elak-oku',
'elabored',
'elasonas',
'elasayel',
'elangbam',
'elatobia',
'elaphria',
'elappara',
'elapinae',
'elamaram',
'elastics',
'elastase',
'elapidae',
'elaninae',
'elanapis',
'elastane',
'elaphrus',
'elassona',
'elaeatis',
'elamitic',
'elaemima',
'elaeagia',
'elabered',
'eladetta',
'elaidius',
'elagolix',
'elastica',
'elabared',
'elavally',
'elaprolu',
'elayiyeh',
'elanders',
'elaiyur',
'elapids',
'elaspol',
'elastic',
'elapata',
'elation',
'elakiri',
'elamite',
'elastin',
'elaenia',
'elasmia',
'elafius',
'elaunin',
'elative',
'elateia',
'elahieh',
'elathur',
'elampal',
'elathan',
'elasioi',
'elantra',
'elatine',
'elafina',
'elamadu',
'elaphus',
'elatrol',
'elasmus',
'elabuga',
'elasona',
'elaspan',
'elamena',
'elamais',
'elampus',
'ela-stv',
'elackad',
'elabftw',
'elastix',
'elaszym',
'elanger',
'elanora',
'elaraby',
'elabela',
'elasund',
'eladio',
'elamad',
'elachi',
'elavur',
'elanad',
'elatea',
'elafin',
'elabad',
'eladha',
'elanex',
'elaine',
'elazar',
'elanga',
'elahly',
'elaeis',
'eladar',
'elafos',
'elatha',
'elavl1',
'elaeth',
'elaris',
'elanco',
'elavon',
'elazay',
'elazig',
'elaman',
'elaeus',
'elaver',
'elapid',
'elasht',
'elasis',
'elatos',
'elatia',
'elaska',
'elavil',
'elayne',
'elanji',
'elaldi',
'elanus',
'elaphe',
'elaiza',
'elater',
'elaheh',
'elanda',
'elands',
'elatus',
'elasi',
'elano',
'elaan',
'elaia',
'elac1',
'elass',
'elath',
'elara',
'elata',
'elaps',
'elam1',
'ela-2',
'elame',
'elani',
'elaph',
'elahi',
'elain',
'elati',
'elaea',
'elaol',
'elada',
'elasa',
'eland',
'elato',
'ela-3',
'elac2',
'ela-1',
'elahe',
'elana',
'elah',
'elal',
'elam',
'elai',
'elat',
'elan',
'elac',
'elas',
'elar',
'ela2',
'elaf',
'ela'],
'b': ['elbasvir/grazoprevir',
'elbsandsteingebirge',
'elbe-stremme-fiener',
'elbeuf-sur-andelle',
'elbe-weser-dreieck',
'elbe-havel-canal',
'elbe-havel-kanal',
'elbe-seitenkanal',
'elbschwanenorden',
'elbe-havel-land',
'elbphilharmonie',
'elburgo/burgelu',
'elbe-ehle-nuthe',
'elbeuf-en-bray',
'elbaue-flaming',
'elbenschwand',
'elbe-project',
'elbe-germans',
'elbretornis',
'elbingerode',
'elbrus-avia',
'elbakin.net',
'elbow-joint',
'elbhangfest',
'elbchaussee',
'elbe-elster',
'elbowgrease',
'elbauenpark',
'elbe-saale',
'elbersdorf',
'elberadweg',
'elbflorenz',
'elbe-parey',
'elbbrucken',
'elbrusskii',
'elbrusskiy',
'elbe-heide',
'elbrus-2s+',
'elbe-havel',
'elbigenalp',
'elberfeld',
'elbrusski',
'elbrussky',
'elbmarsch',
'elbtalaue',
'elbaradei',
'elbrus-8s',
'elbegdorj',
'elborough',
'elbingian',
'elbrewery',
'elbahouse',
'elbowgate',
'elberich',
'elbasani',
'elbingen',
'elbassan',
'elbonian',
'elbtower',
'elbakyan',
'elbridge',
'elbursia',
'elbereth',
'elburgon',
'elburton',
'elbegast',
'elberton',
'elbasvir',
'elbistan',
'elbessos',
'elbling',
'elbette',
'elbourz',
'elbasan',
'elberus',
'elbriot',
'elbayon',
'elbrick',
'elbonia',
'elbafly',
'elbbach',
'elb-139',
'elboeuf',
'elbulli',
'elbogen',
'elberon',
'elbaite',
'elburgo',
'elbeyli',
'elbella',
'elberta',
'elbiya',
'elbert',
'elbrus',
'elbora',
'elbete',
'elbers',
'elburg',
'elbows',
'elborz',
'elbaka',
'elburz',
'elbaum',
'elbtal',
'elbiku',
'elblag',
'elbeuf',
'elbach',
'elbie',
'elban',
'elbis',
'elben',
'elbit',
'elbow',
'elbio',
'elbu',
'elbw',
'elbo',
'elba',
'elbe',
'elbi',
'elb'],
'c': ['elchesheim-illingen',
'elcabrosaurus',
'elchasaites',
'elckerlijc',
'elchweiler',
'elchasites',
'elcesaites',
'elcassette',
'elckerlyc',
'elcomsoft',
'elcanidae',
'elche/elx',
'elchingen',
'elcatonin',
'elchasai',
'elchlepp',
'elcidis',
'elciego',
'elcaset',
'elcoteq',
'elcedes',
'elcysma',
'elcock',
'elchin',
'elcine',
'elcvia',
'elcmar',
'elcom',
'elcic',
'elcot',
'elcar',
'elcor',
'elcan',
'elcat',
'elcho',
'elche',
'elci',
'elce',
'elcb',
'elcy',
'elco',
'elca',
'elc'],
'd': ['elder-beerman',
'eldridgeville',
'elderberries',
'eldeyjarbodi',
'eldecalcitol',
'elderflower',
'eldrevatnet',
'eldiario.es',
'eldredgeops',
'eldermyrmex',
'eldersfield',
'eldertreks',
'eldoradina',
'eldredelei',
'eldoraigne',
'elderbrook',
'elderfield',
'eldhrimnir',
'eldredgeia',
'elderspeak',
'elderville',
'elderberry',
'eldiguzids',
'eldebrock',
'eldeceeon',
'elderslea',
'eldsberga',
'eldersloo',
'elderslie',
'eldelumab',
'eldership',
'elderwort',
'eldopaque',
'eldershaw',
'eldercare',
'eldonnia',
'elderkin',
'elduayen',
'eldritch',
'eldeniya',
'eldridge',
'eldepryl',
'eldredge',
'eldingen',
'eldaafer',
'elderate',
'eldegard',
'eldering',
'eldkvarn',
'eldbjorg',
'eldorado',
'eldoyane',
'eldaring',
'eldetal',
'eldoret',
'elduain',
'eldzhey',
'eldwick',
'eldonia',
'elderly',
'eldarov',
'eldivan',
'eldlive',
'eldjarn',
'eldrine',
'eldamar',
'elduvik',
'eldiguz',
'eldfell',
'eldikan',
'eldrin',
'elddis',
'eldest',
'eldopa',
'eldana',
'eldred',
'eldgja',
'elders',
'eldijk',
'eldyak',
'eldrid',
'eldora',
'eldena',
'eldon',
'eldem',
'eldar',
'eldra',
'eldee',
'eldor',
'eldyk',
'eldad',
'eldap',
'eldho',
'elden',
'eldia',
'eldir',
'elder',
'eldur',
'eldis',
'eldin',
'eldey',
'eldol',
'elda',
'elde',
'eldo',
'eldr',
'eldc',
'eld'],
'e': ['elected-mayors-in-the-united-kingdom',
'elexacaftor/tezacaftor/ivacaftor',
'electronic-visual-displays',
'electrochemiluminescence',
'electrodiffusiophoresis',
'electrochemiluminescent',
'electrogastroenterogram',
'electrohypersensitivity',
'electroencephalographic',
'electropermeabilization',
'electrokompressiongraph',
'electro-encephalography',
'elektro-apparate-werke',
'electroencephalography',
'electro-fluid-dynamics',
'electricity-generating',
'electroencephalograph',
'electromethanogenesis',
'electrochromatography',
'electro-encephalogram',
'electromyoneurography',
'electronystagmography',
'electroencefalography',
'electricalengineering',
'electrolithoautotroph',
'electroencephalophone',
'electrical-resistance',
'electromyostimulation',
'elementarygrouptheory',
'electroencelphalogram',
'electro-olfactography',
'electroretinographic',
'electrocardiographic',
'electrocorticography',
'electropalatographic',
'electrophysiologists',
'electroneuronography',
'electrocochleography',
'electrogalvanization',
'electroantennography',
'electrotrichogenesis',
'electro-therapeutics',
'elektro-mess-technik',
'electrocauterization',
'eleutheroschizonidae',
'electrohydrodynamics',
'electrophysiological',
'electrocommunication',
'electronystagmograph',
'electroencephalogram',
'electronegativities',
'electropalatography',
'elektronorgtechnica',
'electronorgtechnica',
'electrotherapeutics',
'electrocardiography',
'electrochemotherapy',
'electric-generating',
'electron-micrograph',
'electrohydrogenesis',
'electrochlorination',
'electrohydrodynamic',
'electromanipulation',
'elementalallotropes',
'eleutherodactylinae',
'eleutherodactylidae',
'elektrizitatsmuseum',
'electro-ejaculation',
'electrophysiologist',
'electro-engineering',
'electroconductivity',
'electrodeionization',
'electroluminescence',
'electroretinography',
'electroshocktherapy',
'electro-oculography',
'electrofluorination',
'eleftherio-kordelio',
'electroluminescent',
'electroantennogram',
'electrostimulation',
'electroengineering',
'electrodynamometer',
'electoral-vote.com',
'electro-mechanical',
'electrocardiograms',
'electroreflectance',
'electro-extracting',
'eleu-dit-leauwette',
'electropalatograph',
'electro-industrial',
'electrooculography',
'electrofulguration',
'electrocardiophone',
'electromyrmococcus',
'elevorganisasjonen',
'electrocoagulation',
'electrocyclization',
'electronequivalent',
'electroglottograph',
'electroejaculation',
'eleutherodactyline',
'electrosensitivity',
'electro-technology',
'electropsychometer',
'elektrokardiogramm',
'electrophotography',
'electronegativeity',
'electrocapillarity',
'electroacupuncture',
'electrocardiograph',
'elektromesstechnik',
'electrogastrogram',
'electron-neutrino',
'electromyographic',
'electroantenogram',
'electrogravimetry',
'electrocomponents',
'electroacupunture',
'electrotechnician',
'electrokardiogram',
'electro-paintings',
'electro-mechanics',
'electrogustometry',
'electrofiltration',
'eleutherodactylus',
'electrodiagnostic',
'electronegativity',
'electro-magnetism',
'electro-oculogram',
'electrotachyscope',
'electriclarryland',
'electro-pneumatic',
'electrodeposition',
'electrometallurgy',
'electroejaculator',
'electrohomeopathy',
'electoralvote.com',
'electromechanical',
'eleutherengonides',
'electro-oxidation',
'eleuthero-lacones',
'electrophysiology',
'electroendosmosis',
'electroextraction',
'electroretinogram',
'electrotechnology',
'electropositivity',
'electrospinlacing',
'electrodomesticos',
'electrocardiogram',
'elektro-slovenija',
'electrooculogram',
'elettrodomestico',
'electropolishing',
'electrosynthesis',
'electrodiffusion',
'electron-capture',
'electrodiagnoses',
'electromagnitism',
'electro-coatings',
'electroextracted',
'electrodiathermy',
'electromaterials',
'electro-harmonix',
'electropaintings',
'electrotherapist',
'electrocutionist',
'electromagnetism',
'electrochemicals',
'elearnnetwork.ca',
'electro-acoustic',
'electroanalgesia',
'electro-magnetic',
'electrostephanus',
'elektropartizany',
'electronic-books',
'electroreceptive',
'electropodagrion',
'electro-acustico',
'electronicvoting',
'electromyography',
'electroneurogram',
'electromigration',
'elektronikpraxis',
'electro-theremin',
'electrolytically',
'electromagnetics',
'electrogravitics',
'electrostriction',
'electrovibration',
'electromechanics',
'electrochemistry',
'electrodiagnosis',
'eleutherocentrus',
'electro-painting',
'electrocompaniet',
'electrochromatic',
'elektra/musician',
'electromagnatism',
'electro-refining',
'electroguitarpop',
'electroreception',
'eleutherospermum',
'electrophilicity',
'electro-precizia',
'electropherogram',
'electrochromism',
'electrokoenenia',
'eleutheranthera',
'elearnetwork.ca',
'eleutherostylis',
'electrinocellia',
'electrotheremin',
'eleftheropoulos',
'electronarcosis',
'electrofocusing',
'electromyograms',
'electropainting',
'electrolyzation',
'electrocutioner',
'electroadhesion',
'elephantorrhiza',
'electro-plating',
'electrification',
'electromyograph',
'electronovision',
'electrolocation',
'electrodynamics',
'elephantosaurus',
'electro-optical',
'electrorefining',
'electrophoresis',
'electromedicine',
'electrotechnics',
'electrochromics',
'electro-osmosis',
'electrocatalyst',
'electrochemical',
'electro-painted',
'elephantimorpha',
'electrodialysis',
'elephantopoides',
'elektrozavodsky',
'electro-coating',
'eleuteroschisis',
'electronegative',
'elephantineness',
'electropositive',
'electrokinetics',
'electroceramics',
'eleutherocercus',
'elephantiformes',
'electrophoretic',
'eleftheroupolis',
'electroplankton',
'electroporation',
'electrophoridae',
'electroreceptor',
'electrospinning',
'electrentomidae',
'electroacoustic',
'electroblotting',
'electrorotation',
'electromobility',
'electroharmonix',
'eleutherostigma',
'electrocoatings',
'eleutherococcus',
'electro-winning',
'electrovalence',
'electrogravity',
'electroforming',
'electromyogram',
'eleutheromania',
'eleutheromenia',
'electioneering',
'electrohippies',
'elephant-apple',
'electrocoating',
'elektracustika',
'elephant-shrew',
'electromagnets',
'eleoscytalopus',
'electron-volts',
'electroelution',
'electropainted',
'electrowetting',
'electrokinesis',
'electrofishing',
'electro-magnet',
'electro-optics',
'electroceramic',
'electokinetics',
'eleutherosides',
'electro-paints',
'electromotance',
'electroosmosis',
'electroplating',
'electrorefined',
'eleios-pronnoi',
'electronicasia',
'elektriraudtee',
'eleftheroupoli',
'electrooptical',
'electrotherapy',
'eleftherotypia',
'eleutheropolis',
'electrocutango',
'electro-coated',
'electroception',
'electrogenesis',
'eleftherochori',
'electrostrymon',
'electroplaques',
'electrochromic',
'eleutheromyces',
'electrosurgery',
'electrokinetic',
'elector-prince',
'electronically',
'electrolytical',
'electroetching',
'elefantenrunde',
'electrocteniza',
'electronomicon',
'electrodynamic',
'electroporator',
'electrographic',
'electrotropism',
'elephantoceras',
'elettromacumba',
'electrowinning',
'electro-system',
'electrostatics',
'elearnsecurity',
'electrochemist',
'electrolysist',
'electrocrania',
'electropolish',
'elephantomyia',
'elephantinely',
'electro-music',
'eleventh-hour',
'elektromotive',
'eleutheronema',
'elephantiasis',
'electrography',
'eleuthranthes',
'eleventyseven',
'electrictears',
'electro-coats',
'electrocyclic',
'elephantdrive',
'electrophilic',
'elecytrolysis',
'electropaints',
'electroretard',
'elevator:2010',
'electroputere',
'electroatopos',
'electrolarynx',
'electrotettix',
'electrofringe',
'elegansovella',
'electro-voice',
'electrontrans',
'electrophorus',
'eleutherandra',
'electro-optic',
'electrologica',
'elegestolepis',
'electronvolts',
'electrovermis',
'electrophiles',
'electronicore',
'elepuukahonua',
'electrologist',
'electrofreeze',
'electrocibles',
'eleutheroside',
'electricsheep',
'electrocuting',
'electroimpact',
'electromagnet',
'electrophorid',
'electrophobia',
'elektropoyezd',
'eleutherochir',
'eleutharrhena',
'elephantomene',
'electrocoated',
'electromerism',
'elettariopsis',
'electromyrmex',
'electron-volt',
'eleutherornis',
'electrocution',
'electrafixion',
'electrostrong',
'electro-paint',
'electrosmosis',
'electromethes',
'electrostatic',
'electrotyping',
'electromotive',
'electro-house',
'eletronuclear',
'electrooptics',
'electricution',
'elephantoidea',
'electromobile',
'electrofusion',
'electr-o-pura',
'elearnnetwork',
'electrosexual',
'eleutherascus',
'electricimage',
'electropathy',
'electrolytic',
'electronicam',
'electraglide',
'eleuthromyia',
'electroscope',
'electrolites',
'electroklash',
'electrogenic',
'electrotonus',
'electricidad',
'elephantidae',
'elektrogorsk',
'electrolytes',
'electrotherm',
'eleveneleven',
'electoralism',
'electro-mech',
'eleutherobin',
'elektroclash',
'electronicat',
'electrotango',
'elephantusia',
'elegantaspis',
'elektroforez',
'electrowavez',
'electrolysis',
'electrotroph',
'electrokoopa',
'electropoise',
'eleutherozoa',
'electrohouse',
'electroclash',
'electroshock',
'electro-jazz',
'electronvolt',
'elephantfish',
'electrolysed',
'elephantbird',
'electro-coat',
'electrowerkz',
'electromotor',
'electrolosis',
'elektropoezd',
'electraglaia',
'electrotaxis',
'electro-weak',
'electrawoman',
'electrooptic',
'elephantware',
'electro-rock',
'electrocoats',
'electrocytes',
'electability',
'electrosmart',
'elephantopin',
'elenophorini',
'electricland',
'electrophile',
'elephantstay',
'electrorides',
'eleothreptus',
'electronorte',
'elephantitis',
'elefteriades',
'elettrotreno',
'electrocuted',
'electricians',
'electrovoice',
'elektorornis',
'electroliner',
'electro-funk',
'elektrotwist',
'electrically',
'electrophaes',
'elevenstring',
'electrosport',
'elektrithone',
'elephantopus',
'electrometer',
'elephantulus',
'electrotonic',
'eleemosynary',
'elementalors',
'electrolyzer',
'eleutherobia',
'electrolaser',
'electrophone',
'electropaint',
'electroplate',
'electro-smut',
'electro-soma',
'electrospray',
'elektrolytes',
'electrochem',
'electroblot',
'elearethusa',
'eleodiphaga',
'eleutherios',
'electrofrac',
'electroboom',
'electromuse',
'electrolite',
'eletropaulo',
'electrogram',
'eleutherius',
'elektrychka',
'electrovite',
'electrichka',
'electronics',
'elektrenika',
'elecalthusa',
'eleotriodes',
'electronemo',
'electrocute',
'electro-mat',
'elektronika',
'electrotech',
'electrofuge',
'electrapate',
'electioneer',
'elephantina',
'elektrobank',
'elenthikara',
'electrojazz',
'elexacaftor',
'elerithattu',
'eleven-plus',
'electroplax',
'electro-hop',
'electrosmog',
'elektrougli',
'electralane',
'electotreta',
'electro-pop',
'electrocoat',
'elektrichka',
'elerium-115',
'eleiosuchus',
'electropump',
'electrecord',
'electrovamp',
'electropost',
'electrocore',
'elefantasia',
'electrothan',
'eleochorion',
'electricity',
'electrofuel',
'electrolyze',
'eleutherian',
'elektrostal',
'electronico',
'elelasingan',
'electronium',
'electrovaya',
'electrocart',
'electrolier',
'electrified',
'elephantida',
'electroweak',
'eleutherine',
'eleftheriou',
'electropunk',
'elektroboot',
'eletrolysis',
'electrofolk',
'electrician',
'elephantine',
'electrolyte',
'electrocyte',
'electrology',
'eleutherion',
'eleuchadius',
'electralyte',
'electrorana',
'electronika',
'elementaita',
'elementfour',
'electrorock',
'eletronorte',
'electrogena',
'electroboot',
'electrelane',
'electricoil',
'electrostal',
'elektronics',
'electrohome',
'elephantmen',
'electoplate',
'electrathon',
'electricfil',
'electrofunk',
'electocracy',
'electromote',
'electronica',
'elektrichki',
'elecampane',
'elevatorny',
'eleohorion',
'electryone',
'eleochorio',
'electrofax',
'elementree',
'eleven-gon',
'electricar',
'eletricity',
'electronic',
'elephantis',
'eleutherus',
'elekosmioi',
'elektafilm',
'eleaticism',
'eleocharis',
'eleutherna',
'elektrobit',
'eleftheres',
'eleutherae',
'elevensies',
'eleotridae',
'eleftherna',
'elecrolyte',
'eleutheria',
'elessaurus',
'elektromis',
'elethiomel',
'electrobat',
'elesclomol',
'electrobix',
'electroboy',
'electrical',
'electravia',
'electrocop',
'elegestina',
'electrodes',
'elektrenai',
'elenendorf',
'electorate',
'electrosex',
'electrowon',
'electrojet',
'electropop',
'electrolux',
'elegocampa',
'eleventeen',
'eleusinian',
'elegabalus',
'elektrobay',
"elect'road",
'eleothinus',
'electresia',
'elevations',
'electranet',
'electrobel',
'electracma',
'eletriptan',
'elevenplay',
'electronet',
'elementalz',
'electrodry',
'eleionomae',
'elemenstor',
'electridae',
'elementals',
'elementary',
'electrocar',
'eletronics',
'eleoniscus',
'electrohop',
'eletrobras',
'electrabel',
'eleusinion',
'elenchidae',
'electride',
'elevators',
'elenctics',
'elektrola',
'electrons',
'eleorchis',
'eleuterio',
'elekmania',
'elementar',
'eleuterus',
'elezovici',
'electryon',
'elephants',
'elementeo',
'eleuthera',
'eleochori',
'electrico',
'elections',
'eleikeroz',
'elementor',
'electracy',
'elefthero',
'electribe',
'elefantes',
'electrion',
'elekeiroz',
'electives',
'electrada',
'electrism',
'elektro-l',
'eletipadu',
'eleiodoxa',
'elephanta',
'electrics',
'elexorien',
'electrode',
'elephante',
'elephenor',
'elec-trak',
'electuary',
'eleithyia',
'elemental',
'elearning',
'electrock',
'elementis',
'eleotrica',
'elektroni',
'electrona',
'elettaria',
'elevation',
'eletefine',
'elephant9',
'electoral',
'electrium',
'elezagici',
'electrasy',
'elencourt',
'elekistra',
'eletronic',
'elevenses',
'elekmonar',
'electrica',
'eleohorio',
'electrola',
'eleuthero',
'eledoisin',
'eleusinia',
'elentsite',
'eleazarus',
'electress',
'elecbyte',
'elefsina',
'elective',
'elenhank',
'elesotis',
'elesbaan',
'elenarus',
'elentari',
'elegance',
'elenctic',
'elephunk',
'electrum',
'elemento',
'elecraft',
'electrip',
'elefante',
'eleuthia',
'elephind',
'electone',
'elektrac',
'eleotris',
'eleanora',
'eleginus',
'eleusine',
'elektrim',
'eleatics',
'election',
'elegarda',
'electric',
'elektron',
'electron',
'elektrit',
'eledhwen',
'elegiacs',
'eleonora',
'electret',
'elekiter',
'elemicin',
'eleassar',
'elecussa',
'eleskirt',
'electrik',
'eleotrid',
'eleanore',
'eleodoxa',
'eleclink',
'elemenop',
'elenowen',
'electors',
'elegante',
'elephanz',
'elevenie',
'eleventh',
'elebrity',
'eleiotis',
'elematic',
'elenchus',
'electrek',
'eleusina',
'eletrica',
'elelwani',
'elezovic',
'eleuther',
'eleazarr',
'eleagnus',
'elephant',
'elemetal',
'elements',
'eleohori',
'elenchos',
'elevator',
'elethyia',
'elepogon',
'eletale',
'elettra',
'elektro',
'elegaic',
'electre',
'eleve11',
'electra',
'elektor',
'eleaeoi',
'eleazar',
'elendor',
'eleazus',
'elegeia',
'elegaon',
'elestat',
'elegiac',
'eleggua',
'elegast',
'eleuter',
'eleians',
'elesmes',
'elenore',
'eleanor',
'eleleth',
'elector',
'elenari',
'elevens',
'eleazer',
'eledees',
'elephas',
'elenika',
'eleruwa',
'elesias',
'elekere',
'electro',
'elevons',
'elemaga',
'eledone',
'electus',
'elemene',
'elephan',
'eledona',
'elekana',
'eleusis',
'eleatic',
'eledhel',
'elevate',
'eleidin',
'elenydd',
'eleague',
'elefsis',
'elegans',
'elerium',
'elendil',
'eleones',
'eleodes',
'elefant',
'element',
'elebits',
'elering',
'elegant',
'elected',
'elecard',
'eleleis',
'elewijt',
'elemund',
'elefunk',
'eleison',
'elefun',
'eleans',
'eleint',
'elevci',
'elegit',
'elegia',
'elesun',
'elekta',
'eleius',
'elegua',
'elerji',
'elemir',
'elemag',
'elerai',
'elegie',
'elevon',
'elenia',
'elenga',
'elelea',
'ele.me',
'elebit',
'eleyas',
'eletot',
'elebra',
'elemex',
'elekes',
'elevil',
'elegba',
'elemer',
'eleusa',
'elench',
'elenin',
'elenco',
'eledio',
'elecom',
'elektr',
'elect',
'elene',
'eleos',
'elegu',
'eleia',
'eleme',
'elena',
'eleja',
'elend',
'elemi',
'eleon',
'eleth',
'eleva',
'elean',
'eleda',
'elets',
'eleni',
'elegy',
'elers',
'eleri',
'eleks',
'elele',
'elei',
'elem',
'elev',
'eleo',
'elex',
'eleh',
'elek',
'elen',
'eley',
'eles',
'elea',
'eled',
'ele'],
'f': ['elfangor-sirinial-shamtul',
'elfstedenronde',
'elfenbeinbreen',
'elfstedentocht',
'elfershausen',
'elf-aquitane',
'elfriedella',
'elferkofel',
'elfenstein',
'elfconners',
'elfenroads',
'elfthritha',
'elf-titled',
'elfvengren',
'elfdalian',
'elfenlied',
'elfazepam',
'elfbuchen',
'elfconner',
'elfstones',
'elfenland',
'elfarsker',
'elfensjon',
'elfsorrow',
'elf-arrow',
'elfenbein',
'elferrat',
'elfquest',
'elfmania',
'elfingen',
'elfsheen',
'elfsborg',
'elfsberg',
'elfangor',
'elfridia',
'elfstone',
'elfriede',
'elfstrom',
'elfling',
'elfving',
'elf-man',
'elfster',
'elfoddw',
'elfwood',
'elfshot',
'elflein',
'elfonia',
'elffin',
'elfish',
'elfman',
'elfeld',
'elfata',
'elfael',
'elfern',
'elfodd',
'elfrid',
'elfros',
'elford',
'elfora',
'elfcon',
'elfaa',
'elfin',
'elfed',
'elf32',
'elfas',
'elf1',
'elfi',
'elfa',
'elf5',
'elfs',
'elf2',
'elfe',
'elfm',
'elf3',
'elf4',
'elff',
'elf'],
'g': ['elgin--middlesex--london',
'elgaland-vargaland',
'elginerpetontidae',
'elgeseterlinjen',
'elgoonishshive',
'elgonotyphlus',
'elginerpeton',
'elgrandetoto',
'elgondaguda',
'elgoresyite',
'elginhaugh',
'elgiganten',
'elgorriaga',
'elganellus',
'elgspiggen',
'elginshire',
'elgygytgyn',
'elgersburg',
'elgiszewo',
'elgeseter',
'elganowko',
'elgfrodi',
'elgnowko',
'elgonima',
'elganowo',
'elgonina',
'elgoibar',
'elgibbor',
'elgsnes',
'elgaard',
'elgaras',
'elginia',
'elgiloy',
'elgrand',
'elgamal',
'elgaria',
'elgheia',
'elgnowo',
'elgeta',
'elgart',
'elgort',
'elgyay',
'elgoog',
'elgato',
'elgins',
'elgyan',
'elgama',
'elgie',
'elgin',
'elgar',
'elgol',
'elgee',
'elg-e',
'elger',
'elgon',
'elgg',
'elge',
'elgi',
'elg'],
'h': ['elhanyaya',
'elharar',
'elhuyar',
'elhanan',
'elhamma',
'elhovo',
'elhayi',
'elhaz',
'elham',
'elhae',
'elht',
'elhs',
'elh'],
'i': ['elisabeth-engelhardt-literaturpreis',
'elincourt-sainte-marguerite',
'elisabeth-sophien-koog',
'elisabeth-anna-palais',
'elizabethtown-kitley',
'elisabethatriene',
'elibertydollars',
'elizabethkingia',
'elisabethschule',
'elisabethsminde',
'elictognathidae',
'elisabethville',
'elisabethszell',
'elichiribehety',
'eliminationism',
'elisabethbuhne',
'elizabethville',
'elisabethinsel',
'elibertydollar',
'elise-daucourt',
'elisabeth-anne',
'elisabethmarkt',
'eliminativism',
'elizabethgrad',
'eligmodermini',
'elizabethtown',
'elisabethstad',
'elitetorrents',
'elisabethgrad',
'elinga-mpango',
'elisabethinia',
'eliasluthman',
'eligmodontia',
'elizardbeast',
'eligmocarpus',
'elippathayam',
'elisavetgrad',
'elizavetovca',
'elizabethans',
'elijah/eliot',
'elinochorion',
'elisabethans',
'elizavetgrad',
'elizabetgrad',
'elisabetgrad',
'elixophyllin',
'elinzanetant',
'elissarrhena',
'elias-clark',
'elitechrome',
'elisavetpol',
'elicitation',
'elipathayam',
'eliminatory',
'elizabethan',
'eligibility',
'elis-thomas',
'elipsocidae',
'elisenvaara',
'elikkattoor',
'eliogabalus',
'elimination',
'eliteserien',
'eliminators',
'elistanzhi',
'elitserien',
'elixhausen',
'eliminalia',
'elitar-202',
'elisionism',
'elitaliana',
'elidiptera',
'elizaville',
'elizangela',
'elisabetin',
'elisolimax',
'elitloppet',
'eliminated',
'elig-mfomo',
'elizabetha',
'elingamite',
'elibelinde',
'elitegroup',
'elingamita',
'elixiaceae',
'elisenberg',
'elisabetha',
'elisenheim',
'elisabetta',
'eliogabalo',
'eliglustat',
'elinohorio',
'elinochori',
'elipovimab',
'eliminator',
'elinkinde',
'eliphalet',
'elisaveta',
'eliprodil',
'elimiotes',
'eliaszuki',
'elizabeth',
'elisarion',
'elisenhoy',
'eliasberg',
'elikewela',
'elissonas',
'elinchrom',
'elipandus',
'elimadate',
'elicicola',
'eliyathur',
'elinohori',
'eliotiana',
'elimelech',
'elissalde',
'elipsocus',
'eliasfund',
'elincourt',
'elionurus',
'elicarlos',
'eliphante',
'eliptical',
'eliogarty',
'elitettan',
'elivelton',
'elinkwijk',
'elikkulam',
'elizaveta',
'eligminae',
'elimidate',
'eliossana',
'eliminedu',
'elinelund',
'elishebha',
'eliticide',
'elisiario',
'elinogrel',
'elizaphan',
'elisabeta',
'elistvere',
'elimiotis',
'elishabha',
'elisabeth',
'elizalde',
'eligijus',
'elisions',
'elisyces',
'elinitsa',
'elisheva',
'eliandro',
'eliphius',
'elimnion',
'eliseyna',
'elishaba',
'elifelet',
'elishava',
'elicitor',
'eliassen',
'eliashev',
'eliachna',
'elibyrge',
'eliyyahu',
'elisenda',
'elisheba',
'elimians',
'eliteweb',
'elidurus',
'elishama',
'elixomin',
'eliademy',
'elinkine',
'elizeche',
'eligible',
'elizanow',
'eliseina',
'elisacom',
'elitists',
'elivelto',
'elicitus',
'eliasite',
'elipsoid',
'elibrary',
'elikkala',
'eliasson',
'elibalta',
'elimnioi',
'elisella',
'elicinae',
'eliberri',
'elisedal',
'elipando',
'elingard',
'elisapie',
'elivagar',
'eliashiv',
'eliquate',
'eliozeta',
'elizondo',
'eliakhin',
'elicitin',
'elicini',
"elisa's",
'elizewo',
'elinlin',
'eliseus',
'elimeia',
'eliomys',
'elipses',
'eliyohu',
'elishia',
'elishua',
'elinand',
'elitexc',
'eliaqim',
'eliomar',
'elimaea',
'eligedu',
'elingen',
'eliakim',
'elivava',
'eliwlod',
'elipsos',
'elispot',
'eliecer',
'eliyahu',
'elienor',
'elision',
'eligard',
'elixier',
'elizium',
'elixoia',
'eliason',
'eliurus',
'elitist',
'elishea',
'eliksem',
'eliksir',
'eligiow',
'elitmus',
'eliphaz',
'eliezer',
'elisson',
'elitour',
'eliyyah',
'elissos',
'eliding',
'eliseev',
'elishoa',
'elisava',
'elitims',
'elimnio',
'elimaki',
'elishah',
'elitism',
'eliasch',
'elinvar',
'eliezio',
'elipsis',
'eliburn',
'eliazer',
'elimala',
'eliamep',
'elixirs',
'elizate',
'eliseni',
'eligius',
'elisita',
'elijah',
'elivie',
'elisos',
'elimus',
'elixer',
'eliseo',
'elizio',
'elizur',
'elides',
'elinav',
'elista',
'elisei',
'elined',
'elites',
'elisir',
'eliane',
'elidel',
'elishe',
'elifaz',
'elimar',
'eliduc',
'elidor',
'eligos',
'elimia',
'elincs',
'elixio',
'elinca',
'elissa',
'eliyeh',
'elizeu',
'eliade',
'eliste',
'elixia',
'elitis',
'elipse',
'eliana',
'elinks',
'elisra',
'elisio',
'eliica',
'elidar',
'eliseu',
'eliscu',
'elitek',
'elinos',
'elibia',
'eliose',
'eliyah',
'elitsa',
'eliska',
'elisha',
'elitch',
'elided',
'elizer',
'eliraz',
'elikeh',
'elinor',
'elisee',
'eligor',
'eliahu',
'elioud',
'eliava',
'elinoi',
'eliseg',
'elidon',
'elixir',
'eliot',
'eliav',
'elihu',
'elice',
'eliss',
'eliel',
'elian',
'eling',
'elima',
'elise',
'eliab',
'elife',
'elihe',
'elide',
'elita',
'eliza',
'eline',
'elija',
'elial',
'eligh',
'eliea',
'elion',
'elite',
'elica',
'elisp',
'elips',
'elium',
'elick',
'elias',
'eliki',
'elize',
'elisa',
'elina',
'eliad',
'elida',
'elimi',
'eliya',
'elios',
'eliud',
'elist',
'eliso',
'elini',
'elia',
'elih',
'elic',
'elim',
'elit',
'elie',
'elin',
'elif',
'elil',
'elis',
'elio',
'elix',
'eli'],
'j': ['eljigidey',
'eljigidei',
'eljudnir',
'eljigin',
'eljahmi',
'eljanov',
'eljas'],
'k': ['elkov-kasper',
'elkathurthy',
'elkasaites',
'elkunchwar',
'elkstones',
'elkabbach',
'elk-sedge',
'elkhotovo',
'elkesaite',
'elkenroth',
'elkington',
'elkasites',
'elkunirsa',
'elkerzee',
'elkesley',
'elkasite',
'elkstone',
'elkalyce',
'elk*rtuk',
'elkhound',
'elkheart',
'elkargeh',
'elkridge',
'elkaduwa',
'elkhabar',
'elkwood',
'elkadri',
'elkroot',
'elkford',
'elkhart',
'elkeson',
'elkmont',
'elkhorn',
'elkland',
'elkhovo',
'elkanah',
'elkunde',
'elkjop',
'elkosh',
'elkeid',
'elkins',
'elkann',
'elkton',
'elktoe',
'elkpen',
'elkana',
'elkies',
'elkind',
'elkia',
'elkem',
'elkin',
'elkan',
'elkie',
'elkab',
'elke',
'elko',
'elki',
'elky',
'elka',
'elk1',
'elk4',
'elk3',
'elkr',
'elks',
'elk'],
'l': ['ellemann-jensen-doctrine',
'elliot-murray-kynynmound',
'elloughton-cum-brough',
'elliniko-argyroupoli',
'ellenz-poltersdorf',
'ellesmeroceratidae',
'ellerton-on-swale',
'ellenfeldstadion',
'ellerbrockgraben',
'ellersinghuizen',
'ellibou-badasso',
'ellipsocephalus',
'ellesmerocerida',
'ellipsoolithus',
'elliptocephala',
'ellisiophyllum',
'ellipse/proofs',
'ellinaphididae',
'elliptochloris',
'ellesmerocerid',
'elliptocytosis',
'ellesmeroceras',
'ellieharrison',
'ellappugazhum',
'ellewoutsdijk',
'ellingshausen',
'elliotsmithia',
'ellipteroides',
'ellesmocerids',
'ellinochorion',
'ellinochorio',
'ellenborough',
'ellipsoptera',
'elliptorhina',
'ellipsometer',
'ellisellidae',
'ellesborough',
'ellinohorion',
'elliotherium',
'ellanderroch',
'ellinopyrgos',
'ellagitannin',
'ellis-bextor',
'ellbogen-see',
'ellipsometry',
'ellsworthite',
'ellingstring',
'ellisoniidae',
'ellupanatham',
'elliotsville',
'elleipsisoma',
'ellisichthys',
'elliptoleus',
'ellenbeckia',
'ellinohorio',
'ellighausen',
'ellenabeich',
'ellobiopsis',
'ellispontos',
'ellipsuella',
'ellersleben',
'ellisochloa',
'ellsworthia',
'ellenvatnet',
'ellinoroson',
'elliotomyia',
'elliptocyte',
'ellipticism',
'elleschodes',
'ellmenreich',
'ellerbeckia',
'ellenbergia',
'ellapalayam',
'ellinoceras',
'ellensbrook',
'ellipanthus',
'ellenhausen',
'ell-cranell',
'ellipostoma',
'ellerspring',
'ellenberger',
'elliponeura',
'ellishadder',
'ellobioidea',
'ellipticine',
'ella-zallar',
'ellenthorpe',
'ellekumbura',
'ellipsanime',
'ellingstedt',
'ellipsoidal',
'ellertshaar',
'ellinochori',
'ellbogensee',
'elliottdale',
'elliottinia',
'ellopostoma',
'ellobiusini',
'ellipticity',
'ellingsoya',
'elliotdale',
'ellipsoids',
'ellenstein',
"elle'ments",
'ellehammer',
'ellingwood',
'ellerstadt',
'ellegarden',
'ellingsrud',
'ellobiidae',
'ellenglaze',
'elleanthus',
'elloughton',
'ellembelle',
'ellisville',
'ellinohori',
'ellinistic',
'ellidavatn',
'ellaichami',
'ellisfield',
'ellatrivia',
'ellinthorp',
'ellenbrook',
'ellochotis',
'ellezelles',
'elliptical',
'ellangowan',
'ellipinion',
'ellambhavi',
'ellwanger',
'ellerbeck',
'ellerdine',
'elliyadda',
'ellenabad',
'ellastone',
'ellenbrae',
'ellichpur',
'ellingboe',
'ellaidhoo',
'elliphant',
'ellenbach',
'ellenwood',
'ellscheid',
'ellweiler',
'ellistown',
'ellemobil',
'ellington',
'ellychnia',
'ellicombe',
'ellermann',
'ellerhoop',
'elliptics',
'ellingham',
'ellenboro',
'ellabella',
'ellinitsa',
'ellenberg',
'ellsworth',
'ellobiida',
'ellingson',
'ellendale',
'ellisdale',
'ellerbach',
'ellakvere',
'ellingsen',
'ellenfeny',
'ellenhall',
'elleporus',
'ellenshaw',
'ellescini',
'ellegaard',
'elliptera',
'ellamanda',
'elliottia',
'ellerdorf',
'ellesmere',
'elladoone',
'ellerburn',
'ellwangen',
'ellewatta',
'ellisella',
'ellacombe',
'ellecourt',
'ellomenos',
'ellinikon',
'elliptigo',
'ellisonia',
'ellipsoid',
'ellerhein',
'ellerslie',
'ellislab',
'ell-wand',
'ellerbee',
'ellinais',
'elliston',
'ellingen',
'ellenico',
'ellinika',
'ellision',
'ellisras',
'ellalink',
'elliptic',
'ellispet',
'ellisdon',
'ellakkal',
'ellobius',
'ellaktor',
'ellsbury',
'ellerton',
'ellagiri',
'elleflot',
'ellicott',
'ellipsis',
'elliniko',
'ellipses',
'ellemann',
'ellidaey',
'ellanico',
'ellerker',
'ellopium',
'ellektra',
'ellerbek',
'ellering',
'ellendun',
'ellipura',
'ellenika',
'ellvange',
'ellerson',
'ellefsen',
'ellemeet',
'ellerman',
'elleguna',
'ellescus',
'ellnvika',
"ellman's",
'ellepola',
'ellandun',
'ellstorp',
'ellinair',
'ellefson',
'ellopion',
'ellenson',
'elliptio',
'elleholm',
'ellobium',
'ellaichi',
'ellebach',
'ellegirl',
'ellakudy',
'ellsberg',
'elligood',
'ellurema',
'ellisell',
'ellefeld',
'ellevest',
'ellimist',
'ellbogen',
'ellhofen',
'ellscott',
'ellidaar',
'ellenton',
'ellinge',
'ellipta',
'ellwood',
'ellsler',
'elladio',
'ellonby',
'ellison',
'ellipse',
'ellmann',
'ellrich',
'elleray',
'elleber',
'elladan',
'elliott',
'ellided',
'ellasar',
'ellbach',
'elloway',
'elliman',
'elledge',
'ellikon',
'ellevio',
'ellough',
'ellange',
'ellwand',
'ellecom',
'elliops',
'ellauda',
'ellamaa',
'ellinia',
'elleria',
'ellyson',
'ellauri',
'ellands',
'ellguth',
'ellerby',
'elleben',
'ellence',
'elliant',
'ellerau',
'ellopia',
'elleore',
'ellisia',
'ellalan',
'ellhoft',
'ellesse',
'ellenby',
'ellzee',
'ellchi',
'ellice',
'ellcia',
'ellena',
'ellzey',
'ellett',
'ellgau',
'ellora',
'ellert',
'ellada',
'ellmer',
'ellend',
'ellmau',
'elledi',
'ellard',
'ellies',
'ellias',
'ellipi',
'elliot',
'ellern',
'elliss',
'elling',
'elland',
'ellica',
'ellida',
'ellams',
'ellero',
'ellyse',
'ellore',
'elloes',
'ellman',
'ellery',
'elley',
'ellas',
'ellak',
'ellul',
'ellie',
'ellil',
'ellac',
'ellos',
'eller',
'ellim',
'ellan',
'ellek',
'ellon',
'ellys',
'ellen',
'ellar',
'elles',
'ellin',
'ellex',
'ellia',
'ellet',
'elloe',
'ellai',
'ellyn',
'ellis',
'ellu',
'ellx',
'ell3',
'ells',
'ellc',
'ello',
'elle',
'elli',
'ella',
'ell'],
'm': ['elmenhorst/lichtenhagen',
'elmwood-on-the-opequon',
'elmton-with-creswell',
'elmstone-hardwicke',
'elmwood--transcona',
'elmelundemester',
'elmshorn-land',
'elmastasoglu',
'elmisauridae',
'elmer-dewitt',
'elmuthalleth',
'elmuerto.com',
'elmesthorpe',
'elmopalooza',
'elmerrillia',
'elmenteitan',
'elmetsaetan',
'elmerinula',
'elmenteita',
'elmerillia',
'elmerrilla',
'elmenhorst',
'elmsthorpe',
'elmetsaete',
'elmendorff',
'elmisaurus',
'elminiidae',
'elmapinar',
'elminster',
'elmayurdu',
'elmerilla',
'elmshaven',
'elmbridge',
'elmendorf',
'elmington',
'elmerimys',
'elmstone',
'elmridge',
'elmhirst',
'elmakuzu',
'elmcroft',
'elmswell',
'elmtaryd',
'elmhurst',
'elmerick',
'elmworth',
'elminius',
'elmsdale',
'elmaamul',
'elmshorn',
'elmo-mit',
'elminage',
'elm-asse',
'elmsater',
'elmstead',
'elmander',
'elmarada',
'elmerina',
'elmstein',
'elmfield',
'elmiotai',
'elmahjer',
'elmbach',
'elmrahu',
'elmgren',
'elmslie',
'elmarit',
'elminia',
'elmatan',
'elmlohe',
'elmdale',
'elminae',
'elmatic',
'elmsett',
'elmvale',
'elmalik',
'elmiron',
'elmwood',
'elmacik',
'elmidae',
'elmsley',
'elmsted',
'elmsall',
'elmaleh',
'elmonte',
'elmini',
'elmali',
'elm327',
'elmera',
'elmley',
'elmira',
'elmire',
'elmham',
'elmina',
'elmdon',
'elmton',
'elmore',
'elmyra',
'elmoia',
'elmont',
'elmlea',
'elmus',
'elmar',
'elmir',
'elmo3',
'elmes',
'elmia',
'elmau',
'elmet',
'elmex',
'elmo2',
'elmas',
'elman',
'elmon',
'elmcc',
'elmyr',
'elmaz',
'elmah',
'elmer',
'elmen',
'elmo1',
'elmis',
'elmo',
'elme',
'elma',
'elml',
'elmi',
'elms',
'elm'],
'n': ['elnesvagen',
'elnoretta',
'elnhausen',
'elnadim',
'elnora',
'elnard',
'elnath',
'elnur',
'elnec',
'elnes',
'elnet',
'elna',
'elne',
'eln'],
'o': ['elongatocontoderus',
'elongatoolithidae',
'elongatopothyne',
'elongatoolithus',
'elongoparorchis',
'elohormiguero/',
'elopterygidae',
'elongatedness',
'elocutionists',
'elongatosybra',
'elopteryginae',
'elocutionary',
'elopiprazole',
'elocutionist',
'eloxochitlan',
'elorgnewwave',
'elopomorpha',
'elonichthys',
'elogbatindi',
'elographics',
'elopiformes',
'elorhynchus',
'eloszallas',
'elonkorjuu',
'elogistics',
'elo-rating',
'elonkerjuu',
'elongation',
'elobixibat',
'elotuzumab',
'elotherium',
'eloeophila',
'elookkara',
'elonexone',
'elopoides',
'elochelys',
'elorriaga',
'elosaurus',
'elocution',
'elopteryx',
'eloquence',
'elopopsis',
'elosuchus',
'elopement',
'elohekhem',
'elocation',
'elopiform',
'eloth:tes',
'elodimyia',
'elokuutio',
'eloquent',
'elophila',
'eloyella',
'elocater',
'eloxatin',
'elottery',
'elom-080',
'elomeryx',
'elomatic',
'eloranta',
'elonidae',
'eloceria',
'elocutio',
'elomenos',
'elofsson',
'elovitsa',
'elothtes',
'elongase',
'elopidae',
'eloheim',
'elouera',
'elounda',
'eloheem',
'elosita',
'elophos',
'eloizes',
'elotuwa',
'elogium',
'elovena',
'elorrio',
'elodina',
'elorduy',
'elounta',
'elonex1',
'elokobi',
'elohist',
'elochai',
'eloisia',
'elonex',
'elohim',
'elodea',
'elorus',
'elonet',
'elordi',
'elocon',
'eloisa',
'elokuu',
'elonka',
'elonzo',
'elodia',
'elovl4',
'elonty',
'elobey',
'elotha',
'elomar',
'elodie',
'elobod',
'elocom',
'elorde',
'elodes',
'eloops',
'elomay',
'eloria',
'elopes',
'elorza',
'eloise',
'elousa',
'elokim',
'elonus',
'eloyes',
'elopid',
'elomaa',
'elosha',
'elopak',
'elovl5',
'eloth',
'eloro',
'elora',
'elovo',
'eloji',
'eloko',
'eloyi',
'eloie',
'elorn',
'eloff',
'elosa',
'elona',
'elout',
'eloka',
'elops',
'eloho',
'elole',
'elone',
'elota',
'elorg',
'elorz',
'elore',
'elois',
'eloor',
'elong',
'eloel',
'eloan',
'elob',
'eloa',
'elok',
'eloi',
'elo2',
'elos',
'eloc',
'elon',
'elom',
'eloy',
'elod',
'elo'],
'p': ['elpistostegidae',
'elpistostegalia',
'elpistoichthys',
'elpersbuttel',
'elpistostege',
'elphinstone',
'elpincha:2d',
'elphphilm1',
'elph.zwolf',
'elpidiidae',
'elphidium',
'elpenbach',
'elphicke',
'elpinice',
'elpidius',
'elpitiya',
'elpidio',
'elpidia',
'elpiste',
'elphaba',
'elphick',
'elprice',
'elpenor',
'elpmas',
'elphir',
'elpozo',
'elphin',
'elpeus',
'elphel',
'elphie',
'elphas',
'elpira',
'elpida',
'elpin',
'elpsn',
'elpis',
'elpt',
'elpa',
'elpc',
'elph',
'elpe',
'elp2',
'elp3',
'elp4',
'elp'],
'q': ['elq-300', 'elqana', 'elqosh', 'elqar', 'elqui', 'elq'],
'r': ['elrhazosaurus',
'elringklinger',
'elroy-sparta',
'elrodoceras',
'elrington',
'elrathia',
'elrihana',
'elrohir',
'elrond',
'elric!',
'elrose',
'elrick',
'elris',
'elrob',
'elros',
'elroy',
'elrey',
'elrig',
'elrow',
'elram',
'elron',
'elrod',
'elrad',
'elrt',
'elra',
'elrr',
'elr'],
's': ['elsdorf-westermuhlen',
'elsass-lothringen',
'elschnitztalbach',
'elsaesserdeutsch',
'elsasserdeutsch',
'elstertrebnitz',
'elsulfavirine',
'elsparefonden',
'elsenztalbahn',
'elsbett-motor',
'elsalvadoria',
'elsner+flake',
'elsbettmotor',
'elsamitrucin',
'elsterwerda',
'elsilimomab',
'elsterheide',
'elsinoaceae',
'elstronwick',
'elskwatawa',
'elsterberg',
'elsipogtog',
'elsatsoosu',
'elsalbador',
'elseworlds',
'else-where',
'else-marie',
'elsholtzia',
'elseyornis',
'elseworld',
'elsbethen',
'elsvatnet',
'elsendorf',
'elsenheim',
'elsenhans',
'elsrickle',
'elsighorn',
'elsheimer',
'elsie-dee',
'elsaesser',
'elsenfeld',
'elsewhere',
'elsthorpe',
'elsteraue',
'elsschot',
'elsornis',
'elsholtz',
'elsberry',
'elsinora',
'elsfleth',
'elsagate',
'elshitsa',
'elsewhen',
'elsebach',
'elsevier',
'elshtain',
'elstanus',
'elsfjord',
'elsenham',
'elsworth',
'elsfield',
'elsebeth',
'elsasser',
'elsiane',
'elsmore',
'elsinoe',
'elssler',
'elsbach',
'elswick',
'elsnigk',
'elstree',
'elspeet',
'elslack',
'elstead',
'elstner',
'elsburg',
'elstorf',
'elsynge',
'elsehul',
'elsevir',
'elsdorf',
'elsayed',
'elsinho',
'elsecar',
'elsword',
'elsener',
'elsbeth',
'elsanor',
'elscint',
'elsmere',
'elstela',
'elswout',
'elspeth',
'elsbett',
'elsamni',
'elskere',
'elsenz',
'elsner',
'elsets',
'elsass',
'elston',
'elsdon',
'elsted',
'elstak',
'elsham',
'elstra',
'elsoff',
'elskop',
'elsnig',
'elstob',
'elsing',
'elstow',
'elsava',
'elsach',
'elster',
'elsloo',
'elsene',
'elsani',
'elstar',
'elseya',
'elspar',
'elsen',
'elser',
'elsio',
'elsam',
'elsee',
'elsey',
'elsys',
'elsau',
'elsie',
'elsby',
'elsom',
'elspa',
'elsib',
'elson',
'elspe',
'elsa',
'elso',
'else',
'elsi',
'elsd',
'elst',
'elsu',
'elsp',
'elsy',
'els'],
't': ['elton-on-the-hill',
'eltingmuhlenbach',
'eltroplectris',
'eltrombopag',
'eltingville',
'eltoprazine',
'elternheft',
'elta-kabel',
'elterngeld',
'elterwater',
'eltrummor',
'eltendorf',
'elthyrone',
'elthuruth',
'eltonjohn',
'eltonasen',
'elterlein',
'eltorito',
'elthorne',
'eltisley',
'eltville',
'elterish',
'elton.tv',
'eltroxin',
'elteber',
'eltanin',
'eltinge',
'eltihno',
'elthusa',
'eltmann',
'eltigen',
'eltynia',
'eltinho',
'elthor',
'eltsin',
'eltham',
'elting',
'eltono',
'eltro',
'elten',
'eltek',
'eltd1',
'elter',
'eltis',
'elton',
'elto',
'elta',
'eltz',
'elte',
'elt'],
'u': ['elusimicrobiota',
'elutherosides',
'elutriation',
'elucidarium',
'elucidation',
'elus-cohens',
'elursrebmem',
'eluxadoline',
'eluvapalli',
'elucapalli',
'eluvaitivu',
'eluviation',
'eluxolweni',
'eluveitie',
'eluta.com',
'elurupadu',
'elumathur',
'eluta.ca',
'elunetru',
'elumalai',
'elusates',
'elulaios',
'elunchun',
'eluchans',
'elugelab',
'elunirsa',
'eluphant',
'eluosizu',
'eluthero',
'elulmesh',
'eluvium',
'eluanbi',
'eluvial',
'elution',
'elusion',
'elusive',
'elumos',
'elutec',
'elunay',
'eluana',
'elusia',
'eluted',
'eluoma',
'elucid',
'eluate',
'elumur',
'eluned',
'eluent',
'eluru',
'eluta',
'eluku',
'elusa',
'elubo',
'elute',
'elulu',
'eluza',
'elul',
'elua',
'elu'],
'v': ['elvitegravir/cobicistat/emtricitabine/tenofovir',
'elvillar/bilar',
'elvegardsmoen',
'elvirasminde',
'elvitegravir',
'elvucitabine',
'elversstein',
'elvavralet',
'elvebakken',
'elvesaeter',
'elvistojko',
'elversberg',
'elvisaurus',
'elvenquest',
'elven-king',
'elverdinge',
'elvifrance',
'elvalandet',
'elvstroem',
'elvenking',
'elvenpath',
'elvenhome',
'elvington',
'elverskud',
'elvanfoot',
'elvetham',
'elvisaur',
'elvaston',
'elviemek',
'elveskud',
'elveflya',
'elvegard',
'elvissey',
'elvillar',
'elvestad',
'elvehoj',
'elvissa',
'elvenes',
'elviria',
'elvange',
'elveden',
'elvanli',
'elvidge',
'elverum',
'elvasia',
'elvazlu',
'elvetia',
'elvarg',
'elvina',
'elvira',
'elvish',
'elvire',
'elviss',
'elvida',
'elveon',
'elvers',
'elvyra',
'elvine',
'elviin',
'elvend',
'elvin!',
'elvran',
'elvet',
'elven',
'elvio',
'elvir',
'elv1s',
'elvii',
'elvia',
'elvas',
'elvis',
'elves',
'elvey',
'elvin',
'elvan',
'elvik',
'elver',
'elvo',
'elva',
'elve',
'elv'],
'w': ['elwetritsche',
'elwedritsche',
'elwetritsch',
'elworthy',
'elwendia',
'elworth',
'elwesia',
'elwick',
'elwala',
'elwire',
'elwood',
'elwiki',
'elwill',
'elwell',
'elwes',
'elwis',
'elwen',
'elwin',
'elwro',
'elwan',
'elwha',
'elway',
'elwat',
'elwe',
'elwy',
'elwo',
'elwa',
'elw'],
'x': ['elx/elche', 'elxleben', 'elxsi', 'elx'],
'y': ['elytracanthinini',
'elytrostachys',
'elythranthera',
'elytrophorus',
'elytroleptus',
'elytropappus',
'elymoclavine',
'elyounoussi',
'elyasalylar',
'elymniopsis',
'elytroderma',
'elymniinae',
'elytranthe',
'elytrurini',
'elyasabad',
'elymniini',
'elymaeans',
'elytropus',
'elysiidae',
'elytraria',
'elymiotis',
'elymandra',
'elytrigia',
'elyankudi',
'elyasvand',
'elymninae',
'elybirge',
'elyakhin',
'elymians',
'elysette',
'elymnioi',
'elyashiv',
'elymnion',
'elyu-ene',
'elyachin',
'elyptron',
'elymnias',
'elytrum',
'elymaic',
'elymaea',
'elyasin',
'elyeser',
'elyasan',
'elymian',
'elyahin',
'elytron',
'elyadal',
'elymnio',
'elysium',
'elysion',
'elymais',
'elysius',
'elysian',
'elyotto',
'elyaqim',
'elymana',
'elyakim',
'elymus',
'elyton',
'elymos',
'elyasi',
'elyahu',
'elypse',
'elytra',
'elymas',
'elymia',
'elyria',
'elytis',
'elyatu',
'elyerd',
'elyrus',
'elysia',
'elyes',
'elyon',
'elyas',
'elyar',
'elyna',
'elymi',
'elyan',
'elyot',
'elys',
'elyc',
'elya',
'ely'],
'z': ['elzbieta-kolonia',
'elzbietkow',
'elzbietowo',
'elztalbahn',
'elzbieciny',
'elzamalek',
'elzasonan',
'elzbiecin',
'elzbietow',
'elzweiler',
'elzogram',
'elzanowo',
'elzevier',
'elzbieta',
'elzinga',
'elzevir',
'elzbach',
'elzunia',
'elzaqum',
'elzange',
'elztal',
'elzele',
'elzach',
'elzey',
'elzhi',
'elzie',
'elzo',
'elze',
'elzy',
'elza',
'elz']}
|
PypiClean
|
/onnxruntime_directml-1.15.1-cp38-cp38-win_amd64.whl/onnxruntime/quantization/operators/lstm.py
|
import numpy
import onnx
from onnx import onnx_pb as onnx_proto
from ..quant_utils import QuantType, attribute_to_kwarg, ms_domain # noqa: F401
from .base_operator import QuantOperatorBase
"""
Quantize LSTM
"""
class LSTMQuant(QuantOperatorBase):
def __init__(self, onnx_quantizer, onnx_node):
super().__init__(onnx_quantizer, onnx_node)
def quantize(self):
"""
parameter node: LSTM node.
parameter new_nodes_list: List of new nodes created before processing this node.
return: a list of nodes in topological order that represents quantized Attention node.
"""
node = self.node
assert node.op_type == "LSTM"
if not self.quantizer.is_valid_quantize_weight(node.input[1]) or not self.quantizer.is_valid_quantize_weight(
node.input[2]
):
super().quantize()
return
model = self.quantizer.model
W = model.get_initializer(node.input[1]) # noqa: N806
R = model.get_initializer(node.input[2]) # noqa: N806
if len(W.dims) != 3 or len(R.dims) != 3:
super().quantize()
return
[W_num_dir, W_4_hidden_size, W_input_size] = W.dims # noqa: N806
[R_num_dir, R_4_hidden_size, R_hidden_size] = R.dims # noqa: N806
if self.quantizer.is_per_channel():
del W.dims[0]
del R.dims[0]
W.dims[0] = W_num_dir * W_4_hidden_size
R.dims[0] = R_num_dir * R_4_hidden_size
quant_input_weight_tuple = self.quantizer.quantize_weight_per_channel(
node.input[1], onnx_proto.TensorProto.INT8, 0
)
quant_recurrent_weight_tuple = self.quantizer.quantize_weight_per_channel(
node.input[2], onnx_proto.TensorProto.INT8, 0
)
W_quant_weight = model.get_initializer(quant_input_weight_tuple[0]) # noqa: N806
R_quant_weight = model.get_initializer(quant_recurrent_weight_tuple[0]) # noqa: N806
W_quant_array = onnx.numpy_helper.to_array(W_quant_weight) # noqa: N806
R_quant_array = onnx.numpy_helper.to_array(R_quant_weight) # noqa: N806
W_quant_array = numpy.reshape(W_quant_array, (W_num_dir, W_4_hidden_size, W_input_size)) # noqa: N806
R_quant_array = numpy.reshape(R_quant_array, (R_num_dir, R_4_hidden_size, R_hidden_size)) # noqa: N806
W_quant_array = numpy.transpose(W_quant_array, (0, 2, 1)) # noqa: N806
R_quant_array = numpy.transpose(R_quant_array, (0, 2, 1)) # noqa: N806
W_quant_transposed = onnx.numpy_helper.from_array(W_quant_array, quant_input_weight_tuple[0])  # noqa: N806
R_quant_transposed = onnx.numpy_helper.from_array(R_quant_array, quant_recurrent_weight_tuple[0])  # noqa: N806
model.remove_initializers([W_quant_weight, R_quant_weight])
model.add_initializer(W_quant_transposed)
model.add_initializer(R_quant_transposed)
W_quant_zp = model.get_initializer(quant_input_weight_tuple[1]) # noqa: N806
R_quant_zp = model.get_initializer(quant_recurrent_weight_tuple[1]) # noqa: N806
W_quant_scale = model.get_initializer(quant_input_weight_tuple[2]) # noqa: N806
R_quant_scale = model.get_initializer(quant_recurrent_weight_tuple[2]) # noqa: N806
if self.quantizer.is_per_channel():
W_quant_zp.dims[:] = [W_num_dir, W_4_hidden_size]
R_quant_zp.dims[:] = [R_num_dir, R_4_hidden_size]
W_quant_scale.dims[:] = [W_num_dir, W_4_hidden_size]
R_quant_scale.dims[:] = [R_num_dir, R_4_hidden_size]
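# Assemble the DynamicQuantizeLSTM inputs: X, the quantized W and R, the original optional
# LSTM inputs (B, sequence_lens, initial_h, initial_c, P; passed as empty strings when
# absent), and finally the W scale/zero point and R scale/zero point.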
inputs = []
input_len = len(node.input)
inputs.extend([node.input[0]])
inputs.extend([quant_input_weight_tuple[0], quant_recurrent_weight_tuple[0]])
inputs.extend([node.input[3] if input_len > 3 else ""])
inputs.extend([node.input[4] if input_len > 4 else ""])
inputs.extend([node.input[5] if input_len > 5 else ""])
inputs.extend([node.input[6] if input_len > 6 else ""])
inputs.extend([node.input[7] if input_len > 7 else ""])
inputs.extend(
[
quant_input_weight_tuple[2],
quant_input_weight_tuple[1],
quant_recurrent_weight_tuple[2],
quant_recurrent_weight_tuple[1],
]
)
kwargs = {}
for attribute in node.attribute:
kwargs.update(attribute_to_kwarg(attribute))
kwargs["domain"] = ms_domain
quant_lstm_name = "" if not node.name else node.name + "_quant"
quant_lstm_node = onnx.helper.make_node("DynamicQuantizeLSTM", inputs, node.output, quant_lstm_name, **kwargs)
self.quantizer.new_nodes.append(quant_lstm_node)
dequantize_node = self.quantizer._dequantize_value(node.input[0])
if dequantize_node is not None:
self.quantizer.new_nodes.append(dequantize_node)
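# Hedged illustration (not part of the original onnxruntime sources; shapes are invented):
# the reshape/transpose above flattens W from [num_dir, 4*hidden, input] to 2-D for
# per-channel quantization and then restores a [num_dir, input, 4*hidden] layout,
# presumably the layout the DynamicQuantizeLSTM kernel expects.
if __name__ == "__main__":
    num_dir, four_hidden, input_size = 2, 8, 3
    w = numpy.arange(num_dir * four_hidden * input_size, dtype=numpy.int8)
    w2d = w.reshape(num_dir * four_hidden, input_size)    # layout used for per-channel quantization
    w3d = w2d.reshape(num_dir, four_hidden, input_size)   # back to per-direction blocks
    w_t = numpy.transpose(w3d, (0, 2, 1))                 # [num_dir, input_size, 4*hidden]
    print(w_t.shape)  # (2, 3, 8)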
|
PypiClean
|
/mct-nightly-1.9.0.20230903.post433.tar.gz/mct-nightly-1.9.0.20230903.post433/model_compression_toolkit/core/keras/graph_substitutions/substitutions/input_scaling.py
|
from tensorflow.keras.layers import InputLayer, Dense, DepthwiseConv2D, Conv2D, Conv2DTranspose, ZeroPadding2D
from typing import List
from model_compression_toolkit.core import common
from model_compression_toolkit.core.common.framework_info import FrameworkInfo
from model_compression_toolkit.core.common.graph.base_graph import Graph
from model_compression_toolkit.core.common.graph.graph_matchers import NodeOperationMatcher, EdgeMatcher, WalkMatcher
from model_compression_toolkit.core.common.graph.base_node import BaseNode
from model_compression_toolkit.core.common.quantization.quantization_config import QuantizationConfig
from model_compression_toolkit.constants import THRESHOLD
from model_compression_toolkit.core.keras.constants import KERNEL
input_node = NodeOperationMatcher(InputLayer)
zeropad_node = NodeOperationMatcher(ZeroPadding2D)
op2d_node = NodeOperationMatcher(Dense) | \
NodeOperationMatcher(Conv2D) | \
NodeOperationMatcher(DepthwiseConv2D) | \
NodeOperationMatcher(Conv2DTranspose)
INPUT_MATCHER = WalkMatcher([input_node, op2d_node])
INPUT_MATCHER_WITH_PAD = WalkMatcher([input_node, zeropad_node, op2d_node])
class BaseInputScaling(common.BaseSubstitution):
"""
Scales the activation threshold of input layers when they are followed by linear nodes. We first
scale their thresholds to a constrained threshold, and then compensate by scaling the linear op
weights correspondingly.
The matcher instance of type WalkMatcher may include intermediate nodes that don't affect scaling
(such as ZeroPadding), but only the first and last nodes are used for scaling.
"""
def __init__(self,
matcher_instance):
"""
Matches: InputLayer -> (optional nodes) -> (Dense,Conv2D,DepthwiseConv2D,Conv2DTranspose)
note: the optional nodes are nodes that don't affect the scaling (such as ZeroPadding)
Create a substitution using different params which may affect the way this substitution is made.
The substitution is looking for edges in the graph which are input layers connected to linear layers.
Args:
matcher_instance: matcher instance of type WalkMatcher
"""
super().__init__(matcher_instance=matcher_instance)
def substitute(self,
graph: Graph,
nodes_list: List[BaseNode]) -> Graph:
"""
Scale activation threshold for input layers, if they are followed by linear nodes. We first
scale their thresholds to a constrained threshold, and then fix it by scaling the linear op weights
correspondingly.
Args:
graph: Graph to apply the substitution on.
nodes_list: List of matched nodes (input layer first, linear layer last) that the substitution is applied on.
Returns:
Graph after applying the substitution.
"""
input_layer = nodes_list[0]
linear_layer = nodes_list[-1]
if not input_layer.is_all_activation_candidates_equal():
raise Exception("Input scaling is not supported for more than one activation quantization configuration "
"candidate")
# all candidates have same activation config, so taking the first candidate for calculations
threshold = input_layer.candidates_quantization_cfg[0].activation_quantization_cfg.activation_quantization_params.get(THRESHOLD)
if threshold is None:
return graph
min_value, max_value = graph.get_out_stats_collector(input_layer).get_min_max_values()
threshold_float = max(abs(min_value), max_value)
if threshold > threshold_float:
scale_factor = threshold_float / threshold
graph.user_info.set_input_scale(1 / scale_factor)
w1_fixed = linear_layer.get_weights_by_keys(KERNEL) * scale_factor
linear_layer.set_weights_by_keys(KERNEL, w1_fixed)
graph.scale_stats_collector(input_layer, 1 / scale_factor)
# After scaling, the weights may have different thresholds, so the weight quantization params need to be recalculated
for nqc in linear_layer.candidates_quantization_cfg:
nqc.weights_quantization_cfg.calculate_and_set_weights_params(w1_fixed)
return graph
class InputScaling(BaseInputScaling):
"""
Substitution extends BaseInputScaling to the case of Input-->Linear
"""
def __init__(self):
"""
Initialize an InputScaling substitution.
"""
super().__init__(matcher_instance=INPUT_MATCHER)
class InputScalingWithPad(BaseInputScaling):
"""
Substitution extends BaseInputScaling to the case of Input-->ZeroPadding-->Linear
"""
def __init__(self):
"""
Initialize an InputScalingWithPad substitution.
"""
super().__init__(matcher_instance=INPUT_MATCHER_WITH_PAD)
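# Hedged numeric sketch (not part of the original toolkit; values are invented): the
# rescaling in BaseInputScaling.substitute scales the linear kernel by `scale_factor`
# to compensate for the input being rescaled by `1 / scale_factor`.
if __name__ == "__main__":
    threshold = 8.0        # activation threshold chosen by the quantizer for the input layer
    threshold_float = 5.0  # max(abs(min), max) observed on the input statistics
    if threshold > threshold_float:
        scale_factor = threshold_float / threshold  # 0.625
        kernel_weight = 0.4
        w1_fixed = kernel_weight * scale_factor     # compensated weight, 0.25
        input_scale = 1 / scale_factor              # value recorded via graph.user_info.set_input_scale
        print(scale_factor, w1_fixed, input_scale)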
|
PypiClean
|
/spconv_cu113-2.3.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl/spconv/pytorch/core.py
|
from typing import Any, List, Optional, Tuple, TypeVar, Union, Dict
import numpy as np
import torch
from spconv.core import ConvAlgo
from spconv.pytorch.constants import PYTORCH_VERSION
from spconv.tools import CUDAKernelTimer
from spconv.constants import SPCONV_FX_TRACE_MODE
if PYTORCH_VERSION >= [1, 8, 0]:
try:
import torch.fx
if PYTORCH_VERSION >= [1, 10, 0]:
from torch.fx import ProxyableClassMeta
else:
from torch.fx.symbolic_trace import ProxyableClassMeta
SpConvTensorMeta = ProxyableClassMeta
except Exception:  # torch.fx is unavailable or its API changed
class SpConvTensorMeta(type):
pass
else:
class SpConvTensorMeta(type):
pass
class ThrustSortAllocator:
def __init__(self, device: torch.device) -> None:
super().__init__()
self.alloced_objs = {}
self.device = device
def alloc(self, n: int):
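# Reuse a cached buffer of exactly n bytes, or any strictly larger cached buffer;
# otherwise allocate a new n-byte buffer and keep it cached for later calls.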
if n in self.alloced_objs:
return self.alloced_objs[n].data_ptr()
for n_cur, ten in self.alloced_objs.items():
if n < n_cur:
return ten.data_ptr()
ten = torch.empty([n], dtype=torch.uint8, device=self.device)
self.alloced_objs[n] = ten
return ten.data_ptr()
class IndiceData(object):
def __init__(self, out_indices, indices, indice_pairs, indice_pair_num,
spatial_shape, out_spatial_shape, is_subm: bool, algo: ConvAlgo,
ksize: List[int], stride: List[int], dilation: List[int], padding: List[int],
voxel_num: Optional[Any] = None):
self.out_indices = out_indices
self.indices = indices
self.indice_pairs = indice_pairs
self.indice_pair_num = indice_pair_num
self.spatial_shape = spatial_shape
self.out_spatial_shape = out_spatial_shape
self.is_subm = is_subm
self.algo = algo
self.ksize = ksize
self.stride = stride
self.dilation = dilation
self.padding = padding
# voxel_num is only used in tensorrt conversion.
self.voxel_num = voxel_num
class ImplicitGemmIndiceData(object):
def __init__(self, out_indices: torch.Tensor, indices: torch.Tensor,
pair_fwd: torch.Tensor, pair_bwd: torch.Tensor,
pair_mask_fwd_splits: List[torch.Tensor],
pair_mask_bwd_splits: List[torch.Tensor],
mask_argsort_fwd_splits: List[torch.Tensor],
mask_argsort_bwd_splits: List[torch.Tensor],
masks: List[np.ndarray], spatial_shape,
out_spatial_shape, is_subm: bool, algo: ConvAlgo,
ksize: List[int], stride: List[int], dilation: List[int], padding: List[int],
in_voxel_num: Optional[Any] = None,
out_voxel_num: Optional[Any] = None):
self.out_indices = out_indices
self.indices = indices
self.pair_fwd = pair_fwd
self.pair_bwd = pair_bwd
self.pair_mask_fwd_splits = pair_mask_fwd_splits
self.pair_mask_bwd_splits = pair_mask_bwd_splits
self.mask_argsort_fwd_splits = mask_argsort_fwd_splits
self.mask_argsort_bwd_splits = mask_argsort_bwd_splits
self.masks = masks
self.spatial_shape = spatial_shape
self.out_spatial_shape = out_spatial_shape
self.is_subm = is_subm
self.algo = algo
self.ksize = ksize
self.stride = stride
self.dilation = dilation
self.padding = padding
# in/out voxel_num is only used in tensorrt conversion.
self.in_voxel_num = in_voxel_num
self.out_voxel_num = out_voxel_num
def scatter_nd(indices, updates, shape):
"""pytorch edition of tensorflow scatter_nd.
this function don't contain except handle code. so use this carefully
when indice repeats, don't support repeat add which is supported
in tensorflow.
"""
ret = torch.zeros(*shape, dtype=updates.dtype, device=updates.device)
ndim = indices.shape[-1]
output_shape = list(indices.shape[:-1]) + shape[indices.shape[-1]:]
flatted_indices = indices.view(-1, ndim)
slices = [flatted_indices[:, i] for i in range(ndim)]
slices += [Ellipsis]
ret[slices] = updates.view(*output_shape)
return ret
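# Hedged usage sketch (not in the original spconv sources): with
# indices = torch.tensor([[0, 1], [2, 3]]), updates = torch.tensor([5., 7.]) and
# shape = [4, 4], scatter_nd returns a 4x4 zero tensor with ret[0, 1] == 5 and
# ret[2, 3] == 7; repeated indices are written only once (no accumulation).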
# ProxyableClassMeta is used for torch.fx
class SparseConvTensor(metaclass=SpConvTensorMeta):
def __init__(self,
features: torch.Tensor,
indices: torch.Tensor,
spatial_shape: Union[List[int], np.ndarray],
batch_size: int,
grid: Optional[torch.Tensor] = None,
voxel_num: Optional[torch.Tensor] = None,
indice_dict: Optional[dict] = None,
benchmark: bool = False,
permanent_thrust_allocator: bool = False,
enable_timer: bool = False,
force_algo: Optional[ConvAlgo] = None):
"""
Args:
features: [num_points, num_features] feature tensor
indices: [num_points, ndim + 1] indice tensor. batch index saved in indices[:, 0]
spatial_shape: spatial shape of your sparse data
batch_size: batch size of your sparse data
grid: pre-allocated grid tensor. should be used when the volume of spatial shape
is very large.
benchmark: whether to enable benchmarking. If enabled, timings of all sparse
operators will be recorded in this SparseConvTensor.
enable_timer: if enabled, the run time of all spconv internal ops is recorded in _timer.
force_algo: force conv/pool layers to use this algo; should only be used for debugging.
"""
ndim = indices.shape[1] - 1
if not SPCONV_FX_TRACE_MODE:
assert features.ndim == 2
assert indices.ndim == 2
assert len(spatial_shape) == ndim, "spatial shape length must equal ndim"
assert indices.dtype == torch.int32, "only support int32"
assert batch_size > 0
# assert features.shape[0] == indices.shape[0]
self._features = features
self.indices = indices
self.spatial_shape = [int(v) for v in spatial_shape]
self.batch_size = batch_size
if indice_dict is None:
indice_dict = {}
self.indice_dict = indice_dict
if grid is None:
grid = torch.Tensor() # empty tensor
self.grid = grid
self.voxel_num = voxel_num # for tensorrt
self.benchmark = benchmark
self.benchmark_record = {}
self.thrust_allocator: Optional[ThrustSortAllocator] = None
if permanent_thrust_allocator:
self.thrust_allocator = ThrustSortAllocator(features.device)
self._timer = CUDAKernelTimer(enable_timer)
self.force_algo = force_algo
self.int8_scale: Optional[np.ndarray] = None
def __repr__(self):
return f"SparseConvTensor[shape={self._features.shape}]"
@property
def is_quantized(self):
return self.features.dtype == torch.qint8
def q_scale(self):
if self.is_quantized:
return self.features.q_scale()
raise ValueError("sparse tensor must be quantized")
def replace_feature(self, feature: torch.Tensor):
"""we need to replace x.features = F.relu(x.features) with x = x.replace_feature(F.relu(x.features))
due to limit of torch.fx
"""
# assert feature.shape[0] == self.indices.shape[0], "replaced num of features not equal to indices"
new_spt = SparseConvTensor(feature, self.indices, self.spatial_shape,
self.batch_size, self.grid, self.voxel_num,
self.indice_dict)
new_spt.benchmark = self.benchmark
new_spt.benchmark_record = self.benchmark_record
new_spt.thrust_allocator = self.thrust_allocator
new_spt._timer = self._timer
new_spt.force_algo = self.force_algo
new_spt.int8_scale = self.int8_scale
return new_spt
def select_by_index(self, valid_indices: torch.Tensor):
new_spt = self.shadow_copy()
new_spt.indices = self.indices[valid_indices]
new_spt.features = self.features[valid_indices]
# reuse data must be cleared after modify indices
new_spt.indice_dict.clear()
return new_spt
def minus(self):
return self.replace_feature(-self.features)
@property
def features(self):
return self._features
@features.setter
def features(self, val):
msg = (
"you can't set feature directly, use 'x = x.replace_feature(your_new_feature)'"
" to generate new SparseConvTensor instead.")
raise ValueError(msg)
@classmethod
def from_dense(cls, x: torch.Tensor):
"""create sparse tensor fron channel last dense tensor by to_sparse
x must be NHWC tensor, channel last
"""
x_sp = x.to_sparse(x.ndim - 1)
spatial_shape = x_sp.shape[1:-1]
batch_size = x_sp.shape[0]
indices_th = x_sp.indices().permute(1, 0).contiguous().int()
features_th = x_sp.values()
return cls(features_th, indices_th, spatial_shape, batch_size)
def dequantize(self):
return self.replace_feature(self.features.dequantize())
@property
def spatial_size(self):
return np.prod(self.spatial_shape)
def find_indice_pair(
self, key) -> Optional[Union[IndiceData, ImplicitGemmIndiceData]]:
if key is None:
return None
if key in self.indice_dict:
return self.indice_dict[key]
return None
def dense(self, channels_first: bool = True):
output_shape = [self.batch_size] + list(
self.spatial_shape) + [self.features.shape[1]]
res = scatter_nd(
self.indices.to(self.features.device).long(), self.features,
output_shape)
if not channels_first:
return res
ndim = len(self.spatial_shape)
trans_params = list(range(0, ndim + 1))
trans_params.insert(1, ndim + 1)
return res.permute(*trans_params).contiguous()
# remove this due to limit of torch.fx
# @property
# def sparity(self):
# return self.indices.shape[0] / np.prod(
# self.spatial_shape) / self.batch_size
def __add__(self, other: Union["SparseConvTensor", torch.Tensor]):
assert isinstance(other, (SparseConvTensor, torch.Tensor))
if isinstance(other, torch.Tensor):
other_features = other
else:
other_features = other.features
return self.replace_feature(self.features + other_features)
def __iadd__(self, other: Union["SparseConvTensor", torch.Tensor]):
assert isinstance(other, (SparseConvTensor, torch.Tensor))
if isinstance(other, torch.Tensor):
other_features = other
else:
other_features = other.features
self.features += other_features
return self
def __radd__(self, other: Union["SparseConvTensor", torch.Tensor]):
assert isinstance(other, (SparseConvTensor, torch.Tensor))
if isinstance(other, torch.Tensor):
other_features = other
else:
other_features = other.features
return self.replace_feature(self.features + other_features)
def shadow_copy(self) -> "SparseConvTensor":
"""create a new spconv tensor with all member unchanged"""
tensor = SparseConvTensor(self.features, self.indices,
self.spatial_shape, self.batch_size,
self.grid, self.voxel_num, self.indice_dict,
self.benchmark)
tensor.benchmark_record = self.benchmark_record
tensor.thrust_allocator = self.thrust_allocator
tensor._timer = self._timer
tensor.force_algo = self.force_algo
tensor.int8_scale = self.int8_scale
return tensor
def expand_nd(ndim: int, val: Union[int, List[int], Tuple[int, ...], np.ndarray]) -> List[int]:
if isinstance(val, int):
res = [val] * ndim
elif isinstance(val, tuple):
res = list(val)
elif isinstance(val, np.ndarray):
res = list(val)
else:
res = val
assert len(res) == ndim
return [int(v) for v in res]
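# Hedged usage sketch (not part of the original spconv sources): build a tiny 2-D
# SparseConvTensor by hand and densify it. Shapes follow the __init__ docstring:
# features is [num_points, num_features]; indices is [num_points, ndim + 1] with the
# batch index in column 0 and dtype int32.
if __name__ == "__main__":
    features = torch.tensor([[1.0], [2.0]])  # 2 active sites, 1 channel
    indices = torch.tensor([[0, 0, 1], [0, 2, 3]], dtype=torch.int32)
    x = SparseConvTensor(features, indices, spatial_shape=[4, 4], batch_size=1)
    dense = x.dense()  # NCHW: [1, 1, 4, 4]
    print(dense.shape, dense[0, 0, 0, 1].item(), dense[0, 0, 2, 3].item())  # 1.0 and 2.0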
|
PypiClean
|
/neptune_client-1.6.2rc0.tar.gz/neptune_client-1.6.2rc0/src/neptune/integrations/python_logger.py
|
__all__ = ["NeptuneHandler"]
import logging
import threading
from neptune import Run
from neptune.internal.state import ContainerState
from neptune.internal.utils import verify_type
from neptune.logging import Logger
from neptune.version import version as neptune_client_version
INTEGRATION_VERSION_KEY = "source_code/integrations/neptune-python-logger"
class NeptuneHandler(logging.Handler):
"""Handler that sends the log records created by the logger to Neptune
Args:
run (Run): An existing run reference (as returned by `neptune.init_run`)
Logger will send messages as a `StringSeries` field on this run.
level (int, optional): Log level of the handler. Defaults to `logging.NOTSET`,
which logs everything that matches the logger's level.
path (str, optional): Path to the `StringSeries` field used for logging. Defaults to `None`.
If `None`, `'monitoring/python_logger'` is used.
Examples:
>>> import logging
>>> import neptune
>>> from neptune.integrations.python_logger import NeptuneHandler
>>> logger = logging.getLogger("root_experiment")
>>> logger.setLevel(logging.DEBUG)
>>> run = neptune.init_run(project="neptune/sandbox")
>>> npt_handler = NeptuneHandler(run=run)
>>> logger.addHandler(npt_handler)
>>> logger.debug("Starting data preparation")
...
>>> logger.debug("Data preparation done")
"""
def __init__(self, *, run: Run, level=logging.NOTSET, path: str = None):
verify_type("run", run, Run)
verify_type("level", level, int)
if path is None:
path = f"{run.monitoring_namespace}/python_logger"
verify_type("path", path, str)
super().__init__(level=level)
self._run = run
self._logger = Logger(run, path)
self._thread_local = threading.local()
self._run[INTEGRATION_VERSION_KEY] = str(neptune_client_version)
def emit(self, record: logging.LogRecord) -> None:
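# The thread-local inside_write flag guards against re-entrancy: if another log record
# is emitted while we are writing to Neptune (e.g. by the Neptune client itself),
# the nested record is skipped instead of recursing.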
if not hasattr(self._thread_local, "inside_write"):
self._thread_local.inside_write = False
if self._run._state == ContainerState.STARTED and not self._thread_local.inside_write:
try:
self._thread_local.inside_write = True
message = self.format(record)
self._logger.log(message)
finally:
self._thread_local.inside_write = False
|
PypiClean
|
/b26_toolkit-0.1a1.tar.gz/b26_toolkit-0.1a1/b26_toolkit/plotting/plots_2d.py
|
import numpy as np
from matplotlib.ticker import FormatStrFormatter
# todo: delete plot_fluorescence and refactor plot_fluorescence_new to plot_fluorescence
def plot_fluorescence(image_data, extent, axes_image, implot=None, cbar=None, max_counts=-1, axes_colorbar=None):
"""
Args:
image_data: 2D - array
extent: vector of length 4, i.e. [x_min, x_max, y_max, y_min]
axes_image: axes object on which to plot
implot: reference to image plot
Returns:
"""
fig = axes_image.get_figure()
if axes_colorbar is None:
# try to figure out if there is an axis for the colorbar
fig = axes_image.get_figure()
number_of_axes = len(fig.axes)
for index in range(number_of_axes):
if fig.axes[index] == axes_image and index < number_of_axes - 1:
axes_colorbar = fig.axes[index + 1]
if implot is None:
if max_counts > 0:
implot = axes_image.imshow(image_data, cmap='pink', interpolation="nearest", extent=extent, vmax=max_counts)
else:
implot = axes_image.imshow(image_data, cmap='pink', interpolation="nearest", extent=extent)
axes_image.set_xlabel(r'V$_x$ [V]')
axes_image.set_ylabel(r'V$_y$ [V]')
axes_image.set_title('Confocal Image')
else:
implot.set_data(image_data)
if not max_counts > 0:
implot.autoscale()
if axes_colorbar is None and cbar is None:
cbar = fig.colorbar(implot, label='kcounts/sec')
elif cbar is None:
cbar = fig.colorbar(implot, cax=axes_colorbar, label='kcounts/sec')
else:
cbar.update_bruteforce(implot)
# todo: tight_layout warning - test whether this avoids the warning:
fig.set_tight_layout(True)
# fig.tight_layout()
return implot, cbar
def update_fluorescence(image_data, axes_image, max_counts = -1):
"""
Updates the data in an existing fluorescence plot. This is more efficient than replotting from scratch.
Args:
image_data: 2D - array
axes_image: axes object holding the image plot to update
max_counts: cap the data at this value; if negative, autoscale
Returns:
"""
if max_counts >= 0:
image_data = np.clip(image_data, 0, max_counts)
implot = axes_image.images[0]
colorbar = implot.colorbar
implot.set_data(image_data)
implot.autoscale()
if colorbar is not None and max_counts < 0:
# colorbar_min = 0
colorbar_min = np.min(image_data)
colorbar_max = np.max(image_data)
colorbar_labels = [np.floor(x) for x in np.linspace(colorbar_min, colorbar_max, 5, endpoint=True)]
colorbar.set_ticks(colorbar_labels)
colorbar.set_clim(colorbar_min, colorbar_max)
colorbar.update_normal(implot)
def plot_fluorescence_new(image_data, extent, axes_image, max_counts = -1, colorbar = None):
"""
plots fluorescence data in a 2D plot
Args:
image_data: 2D - array
extent: vector of length 4, i.e. [x_min, x_max, y_max, y_min]
axes_image: axes object on which to plot
max_counts: cap the colorbar at this value; if negative, autoscale
Returns:
"""
if max_counts >= 0:
image_data = np.clip(image_data, 0, max_counts)
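# Extend the extent by half a pixel spacing on each side, presumably so that imshow
# pixel centers line up with the scan coordinates rather than with pixel edges.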
extra_x_extent = (extent[1]-extent[0])/float(2*(len(image_data[0])-1))
extra_y_extent = (extent[2]-extent[3])/float(2*(len(image_data)-1))
extent = [extent[0] - extra_x_extent, extent[1] + extra_x_extent, extent[2] + extra_y_extent, extent[3] - extra_y_extent]
fig = axes_image.get_figure()
implot = axes_image.imshow(image_data, cmap='pink', interpolation="nearest", extent=extent)
axes_image.set_xlabel(r'V$_x$ [V]')
axes_image.set_ylabel(r'V$_y$ [V]')
axes_image.set_title('Confocal Image')
# explicitly round x_ticks because otherwise they have too much precision (~17 decimal points) when displayed
# on plot
axes_image.set_xticklabels([round(xticklabel, 4) for xticklabel in axes_image.get_xticks()], rotation=90)
if np.min(image_data)<200:
colorbar_min = 0
else:
colorbar_min = np.min(image_data)
if max_counts < 0:
colorbar_max = np.max(image_data)
else:
colorbar_max = max_counts
colorbar_labels = [np.floor(x) for x in np.linspace(colorbar_min, colorbar_max, 5, endpoint=True)]
if max_counts <= 0:
implot.autoscale()
if colorbar is None:
colorbar = fig.colorbar(implot, label='kcounts/sec')
colorbar.set_ticks(colorbar_labels)
colorbar.set_clim(colorbar_min, colorbar_max)
else:
colorbar = fig.colorbar(implot, cax=colorbar.ax, label='kcounts/sec')
colorbar.set_ticks(colorbar_labels)
colorbar.set_clim(colorbar_min, colorbar_max)
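# Hedged usage sketch (not part of the original b26_toolkit sources; assumes a matplotlib
# version old enough to still provide Colorbar.set_clim, which the functions above rely on).
# Renders a random confocal-style image and then pushes new data through update_fluorescence.
if __name__ == "__main__":
    import matplotlib.pyplot as plt

    data = np.random.uniform(0, 400, size=(50, 50))
    fig, ax = plt.subplots()
    plot_fluorescence_new(data, extent=[-1.0, 1.0, 1.0, -1.0], axes_image=ax)
    update_fluorescence(np.random.uniform(0, 400, size=(50, 50)), axes_image=ax)
    plt.show()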
|
PypiClean
|
/blender-basico-0.2.3rc0.tar.gz/blender-basico-0.2.3rc0/blender_basico/static/blender_basico/scripts/vendor/popper-1.15.0.min.js
|
(function(e,t){'object'==typeof exports&&'undefined'!=typeof module?module.exports=t():'function'==typeof define&&define.amd?define(t):e.Popper=t()})(this,function(){'use strict';function e(e){return e&&'[object Function]'==={}.toString.call(e)}function t(e,t){if(1!==e.nodeType)return[];var o=e.ownerDocument.defaultView,n=o.getComputedStyle(e,null);return t?n[t]:n}function o(e){return'HTML'===e.nodeName?e:e.parentNode||e.host}function n(e){if(!e)return document.body;switch(e.nodeName){case'HTML':case'BODY':return e.ownerDocument.body;case'#document':return e.body;}var i=t(e),r=i.overflow,p=i.overflowX,s=i.overflowY;return /(auto|scroll|overlay)/.test(r+s+p)?e:n(o(e))}function r(e){return 11===e?pe:10===e?se:pe||se}function p(e){if(!e)return document.documentElement;for(var o=r(10)?document.body:null,n=e.offsetParent||null;n===o&&e.nextElementSibling;)n=(e=e.nextElementSibling).offsetParent;var i=n&&n.nodeName;return i&&'BODY'!==i&&'HTML'!==i?-1!==['TH','TD','TABLE'].indexOf(n.nodeName)&&'static'===t(n,'position')?p(n):n:e?e.ownerDocument.documentElement:document.documentElement}function s(e){var t=e.nodeName;return'BODY'!==t&&('HTML'===t||p(e.firstElementChild)===e)}function d(e){return null===e.parentNode?e:d(e.parentNode)}function a(e,t){if(!e||!e.nodeType||!t||!t.nodeType)return document.documentElement;var o=e.compareDocumentPosition(t)&Node.DOCUMENT_POSITION_FOLLOWING,n=o?e:t,i=o?t:e,r=document.createRange();r.setStart(n,0),r.setEnd(i,0);var l=r.commonAncestorContainer;if(e!==l&&t!==l||n.contains(i))return s(l)?l:p(l);var f=d(e);return f.host?a(f.host,t):a(e,d(t).host)}function l(e){var t=1<arguments.length&&void 0!==arguments[1]?arguments[1]:'top',o='top'===t?'scrollTop':'scrollLeft',n=e.nodeName;if('BODY'===n||'HTML'===n){var i=e.ownerDocument.documentElement,r=e.ownerDocument.scrollingElement||i;return r[o]}return e[o]}function f(e,t){var o=2<arguments.length&&void 0!==arguments[2]&&arguments[2],n=l(t,'top'),i=l(t,'left'),r=o?-1:1;return e.top+=n*r,e.bottom+=n*r,e.left+=i*r,e.right+=i*r,e}function m(e,t){var o='x'===t?'Left':'Top',n='Left'==o?'Right':'Bottom';return parseFloat(e['border'+o+'Width'],10)+parseFloat(e['border'+n+'Width'],10)}function h(e,t,o,n){return ee(t['offset'+e],t['scroll'+e],o['client'+e],o['offset'+e],o['scroll'+e],r(10)?parseInt(o['offset'+e])+parseInt(n['margin'+('Height'===e?'Top':'Left')])+parseInt(n['margin'+('Height'===e?'Bottom':'Right')]):0)}function c(e){var t=e.body,o=e.documentElement,n=r(10)&&getComputedStyle(o);return{height:h('Height',t,o,n),width:h('Width',t,o,n)}}function g(e){return fe({},e,{right:e.left+e.width,bottom:e.top+e.height})}function u(e){var o={};try{if(r(10)){o=e.getBoundingClientRect();var n=l(e,'top'),i=l(e,'left');o.top+=n,o.left+=i,o.bottom+=n,o.right+=i}else o=e.getBoundingClientRect()}catch(t){}var p={left:o.left,top:o.top,width:o.right-o.left,height:o.bottom-o.top},s='HTML'===e.nodeName?c(e.ownerDocument):{},d=s.width||e.clientWidth||p.right-p.left,a=s.height||e.clientHeight||p.bottom-p.top,f=e.offsetWidth-d,h=e.offsetHeight-a;if(f||h){var u=t(e);f-=m(u,'x'),h-=m(u,'y'),p.width-=f,p.height-=h}return g(p)}function b(e,o){var i=2<arguments.length&&void 0!==arguments[2]&&arguments[2],p=r(10),s='HTML'===o.nodeName,d=u(e),a=u(o),l=n(e),m=t(o),h=parseFloat(m.borderTopWidth,10),c=parseFloat(m.borderLeftWidth,10);i&&s&&(a.top=ee(a.top,0),a.left=ee(a.left,0));var b=g({top:d.top-a.top-h,left:d.left-a.left-c,width:d.width,height:d.height});if(b.marginTop=0,b.marginLeft=0,!p&&s){var 
w=parseFloat(m.marginTop,10),y=parseFloat(m.marginLeft,10);b.top-=h-w,b.bottom-=h-w,b.left-=c-y,b.right-=c-y,b.marginTop=w,b.marginLeft=y}return(p&&!i?o.contains(l):o===l&&'BODY'!==l.nodeName)&&(b=f(b,o)),b}function w(e){var t=1<arguments.length&&void 0!==arguments[1]&&arguments[1],o=e.ownerDocument.documentElement,n=b(e,o),i=ee(o.clientWidth,window.innerWidth||0),r=ee(o.clientHeight,window.innerHeight||0),p=t?0:l(o),s=t?0:l(o,'left'),d={top:p-n.top+n.marginTop,left:s-n.left+n.marginLeft,width:i,height:r};return g(d)}function y(e){var n=e.nodeName;if('BODY'===n||'HTML'===n)return!1;if('fixed'===t(e,'position'))return!0;var i=o(e);return!!i&&y(i)}function E(e){if(!e||!e.parentElement||r())return document.documentElement;for(var o=e.parentElement;o&&'none'===t(o,'transform');)o=o.parentElement;return o||document.documentElement}function v(e,t,i,r){var p=4<arguments.length&&void 0!==arguments[4]&&arguments[4],s={top:0,left:0},d=p?E(e):a(e,t);if('viewport'===r)s=w(d,p);else{var l;'scrollParent'===r?(l=n(o(t)),'BODY'===l.nodeName&&(l=e.ownerDocument.documentElement)):'window'===r?l=e.ownerDocument.documentElement:l=r;var f=b(l,d,p);if('HTML'===l.nodeName&&!y(d)){var m=c(e.ownerDocument),h=m.height,g=m.width;s.top+=f.top-f.marginTop,s.bottom=h+f.top,s.left+=f.left-f.marginLeft,s.right=g+f.left}else s=f}i=i||0;var u='number'==typeof i;return s.left+=u?i:i.left||0,s.top+=u?i:i.top||0,s.right-=u?i:i.right||0,s.bottom-=u?i:i.bottom||0,s}function x(e){var t=e.width,o=e.height;return t*o}function O(e,t,o,n,i){var r=5<arguments.length&&void 0!==arguments[5]?arguments[5]:0;if(-1===e.indexOf('auto'))return e;var p=v(o,n,r,i),s={top:{width:p.width,height:t.top-p.top},right:{width:p.right-t.right,height:p.height},bottom:{width:p.width,height:p.bottom-t.bottom},left:{width:t.left-p.left,height:p.height}},d=Object.keys(s).map(function(e){return fe({key:e},s[e],{area:x(s[e])})}).sort(function(e,t){return t.area-e.area}),a=d.filter(function(e){var t=e.width,n=e.height;return t>=o.clientWidth&&n>=o.clientHeight}),l=0<a.length?a[0].key:d[0].key,f=e.split('-')[1];return l+(f?'-'+f:'')}function L(e,t,o){var n=3<arguments.length&&void 0!==arguments[3]?arguments[3]:null,i=n?E(t):a(t,o);return b(o,i,n)}function S(e){var t=e.ownerDocument.defaultView,o=t.getComputedStyle(e),n=parseFloat(o.marginTop||0)+parseFloat(o.marginBottom||0),i=parseFloat(o.marginLeft||0)+parseFloat(o.marginRight||0),r={width:e.offsetWidth+i,height:e.offsetHeight+n};return r}function T(e){var t={left:'right',right:'left',bottom:'top',top:'bottom'};return e.replace(/left|right|bottom|top/g,function(e){return t[e]})}function C(e,t,o){o=o.split('-')[0];var n=S(e),i={width:n.width,height:n.height},r=-1!==['right','left'].indexOf(o),p=r?'top':'left',s=r?'left':'top',d=r?'height':'width',a=r?'width':'height';return i[p]=t[p]+t[d]/2-n[d]/2,i[s]=o===s?t[s]-n[a]:t[T(s)],i}function D(e,t){return Array.prototype.find?e.find(t):e.filter(t)[0]}function N(e,t,o){if(Array.prototype.findIndex)return e.findIndex(function(e){return e[t]===o});var n=D(e,function(e){return e[t]===o});return e.indexOf(n)}function P(t,o,n){var i=void 0===n?t:t.slice(0,N(t,'name',n));return i.forEach(function(t){t['function']&&console.warn('`modifier.function` is deprecated, use `modifier.fn`!');var n=t['function']||t.fn;t.enabled&&e(n)&&(o.offsets.popper=g(o.offsets.popper),o.offsets.reference=g(o.offsets.reference),o=n(o,t))}),o}function k(){if(!this.state.isDestroyed){var 
e={instance:this,styles:{},arrowStyles:{},attributes:{},flipped:!1,offsets:{}};e.offsets.reference=L(this.state,this.popper,this.reference,this.options.positionFixed),e.placement=O(this.options.placement,e.offsets.reference,this.popper,this.reference,this.options.modifiers.flip.boundariesElement,this.options.modifiers.flip.padding),e.originalPlacement=e.placement,e.positionFixed=this.options.positionFixed,e.offsets.popper=C(this.popper,e.offsets.reference,e.placement),e.offsets.popper.position=this.options.positionFixed?'fixed':'absolute',e=P(this.modifiers,e),this.state.isCreated?this.options.onUpdate(e):(this.state.isCreated=!0,this.options.onCreate(e))}}function W(e,t){return e.some(function(e){var o=e.name,n=e.enabled;return n&&o===t})}function B(e){for(var t=[!1,'ms','Webkit','Moz','O'],o=e.charAt(0).toUpperCase()+e.slice(1),n=0;n<t.length;n++){var i=t[n],r=i?''+i+o:e;if('undefined'!=typeof document.body.style[r])return r}return null}function H(){return this.state.isDestroyed=!0,W(this.modifiers,'applyStyle')&&(this.popper.removeAttribute('x-placement'),this.popper.style.position='',this.popper.style.top='',this.popper.style.left='',this.popper.style.right='',this.popper.style.bottom='',this.popper.style.willChange='',this.popper.style[B('transform')]=''),this.disableEventListeners(),this.options.removeOnDestroy&&this.popper.parentNode.removeChild(this.popper),this}function A(e){var t=e.ownerDocument;return t?t.defaultView:window}function M(e,t,o,i){var r='BODY'===e.nodeName,p=r?e.ownerDocument.defaultView:e;p.addEventListener(t,o,{passive:!0}),r||M(n(p.parentNode),t,o,i),i.push(p)}function F(e,t,o,i){o.updateBound=i,A(e).addEventListener('resize',o.updateBound,{passive:!0});var r=n(e);return M(r,'scroll',o.updateBound,o.scrollParents),o.scrollElement=r,o.eventsEnabled=!0,o}function I(){this.state.eventsEnabled||(this.state=F(this.reference,this.options,this.state,this.scheduleUpdate))}function R(e,t){return A(e).removeEventListener('resize',t.updateBound),t.scrollParents.forEach(function(e){e.removeEventListener('scroll',t.updateBound)}),t.updateBound=null,t.scrollParents=[],t.scrollElement=null,t.eventsEnabled=!1,t}function U(){this.state.eventsEnabled&&(cancelAnimationFrame(this.scheduleUpdate),this.state=R(this.reference,this.state))}function Y(e){return''!==e&&!isNaN(parseFloat(e))&&isFinite(e)}function V(e,t){Object.keys(t).forEach(function(o){var n='';-1!==['width','height','top','right','bottom','left'].indexOf(o)&&Y(t[o])&&(n='px'),e.style[o]=t[o]+n})}function j(e,t){Object.keys(t).forEach(function(o){var n=t[o];!1===n?e.removeAttribute(o):e.setAttribute(o,t[o])})}function q(e,t){var o=e.offsets,n=o.popper,i=o.reference,r=$,p=function(e){return e},s=r(i.width),d=r(n.width),a=-1!==['left','right'].indexOf(e.placement),l=-1!==e.placement.indexOf('-'),f=t?a||l||s%2==d%2?r:Z:p,m=t?r:p;return{left:f(1==s%2&&1==d%2&&!l&&t?n.left-1:n.left),top:m(n.top),bottom:m(n.bottom),right:f(n.right)}}function K(e,t,o){var n=D(e,function(e){var o=e.name;return o===t}),i=!!n&&e.some(function(e){return e.name===o&&e.enabled&&e.order<n.order});if(!i){var r='`'+t+'`';console.warn('`'+o+'`'+' modifier is required by '+r+' modifier in order to work, be sure to include it before '+r+'!')}return i}function z(e){return'end'===e?'start':'start'===e?'end':e}function G(e){var t=1<arguments.length&&void 0!==arguments[1]&&arguments[1],o=ce.indexOf(e),n=ce.slice(o+1).concat(ce.slice(0,o));return t?n.reverse():n}function _(e,t,o,n){var i=e.match(/((?:\-|\+)?\d*\.?\d*)(.*)/),r=+i[1],p=i[2];if(!r)return 
e;if(0===p.indexOf('%')){var s;switch(p){case'%p':s=o;break;case'%':case'%r':default:s=n;}var d=g(s);return d[t]/100*r}if('vh'===p||'vw'===p){var a;return a='vh'===p?ee(document.documentElement.clientHeight,window.innerHeight||0):ee(document.documentElement.clientWidth,window.innerWidth||0),a/100*r}return r}function X(e,t,o,n){var i=[0,0],r=-1!==['right','left'].indexOf(n),p=e.split(/(\+|\-)/).map(function(e){return e.trim()}),s=p.indexOf(D(p,function(e){return-1!==e.search(/,|\s/)}));p[s]&&-1===p[s].indexOf(',')&&console.warn('Offsets separated by white space(s) are deprecated, use a comma (,) instead.');var d=/\s*,\s*|\s+/,a=-1===s?[p]:[p.slice(0,s).concat([p[s].split(d)[0]]),[p[s].split(d)[1]].concat(p.slice(s+1))];return a=a.map(function(e,n){var i=(1===n?!r:r)?'height':'width',p=!1;return e.reduce(function(e,t){return''===e[e.length-1]&&-1!==['+','-'].indexOf(t)?(e[e.length-1]=t,p=!0,e):p?(e[e.length-1]+=t,p=!1,e):e.concat(t)},[]).map(function(e){return _(e,i,t,o)})}),a.forEach(function(e,t){e.forEach(function(o,n){Y(o)&&(i[t]+=o*('-'===e[n-1]?-1:1))})}),i}function J(e,t){var o,n=t.offset,i=e.placement,r=e.offsets,p=r.popper,s=r.reference,d=i.split('-')[0];return o=Y(+n)?[+n,0]:X(n,p,s,d),'left'===d?(p.top+=o[0],p.left-=o[1]):'right'===d?(p.top+=o[0],p.left+=o[1]):'top'===d?(p.left+=o[0],p.top-=o[1]):'bottom'===d&&(p.left+=o[0],p.top+=o[1]),e.popper=p,e}for(var Q=Math.min,Z=Math.floor,$=Math.round,ee=Math.max,te='undefined'!=typeof window&&'undefined'!=typeof document,oe=['Edge','Trident','Firefox'],ne=0,ie=0;ie<oe.length;ie+=1)if(te&&0<=navigator.userAgent.indexOf(oe[ie])){ne=1;break}var i=te&&window.Promise,re=i?function(e){var t=!1;return function(){t||(t=!0,window.Promise.resolve().then(function(){t=!1,e()}))}}:function(e){var t=!1;return function(){t||(t=!0,setTimeout(function(){t=!1,e()},ne))}},pe=te&&!!(window.MSInputMethodContext&&document.documentMode),se=te&&/MSIE 10/.test(navigator.userAgent),de=function(e,t){if(!(e instanceof t))throw new TypeError('Cannot call a class as a function')},ae=function(){function e(e,t){for(var o,n=0;n<t.length;n++)o=t[n],o.enumerable=o.enumerable||!1,o.configurable=!0,'value'in o&&(o.writable=!0),Object.defineProperty(e,o.key,o)}return function(t,o,n){return o&&e(t.prototype,o),n&&e(t,n),t}}(),le=function(e,t,o){return t in e?Object.defineProperty(e,t,{value:o,enumerable:!0,configurable:!0,writable:!0}):e[t]=o,e},fe=Object.assign||function(e){for(var t,o=1;o<arguments.length;o++)for(var n in t=arguments[o],t)Object.prototype.hasOwnProperty.call(t,n)&&(e[n]=t[n]);return e},me=te&&/Firefox/i.test(navigator.userAgent),he=['auto-start','auto','auto-end','top-start','top','top-end','right-start','right','right-end','bottom-end','bottom','bottom-start','left-end','left','left-start'],ce=he.slice(3),ge={FLIP:'flip',CLOCKWISE:'clockwise',COUNTERCLOCKWISE:'counterclockwise'},ue=function(){function t(o,n){var i=this,r=2<arguments.length&&void 0!==arguments[2]?arguments[2]:{};de(this,t),this.scheduleUpdate=function(){return requestAnimationFrame(i.update)},this.update=re(this.update.bind(this)),this.options=fe({},t.Defaults,r),this.state={isDestroyed:!1,isCreated:!1,scrollParents:[]},this.reference=o&&o.jquery?o[0]:o,this.popper=n&&n.jquery?n[0]:n,this.options.modifiers={},Object.keys(fe({},t.Defaults.modifiers,r.modifiers)).forEach(function(e){i.options.modifiers[e]=fe({},t.Defaults.modifiers[e]||{},r.modifiers?r.modifiers[e]:{})}),this.modifiers=Object.keys(this.options.modifiers).map(function(e){return 
fe({name:e},i.options.modifiers[e])}).sort(function(e,t){return e.order-t.order}),this.modifiers.forEach(function(t){t.enabled&&e(t.onLoad)&&t.onLoad(i.reference,i.popper,i.options,t,i.state)}),this.update();var p=this.options.eventsEnabled;p&&this.enableEventListeners(),this.state.eventsEnabled=p}return ae(t,[{key:'update',value:function(){return k.call(this)}},{key:'destroy',value:function(){return H.call(this)}},{key:'enableEventListeners',value:function(){return I.call(this)}},{key:'disableEventListeners',value:function(){return U.call(this)}}]),t}();return ue.Utils=('undefined'==typeof window?global:window).PopperUtils,ue.placements=he,ue.Defaults={placement:'bottom',positionFixed:!1,eventsEnabled:!0,removeOnDestroy:!1,onCreate:function(){},onUpdate:function(){},modifiers:{shift:{order:100,enabled:!0,fn:function(e){var t=e.placement,o=t.split('-')[0],n=t.split('-')[1];if(n){var i=e.offsets,r=i.reference,p=i.popper,s=-1!==['bottom','top'].indexOf(o),d=s?'left':'top',a=s?'width':'height',l={start:le({},d,r[d]),end:le({},d,r[d]+r[a]-p[a])};e.offsets.popper=fe({},p,l[n])}return e}},offset:{order:200,enabled:!0,fn:J,offset:0},preventOverflow:{order:300,enabled:!0,fn:function(e,t){var o=t.boundariesElement||p(e.instance.popper);e.instance.reference===o&&(o=p(o));var n=B('transform'),i=e.instance.popper.style,r=i.top,s=i.left,d=i[n];i.top='',i.left='',i[n]='';var a=v(e.instance.popper,e.instance.reference,t.padding,o,e.positionFixed);i.top=r,i.left=s,i[n]=d,t.boundaries=a;var l=t.priority,f=e.offsets.popper,m={primary:function(e){var o=f[e];return f[e]<a[e]&&!t.escapeWithReference&&(o=ee(f[e],a[e])),le({},e,o)},secondary:function(e){var o='right'===e?'left':'top',n=f[o];return f[e]>a[e]&&!t.escapeWithReference&&(n=Q(f[o],a[e]-('right'===e?f.width:f.height))),le({},o,n)}};return l.forEach(function(e){var t=-1===['left','top'].indexOf(e)?'secondary':'primary';f=fe({},f,m[t](e))}),e.offsets.popper=f,e},priority:['left','right','top','bottom'],padding:5,boundariesElement:'scrollParent'},keepTogether:{order:400,enabled:!0,fn:function(e){var t=e.offsets,o=t.popper,n=t.reference,i=e.placement.split('-')[0],r=Z,p=-1!==['top','bottom'].indexOf(i),s=p?'right':'bottom',d=p?'left':'top',a=p?'width':'height';return o[s]<r(n[d])&&(e.offsets.popper[d]=r(n[d])-o[a]),o[d]>r(n[s])&&(e.offsets.popper[d]=r(n[s])),e}},arrow:{order:500,enabled:!0,fn:function(e,o){var n;if(!K(e.instance.modifiers,'arrow','keepTogether'))return e;var i=o.element;if('string'==typeof i){if(i=e.instance.popper.querySelector(i),!i)return e;}else if(!e.instance.popper.contains(i))return console.warn('WARNING: `arrow.element` must be child of its popper element!'),e;var r=e.placement.split('-')[0],p=e.offsets,s=p.popper,d=p.reference,a=-1!==['left','right'].indexOf(r),l=a?'height':'width',f=a?'Top':'Left',m=f.toLowerCase(),h=a?'left':'top',c=a?'bottom':'right',u=S(i)[l];d[c]-u<s[m]&&(e.offsets.popper[m]-=s[m]-(d[c]-u)),d[m]+u>s[c]&&(e.offsets.popper[m]+=d[m]+u-s[c]),e.offsets.popper=g(e.offsets.popper);var b=d[m]+d[l]/2-u/2,w=t(e.instance.popper),y=parseFloat(w['margin'+f],10),E=parseFloat(w['border'+f+'Width'],10),v=b-e.offsets.popper[m]-y-E;return v=ee(Q(s[l]-u,v),0),e.arrowElement=i,e.offsets.arrow=(n={},le(n,m,$(v)),le(n,h,''),n),e},element:'[x-arrow]'},flip:{order:600,enabled:!0,fn:function(e,t){if(W(e.instance.modifiers,'inner'))return e;if(e.flipped&&e.placement===e.originalPlacement)return e;var 
o=v(e.instance.popper,e.instance.reference,t.padding,t.boundariesElement,e.positionFixed),n=e.placement.split('-')[0],i=T(n),r=e.placement.split('-')[1]||'',p=[];switch(t.behavior){case ge.FLIP:p=[n,i];break;case ge.CLOCKWISE:p=G(n);break;case ge.COUNTERCLOCKWISE:p=G(n,!0);break;default:p=t.behavior;}return p.forEach(function(s,d){if(n!==s||p.length===d+1)return e;n=e.placement.split('-')[0],i=T(n);var a=e.offsets.popper,l=e.offsets.reference,f=Z,m='left'===n&&f(a.right)>f(l.left)||'right'===n&&f(a.left)<f(l.right)||'top'===n&&f(a.bottom)>f(l.top)||'bottom'===n&&f(a.top)<f(l.bottom),h=f(a.left)<f(o.left),c=f(a.right)>f(o.right),g=f(a.top)<f(o.top),u=f(a.bottom)>f(o.bottom),b='left'===n&&h||'right'===n&&c||'top'===n&&g||'bottom'===n&&u,w=-1!==['top','bottom'].indexOf(n),y=!!t.flipVariations&&(w&&'start'===r&&h||w&&'end'===r&&c||!w&&'start'===r&&g||!w&&'end'===r&&u),E=!!t.flipVariationsByContent&&(w&&'start'===r&&c||w&&'end'===r&&h||!w&&'start'===r&&u||!w&&'end'===r&&g),v=y||E;(m||b||v)&&(e.flipped=!0,(m||b)&&(n=p[d+1]),v&&(r=z(r)),e.placement=n+(r?'-'+r:''),e.offsets.popper=fe({},e.offsets.popper,C(e.instance.popper,e.offsets.reference,e.placement)),e=P(e.instance.modifiers,e,'flip'))}),e},behavior:'flip',padding:5,boundariesElement:'viewport',flipVariations:!1,flipVariationsByContent:!1},inner:{order:700,enabled:!1,fn:function(e){var t=e.placement,o=t.split('-')[0],n=e.offsets,i=n.popper,r=n.reference,p=-1!==['left','right'].indexOf(o),s=-1===['top','left'].indexOf(o);return i[p?'left':'top']=r[o]-(s?i[p?'width':'height']:0),e.placement=T(t),e.offsets.popper=g(i),e}},hide:{order:800,enabled:!0,fn:function(e){if(!K(e.instance.modifiers,'hide','preventOverflow'))return e;var t=e.offsets.reference,o=D(e.instance.modifiers,function(e){return'preventOverflow'===e.name}).boundaries;if(t.bottom<o.top||t.left>o.right||t.top>o.bottom||t.right<o.left){if(!0===e.hide)return e;e.hide=!0,e.attributes['x-out-of-boundaries']=''}else{if(!1===e.hide)return e;e.hide=!1,e.attributes['x-out-of-boundaries']=!1}return e}},computeStyle:{order:850,enabled:!0,fn:function(e,t){var o=t.x,n=t.y,i=e.offsets.popper,r=D(e.instance.modifiers,function(e){return'applyStyle'===e.name}).gpuAcceleration;void 0!==r&&console.warn('WARNING: `gpuAcceleration` option moved to `computeStyle` modifier and will not be supported in future versions of Popper.js!');var s,d,a=void 0===r?t.gpuAcceleration:r,l=p(e.instance.popper),f=u(l),m={position:i.position},h=q(e,2>window.devicePixelRatio||!me),c='bottom'===o?'top':'bottom',g='right'===n?'left':'right',b=B('transform');if(d='bottom'==c?'HTML'===l.nodeName?-l.clientHeight+h.bottom:-f.height+h.bottom:h.top,s='right'==g?'HTML'===l.nodeName?-l.clientWidth+h.right:-f.width+h.right:h.left,a&&b)m[b]='translate3d('+s+'px, '+d+'px, 0)',m[c]=0,m[g]=0,m.willChange='transform';else{var w='bottom'==c?-1:1,y='right'==g?-1:1;m[c]=d*w,m[g]=s*y,m.willChange=c+', '+g}var E={"x-placement":e.placement};return e.attributes=fe({},E,e.attributes),e.styles=fe({},m,e.styles),e.arrowStyles=fe({},e.offsets.arrow,e.arrowStyles),e},gpuAcceleration:!0,x:'bottom',y:'right'},applyStyle:{order:900,enabled:!0,fn:function(e){return V(e.instance.popper,e.styles),j(e.instance.popper,e.attributes),e.arrowElement&&Object.keys(e.arrowStyles).length&&V(e.arrowElement,e.arrowStyles),e},onLoad:function(e,t,o,n,i){var r=L(i,t,e,o.positionFixed),p=O(o.placement,r,t,e,o.modifiers.flip.boundariesElement,o.modifiers.flip.padding);return 
t.setAttribute('x-placement',p),V(t,{position:o.positionFixed?'fixed':'absolute'}),o},gpuAcceleration:void 0}}},ue});
//# sourceMappingURL=popper.min.js.map
|
PypiClean
|
/tf_utils-1.0.4-py3-none-any.whl/tf_utils/wrappers/atariActionWrapper.py
|
import gym
import numpy as np
from gym import spaces, Wrapper
from gym.envs.atari.atari_env import AtariEnv
from gym.spaces import MultiDiscrete
# try:
# from gym_doom.wrappers import ViZDoomEnv
# except:
# pass
# class MultiDiscreteActionWrapper(ViZDoomEnv):
# def __init__(self, env):
# super(MultiDiscreteActionWrapper, self).__init__(env)
# action_size = env.getPossibleActionsCodes()[0]
# action_space = []
# for i in range(len(action_size)):
# action_space.append(2)
# self.action_space = MultiDiscrete(action_space)
# self.observation_space = env.observation_space
class AtariObsWrapper(gym.ObservationWrapper):
def __init__(self, env, dummy_obs):
super(AtariObsWrapper, self).__init__(env)
self.atari_env = self.unwrapped
self.observation_space = self.atari_env.observation_space
self.shape = self.observation_space.shape
self.dummy_obs = dummy_obs
self._obs = np.zeros(shape=self.shape, dtype=np.uint8)
def observation(self, observation):
if self.dummy_obs:
return self._obs
return observation
class AtariActionWrapper(Wrapper):
def __init__(self, env, discrete=False):
"""
Initialize a new Atari action space wrapper.
Args:
env (gym.Env): the environment to wrap
discrete (bool): if True, keep the environment's original Discrete action space;
if False, expose it as a one-dimensional MultiDiscrete space
Returns:
None
"""
super(AtariActionWrapper, self).__init__(env)
self.discrete = discrete
# create the new action space
# self.action_space = spaces.Box(low=0, high=17, dtype=np.int32, shape=(1,))
if isinstance(env.unwrapped, AtariEnv):
(screen_width, screen_height) = self.env.unwrapped.ale.getScreenDims()
self.screen_space = spaces.Box(low=0, high=255, shape=(screen_height, screen_width, 3), dtype=np.uint8)
if not self.discrete:
self.action_space = MultiDiscrete([env.action_space.n])
self.observation_space = env.observation_space
def step(self, action):
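# Try the action as-is first; if the wrapped env rejects it (e.g. a MultiDiscrete
# array passed to an env that expects a scalar), fall back to the first element.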
try:
return self.env.step(action)
except Exception:
if isinstance(self.env.unwrapped, AtariEnv):
return self.env.step(action[0])
return self.env.step(int(action[0]))
def reset(self):
"""Reset the environment and return the initial observation."""
return self.env.reset()
def getImage(self):
atari_env = self.env.unwrapped
return atari_env.ale.getScreenRGB2()
# explicitly define the outward facing API of this module
__all__ = [AtariActionWrapper.__name__, AtariObsWrapper.__name__]
|
PypiClean
|
/MetaGram-2.0.2.tar.gz/MetaGram-2.0.2/pyrogram/methods/utilities/run.py
|
import asyncio
import inspect
import pyrogram
from pyrogram.methods.utilities.idle import idle
class Run:
def run(
self: "pyrogram.Client",
coroutine=None
):
"""Start the client, idle the main script and finally stop the client.
When calling this method without any argument it acts as a convenience method that calls
:meth:`~pyrogram.Client.start`, :meth:`~pyrogram.idle` and :meth:`~pyrogram.Client.stop` in sequence.
It makes running a single client less verbose.
In case a coroutine is passed as argument, runs the coroutine until it's completed and doesn't do any client
operation. This is almost the same as :py:obj:`asyncio.run` except for the fact that Pyrogram's ``run`` uses the
current event loop instead of a new one.
If you want to run multiple clients at once, see :meth:`pyrogram.compose`.
Parameters:
coroutine (``Coroutine``, *optional*):
Pass a coroutine to run it until it completes.
Raises:
ConnectionError: In case you try to run an already started client.
Example:
.. code-block:: python
from pyrogram import Client
app = Client("my_account")
... # Set handlers up
app.run()
.. code-block:: python
from pyrogram import Client
app = Client("my_account")
async def main():
async with app:
print(await app.get_me())
app.run(main())
"""
loop = asyncio.get_event_loop()
run = loop.run_until_complete
if coroutine is not None:
run(coroutine)
else:
if inspect.iscoroutinefunction(self.start):
run(self.start())
run(idle())
run(self.stop())
else:
self.start()
run(idle())
self.stop()
|
PypiClean
|
/aiida-bigdft-0.2.6.tar.gz/aiida-bigdft-0.2.6/BigDFT/Logfiles.py
|
kcal_mev = 43.364
to_kcal = 1.0/kcal_mev*27.211386*1000
EVAL = "eval"
SETUP = "let"
INITIALIZATION = "globals"
PATH = 'path'
PRINT = 'print'
GLOBAL = 'global'
FLOAT_SCALAR = 'scalar'
PRE_POST = [EVAL, SETUP, INITIALIZATION]
# Builtin paths to define the search paths
BUILTIN = {
'number_of_orbitals': {PATH: [['Total Number of Orbitals']],
PRINT: "Total Number of Orbitals", GLOBAL: True},
'posinp_file': {PATH: [['posinp', 'properties', 'source', ]],
PRINT: "source:", GLOBAL: True},
'XC_parameter': {PATH: [['dft', 'ixc'], ['DFT parameters:', 'XC ID:']],
PRINT: "ixc:", GLOBAL: True, FLOAT_SCALAR: True},
'grid_spacing': {PATH: [["dft", "hgrids"]],
PRINT: "hgrids:", GLOBAL: True},
'spin_polarization': {PATH: [["dft", "nspin"]],
PRINT: "nspin:", GLOBAL: True},
'total_magn_moment': {PATH: [["dft", "mpol"]],
PRINT: "mpol:", GLOBAL: True},
'system_charge': {PATH: [["dft", "qcharge"]],
PRINT: "qcharge:", GLOBAL: True},
'rmult': {PATH: [["dft", "rmult"]],
PRINT: "rmult:", GLOBAL: True},
# 'up_elec'::{PATH: [["occupation:","K point 1:","up:","Orbital \d+"]],
# PRINT: "Orbital \d+", GLOBAL: True},
'astruct': {PATH: [['Atomic structure']]},
'data_directory': {PATH: [['Data Writing directory']]},
'dipole': {PATH: [['Electric Dipole Moment (AU)', 'P vector']],
PRINT: "Dipole (AU)"},
'electrostatic_multipoles': {PATH: [['Multipole coefficients']]},
'energy': {PATH: [["Last Iteration", "FKS"], ["Last Iteration", "EKS"],
["Energy (Hartree)"],
['Ground State Optimization', -1,
'self consistency summary', -1, 'energy']],
PRINT: "Energy", GLOBAL: False},
'trH': {PATH: [['Ground State Optimization', -1, 'kernel optimization',
-2, 'Kernel update', 'Kernel calculation', 0, 'trace(KH)']]
},
'hartree_energy': {PATH: [["Last Iteration", 'Energies', 'EH'],
['Ground State Optimization', -1,
'self consistency summary', -1,
'Energies', 'EH']]},
'ionic_energy': {PATH: [['Ion-Ion interaction energy']]},
'XC_energy': {PATH: [["Last Iteration", 'Energies', 'EXC'],
['Ground State Optimization', -1,
'self consistency summary', -1,
'Energies', 'EXC']]},
'trVxc': {PATH: [["Last Iteration", 'Energies', 'EvXC'],
['Ground State Optimization', -1,
'self consistency summary', -1,
'Energies', 'EvXC']]},
'evals': {PATH: [["Complete list of energy eigenvalues"],
["Ground State Optimization", -1, "Orbitals"],
["Ground State Optimization", -1,
"Hamiltonian Optimization", -1, "Subspace Optimization",
"Orbitals"]]},
'fermi_level': {PATH: [["Ground State Optimization", -1, "Fermi Energy"],
["Ground State Optimization", -1,
"Hamiltonian Optimization", -1,
"Subspace Optimization", "Fermi Energy"]],
PRINT: True, GLOBAL: False},
'forcemax': {PATH: [["Geometry", "FORCES norm(Ha/Bohr)", "maxval"],
['Clean forces norm (Ha/Bohr)', 'maxval']],
PRINT: "Max val of Forces"},
'forcemax_cv': {PATH: [['geopt', 'forcemax']],
PRINT: 'Convergence criterion on forces',
GLOBAL: True, FLOAT_SCALAR: True},
'force_fluct': {PATH: [["Geometry", "FORCES norm(Ha/Bohr)", "fluct"]],
PRINT: "Threshold fluctuation of Forces"},
'forces': {PATH: [['Atomic Forces (Ha/Bohr)']]},
'gnrm_cv': {PATH: [["dft", "gnrm_cv"]],
PRINT: "Convergence criterion on Wfn. Residue", GLOBAL: True},
'kpts': {PATH: [["K points"]],
PRINT: False, GLOBAL: True},
'kpt_mesh': {PATH: [['kpt', 'ngkpt']], PRINT: True, GLOBAL: True},
'magnetization': {PATH: [["Ground State Optimization", -1,
"Total magnetization"],
["Ground State Optimization", -1,
"Hamiltonian Optimization", -1,
"Subspace Optimization", "Total magnetization"]],
PRINT: "Total magnetization of the system"},
'memory_run': {PATH: [
['Accumulated memory requirements during principal run stages (MiB.KiB)']
]},
'memory_quantities': {PATH: [
['Memory requirements for principal quantities (MiB.KiB)']]},
'memory_peak': {PATH: [['Estimated Memory Peak (MB)']]},
'nat': {PATH: [['Atomic System Properties', 'Number of atoms']],
PRINT: "Number of Atoms", GLOBAL: True},
'pressure': {PATH: [['Pressure', 'GPa']], PRINT: True},
'sdos': {PATH: [['SDos files']], GLOBAL: True},
'support_functions': {PATH: [["Gross support functions moments",
'Multipole coefficients', 'values']]},
'symmetry': {PATH: [['Atomic System Properties', 'Space group']],
PRINT: "Symmetry group", GLOBAL: True}}
def get_logs(files):
"""
Return a list of loaded logfiles from files, which is a list
of paths leading to logfiles.
:param files: List of filenames indicating the logfiles
:returns: List of Logfile instances associated to the filenames
"""
# if dictionary is not None:
# # Read the dictionary or a list of dictionaries or from a generator
# # Need to return a list
# dicts = [dictionary] if isinstance(dictionary, dict) else [
# d for d in dictionary]
# else if arch is not None:
# dicts = YamlIO.load(archive=arch, member=member, safe_mode=True,
# doc_lists=True)
#
# if arch:
# # An archive is detected
# import tarfile
# from futile import YamlIO
# tar = tarfile.open(arch)
# members = [tar.getmember(member)] if member else tar.getmembers()
# # print members
# for memb in members:
# f = tar.extractfile(memb)
# dicts += YamlIO.load(stream=f.read())
# # Add the label (name of the file)
# # dicts[-1]['label'] = memb.name
# elif dictionary:
# elif args:
# # Read the list of files (member replaces load_only...)
# dicts = get_logs(args)
#
#
#
from futile import YamlIO
logs = []
for filename in files:
logs += YamlIO.load(filename, doc_lists=True, safe_mode=True)
return logs
def floatify(scalar):
"""
Convert a string written in Fortran notation into a float.
Args:
scalar (str, float): a string representing a float, possibly given
in Fortran notation (e.g. '1.d-4'); a float is returned unchanged
Returns:
float. The value associated to scalar as a floating point number
Example:
>>> # this would be the same with '1.e-4' or with 0.0001
>>> floatify('1.d-4')
0.0001
"""
if isinstance(scalar, str):
return float(scalar.replace('d', 'e').replace('D', 'E'))
else:
return scalar
# This is a tentative function written to extract information from the runs
def document_quantities(doc, to_extract):
"""
Extract information from the runs.
.. warning::
This routine was designed for the previous parse_log.py script and it
is here only for backward compatibility purposes.
"""
analysis = {}
for quantity in to_extract:
if quantity in PRE_POST:
continue
# follow the levels indicated to find the quantity
field = to_extract[quantity]
if not isinstance(field, list) and not isinstance(field, dict) \
and field in BUILTIN:
paths = BUILTIN[field][PATH]
else:
paths = [field]
# now try to find the first of the different alternatives
for path in paths:
# print path,BUILTIN,BUILTIN.keys(),field in BUILTIN,field
value = doc
for key in path:
# as soon as there is a problem the quantity is null
try:
value = value[key]
except (KeyError, TypeError):
value = None
break
if value is not None:
break
analysis[quantity] = value
return analysis
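# Illustrative sketch (not part of the original module): shows how
# document_quantities() follows the BUILTIN path alternatives on a hand-made
# dictionary. The nested keys below are hypothetical and only mimic the
# layout of a real BigDFT logfile.
def _example_document_quantities():
    doc = {'Last Iteration': {'EKS': -10.5}, 'dft': {'gnrm_cv': 1.e-4}}
    # 'energy' resolves through the ["Last Iteration", "EKS"] alternative,
    # 'gnrm_cv' through the ["dft", "gnrm_cv"] path.
    return document_quantities(doc, {'energy': 'energy', 'gnrm_cv': 'gnrm_cv'})
    # expected: {'energy': -10.5, 'gnrm_cv': 0.0001}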
def perform_operations(variables, ops, debug=False):
"""
Perform operations given by 'ops'.
'variables' is a dictionary of variables i.e. key=value.
.. warning::
This routine was designed for the previous parse_log.py script and it is
here only for backward compatibility purposes.
"""
# glstr=''
# if globs is not None:
# for var in globs:
# glstr+= "global "+var+"\n"
# if debug: print '###Global Strings: \n',glstr
# first evaluate the given variables
for key in variables:
command = key+"="+str(variables[key])
if debug:
print(command)
exec(command)
# then evaluate the given expression
if debug:
print(ops)
# exec(glstr+ops, globals(), locals())
exec(ops, globals(), locals())
def process_logfiles(files, instructions, debug=False):
"""
Process the logfiles in files with the dictionary 'instructions'.
.. warning::
This routine was designed for the previous parse_log.py script and it is
here only for backward compatibility purposes.
"""
import sys
glstr = 'global __LAST_FILE__ \n'
glstr += '__LAST_FILE__='+str(len(files))+'\n'
if INITIALIZATION in instructions:
for var in instructions[INITIALIZATION]:
glstr += "global "+var+"\n"
glstr += var + " = " + str(instructions[INITIALIZATION][var])+"\n"
# exec var +" = "+ str(instructions[INITIALIZATION][var])
exec(glstr, globals(), locals())
for f in files:
sys.stderr.write("#########processing "+f+"\n")
datas = get_logs([f])
for doc in datas:
doc_res = document_quantities(doc, instructions)
# print doc_res,instructions
if EVAL in instructions:
perform_operations(doc_res, instructions[EVAL], debug=debug)
def find_iterations(log):
"""
Identify the different blocks of the iterations of the wavefunction
optimization.
.. todo::
Should be generalized and checked for mixing calculations and O(N)
logfiles
:param log: loaded logfile
:type log: dictionary
:returns: wavefunction residue per iteration, for each subspace
diagonalization
:rtype: numpy array of rank two
"""
import numpy
for itrp in log['Ground State Optimization']:
rpnrm = []
for itsp in itrp['Hamiltonian Optimization']:
gnrm_sp = []
for it in \
itsp['Subspace Optimization']['Wavefunctions Iterations']:
if 'gnrm' in it:
gnrm_sp.append(it['gnrm'])
rpnrm.append(numpy.array(gnrm_sp))
rpnrm = numpy.array(rpnrm)
return rpnrm
def plot_wfn_convergence(wfn_it, gnrm_cv, label=None):
"""
Plot the convergence of the wavefunctions coming from the find_iterations
function. The plot is accumulated in the matplotlib.pyplot module.
:param wfn_it: list coming from :func:`find_iterations`
:param gnrm_cv: convergence criterion for the residue of the wfn_it list
:param label: label for the given plot
"""
import matplotlib.pyplot as plt
import numpy
plt.semilogy(numpy.ravel(wfn_it), label=label)
plt.legend(loc="upper right")
plt.axhline(gnrm_cv, color='k', linestyle='--')
it = 0
for itrp in wfn_it:
it += len(itrp)
plt.axvline(it, color='k', linestyle='--')
class Logfile():
"""
Import a Logfile from a filename in yaml format, a list of filenames,
an archive (compressed tar file), a dictionary or a list of dictionaries.
:param args: logfile names to be parsed
:type args: strings
:param kwargs: keyword arguments
* archive: name of the archive from which retrieve the logfiles
* member: name of the logfile within the archive. If absent, all the
files of the archive will be considered as args
* label: the label of the logfile instance
* dictionary: parsed logfile given as a dictionary, serialization of the
yaml logfile
:Example:
>>> l = Logfile('one.yaml','two.yaml')
>>> l = Logfile(archive='calc.tgz')
>>> l = Logfile(archive='calc.tgz',member='one.yaml')
>>> l = Logfile(dictionary=dict1)
>>> l = Logfile(dictionary=[dict1, dict2])
.. todo::
Document the automatically generated attributes, perhaps via an inner
function in futile python module
"""
def __init__(self, *args, **kwargs):
"""
Initialize the class
"""
import os
dicts = []
# Read the dictionary kwargs
arch = kwargs.get("archive")
member = kwargs.get("member")
label = kwargs.get("label")
dictionary = kwargs.get("dictionary")
if arch:
# An archive is detected
import tarfile
from futile import YamlIO
tar = tarfile.open(arch)
members = [tar.getmember(member)] if member else tar.getmembers()
# print members
for memb in members:
f = tar.extractfile(memb)
dicts += YamlIO.load(stream=f.read())
# Add the label (name of the file)
# dicts[-1]['label'] = memb.name
srcdir = os.path.dirname(arch)
label = label if label is not None else arch
elif dictionary:
# Read the dictionary or a list of dictionaries or from a generator
# Need to return a list
dicts = [dictionary] if isinstance(dictionary, dict) else [
d for d in dictionary]
srcdir = ''
label = label if label is not None else 'dict'
elif args:
# Read the list of files (member replaces load_only...)
dicts = get_logs(args)
label = label if label is not None else args[0]
srcdir = os.path.dirname(args[0])
#: Label of the Logfile instance
self.label = label
#: Absolute path of the directory of logfile
self.srcdir = os.path.abspath('.' if srcdir == '' else srcdir)
if not dicts:
raise ValueError("No log information provided.")
# So we have a list of a dictionary or a list of dictionaries
# Initialize the logfile with the first document
self._initialize_class(dicts[0])
#
if len(dicts) > 1:
# first initialize the instances with the previous logfile so as
# to provide the correct information (we should however decide what
# to do if some run did not converge)
self._instances = []
for i, d in enumerate(dicts):
# label=d.get('label','log'+str(i))
label = 'log'+str(i)
dtmp = dicts[0]
# Warning: recursive call!!
instance = Logfile(dictionary=dtmp, label=label)
# now update the instance with the other value
instance._initialize_class(d)
self._instances.append(instance)
# then we should find the best values for the dictionary
print('Found', len(self._instances), 'different runs')
import numpy
# Initialize the class with the dictionary corresponding to the
# lower value of the energy
ens = [(ll.energy if hasattr(ll, 'energy') else 1.e100)
for ll in self._instances]
#: Position in the logfile items of the run associated to lower
# energy
self.reference_log = numpy.argmin(ens)
# print 'Energies',ens
self._initialize_class(dicts[self.reference_log])
#
def __getitem__(self, index):
if hasattr(self, '_instances'):
return self._instances[index]
else:
# print('index not available')
raise ValueError(
'This instance of Logfile has no multiple instances')
#
def __str__(self):
"""Display short information about the logfile"""
return self._print_information()
#
def __len__(self):
if hasattr(self, '_instances'):
return len(self._instances)
else:
return 0 # single point run
#
def _initialize_class(self, d):
import numpy
# : dictionary of the logfile (serialization of yaml format)
self.log = d
# here we should initialize different instances of the logfile class
# again
sublog = document_quantities(self.log, {val: val for val in BUILTIN})
for att, val in sublog.items():
if val is not None:
val_tmp = floatify(val) if BUILTIN[att].get(
FLOAT_SCALAR) else val
setattr(self, att, val_tmp)
elif hasattr(self, att) and not BUILTIN[att].get(GLOBAL):
delattr(self, att)
# then postprocess the particular cases
if not hasattr(self, 'fermi_level') and hasattr(self, 'evals'):
self._fermi_level_from_evals(self.evals)
if hasattr(self, 'kpts'):
#: Number of k-points, present only if meaningful
self.nkpt = len(self.kpts)
if hasattr(self, 'evals'):
self.evals = self._get_bz(self.evals, self.kpts)
if hasattr(self, 'forces') and hasattr(self, 'astruct'):
self.astruct.update({'forces': self.forces})
delattr(self, 'forces')
elif hasattr(self, 'evals'):
from BigDFT import BZ
#: Eigenvalues of the run, represented as a
# :class:`BigDFT.BZ.BandArray` class instance
self.evals = [BZ.BandArray(self.evals), ]
if hasattr(self, 'sdos'):
import os
# load the different sdos files
sd = []
for f in self.sdos:
try:
data = numpy.loadtxt(os.path.join(self.srcdir, f))
except IOError:
data = None
if data is not None:
xs = []
ba = [[], []]
for line in data:
xs.append(line[0])
ss = self._sdos_line_to_orbitals(line)
for ispin in [0, 1]:
ba[ispin].append(ss[ispin])
sd.append({'coord': xs, 'dos': ba})
else:
sd.append(None)
#: Spatial density of states, when available
self.sdos = sd
# memory attributes
self.memory = {}
for key in ['memory_run', 'memory_quantities', 'memory_peak']:
if hasattr(self, key):
title = BUILTIN[key][PATH][0][0]
self.memory[title] = getattr(self, key)
if key != 'memory_peak':
delattr(self, key)
#
def _fermi_level_from_evals(self, evals):
import numpy
# this works when the representation of the evals is only with
# occupied states
# write('evals',self.evals)
fl = None
fref = None
for iorb, ev in enumerate(evals):
e = ev.get('e')
if e is not None:
fref = ev['f'] if iorb == 0 else fref
fl = e
if ev['f'] < 0.5*fref:
break
e = ev.get('e_occ', ev.get('e_occupied'))
if e is not None:
fl = e if not isinstance(
e, list) else numpy.max(numpy.array(e))
e = ev.get('e_vrt', ev.get('e_virt'))
if e is not None:
break
#: Chemical potential of the system
self.fermi_level = fl
#
def _sdos_line_to_orbitals_old(self, sorbs):
from BigDFT import BZ
evals = []
iorb = 1
# renorm=len(xs)
# iterate on k-points
if hasattr(self, 'kpts'):
kpts = self.kpts
else:
kpts = [{'Rc': [0.0, 0.0, 0.0], 'Wgt':1.0}]
for i, kp in enumerate(kpts):
ev = []
# iterate on the subspaces of the kpoint
for ispin, norb in enumerate(self.evals[0].info):
for iorbk in range(norb):
# renorm postponed
ev.append({'e': sorbs[iorb+iorbk],
's': 1-2*ispin, 'k': i+1})
# ev.append({'e':np.sum([ so[iorb+iorbk] for so in sd]),
# 's':1-2*ispin,'k':i+1})
iorb += norb
evals.append(BZ.BandArray(
ev, ikpt=i+1, kpt=kp['Rc'], kwgt=kp['Wgt']))
return evals
#
def _sdos_line_to_orbitals(self, sorbs):
import numpy as np
iorb = 1
sdos = [[], []]
for ikpt, band in enumerate(self.evals):
sdoskpt = [[], []]
for ispin, norb in enumerate(band.info):
if norb == 0:
continue
for i in range(norb):
val = sorbs[iorb]
iorb += 1
sdoskpt[ispin].append(val)
sdos[ispin].append(np.array(sdoskpt[ispin]))
return sdos
#
def _get_bz(self, ev, kpts):
"""Get the Brillouin Zone."""
evals = []
from BigDFT import BZ
for i, kp in enumerate(kpts):
evals.append(BZ.BandArray(
ev, ikpt=i+1, kpt=kp['Rc'], kwgt=kp['Wgt']))
return evals
#
def get_dos(self, label=None, npts=2500, e_min=None, e_max=None):
"""
Get the density of states from the logfile.
:param label: id of the density of states.
:type label: string
:param npts: number of points of the DoS curve
:type npts: int
:param e_min: minimum energy value for the DoS
:type e_min: float
:param e_max: maximum energy value for the DoS
:type e_max: float
:returns: Instance of the DoS class
:rtype: :class:`BigDFT.DoS.DoS`
"""
from BigDFT import DoS
# reload(DoS)
lbl = self.label if label is None else label
sdos = self.sdos if hasattr(self, 'sdos') else None
return DoS.DoS(bandarrays=self.evals, label=lbl, units='AU',
fermi_level=self.fermi_level, npts=npts, sdos=sdos,
e_min=e_min, e_max=e_max)
#
def get_brillouin_zone(self):
"""
Return an instance of the BrillouinZone class, useful for band
structure.
:returns: Brillouin Zone of the logfile
:rtype: :class:`BigDFT.BZ.BrillouinZone`
"""
from BigDFT import BZ
if self.nkpt == 1:
print('WARNING: Brillouin Zone plot cannot be defined properly'
' with only one k-point')
# raise
mesh = self.kpt_mesh # : K-points grid
if isinstance(mesh, int):
mesh = [mesh, ]*3
if self.astruct['cell'][1] == float('inf'):
mesh[1] = 1
return BZ.BrillouinZone(self.astruct, mesh, self.evals,
self.fermi_level)
#
def wfn_plot(self):
"""
Plot the wavefunction convergence.
:Example:
>>> tt=Logfile('log-with-wfn-optimization.yaml',label='a label')
>>> tt.wfn_plot()
"""
wfn_it = find_iterations(self.log)
plot_wfn_convergence(wfn_it, self.gnrm_cv, label=self.label)
#
def geopt_plot(self):
"""
For a set of logfiles, construct the convergence plot if available.
Plot the maximum value of the forces against the difference between
the energy of each iteration and the minimum value of the energy.
An errorbar is also shown, indicating the noise on the forces for a
given point. The plot is displayed via plt.show(), with
matplotlib.pyplot imported as plt.
:Example:
>>> tt=Logfile('log-with-geometry-optimization.yaml')
>>> tt.geopt_plot()
"""
energies = []
forces = []
ferr = []
if not hasattr(self, '_instances'):
print('ERROR: No geopt plot possible, single point run')
return
for ll in self._instances:
if hasattr(ll, 'forcemax') and hasattr(ll, 'energy'):
forces.append(ll.forcemax)
energies.append(ll.energy-self.energy)
ferr.append(0.0 if not hasattr(ll, 'force_fluct') else (
self.force_fluct if hasattr(self, 'force_fluct') else 0.0))
if len(forces) > 1:
import matplotlib.pyplot as plt
plt.errorbar(energies, forces, yerr=ferr,
fmt='.-', label=self.label)
plt.legend(loc='upper right')
plt.loglog()
plt.xlabel('Energy - min(Energy)')
plt.ylabel('Forcemax')
if hasattr(self, 'forcemax_cv'):
plt.axhline(self.forcemax_cv, color='k', linestyle='--')
plt.show()
else:
print('No plot necessary, less than two points found')
#
#
def _print_information(self):
"""Display short information about the logfile (used by str)."""
import yaml
# summary=[{'Atom types':
# numpy.unique([at.keys()[0] for at in
# self.astruct['positions']]).tolist()},
# {'cell':
# self.astruct.get('cell', 'Free BC')}]
summary = [{'Atom types':
self.log['Atomic System Properties']['Types of atoms']},
{'cell':
self.astruct.get('cell', 'Free BC')}]
# normal printouts in the document, according to definition
for field in BUILTIN:
name = BUILTIN[field].get(PRINT)
if name:
name = field
if not name or not hasattr(self, field):
continue
summary.append({name: getattr(self, field)})
if hasattr(self, 'evals'):
nspin = self.log['dft']['nspin']
if nspin == 4:
nspin = 1
cmt = (' per k-point' if hasattr(self, 'kpts') else '')
summary.append(
{'No. of KS orbitals'+cmt: self.evals[0].info[0:nspin]})
return yaml.dump(summary, default_flow_style=False)
def _identify_value(line, key):
to_spaces = [',', ':', '{', '}', '[', ']']
ln = line
for sym in to_spaces:
ln = ln.replace(sym, ' ')
istart = ln.index(key) + len(key)
copy = ln[istart:]
return copy.split()[0]
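# Minimal sketch (not part of the original module): _identify_value() scans a
# raw logfile line for the token that follows a given key. The line below is
# a made-up example in the style of a BigDFT log.
def _example_identify_value():
    line = "  Energy (Hartree): -1.2345678E+01"
    return _identify_value(line, "Energy (Hartree)")  # -> '-1.2345678E+01'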
def _log_energies(filename, into_kcal=False):
from numpy import nan
TO_SEARCH = {'Energy (Hartree)': 'Etot',
'Ion-Ion interaction energy': 'Eion',
'trace(KH)': 'Ebs', 'EH': 'Eh', 'EvXC': 'EVxc',
'EXC': 'EXC'}
data = {}
previous = {}
f = open(filename, 'r')
for line in f.readlines():
for key, name in TO_SEARCH.items():
if key in line:
previous[name] = data.get(name, nan)
todata = _identify_value(line, key)
try:
todata = float(todata) * (to_kcal if into_kcal else 1.0)
except Exception:
todata = nan
data[name] = todata
f.close()
return data, previous
class Energies():
"""
Find the energy terms from a BigDFT logfile.
May also accept malformed logfiles, for instance those produced by a
badly terminated run that had an I/O error.
Args:
filename (str): path of the logfile
units (str): may be 'AU' or 'kcal/mol'
disp (float): dispersion energy (will be added to the total energy)
strict (bool): assume a well-behaved logfile
"""
def __init__(self, filename, units='AU', disp=None, strict=True):
from numpy import nan
TO_SEARCH = {'energy': 'Etot',
'ionic_energy': 'Eion',
'trH': 'Ebs', 'hartree_energy': 'Eh', 'trVxc': 'EVxc',
'XC_energy': 'EXC'}
self.into_kcal = units == 'kcal/mol'
self.conversion_factor = to_kcal if self.into_kcal else 1.0
data, previous = _log_energies(filename,
into_kcal=self.into_kcal)
try:
log = Logfile(filename)
data = {name: getattr(log, att, nan) * self.conversion_factor
for att, name in TO_SEARCH.items()}
except Exception:
pass
self._fill(data, previous, disp=disp, strict=strict)
def _fill(self, data, previous, disp=None, strict=True):
from numpy import nan
if disp is None:
self.dict_keys = []
self.Edisp = 0
else:
self.dict_keys = ['Edisp']
self.Edisp = disp
for key, val in previous.items():
setattr(self, key, val)
self.dict_keys.append(key)
setattr(self, key+'_last', data[key])
self.dict_keys.append(key+'_last')
for key in ['Etot', 'Eion', 'Ebs']:
setattr(self, key, data.get(key, nan))
self.dict_keys.append(key)
try:
self.Etot_last = self.Ebs_last + self.Eion - self.Eh_last + \
self.EXC_last - self.EVxc_last
self.Etot_approx = self.Ebs - self.Eh + self.Eion
self.sanity_error = self.Ebs - self.Eh + self.EXC - self.EVxc + \
self.Eion - self.Etot
self.dict_keys += ['Etot_last', 'Etot_approx']
self.Etot_last += self.Edisp
self.Etot_approx += self.Edisp
except Exception:
if strict:
raise ValueError('the data is malformed', data, previous)
self.sanity_error = 0.0
if abs(self.sanity_error) > 1.e-4 * self.conversion_factor:
raise ValueError('the sanity error is too large', self.sanity_error)
self.dict_keys += ['sanity_error']
self.Etot += self.Edisp
@property
def to_dict(self):
dd = {key: getattr(self, key) for key in self.dict_keys}
return dd
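# Usage sketch (assumptions: 'log.yaml' is the path of an existing BigDFT
# logfile; the call is wrapped in a function so importing this module stays
# side-effect free).
def _example_energies(filename='log.yaml'):
    en = Energies(filename, units='kcal/mol', strict=False)
    # to_dict collects the extracted terms (Etot, Eion, Ebs, ...) into a plain dict
    return en.to_dict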
if __name__ == "__main__":
# Create a logfile: should give an error
# (ValueError: No log information provided.)
lf = Logfile()
|
PypiClean
|
/pawpyseed-0.7.1.tar.gz/pawpyseed-0.7.1/docs/documentation/search/functions_15.js
|
var searchData=
[
['wave_5finterpolate',['wave_interpolate',['../utils_8h.html#a8e43b5af014cbe8c09d887a57cf8c5ce',1,'utils.h']]],
['wave_5fspherical_5fbessel_5ftransform',['wave_spherical_bessel_transform',['../sbt_8h.html#ac2d75c2ff5d9ac72ec1d04f567642411',1,'sbt.h']]],
['wave_5fvalue',['wave_value',['../utils_8h.html#a183927d2536c372d9b9a361745499329',1,'utils.h']]],
['wave_5fvalue2',['wave_value2',['../utils_8h.html#a888b700c775f3ed5d75827c7557b7743',1,'utils.h']]],
['wcclose',['wcclose',['../reader_8h.html#a244c80ed46808322e740e2aa255e58ce',1,'reader.h']]],
['wcopen',['wcopen',['../reader_8h.html#ace1c2460da5c220221fb880a2d7ac2f9',1,'reader.h']]],
['wcread',['wcread',['../reader_8h.html#afb1721460239960f9878b4aa698f2528',1,'reader.h']]],
['wcseek',['wcseek',['../reader_8h.html#a6b5af17bca6e450fc5c58d40c2f82e5d',1,'reader.h']]],
['write_5fdensity_5fnoreturn',['write_density_noreturn',['../density_8h.html#ac9206643498f87bc22ae547f913c4a71',1,'density.h']]],
['write_5fdensity_5frealspace',['write_density_realspace',['../classpawpyseed_1_1core_1_1noncollinear_1_1NCLWavefunction.html#a959d35b6f6ac01990ba1d821e243adda',1,'pawpyseed.core.noncollinear.NCLWavefunction.write_density_realspace()'],['../classpawpyseed_1_1core_1_1wavefunction_1_1Wavefunction.html#ac5321a8d9c65cb866932d922e799e331',1,'pawpyseed.core.wavefunction.Wavefunction.write_density_realspace()']]],
['write_5fdensity_5freturn',['write_density_return',['../density_8h.html#acfbd7f9a27eebd5c401f5bf92b9b8c61',1,'density.h']]],
['write_5frealspace_5fstate_5fncl_5fri',['write_realspace_state_ncl_ri',['../density_8h.html#a0752a5e780f70b059cc31e5bde420891',1,'density.h']]],
['write_5frealspace_5fstate_5fri_5fnoreturn',['write_realspace_state_ri_noreturn',['../density_8h.html#ad3dcb0fb3370b82476f2a85ae4605311',1,'density.h']]],
['write_5frealspace_5fstate_5fri_5freturn',['write_realspace_state_ri_return',['../density_8h.html#a305d331f989ea9a33953682ff0cf5c5c',1,'density.h']]],
['write_5fstate_5frealspace',['write_state_realspace',['../classpawpyseed_1_1core_1_1noncollinear_1_1NCLWavefunction.html#af563ea1ecb8a446405c74039b17fc686',1,'pawpyseed.core.noncollinear.NCLWavefunction.write_state_realspace()'],['../classpawpyseed_1_1core_1_1wavefunction_1_1Wavefunction.html#ab33e7cf0a02e1fb3caf8fbd023bddc1b',1,'pawpyseed.core.wavefunction.Wavefunction.write_state_realspace()']]],
['write_5fvolumetric',['write_volumetric',['../density_8h.html#afaf51ca488301a9c2780494e13afe055',1,'density.h']]],
['write_5fyaml',['write_yaml',['../classpawpyseed_1_1analysis_1_1defect__composition_1_1PawpyData.html#adf54d52e750a126fe225a0175dfae165',1,'pawpyseed::analysis::defect_composition::PawpyData']]]
];
|
PypiClean
|
/terra_classic_sdk-2.0.9.tar.gz/terra_classic_sdk-2.0.9/terra_classic_sdk/util/parse_content.py
|
from typing import Union
from terra_classic_sdk.core.distribution.proposals import CommunityPoolSpendProposal
from terra_classic_sdk.core.gov.proposals import TextProposal
from terra_classic_sdk.core.params.proposals import ParameterChangeProposal
from terra_classic_sdk.core.ibc.proposals import ClientUpdateProposal
from terra_classic_sdk.core.upgrade import (
CancelSoftwareUpgradeProposal,
SoftwareUpgradeProposal,
)
from terra_proto.cosmos.distribution.v1beta1 import CommunityPoolSpendProposal as CommunityPoolSpendProposal_pb
from terra_proto.cosmos.gov.v1beta1 import TextProposal as TextProposal_pb
from terra_proto.cosmos.params.v1beta1 import ParameterChangeProposal as ParameterChangeProposal_pb
from terra_proto.cosmos.upgrade.v1beta1 import (
CancelSoftwareUpgradeProposal as CancelSoftwareUpgradeProposal_pb,
SoftwareUpgradeProposal as SoftwareUpgradeProposal_pb
)
from terra_proto.ibc.core.client.v1 import ClientUpdateProposal as ClientUpdateProposal_pb
from .base import create_demux, create_demux_proto
Content = Union[
TextProposal,
CommunityPoolSpendProposal,
ParameterChangeProposal,
SoftwareUpgradeProposal,
CancelSoftwareUpgradeProposal,
ClientUpdateProposal,
]
parse_content = create_demux(
[
CommunityPoolSpendProposal,
TextProposal,
ParameterChangeProposal,
SoftwareUpgradeProposal,
CancelSoftwareUpgradeProposal,
ClientUpdateProposal
]
)
parse_content_proto = create_demux_proto(
[
CommunityPoolSpendProposal,
TextProposal,
ParameterChangeProposal,
SoftwareUpgradeProposal,
CancelSoftwareUpgradeProposal,
ClientUpdateProposal
]
)
"""
parse_content_proto = create_demux_proto(
[
[CommunityPoolSpendProposal.type_url, CommunityPoolSpendProposal_pb],
[TextProposal.type_url, TextProposal_pb],
[ParameterChangeProposal.type_url, ParameterChangeProposal_pb],
[SoftwareUpgradeProposal.type_url, SoftwareUpgradeProposal_pb],
[CancelSoftwareUpgradeProposal.type_url, CancelSoftwareUpgradeProposal_pb],
[ClientUpdateProposal.type_url, ClientUpdateProposal_pb]
]
)
"""
|
PypiClean
|
/opencv_pg-1.0.2.tar.gz/opencv_pg-1.0.2/src/opencv_pg/docs/js/mathjax/input/tex/extensions/gensymb.js
|
!function(){"use strict";var a,t,n,e={667:function(a,t){t.q=void 0,t.q="3.2.2"},82:function(a,t,n){Object.defineProperty(t,"__esModule",{value:!0}),t.GensymbConfiguration=void 0;var e=n(251),o=n(108);new(n(871).CharacterMap)("gensymb-symbols",(function(a,t){var n=t.attributes||{};n.mathvariant=o.TexConstant.Variant.NORMAL,n.class="MathML-Unit";var e=a.create("token","mi",n,t.char);a.Push(e)}),{ohm:"\u2126",degree:"\xb0",celsius:"\u2103",perthousand:"\u2030",micro:"\xb5"}),t.GensymbConfiguration=e.Configuration.create("gensymb",{handler:{macro:["gensymb-symbols"]}})},955:function(a,t){MathJax._.components.global.isObject,MathJax._.components.global.combineConfig,MathJax._.components.global.combineDefaults,t.r8=MathJax._.components.global.combineWithMathJax,MathJax._.components.global.MathJax},251:function(a,t){Object.defineProperty(t,"__esModule",{value:!0}),t.Configuration=MathJax._.input.tex.Configuration.Configuration,t.ConfigurationHandler=MathJax._.input.tex.Configuration.ConfigurationHandler,t.ParserConfiguration=MathJax._.input.tex.Configuration.ParserConfiguration},871:function(a,t){Object.defineProperty(t,"__esModule",{value:!0}),t.parseResult=MathJax._.input.tex.SymbolMap.parseResult,t.AbstractSymbolMap=MathJax._.input.tex.SymbolMap.AbstractSymbolMap,t.RegExpMap=MathJax._.input.tex.SymbolMap.RegExpMap,t.AbstractParseMap=MathJax._.input.tex.SymbolMap.AbstractParseMap,t.CharacterMap=MathJax._.input.tex.SymbolMap.CharacterMap,t.DelimiterMap=MathJax._.input.tex.SymbolMap.DelimiterMap,t.MacroMap=MathJax._.input.tex.SymbolMap.MacroMap,t.CommandMap=MathJax._.input.tex.SymbolMap.CommandMap,t.EnvironmentMap=MathJax._.input.tex.SymbolMap.EnvironmentMap},108:function(a,t){Object.defineProperty(t,"__esModule",{value:!0}),t.TexConstant=MathJax._.input.tex.TexConstants.TexConstant}},o={};function i(a){var t=o[a];if(void 0!==t)return t.exports;var n=o[a]={exports:{}};return e[a](n,n.exports,i),n.exports}a=i(955),t=i(667),n=i(82),MathJax.loader&&MathJax.loader.checkVersion("[tex]/gensymb",t.q,"tex-extension"),(0,a.r8)({_:{input:{tex:{gensymb:{GensymbConfiguration:n}}}}})}();
|
PypiClean
|
/FuzzyClassificator-1.3.84-py3-none-any.whl/pybrain/rl/learners/modelbased/policyiteration.py
|
__author__ = 'Tom Schaul, [email protected]'
"""
Doing RL when an environment model (transition matrices and rewards) is available.
Representation:
- a policy is a 2D-array of probabilities,
one row per state (summing to 1), one column per action.
- a transition matrix (T) maps from originating states to destination states
(probabilities in each row sum to 1).
- a reward vector (R) maps each state to the reward value obtained when entering (or staying in) a state.
- a feature map (fMap) is a 2D array of features, one row per state.
- a task model is defined by a list of transition matrices (Ts), one per action, a
reward vector R, a discountFactor
Note: a task model combined with a policy is again a transition matrix ("collapsed" dynamics).
- a value function (V) is a vector of expected discounted rewards (one per state).
- a set of state-action values (Qs) is a 2D array, one row per action.
"""
# TODO: we may use an alternative, more efficient representation if all actions are deterministic
# TODO: it may be worth considering a sparse representation of T matrices.
# TODO: optimize some of this code with vectorization
# numpy provides these functions directly (modern scipy no longer re-exports them)
from numpy import dot, zeros, zeros_like, ones, mean, array, ravel
from numpy.random import rand
from numpy.matlib import repmat
from pybrain.utilities import all_argmax
def trueValues(T, R, discountFactor):
""" Compute the true discounted value function for each state,
given a policy (encoded as collapsed transition matrix). """
assert discountFactor < 1
distr = T.copy()
res = dot(T, R)
for i in range(1, int(10 / (1. - discountFactor))):
distr = dot(distr, T)
res += (discountFactor ** i) * dot(distr, R)
return res
def trueQValues(Ts, R, discountFactor, policy):
""" The true Q-values, given a model and a policy. """
T = collapsedTransitions(Ts, policy)
V = trueValues(T, R, discountFactor)
Vnext = V*discountFactor+R
numA = len(Ts)
dim = len(R)
Qs = zeros((dim, numA))
for si in range(dim):
for a in range(numA):
Qs[si, a] = dot(Ts[a][si], Vnext)
return Qs
def collapsedTransitions(Ts, policy):
""" Collapses a list of transition matrices (one per action) and a list
of action probability vectors into a single transition matrix."""
res = zeros_like(Ts[0])
dim = len(Ts[0])
for ai, ap in enumerate(policy.T):
res += Ts[ai] * repmat(ap, dim, 1).T
return res
def greedyPolicy(Ts, R, discountFactor, V):
""" Find the greedy policy, (soft tie-breaking)
given a value function and full transition model. """
dim = len(V)
numA = len(Ts)
Vnext = V*discountFactor+R
policy = zeros((dim, numA))
for si in range(dim):
actions = all_argmax([dot(T[si, :], Vnext) for T in Ts])
for a in actions:
policy[si, a] = 1. / len(actions)
return policy, collapsedTransitions(Ts, policy)
def greedyQPolicy(Qs):
""" Find the greedy deterministic policy,
given the Q-values. """
dim = len(Qs)
numA = len(Qs[0])
policy = zeros((dim, numA))
for si in range(dim):
actions = all_argmax(Qs[si])
for a in actions:
policy[si, a] = 1. / len(actions)
return policy
def randomPolicy(Ts):
""" Each action is equally likely. """
numA = len(Ts)
dim = len(Ts[0])
return ones((dim, numA)) / float(numA), mean(array(Ts), axis=0)
def randomDeterministic(Ts):
""" Pick a random deterministic action for each state. """
numA = len(Ts)
dim = len(Ts[0])
choices = (rand(dim) * numA).astype(int)
policy = zeros((dim, numA))
for si, a in enumerate(choices):
policy[si, a] = 1
return policy, collapsedTransitions(Ts, policy)
def policyIteration(Ts, R, discountFactor, VEvaluator=None, initpolicy=None, maxIters=20):
""" Given transition matrices (one per action),
produce the optimal policy, using the policy iteration algorithm.
A custom function that maps policies to value functions can be provided. """
if initpolicy is None:
policy, T = randomPolicy(Ts)
else:
policy = initpolicy
T = collapsedTransitions(Ts, policy)
if VEvaluator is None:
VEvaluator = lambda T: trueValues(T, R, discountFactor)
while maxIters > 0:
V = VEvaluator(T)
newpolicy, T = greedyPolicy(Ts, R, discountFactor, V)
# if the probabilities are not changing more than by 0.001, we're done.
if sum(ravel(abs(newpolicy - policy))) < 1e-3:
return policy, T
policy = newpolicy
maxIters -= 1
return policy, T
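# Illustrative sketch (not part of the original module): a two-state,
# two-action toy task where action 0 stays in place and action 1 jumps to the
# other state. All reward sits on state 1, so the greedy policy ends up
# choosing action 1 in state 0 and action 0 (stay) in state 1.
def _example_policy_iteration():
    from numpy import array
    Ts = [array([[1., 0.], [0., 1.]]),   # action 0: stay in the current state
          array([[0., 1.], [1., 0.]])]   # action 1: move to the other state
    R = array([0., 1.])                  # reward on entering each state
    policy, T = policyIteration(Ts, R, discountFactor=0.9)
    return policy, T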
|
PypiClean
|
/ralph_scrooge-3.0.1.tar.gz/ralph_scrooge-3.0.1/doc/installation.rst
|
============
Installation
============
Scrooge contains ralph in its requirements, because it is a plugin for ralph. For more information on how to configure or install ralph, please refer to its documentation.
Install Scrooge
~~~~~~~~~~~~~~~
There are two ways to install Scrooge. One of them is a simple pip installation, which is nice and easy. Installation from sources requires Scrooge to be downloaded from GitHub and then installed manually.
Install Scrooge from pip
------------------------
A fast and easy way is to install Scrooge from pip::
(ralph)$ pip install scrooge
That's it.
Install Scrooge from sources
----------------------------
It is also possible to install Scrooge from sources. To do this, first, you need to download Scrooge from github::
(ralph)$ git clone git://github.com/allegro/ralph_pricing.git
Enter the project folder::
(ralph)$ cd ralph_scrooge
and install it::
(ralph)$ pip install -e .
The Scrooge requirements (ralph, ralph_assets) will be installed automatically.
Upgrade existing installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To upgrade Scrooge, you need to stop any Ralph processes that are running. It is good practice not to upgrade the old version, but to create a separate virtual environment and install everything from scratch; if you do need to upgrade the old version, be sure that everything is stopped.
Upgrade Scrooge from pip
------------------------
If you installed Scrooge from pip, then you can simply::
(ralph)$ pip install --upgrade scrooge
After it is finished, upgrade the static files::
(ralph)$ ralph collectstatic
Upgrade Scrooge from sources
----------------------------
First, you need to download Scrooge from github::
(ralph)$ git clone git://github.com/allegro/ralph_pricing.git
Enter the project folder::
(ralph)$ cd ralph_scrooge
and upgrade it::
(ralph)$ pip install --upgrade -e .
Finally, you need to upgrade the static files::
(ralph)$ ralph collectstatic
Migrate the database
~~~~~~~~~~~~~~~~~~~~
Some of updates require database migrations. To migrate a database, you need to run::
(ralph)$ ralph migrate ralph_scrooge
Be sure that you have a backup of your database. A migration can sometimes modify data or make complicated and unwanted changes.
Update the settings
~~~~~~~~~~~~~~~~~~~~
Some new features added to Ralph may require additional settings to work properly. In order to enable them in your settings, follow the instructions in the :doc:`change log <changes>` for the version you have installed.
Testing if it works
~~~~~~~~~~~~~~~~~~~
To be sure that everything works fine, it is recommended to run the unit tests. To do this, run::
(ralph)$ DJANGO_SETTINGS_PROFILE=test-pricing ralph test ralph_scrooge
|
PypiClean
|
/now_lms-0.0.1a220230314-py3-none-any.whl/now_lms/bi.py
|
# pylint: disable=E1101
# < --------------------------------------------------------------------------------------------- >
# Helper functions that are part of the "business logic" of the implementation.
# Standard library:
from typing import Union
# Third-party libraries:
from flask_login import current_user
# Local resources:
from now_lms.db import database, EstudianteCurso, DocenteCurso, ModeradorCurso, Usuario, Curso, CursoSeccion, CursoRecurso
def modificar_indice_curso(
codigo_curso: Union[None, str] = None,
task: Union[None, str] = None,
indice: int = 0,
):
"""Modica el número de indice de una sección dentro de un curso."""
indice_current = indice
indice_next = indice + 1
indice_back = indice - 1
actual = CursoSeccion.query.filter(CursoSeccion.curso == codigo_curso, CursoSeccion.indice == indice_current).first()
superior = CursoSeccion.query.filter(CursoSeccion.curso == codigo_curso, CursoSeccion.indice == indice_next).first()
inferior = CursoSeccion.query.filter(CursoSeccion.curso == codigo_curso, CursoSeccion.indice == indice_back).first()
if task == "increment":
actual.indice = indice_next
database.session.add(actual)
database.session.commit()
if superior:
superior.indice = indice_current
database.session.add(superior)
database.session.commit()
else: # task == decrement
if actual.indice != 1:  # Do not turn index 1 into 0.
actual.indice = indice_back
database.session.add(actual)
database.session.commit()
if inferior:
inferior.indice = indice_current
database.session.add(inferior)
database.session.commit()
def reorganiza_indice_curso(codigo_curso: Union[None, str] = None):
"""Al eliminar una sección de un curso se debe generar el indice nuevamente."""
secciones = secciones = CursoSeccion.query.filter_by(curso=codigo_curso).order_by(CursoSeccion.indice).all()
if secciones:
indice = 1
for seccion in secciones:
seccion.indice = indice
database.session.add(seccion)
database.session.commit()
indice = indice + 1
def reorganiza_indice_seccion(seccion: Union[None, str] = None):
"""Al eliminar una sección de un curso se debe generar el indice nuevamente."""
recursos = CursoRecurso.query.filter_by(seccion=seccion).order_by(CursoRecurso.indice).all()
if recursos:
indice = 1
for recurso in recursos:
recurso.indice = indice
database.session.add(recurso)
database.session.commit()
indice = indice + 1
def modificar_indice_seccion(
seccion_id: Union[None, str] = None,
task: Union[None, str] = None,
# increment: increases the index number, so the resource moves "down" in the resource list.
# decrement: decreases the index number, so the resource moves "up" in the resource list.
indice: int = 0,
):
"""Modica el número de indice de un recurso dentro de una sección."""
NO_INDICE_ACTUAL = int(indice)
NO_INDICE_ANTERIOR = NO_INDICE_ACTUAL - 1
NO_INDICE_POSTERIOR = NO_INDICE_ACTUAL + 1
# Fetch the list of resources from the database.
RECURSO_ACTUAL = CursoRecurso.query.filter(
CursoRecurso.seccion == seccion_id, CursoRecurso.indice == NO_INDICE_ACTUAL
).first()
RECURSO_ANTERIOR = CursoRecurso.query.filter(
CursoRecurso.seccion == seccion_id, CursoRecurso.indice == NO_INDICE_ANTERIOR
).first()
RECURSO_POSTERIOR = CursoRecurso.query.filter(
CursoRecurso.seccion == seccion_id, CursoRecurso.indice == NO_INDICE_POSTERIOR
).first()
if task == "increment" and RECURSO_POSTERIOR:
RECURSO_ACTUAL.indice = NO_INDICE_POSTERIOR
RECURSO_POSTERIOR.indice = NO_INDICE_ACTUAL
database.session.add(RECURSO_ACTUAL)
database.session.add(RECURSO_POSTERIOR)
elif task == "decrement" and RECURSO_ANTERIOR:
RECURSO_ACTUAL.indice = NO_INDICE_ANTERIOR
RECURSO_ANTERIOR.indice = NO_INDICE_ACTUAL
database.session.add(RECURSO_ACTUAL)
database.session.add(RECURSO_ANTERIOR)
database.session.commit()
def asignar_curso_a_instructor(curso_codigo: Union[None, str] = None, usuario_id: Union[None, str] = None):
"""Asigna un usuario como instructor de un curso."""
ASIGNACION = DocenteCurso(curso=curso_codigo, usuario=usuario_id, vigente=True, creado_por=current_user.usuario)
database.session.add(ASIGNACION)
database.session.commit()
def asignar_curso_a_moderador(curso_codigo: Union[None, str] = None, usuario_id: Union[None, str] = None):
"""Asigna un usuario como moderador de un curso."""
ASIGNACION = ModeradorCurso(usuario=usuario_id, curso=curso_codigo, vigente=True, creado_por=current_user.usuario)
database.session.add(ASIGNACION)
database.session.commit()
def asignar_curso_a_estudiante(curso_codigo: Union[None, str] = None, usuario_id: Union[None, str] = None):
"""Asigna un usuario como moderador de un curso."""
ASIGNACION = EstudianteCurso(
creado_por=current_user.usuario,
curso=curso_codigo,
usuario=usuario_id,
vigente=True,
)
database.session.add(ASIGNACION)
database.session.commit()
def cambia_tipo_de_usuario_por_id(
id_usuario: Union[None, str] = None, nuevo_tipo: Union[None, str] = None, usuario: Union[None, str] = None
):
"""
Change the type of a user of the system.
The values recognized by the system are: admin, user, instructor, moderator.
"""
USUARIO = Usuario.query.filter_by(usuario=id_usuario).first()
USUARIO.tipo = nuevo_tipo
USUARIO.modificado_por = usuario
database.session.commit()
def cambia_estado_curso_por_id(
id_curso: Union[None, str, int] = None, nuevo_estado: Union[None, str] = None, usuario: Union[None, str] = None
):
"""
Change the status of a course.
The values recognized by the system are: draft, public, open, closed.
"""
CURSO = Curso.query.filter_by(codigo=id_curso).first()
CURSO.estado = nuevo_estado
CURSO.modificado_por = usuario
database.session.commit()
def cambia_curso_publico(id_curso: Union[None, str, int] = None):
"""Cambia el estatus publico de un curso."""
CURSO = Curso.query.filter_by(codigo=id_curso).first()
if CURSO.publico:
CURSO.publico = False
else:
CURSO.publico = True
CURSO.modificado_por = current_user.usuario
database.session.commit()
def cambia_seccion_publico(codigo: Union[None, str, int] = None):
"""Cambia el estatus publico de una sección."""
SECCION = CursoSeccion.query.filter_by(id=codigo).first()
if SECCION.estado:
SECCION.estado = False
else:
SECCION.estado = True
SECCION.modificado_por = current_user.usuario
database.session.commit()
|
PypiClean
|
/disclosure-extractor-0.0.60.tar.gz/disclosure-extractor-0.0.60/disclosure_extractor/calculate.py
|
import re
from typing import Dict
class color:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
END = "\033[0m"
def estimate_investment_net_worth(results):
"""Currently only using investment table to calculate net worth"""
key_codes = {
"A": [1, 1000],
"B": [1001, 2500],
"C": [2501, 5000],
"D": [5001, 15000],
"E": [15001, 50000],
"F": [50001, 100000],
"G": [100001, 1000000],
"H1": [1000001, 5000000],
"H2": [ # This is inaccurate as their is no upper bound
5000001,
1000000000,
],
"J": [1, 15000],
"K": [15001, 50000],
"L": [50001, 100000],
"M": [100001, 250000],
"N": [250001, 500000],
"O": [500001, 1000000],
"P1": [1000001, 5000000],
"P2": [5000001, 25000000],
"P3": [25000001, 50000000],
"P4": [ # This is inaccurate as their is no upper bound
50000001,
1000000000,
],
}
gross_values = []
for k, v in results["sections"]["Investments and Trusts"]["rows"].items():
if "C1" in v.keys():
if v["C1"]["text"] != "" and v["C1"]["text"] != "•":
gross_values.append(key_codes[v["C1"]["text"]])
if v["D3"]["text"] != "" and v["D3"]["text"] != "•":
gross_values.append(key_codes[v["D3"]["text"]])
low = sum(x[0] for x in gross_values)
high = sum(x[1] for x in gross_values)
cd = {}
cd["investment_net_worth"] = (low, high)
net_change = []
for k, v in results["sections"]["Investments and Trusts"]["rows"].items():
if "C1" in v.keys():
B1, D4 = v["B1"]["text"], v["D4"]["text"]
for code in [B1, D4]:
if code != "" and code != "•":
net_change.append(key_codes[code])
low = sum(x[0] for x in net_change)
high = sum(x[1] for x in net_change)
cd["income_gains"] = (low, high)
liabilities_total = []
try:
for k, v in results["sections"]["Liabilities"]["rows"].items():
if (
v["Value Code"]["text"] != ""
and v["Value Code"]["text"] != "-1"
):
liabilities_total.append(key_codes[v["Value Code"]["text"]])
low = sum(x[0] for x in liabilities_total)
high = sum(x[1] for x in liabilities_total)
cd["liabilities"] = (low, high)
except:
cd["liabilities"] = (0, 0)
try:
salaries = []
for k, v in results["sections"]["Non-Investment Income"][
"rows"
].items():
if v["Income"]["text"] != "" and v["Income"]["text"] != "-1":
salary = v["Income"]["text"].replace(",", "").strip("$")
if not re.match(r"^-?\d+(?:\.\d+)?$", salary) is None:
salaries.append(float(salary))
cd["salary_income"] = sum(salaries)
except:
cd["salary_income"] = 0
return cd
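# Worked sketch (not part of the original module): a minimal, hypothetical
# `results` structure with a single investment row whose gross value code is
# "E" (15,001 - 50,000). The sections missing from the dict simply fall back
# to the defaults handled by the try/except blocks above.
def _example_net_worth():
    results = {
        "sections": {
            "Investments and Trusts": {
                "rows": {
                    "row-1": {
                        "B1": {"text": ""},
                        "C1": {"text": "E"},
                        "D3": {"text": ""},
                        "D4": {"text": ""},
                    }
                }
            }
        }
    }
    return estimate_investment_net_worth(results)
    # -> {'investment_net_worth': (15001, 50000), 'income_gains': (0, 0),
    #     'liabilities': (0, 0), 'salary_income': 0}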
def estimate_investment_net_worth_JEF(results: Dict) -> Dict:
"""Currently only using investment table to calculate net worth"""
key_codes = {
"A": [1, 1000],
"B": [1001, 2500],
"C": [2501, 5000],
"D": [5001, 15000],
"E": [15001, 50000],
"F": [50001, 100000],
"G": [100001, 1000000],
"H1": [1000001, 5000000],
"H2": [ # This is inaccurate as their is no upper bound
5000001,
1000000000,
],
"J": [1, 15000],
"K": [15001, 50000],
"L": [50001, 100000],
"M": [100001, 250000],
"N": [250001, 500000],
"O": [500001, 1000000],
"P1": [1000001, 5000000],
"P2": [5000001, 25000000],
"P3": [25000001, 50000000],
"P4": [ # This is inaccurate as their is no upper bound
50000001,
1000000000,
],
}
gross_values = []
for v in results["sections"]["Investments and Trusts"]["rows"]:
if v["C1"]["text"] != "" and v["C1"]["text"] != "•":
gross_values.append(key_codes[v["C1"]["text"]])
if v["D3"]["text"] != "" and v["D3"]["text"] != "•":
gross_values.append(key_codes[v["D3"]["text"]])
low = sum(x[0] for x in gross_values)
high = sum(x[1] for x in gross_values)
cd = {}
cd["investment_net_worth"] = (low, high)
net_change = []
for v in results["sections"]["Investments and Trusts"]["rows"]:
B1, D4 = v["B1"]["text"], v["D4"]["text"]
for code in [B1, D4]:
if code != "" and code != "•":
net_change.append(key_codes[code])
low = sum(x[0] for x in net_change)
high = sum(x[1] for x in net_change)
cd["income_gains"] = (low, high)
liabilities_total = []
try:
for v in results["sections"]["Liabilities"]["rows"]:
liabilities_total.append(key_codes[v["Value Code"]["text"]])
low = sum(x[0] for x in liabilities_total)
high = sum(x[1] for x in liabilities_total)
cd["liabilities"] = (low, high)
except:
cd["liabilities"] = (0, 0)
try:
salaries = []
for v in results["sections"]["Non-Investment Income"]["rows"]:
salary = v["Income"]["text"].replace(",", "").strip("$")
if not re.match(r"^-?\d+(?:\.\d+)?$", salary) is None:
salaries.append(float(salary))
cd["salary_income"] = sum(salaries)
except:
cd["salary_income"] = 0
return cd
|
PypiClean
|
/dob_bright-1.2.4-py3-none-any.whl/dob_bright/config/urable.py
|
import os
from gettext import gettext as _
from ..termio import click_echo, dob_in_user_exit, highlight_value
from . import ConfigRoot
from .fileboss import (
default_config_path,
empty_config_obj,
load_config_obj,
warn_user_config_errors,
write_config_obj
)
__all__ = (
'ConfigUrable',
)
class ConfigUrable(object):
""""""
DOB_CONFIGFILE_ENVKEY = 'DOB_CONFIGFILE'
def __init__(self):
super(ConfigUrable, self).__init__()
self.configfile_path = None
# The ConfigRoot is a module-level Singleton. Deal.
self._config_root = ConfigRoot
# ***
@property
def config_path(self):
return self.configfile_path
# ***
@property
def config_root(self):
return self._config_root
# ***
def find_all(self, parts):
# Caller is responsible for catching KeyError on unrecognized part(s).
return self.config_root.find_all(parts)
# ***
def create_config(self, force):
cfgfile_exists = os.path.exists(self.config_path)
if cfgfile_exists and not force:
dob_in_user_exit(_('Config file exists'))
self.reset_config()
click_echo(
_('Initialized default config file at {}').format(
highlight_value(self.config_path),
)
)
# ***
def load_config(self, configfile_path):
def _load_config():
# The tests mock load_configfile when going through CliRunner,
# to wire the Alchemy store and config upon CLI invocation.
cfgfile_exists = self.load_configfile(configfile_path)
self.cfgfile_exists = cfgfile_exists
self.cfgfile_sanity = not self.cfgfile_exists or _is_config_like()
def _is_config_like():
# What's a reasonable expectation to see if the config file
# legitimately exists? Check that the file exists? Or parse it
# and verify one or more settings therein? Let's do the latter,
# seems more robust. We can check the `store` settings, seems
# like the most obvious setting to check. In any case, we do
# this just to tell the user if they need to create a config;
# the app will run just fine without a config file, because
# defaults!
try:
self.config_root.asobj.db.orm.value_from_config
return True
except AttributeError:
return False
_load_config()
def load_configfile(self, configfile_path):
def _load_configfile():
self.configfile_path = _resolve_configfile_path(configfile_path)
cfgfile_exists = os.path.exists(self.config_path)
config_obj = load_config_obj(self.config_path)
self.config_root.forget_config_values()
unconsumed, errs = self.config_root.update_known(config_obj, errors_ok=True)
warn_if_smelly_config(unconsumed, errs)
return cfgfile_exists
def _resolve_configfile_path(commandline_value):
if commandline_value is not None:
return commandline_value
if ConfigUrable.DOB_CONFIGFILE_ENVKEY in os.environ:
return os.environ[ConfigUrable.DOB_CONFIGFILE_ENVKEY]
return default_config_path()
def warn_if_smelly_config(unconsumed, errs):
basename = os.path.basename(self.config_path)
warn_user_config_errors(unconsumed, errs, which=basename)
return _load_configfile()
def inject_from_cli(self, *keyvals):
def _inject_cli_settings():
for keyval in keyvals:
process_option(keyval)
def process_option(keyval):
key, value = keyval.split('=', 2)
parts = key.split('.')
setting = self.config_root.find_setting(parts)
if setting is None:
dob_in_user_exit(
_('ERROR: Unknown config option: “{}”').format(key)
)
setting.value_from_cliarg = value
return _inject_cli_settings()
# ***
def round_out_config(self):
self.write_config(skip_unset=False)
# ***
def reset_config(self):
config_obj = empty_config_obj(self.config_path)
# Fill in dict object using Config defaults.
self.config_root.forget_config_values()
self.config_root.apply_items(config_obj, use_defaults=True)
write_config_obj(config_obj)
self.cfgfile_exists = True # If anything, we just created it!
self.cfgfile_sanity = True # If anything, we just created it!
# ***
def write_config(self, skip_unset=False):
config_obj = empty_config_obj(self.config_path)
# - (lb): If we did not want to use skip_unset, which won't pollute
# the config_obj that was just read from the user's config, we
# could similarly delete entries from the config, e.g.,
# if skip_unset:
# # Remove settings that are no different than their default
# # (to not save them to the config, potentially cluttering it).
# self.config_root.del_not_persisted(config_obj)
# but sticking with apply_items(skip_unset=True) means self.config_root
# will still be usable after this method returns. I.e., no side effects.
# Fill in dict object using values previously set from config or newly set.
self.config_root.apply_items(config_obj, skip_unset=skip_unset)
write_config_obj(config_obj)
self.cfgfile_exists = True
self.cfgfile_sanity = True
|
PypiClean
|
/lsv2test-core-2.0.0.tar.gz/lsv2test-core-2.0.0/localstack/aws/api/stepfunctions/__init__.py
|
import sys
from datetime import datetime
from typing import List, Optional
if sys.version_info >= (3, 8):
from typing import TypedDict
else:
from typing_extensions import TypedDict
from localstack.aws.api import RequestContext, ServiceException, ServiceRequest, handler
Arn = str
ConnectorParameters = str
Definition = str
Enabled = bool
ErrorMessage = str
Identity = str
IncludeExecutionData = bool
IncludeExecutionDataGetExecutionHistory = bool
ListExecutionsPageToken = str
LongArn = str
MapRunLabel = str
MaxConcurrency = int
Name = str
PageSize = int
PageToken = str
ReverseOrder = bool
SensitiveCause = str
SensitiveData = str
SensitiveDataJobInput = str
SensitiveError = str
TagKey = str
TagValue = str
TaskToken = str
ToleratedFailurePercentage = float
TraceHeader = str
UnsignedInteger = int
includedDetails = bool
truncated = bool
class ExecutionStatus(str):
RUNNING = "RUNNING"
SUCCEEDED = "SUCCEEDED"
FAILED = "FAILED"
TIMED_OUT = "TIMED_OUT"
ABORTED = "ABORTED"
class HistoryEventType(str):
ActivityFailed = "ActivityFailed"
ActivityScheduled = "ActivityScheduled"
ActivityScheduleFailed = "ActivityScheduleFailed"
ActivityStarted = "ActivityStarted"
ActivitySucceeded = "ActivitySucceeded"
ActivityTimedOut = "ActivityTimedOut"
ChoiceStateEntered = "ChoiceStateEntered"
ChoiceStateExited = "ChoiceStateExited"
ExecutionAborted = "ExecutionAborted"
ExecutionFailed = "ExecutionFailed"
ExecutionStarted = "ExecutionStarted"
ExecutionSucceeded = "ExecutionSucceeded"
ExecutionTimedOut = "ExecutionTimedOut"
FailStateEntered = "FailStateEntered"
LambdaFunctionFailed = "LambdaFunctionFailed"
LambdaFunctionScheduled = "LambdaFunctionScheduled"
LambdaFunctionScheduleFailed = "LambdaFunctionScheduleFailed"
LambdaFunctionStarted = "LambdaFunctionStarted"
LambdaFunctionStartFailed = "LambdaFunctionStartFailed"
LambdaFunctionSucceeded = "LambdaFunctionSucceeded"
LambdaFunctionTimedOut = "LambdaFunctionTimedOut"
MapIterationAborted = "MapIterationAborted"
MapIterationFailed = "MapIterationFailed"
MapIterationStarted = "MapIterationStarted"
MapIterationSucceeded = "MapIterationSucceeded"
MapStateAborted = "MapStateAborted"
MapStateEntered = "MapStateEntered"
MapStateExited = "MapStateExited"
MapStateFailed = "MapStateFailed"
MapStateStarted = "MapStateStarted"
MapStateSucceeded = "MapStateSucceeded"
ParallelStateAborted = "ParallelStateAborted"
ParallelStateEntered = "ParallelStateEntered"
ParallelStateExited = "ParallelStateExited"
ParallelStateFailed = "ParallelStateFailed"
ParallelStateStarted = "ParallelStateStarted"
ParallelStateSucceeded = "ParallelStateSucceeded"
PassStateEntered = "PassStateEntered"
PassStateExited = "PassStateExited"
SucceedStateEntered = "SucceedStateEntered"
SucceedStateExited = "SucceedStateExited"
TaskFailed = "TaskFailed"
TaskScheduled = "TaskScheduled"
TaskStarted = "TaskStarted"
TaskStartFailed = "TaskStartFailed"
TaskStateAborted = "TaskStateAborted"
TaskStateEntered = "TaskStateEntered"
TaskStateExited = "TaskStateExited"
TaskSubmitFailed = "TaskSubmitFailed"
TaskSubmitted = "TaskSubmitted"
TaskSucceeded = "TaskSucceeded"
TaskTimedOut = "TaskTimedOut"
WaitStateAborted = "WaitStateAborted"
WaitStateEntered = "WaitStateEntered"
WaitStateExited = "WaitStateExited"
MapRunAborted = "MapRunAborted"
MapRunFailed = "MapRunFailed"
MapRunStarted = "MapRunStarted"
MapRunSucceeded = "MapRunSucceeded"
class LogLevel(str):
ALL = "ALL"
ERROR = "ERROR"
FATAL = "FATAL"
OFF = "OFF"
class MapRunStatus(str):
RUNNING = "RUNNING"
SUCCEEDED = "SUCCEEDED"
FAILED = "FAILED"
ABORTED = "ABORTED"
class StateMachineStatus(str):
ACTIVE = "ACTIVE"
DELETING = "DELETING"
class StateMachineType(str):
STANDARD = "STANDARD"
EXPRESS = "EXPRESS"
class SyncExecutionStatus(str):
SUCCEEDED = "SUCCEEDED"
FAILED = "FAILED"
TIMED_OUT = "TIMED_OUT"
class ValidationExceptionReason(str):
API_DOES_NOT_SUPPORT_LABELED_ARNS = "API_DOES_NOT_SUPPORT_LABELED_ARNS"
MISSING_REQUIRED_PARAMETER = "MISSING_REQUIRED_PARAMETER"
CANNOT_UPDATE_COMPLETED_MAP_RUN = "CANNOT_UPDATE_COMPLETED_MAP_RUN"
class ActivityDoesNotExist(ServiceException):
code: str = "ActivityDoesNotExist"
sender_fault: bool = False
status_code: int = 400
class ActivityLimitExceeded(ServiceException):
code: str = "ActivityLimitExceeded"
sender_fault: bool = False
status_code: int = 400
class ActivityWorkerLimitExceeded(ServiceException):
code: str = "ActivityWorkerLimitExceeded"
sender_fault: bool = False
status_code: int = 400
class ExecutionAlreadyExists(ServiceException):
code: str = "ExecutionAlreadyExists"
sender_fault: bool = False
status_code: int = 400
class ExecutionDoesNotExist(ServiceException):
code: str = "ExecutionDoesNotExist"
sender_fault: bool = False
status_code: int = 400
class ExecutionLimitExceeded(ServiceException):
code: str = "ExecutionLimitExceeded"
sender_fault: bool = False
status_code: int = 400
class InvalidArn(ServiceException):
code: str = "InvalidArn"
sender_fault: bool = False
status_code: int = 400
class InvalidDefinition(ServiceException):
code: str = "InvalidDefinition"
sender_fault: bool = False
status_code: int = 400
class InvalidExecutionInput(ServiceException):
code: str = "InvalidExecutionInput"
sender_fault: bool = False
status_code: int = 400
class InvalidLoggingConfiguration(ServiceException):
code: str = "InvalidLoggingConfiguration"
sender_fault: bool = False
status_code: int = 400
class InvalidName(ServiceException):
code: str = "InvalidName"
sender_fault: bool = False
status_code: int = 400
class InvalidOutput(ServiceException):
code: str = "InvalidOutput"
sender_fault: bool = False
status_code: int = 400
class InvalidToken(ServiceException):
code: str = "InvalidToken"
sender_fault: bool = False
status_code: int = 400
class InvalidTracingConfiguration(ServiceException):
code: str = "InvalidTracingConfiguration"
sender_fault: bool = False
status_code: int = 400
class MissingRequiredParameter(ServiceException):
code: str = "MissingRequiredParameter"
sender_fault: bool = False
status_code: int = 400
class ResourceNotFound(ServiceException):
code: str = "ResourceNotFound"
sender_fault: bool = False
status_code: int = 400
resourceName: Optional[Arn]
class StateMachineAlreadyExists(ServiceException):
code: str = "StateMachineAlreadyExists"
sender_fault: bool = False
status_code: int = 400
class StateMachineDeleting(ServiceException):
code: str = "StateMachineDeleting"
sender_fault: bool = False
status_code: int = 400
class StateMachineDoesNotExist(ServiceException):
code: str = "StateMachineDoesNotExist"
sender_fault: bool = False
status_code: int = 400
class StateMachineLimitExceeded(ServiceException):
code: str = "StateMachineLimitExceeded"
sender_fault: bool = False
status_code: int = 400
class StateMachineTypeNotSupported(ServiceException):
code: str = "StateMachineTypeNotSupported"
sender_fault: bool = False
status_code: int = 400
class TaskDoesNotExist(ServiceException):
code: str = "TaskDoesNotExist"
sender_fault: bool = False
status_code: int = 400
class TaskTimedOut(ServiceException):
code: str = "TaskTimedOut"
sender_fault: bool = False
status_code: int = 400
class TooManyTags(ServiceException):
code: str = "TooManyTags"
sender_fault: bool = False
status_code: int = 400
resourceName: Optional[Arn]
class ValidationException(ServiceException):
code: str = "ValidationException"
sender_fault: bool = False
status_code: int = 400
reason: Optional[ValidationExceptionReason]
class ActivityFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
Timestamp = datetime
class ActivityListItem(TypedDict, total=False):
activityArn: Arn
name: Name
creationDate: Timestamp
ActivityList = List[ActivityListItem]
class ActivityScheduleFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
TimeoutInSeconds = int
class HistoryEventExecutionDataDetails(TypedDict, total=False):
truncated: Optional[truncated]
class ActivityScheduledEventDetails(TypedDict, total=False):
resource: Arn
input: Optional[SensitiveData]
inputDetails: Optional[HistoryEventExecutionDataDetails]
timeoutInSeconds: Optional[TimeoutInSeconds]
heartbeatInSeconds: Optional[TimeoutInSeconds]
class ActivityStartedEventDetails(TypedDict, total=False):
workerName: Optional[Identity]
class ActivitySucceededEventDetails(TypedDict, total=False):
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class ActivityTimedOutEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
BilledDuration = int
BilledMemoryUsed = int
class BillingDetails(TypedDict, total=False):
billedMemoryUsedInMB: Optional[BilledMemoryUsed]
billedDurationInMilliseconds: Optional[BilledDuration]
class CloudWatchEventsExecutionDataDetails(TypedDict, total=False):
included: Optional[includedDetails]
class CloudWatchLogsLogGroup(TypedDict, total=False):
logGroupArn: Optional[Arn]
class Tag(TypedDict, total=False):
key: Optional[TagKey]
value: Optional[TagValue]
TagList = List[Tag]
class CreateActivityInput(ServiceRequest):
name: Name
tags: Optional[TagList]
class CreateActivityOutput(TypedDict, total=False):
activityArn: Arn
creationDate: Timestamp
class TracingConfiguration(TypedDict, total=False):
enabled: Optional[Enabled]
class LogDestination(TypedDict, total=False):
cloudWatchLogsLogGroup: Optional[CloudWatchLogsLogGroup]
LogDestinationList = List[LogDestination]
class LoggingConfiguration(TypedDict, total=False):
level: Optional[LogLevel]
includeExecutionData: Optional[IncludeExecutionData]
destinations: Optional[LogDestinationList]
CreateStateMachineInput = TypedDict(
"CreateStateMachineInput",
{
"name": Name,
"definition": Definition,
"roleArn": Arn,
"type": Optional[StateMachineType],
"loggingConfiguration": Optional[LoggingConfiguration],
"tags": Optional[TagList],
"tracingConfiguration": Optional[TracingConfiguration],
},
total=False,
)
class CreateStateMachineOutput(TypedDict, total=False):
stateMachineArn: Arn
creationDate: Timestamp
class DeleteActivityInput(ServiceRequest):
activityArn: Arn
class DeleteActivityOutput(TypedDict, total=False):
pass
class DeleteStateMachineInput(ServiceRequest):
stateMachineArn: Arn
class DeleteStateMachineOutput(TypedDict, total=False):
pass
class DescribeActivityInput(ServiceRequest):
activityArn: Arn
class DescribeActivityOutput(TypedDict, total=False):
activityArn: Arn
name: Name
creationDate: Timestamp
class DescribeExecutionInput(ServiceRequest):
executionArn: Arn
class DescribeExecutionOutput(TypedDict, total=False):
executionArn: Arn
stateMachineArn: Arn
name: Optional[Name]
status: ExecutionStatus
startDate: Timestamp
stopDate: Optional[Timestamp]
input: Optional[SensitiveData]
inputDetails: Optional[CloudWatchEventsExecutionDataDetails]
output: Optional[SensitiveData]
outputDetails: Optional[CloudWatchEventsExecutionDataDetails]
traceHeader: Optional[TraceHeader]
mapRunArn: Optional[LongArn]
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class DescribeMapRunInput(ServiceRequest):
mapRunArn: LongArn
UnsignedLong = int
class MapRunExecutionCounts(TypedDict, total=False):
pending: UnsignedLong
running: UnsignedLong
succeeded: UnsignedLong
failed: UnsignedLong
timedOut: UnsignedLong
aborted: UnsignedLong
total: UnsignedLong
resultsWritten: UnsignedLong
class MapRunItemCounts(TypedDict, total=False):
pending: UnsignedLong
running: UnsignedLong
succeeded: UnsignedLong
failed: UnsignedLong
timedOut: UnsignedLong
aborted: UnsignedLong
total: UnsignedLong
resultsWritten: UnsignedLong
ToleratedFailureCount = int
class DescribeMapRunOutput(TypedDict, total=False):
mapRunArn: LongArn
executionArn: Arn
status: MapRunStatus
startDate: Timestamp
stopDate: Optional[Timestamp]
maxConcurrency: MaxConcurrency
toleratedFailurePercentage: ToleratedFailurePercentage
toleratedFailureCount: ToleratedFailureCount
itemCounts: MapRunItemCounts
executionCounts: MapRunExecutionCounts
class DescribeStateMachineForExecutionInput(ServiceRequest):
executionArn: Arn
class DescribeStateMachineForExecutionOutput(TypedDict, total=False):
stateMachineArn: Arn
name: Name
definition: Definition
roleArn: Arn
updateDate: Timestamp
loggingConfiguration: Optional[LoggingConfiguration]
tracingConfiguration: Optional[TracingConfiguration]
mapRunArn: Optional[LongArn]
label: Optional[MapRunLabel]
class DescribeStateMachineInput(ServiceRequest):
stateMachineArn: Arn
DescribeStateMachineOutput = TypedDict(
"DescribeStateMachineOutput",
{
"stateMachineArn": Arn,
"name": Name,
"status": Optional[StateMachineStatus],
"definition": Definition,
"roleArn": Arn,
"type": StateMachineType,
"creationDate": Timestamp,
"loggingConfiguration": Optional[LoggingConfiguration],
"tracingConfiguration": Optional[TracingConfiguration],
"label": Optional[MapRunLabel],
},
total=False,
)
EventId = int
class ExecutionAbortedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class ExecutionFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class ExecutionListItem(TypedDict, total=False):
executionArn: Arn
stateMachineArn: Arn
name: Name
status: ExecutionStatus
startDate: Timestamp
stopDate: Optional[Timestamp]
mapRunArn: Optional[LongArn]
itemCount: Optional[UnsignedInteger]
ExecutionList = List[ExecutionListItem]
class ExecutionStartedEventDetails(TypedDict, total=False):
input: Optional[SensitiveData]
inputDetails: Optional[HistoryEventExecutionDataDetails]
roleArn: Optional[Arn]
class ExecutionSucceededEventDetails(TypedDict, total=False):
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class ExecutionTimedOutEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class GetActivityTaskInput(ServiceRequest):
activityArn: Arn
workerName: Optional[Name]
class GetActivityTaskOutput(TypedDict, total=False):
taskToken: Optional[TaskToken]
input: Optional[SensitiveDataJobInput]
class GetExecutionHistoryInput(ServiceRequest):
executionArn: Arn
maxResults: Optional[PageSize]
reverseOrder: Optional[ReverseOrder]
nextToken: Optional[PageToken]
includeExecutionData: Optional[IncludeExecutionDataGetExecutionHistory]
class MapRunFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class MapRunStartedEventDetails(TypedDict, total=False):
mapRunArn: Optional[LongArn]
class StateExitedEventDetails(TypedDict, total=False):
name: Name
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class StateEnteredEventDetails(TypedDict, total=False):
name: Name
input: Optional[SensitiveData]
inputDetails: Optional[HistoryEventExecutionDataDetails]
class LambdaFunctionTimedOutEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class LambdaFunctionSucceededEventDetails(TypedDict, total=False):
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class LambdaFunctionStartFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class TaskCredentials(TypedDict, total=False):
roleArn: Optional[LongArn]
class LambdaFunctionScheduledEventDetails(TypedDict, total=False):
resource: Arn
input: Optional[SensitiveData]
inputDetails: Optional[HistoryEventExecutionDataDetails]
timeoutInSeconds: Optional[TimeoutInSeconds]
taskCredentials: Optional[TaskCredentials]
class LambdaFunctionScheduleFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class LambdaFunctionFailedEventDetails(TypedDict, total=False):
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class MapIterationEventDetails(TypedDict, total=False):
name: Optional[Name]
index: Optional[UnsignedInteger]
class MapStateStartedEventDetails(TypedDict, total=False):
length: Optional[UnsignedInteger]
class TaskTimedOutEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class TaskSucceededEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class TaskSubmittedEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
output: Optional[SensitiveData]
outputDetails: Optional[HistoryEventExecutionDataDetails]
class TaskSubmitFailedEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class TaskStartedEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
class TaskStartFailedEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class TaskScheduledEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
region: Name
parameters: ConnectorParameters
timeoutInSeconds: Optional[TimeoutInSeconds]
heartbeatInSeconds: Optional[TimeoutInSeconds]
taskCredentials: Optional[TaskCredentials]
class TaskFailedEventDetails(TypedDict, total=False):
resourceType: Name
resource: Name
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
HistoryEvent = TypedDict(
"HistoryEvent",
{
"timestamp": Timestamp,
"type": HistoryEventType,
"id": EventId,
"previousEventId": Optional[EventId],
"activityFailedEventDetails": Optional[ActivityFailedEventDetails],
"activityScheduleFailedEventDetails": Optional[ActivityScheduleFailedEventDetails],
"activityScheduledEventDetails": Optional[ActivityScheduledEventDetails],
"activityStartedEventDetails": Optional[ActivityStartedEventDetails],
"activitySucceededEventDetails": Optional[ActivitySucceededEventDetails],
"activityTimedOutEventDetails": Optional[ActivityTimedOutEventDetails],
"taskFailedEventDetails": Optional[TaskFailedEventDetails],
"taskScheduledEventDetails": Optional[TaskScheduledEventDetails],
"taskStartFailedEventDetails": Optional[TaskStartFailedEventDetails],
"taskStartedEventDetails": Optional[TaskStartedEventDetails],
"taskSubmitFailedEventDetails": Optional[TaskSubmitFailedEventDetails],
"taskSubmittedEventDetails": Optional[TaskSubmittedEventDetails],
"taskSucceededEventDetails": Optional[TaskSucceededEventDetails],
"taskTimedOutEventDetails": Optional[TaskTimedOutEventDetails],
"executionFailedEventDetails": Optional[ExecutionFailedEventDetails],
"executionStartedEventDetails": Optional[ExecutionStartedEventDetails],
"executionSucceededEventDetails": Optional[ExecutionSucceededEventDetails],
"executionAbortedEventDetails": Optional[ExecutionAbortedEventDetails],
"executionTimedOutEventDetails": Optional[ExecutionTimedOutEventDetails],
"mapStateStartedEventDetails": Optional[MapStateStartedEventDetails],
"mapIterationStartedEventDetails": Optional[MapIterationEventDetails],
"mapIterationSucceededEventDetails": Optional[MapIterationEventDetails],
"mapIterationFailedEventDetails": Optional[MapIterationEventDetails],
"mapIterationAbortedEventDetails": Optional[MapIterationEventDetails],
"lambdaFunctionFailedEventDetails": Optional[LambdaFunctionFailedEventDetails],
"lambdaFunctionScheduleFailedEventDetails": Optional[
LambdaFunctionScheduleFailedEventDetails
],
"lambdaFunctionScheduledEventDetails": Optional[LambdaFunctionScheduledEventDetails],
"lambdaFunctionStartFailedEventDetails": Optional[LambdaFunctionStartFailedEventDetails],
"lambdaFunctionSucceededEventDetails": Optional[LambdaFunctionSucceededEventDetails],
"lambdaFunctionTimedOutEventDetails": Optional[LambdaFunctionTimedOutEventDetails],
"stateEnteredEventDetails": Optional[StateEnteredEventDetails],
"stateExitedEventDetails": Optional[StateExitedEventDetails],
"mapRunStartedEventDetails": Optional[MapRunStartedEventDetails],
"mapRunFailedEventDetails": Optional[MapRunFailedEventDetails],
},
total=False,
)
HistoryEventList = List[HistoryEvent]
class GetExecutionHistoryOutput(TypedDict, total=False):
events: HistoryEventList
nextToken: Optional[PageToken]
class ListActivitiesInput(ServiceRequest):
maxResults: Optional[PageSize]
nextToken: Optional[PageToken]
class ListActivitiesOutput(TypedDict, total=False):
activities: ActivityList
nextToken: Optional[PageToken]
class ListExecutionsInput(ServiceRequest):
stateMachineArn: Optional[Arn]
statusFilter: Optional[ExecutionStatus]
maxResults: Optional[PageSize]
nextToken: Optional[ListExecutionsPageToken]
mapRunArn: Optional[LongArn]
class ListExecutionsOutput(TypedDict, total=False):
executions: ExecutionList
nextToken: Optional[ListExecutionsPageToken]
class ListMapRunsInput(ServiceRequest):
executionArn: Arn
maxResults: Optional[PageSize]
nextToken: Optional[PageToken]
class MapRunListItem(TypedDict, total=False):
executionArn: Arn
mapRunArn: LongArn
stateMachineArn: Arn
startDate: Timestamp
stopDate: Optional[Timestamp]
MapRunList = List[MapRunListItem]
class ListMapRunsOutput(TypedDict, total=False):
mapRuns: MapRunList
nextToken: Optional[PageToken]
class ListStateMachinesInput(ServiceRequest):
maxResults: Optional[PageSize]
nextToken: Optional[PageToken]
StateMachineListItem = TypedDict(
"StateMachineListItem",
{
"stateMachineArn": Arn,
"name": Name,
"type": StateMachineType,
"creationDate": Timestamp,
},
total=False,
)
StateMachineList = List[StateMachineListItem]
class ListStateMachinesOutput(TypedDict, total=False):
stateMachines: StateMachineList
nextToken: Optional[PageToken]
class ListTagsForResourceInput(ServiceRequest):
resourceArn: Arn
class ListTagsForResourceOutput(TypedDict, total=False):
tags: Optional[TagList]
class SendTaskFailureInput(ServiceRequest):
taskToken: TaskToken
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class SendTaskFailureOutput(TypedDict, total=False):
pass
class SendTaskHeartbeatInput(ServiceRequest):
taskToken: TaskToken
class SendTaskHeartbeatOutput(TypedDict, total=False):
pass
class SendTaskSuccessInput(ServiceRequest):
taskToken: TaskToken
output: SensitiveData
class SendTaskSuccessOutput(TypedDict, total=False):
pass
class StartExecutionInput(ServiceRequest):
stateMachineArn: Arn
name: Optional[Name]
input: Optional[SensitiveData]
traceHeader: Optional[TraceHeader]
class StartExecutionOutput(TypedDict, total=False):
executionArn: Arn
startDate: Timestamp
class StartSyncExecutionInput(ServiceRequest):
stateMachineArn: Arn
name: Optional[Name]
input: Optional[SensitiveData]
traceHeader: Optional[TraceHeader]
class StartSyncExecutionOutput(TypedDict, total=False):
executionArn: Arn
stateMachineArn: Optional[Arn]
name: Optional[Name]
startDate: Timestamp
stopDate: Timestamp
status: SyncExecutionStatus
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
input: Optional[SensitiveData]
inputDetails: Optional[CloudWatchEventsExecutionDataDetails]
output: Optional[SensitiveData]
outputDetails: Optional[CloudWatchEventsExecutionDataDetails]
traceHeader: Optional[TraceHeader]
billingDetails: Optional[BillingDetails]
class StopExecutionInput(ServiceRequest):
executionArn: Arn
error: Optional[SensitiveError]
cause: Optional[SensitiveCause]
class StopExecutionOutput(TypedDict, total=False):
stopDate: Timestamp
TagKeyList = List[TagKey]
class TagResourceInput(ServiceRequest):
resourceArn: Arn
tags: TagList
class TagResourceOutput(TypedDict, total=False):
pass
class UntagResourceInput(ServiceRequest):
resourceArn: Arn
tagKeys: TagKeyList
class UntagResourceOutput(TypedDict, total=False):
pass
class UpdateMapRunInput(ServiceRequest):
mapRunArn: LongArn
maxConcurrency: Optional[MaxConcurrency]
toleratedFailurePercentage: Optional[ToleratedFailurePercentage]
toleratedFailureCount: Optional[ToleratedFailureCount]
class UpdateMapRunOutput(TypedDict, total=False):
pass
class UpdateStateMachineInput(ServiceRequest):
stateMachineArn: Arn
definition: Optional[Definition]
roleArn: Optional[Arn]
loggingConfiguration: Optional[LoggingConfiguration]
tracingConfiguration: Optional[TracingConfiguration]
class UpdateStateMachineOutput(TypedDict, total=False):
updateDate: Timestamp
class StepfunctionsApi:
service = "stepfunctions"
version = "2016-11-23"
@handler("CreateActivity")
def create_activity(
self, context: RequestContext, name: Name, tags: TagList = None
) -> CreateActivityOutput:
raise NotImplementedError
@handler("CreateStateMachine", expand=False)
def create_state_machine(
self, context: RequestContext, request: CreateStateMachineInput
) -> CreateStateMachineOutput:
raise NotImplementedError
@handler("DeleteActivity")
def delete_activity(self, context: RequestContext, activity_arn: Arn) -> DeleteActivityOutput:
raise NotImplementedError
@handler("DeleteStateMachine")
def delete_state_machine(
self, context: RequestContext, state_machine_arn: Arn
) -> DeleteStateMachineOutput:
raise NotImplementedError
@handler("DescribeActivity")
def describe_activity(
self, context: RequestContext, activity_arn: Arn
) -> DescribeActivityOutput:
raise NotImplementedError
@handler("DescribeExecution")
def describe_execution(
self, context: RequestContext, execution_arn: Arn
) -> DescribeExecutionOutput:
raise NotImplementedError
@handler("DescribeMapRun")
def describe_map_run(
self, context: RequestContext, map_run_arn: LongArn
) -> DescribeMapRunOutput:
raise NotImplementedError
@handler("DescribeStateMachine")
def describe_state_machine(
self, context: RequestContext, state_machine_arn: Arn
) -> DescribeStateMachineOutput:
raise NotImplementedError
@handler("DescribeStateMachineForExecution")
def describe_state_machine_for_execution(
self, context: RequestContext, execution_arn: Arn
) -> DescribeStateMachineForExecutionOutput:
raise NotImplementedError
@handler("GetActivityTask")
def get_activity_task(
self, context: RequestContext, activity_arn: Arn, worker_name: Name = None
) -> GetActivityTaskOutput:
raise NotImplementedError
@handler("GetExecutionHistory")
def get_execution_history(
self,
context: RequestContext,
execution_arn: Arn,
max_results: PageSize = None,
reverse_order: ReverseOrder = None,
next_token: PageToken = None,
include_execution_data: IncludeExecutionDataGetExecutionHistory = None,
) -> GetExecutionHistoryOutput:
raise NotImplementedError
@handler("ListActivities")
def list_activities(
self, context: RequestContext, max_results: PageSize = None, next_token: PageToken = None
) -> ListActivitiesOutput:
raise NotImplementedError
@handler("ListExecutions")
def list_executions(
self,
context: RequestContext,
state_machine_arn: Arn = None,
status_filter: ExecutionStatus = None,
max_results: PageSize = None,
next_token: ListExecutionsPageToken = None,
map_run_arn: LongArn = None,
) -> ListExecutionsOutput:
raise NotImplementedError
@handler("ListMapRuns")
def list_map_runs(
self,
context: RequestContext,
execution_arn: Arn,
max_results: PageSize = None,
next_token: PageToken = None,
) -> ListMapRunsOutput:
raise NotImplementedError
@handler("ListStateMachines")
def list_state_machines(
self, context: RequestContext, max_results: PageSize = None, next_token: PageToken = None
) -> ListStateMachinesOutput:
raise NotImplementedError
@handler("ListTagsForResource")
def list_tags_for_resource(
self, context: RequestContext, resource_arn: Arn
) -> ListTagsForResourceOutput:
raise NotImplementedError
@handler("SendTaskFailure")
def send_task_failure(
self,
context: RequestContext,
task_token: TaskToken,
error: SensitiveError = None,
cause: SensitiveCause = None,
) -> SendTaskFailureOutput:
raise NotImplementedError
@handler("SendTaskHeartbeat")
def send_task_heartbeat(
self, context: RequestContext, task_token: TaskToken
) -> SendTaskHeartbeatOutput:
raise NotImplementedError
@handler("SendTaskSuccess")
def send_task_success(
self, context: RequestContext, task_token: TaskToken, output: SensitiveData
) -> SendTaskSuccessOutput:
raise NotImplementedError
@handler("StartExecution")
def start_execution(
self,
context: RequestContext,
state_machine_arn: Arn,
name: Name = None,
input: SensitiveData = None,
trace_header: TraceHeader = None,
) -> StartExecutionOutput:
raise NotImplementedError
@handler("StartSyncExecution")
def start_sync_execution(
self,
context: RequestContext,
state_machine_arn: Arn,
name: Name = None,
input: SensitiveData = None,
trace_header: TraceHeader = None,
) -> StartSyncExecutionOutput:
raise NotImplementedError
@handler("StopExecution")
def stop_execution(
self,
context: RequestContext,
execution_arn: Arn,
error: SensitiveError = None,
cause: SensitiveCause = None,
) -> StopExecutionOutput:
raise NotImplementedError
@handler("TagResource")
def tag_resource(
self, context: RequestContext, resource_arn: Arn, tags: TagList
) -> TagResourceOutput:
raise NotImplementedError
@handler("UntagResource")
def untag_resource(
self, context: RequestContext, resource_arn: Arn, tag_keys: TagKeyList
) -> UntagResourceOutput:
raise NotImplementedError
@handler("UpdateMapRun")
def update_map_run(
self,
context: RequestContext,
map_run_arn: LongArn,
max_concurrency: MaxConcurrency = None,
tolerated_failure_percentage: ToleratedFailurePercentage = None,
tolerated_failure_count: ToleratedFailureCount = None,
) -> UpdateMapRunOutput:
raise NotImplementedError
@handler("UpdateStateMachine")
def update_state_machine(
self,
context: RequestContext,
state_machine_arn: Arn,
definition: Definition = None,
role_arn: Arn = None,
logging_configuration: LoggingConfiguration = None,
tracing_configuration: TracingConfiguration = None,
) -> UpdateStateMachineOutput:
raise NotImplementedError
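# --- Editorial addition: a hedged usage sketch, not part of the generated stubs above. ---
# A concrete backend would subclass StepfunctionsApi and override only the handlers it
# supports; the region, account id, and provider name below are illustrative assumptions.
class ExampleStepfunctionsProvider(StepfunctionsApi):
    @handler("CreateActivity")
    def create_activity(
        self, context: RequestContext, name: Name, tags: TagList = None
    ) -> CreateActivityOutput:
        # Illustrative only: a real provider would validate the name, persist the
        # activity, and derive the ARN from the request's region and account.
        arn = "arn:aws:states:us-east-1:000000000000:activity:%s" % name
        return CreateActivityOutput(activityArn=arn, creationDate=datetime.now())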
|
PypiClean
|
/sdnn-cl-2.2.0.tar.gz/sdnn-cl-2.2.0/tvm/topi/nn/batch_matmul.py
|
"""Batch matrix multiplication"""
# pylint: disable=invalid-name
import logging
import tvm
from tvm import te, auto_scheduler
from ..utils import get_const_tuple
logger = logging.getLogger("topi")
def batch_matmul(
tensor_a,
tensor_b,
oshape=None,
out_dtype=None,
transpose_a=False,
transpose_b=True,
auto_scheduler_rewritten_layout="",
):
"""Compute batch matrix multiplication of `tensor_a` and `tensor_b`.
Both `tensor_a` and `tensor_b` can be transposed. For legacy reasons, the NT format
(transpose_a=False, transpose_b=True) is used by default.
Parameters
----------
tensor_a : tvm.te.Tensor
3-D with shape [batch, M, K] or [batch, K, M].
tensor_b : tvm.te.Tensor
3-D with shape [batch, K, N] or [batch, N, K].
oshape : List[Optional]
Explicit intended output shape of the computation. Can be useful in cases
with dynamic input shapes.
out_dtype : Optional[str]
Specifies the output data type for mixed precision batch matmul.
transpose_a : Optional[bool] = False
Whether the first tensor is in transposed format.
transpose_b : Optional[bool] = True
Whether the second tensor is in transposed format.
auto_scheduler_rewritten_layout: Optional[str] = ""
The layout after auto-scheduler's layout rewrite pass.
Returns
-------
output : tvm.te.Tensor
3-D with shape [batch, M, N]
"""
assert len(tensor_a.shape) == 3, "tensor_a only supports 3-dim tensors"
if transpose_a:
XB, XK, XI = get_const_tuple(tensor_a.shape)
else:
XB, XI, XK = get_const_tuple(tensor_a.shape)
if auto_scheduler_rewritten_layout:
# Infer shape for the rewritten layout
YB, YK, YJ = auto_scheduler.get_shape_from_rewritten_layout(
auto_scheduler_rewritten_layout, ["b", "k", "j"]
)
auto_scheduler.remove_index_check(tensor_b)
else:
assert len(tensor_b.shape) == 3, "tensor_b only supports 3-dim tensors"
if transpose_b:
YB, YJ, YK = get_const_tuple(tensor_b.shape)
else:
YB, YK, YJ = get_const_tuple(tensor_b.shape)
assert XK == YK or isinstance(YK, tvm.tir.expr.Var), "shapes of x and y are inconsistent"
k = te.reduce_axis((0, XK), name="k")
if oshape is None:
assert XB == YB or XB == 1 or YB == 1, "batch dimension doesn't match"
batch = (
tvm.tir.expr.SizeVar("batch", "int32")
if isinstance(XB, tvm.tir.expr.Var) or isinstance(YB, tvm.tir.expr.Var)
else te.max(XB, YB)
)
oshape = (batch, XI, YJ)
if out_dtype is None:
out_dtype = tensor_a.dtype
if tensor_a.dtype != tensor_b.dtype:
logger.warning(
"tensor_a has different data type with tensor_b: %s, %s",
tensor_a.dtype,
tensor_b.dtype,
)
if (transpose_a, transpose_b) == (True, True):
compute_lambda = lambda b, i, j: te.sum(
tensor_a[b if XB != 1 else 0, k, i].astype(out_dtype)
* tensor_b[b if YB != 1 else 0, j, k].astype(out_dtype),
axis=k,
)
compute_name = "T_batch_matmul_TT"
elif (transpose_a, transpose_b) == (True, False):
compute_lambda = lambda b, i, j: te.sum(
tensor_a[b if XB != 1 else 0, k, i].astype(out_dtype)
* tensor_b[b if YB != 1 else 0, k, j].astype(out_dtype),
axis=k,
)
compute_name = "T_batch_matmul_TN"
elif (transpose_a, transpose_b) == (False, True):
compute_lambda = lambda b, i, j: te.sum(
tensor_a[b if XB != 1 else 0, i, k].astype(out_dtype)
* tensor_b[b if YB != 1 else 0, j, k].astype(out_dtype),
axis=k,
)
compute_name = "T_batch_matmul_NT"
else: # (transpose_a, transpose_b) == (False, False):
compute_lambda = lambda b, i, j: te.sum(
tensor_a[b if XB != 1 else 0, i, k].astype(out_dtype)
* tensor_b[b if YB != 1 else 0, k, j].astype(out_dtype),
axis=k,
)
compute_name = "T_batch_matmul_NN"
output = te.compute(
oshape,
compute_lambda,
name=compute_name,
tag="batch_matmul",
attrs={"layout_free_placeholders": [tensor_b]},
)
if auto_scheduler_rewritten_layout:
output = auto_scheduler.rewrite_compute_body(output, auto_scheduler_rewritten_layout)
return output
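# --- Editorial addition: a hedged usage sketch of the compute rule above; shapes are illustrative. ---
# With the default NT layout (transpose_a=False, transpose_b=True), tensor_a is
# [batch, M, K], tensor_b is [batch, N, K], and the result is [batch, M, N].
def _example_batch_matmul_nt():
    A = te.placeholder((4, 32, 64), name="A", dtype="float32")   # [batch, M, K]
    B = te.placeholder((4, 128, 64), name="B", dtype="float32")  # [batch, N, K]
    C = batch_matmul(A, B)                                       # -> [4, 32, 128]
    s = te.create_schedule(C.op)
    return tvm.lower(s, [A, B, C], simple_mode=True)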
@tvm.target.generic_func
def batch_matmul_legalize(attrs, inputs, types):
"""Legalizes batch_matmul op.
Parameters
----------
attrs : tvm.ir.Attrs
Attributes of current batch_matmul
inputs : list of tvm.relay.Expr
The args of the Relay expr to be legalized
types : list of types
List of input and output types
Returns
-------
result : tvm.relay.Expr
The legalized expr
"""
# not to change by default
# pylint: disable=unused-argument
return None
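# --- Editorial addition: a hedged sketch of how a target could override the hook above. ---
# The target key and the no-op body are illustrative only; a real override would rewrite
# the batch_matmul call (e.g. pad shapes or cast dtypes) and return the new Relay expr.
# @batch_matmul_legalize.register("cpu")
# def _batch_matmul_legalize_cpu(attrs, inputs, types):
#     return None  # None keeps the original op unchanged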
|
PypiClean
|
/mindspore_gpu-1.10.0-cp39-cp39-manylinux1_x86_64.whl/mindspore/_akg/akg/composite/build_module.py
|
"""build module"""
import os
import json
from collections.abc import Iterable
import akg
from akg import tvm
from akg.utils.kernel_exec import ReturnType, is_symbolic_tiling
from .split_stitch import split_stitch_attr
from .construct_args import get_construct_args, get_tune_construct_args, \
    should_enable_attr, get_stmt_for_tune, add_attrs_in_segment_infos, \
    update_attrs
from .construct_args import ConstructType, ConstructKey
def generate_trait(desc):
"""
generate trait of kernel description
"""
def get_op_trait(op, counter, tensor_idx):
input_idx = []
if op['input_desc']:
for input_desc in op['input_desc']:
if input_desc[0].get('value', None) is None:
input_idx.append(counter - tensor_idx[input_desc[0]['tensor_name']])
input_idx.sort()
input_idx_str = ''.join(str(i) for i in input_idx)
op_trait = op['name'] + input_idx_str
if op['name'] == "MatMul":
for attr in op['attr']:
if attr['name'] == "transpose_a":
transpose_a = str(int(attr['value']))
if attr['name'] == "transpose_b":
transpose_b = str(int(attr['value']))
op_trait += '_' + transpose_a + '_' + transpose_b
return op_trait
def generate_compute_trait():
tensor_idx = {}
counter = 0
traits = []
if desc['input_desc'] is not None:
for in_desc in desc['input_desc']:
tensor_idx[in_desc[0]['tensor_name']] = counter
counter += 1
traits = [str(len(desc['input_desc']))]
for op in desc['op_desc'] if desc['op_desc'] is not None else []:
op_trait = get_op_trait(op, counter, tensor_idx)
traits.append(op_trait)
for op_out_desc in op['output_desc'] if op['output_desc'] is not None else []:
tensor_idx[op_out_desc['tensor_name']] = counter
counter += 1
output_idx = []
for out_desc in desc['output_desc'] if desc['output_desc'] is not None else []:
output_idx.append(tensor_idx.get(out_desc.get('tensor_name', "")))
output_idx.sort()
traits.append(''.join(str(i) for i in output_idx))
return '.'.join(traits)
def append_trait(traits, data):
if traits and traits[-1].rstrip('-') == data:
traits[-1] += '-'
else:
traits.append(data)
def generate_shape_trait():
traits = []
for in_desc in desc['input_desc'] if desc['input_desc'] is not None else []:
shape_s = '_'.join(str(i) for i in in_desc[0]['shape'])
append_trait(traits, shape_s)
for out_desc in desc['output_desc'] if desc['output_desc'] is not None else []:
shape_s = '_'.join(str(i) for i in out_desc['shape'])
append_trait(traits, shape_s)
return '.'.join(traits)
def generate_dtype_trait():
traits = []
for in_desc in desc['input_desc'] if desc['input_desc'] is not None else []:
dtype = in_desc[0]['data_type']
append_trait(traits, dtype)
for out_desc in desc['output_desc'] if desc['output_desc'] is not None else []:
dtype = out_desc['data_type']
append_trait(traits, dtype)
return '.'.join(traits)
compute = generate_compute_trait()
shape = generate_shape_trait()
dtype = generate_dtype_trait()
return compute, shape, dtype
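# --- Editorial addition: a hedged, minimal example of the trait strings produced above. ---
# The kernel-description layout below is inferred from how generate_trait reads it and is
# illustrative only; real descriptions carry more fields (process, op, attrs, ...).
def _example_generate_trait():
    desc = {
        "input_desc": [[{"tensor_name": "x", "shape": [4, 4], "data_type": "float32"}],
                       [{"tensor_name": "y", "shape": [4, 4], "data_type": "float32"}]],
        "op_desc": [{"name": "Add",
                     "input_desc": [[{"tensor_name": "x"}], [{"tensor_name": "y"}]],
                     "output_desc": [{"tensor_name": "z"}],
                     "attr": []}],
        "output_desc": [{"tensor_name": "z", "shape": [4, 4], "data_type": "float32"}],
    }
    # -> ('2.Add12.2', '4_4--', 'float32--'): compute graph, shape, and dtype traits.
    return generate_trait(desc)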
def _set_compute_attrs(desc_d_in, attr):
desc_d = desc_d_in
for i, op in enumerate(desc_d.get('op_desc')):
if op.get('name') == "MatMul" and attr.get('bypass') not in (None, ''):
desc_d['op_desc'][i]['attr'].append({'data_type': 'int32', 'name': 'bypass', 'value': attr['bypass']})
desc_s = json.dumps(desc_d)
return desc_d, desc_s
def _get_feature(target, segment_tree, segment_infos):
tune_composite = tvm.get_global_func("tune_composite")
stmt, args = tune_composite(target, True, segment_tree, segment_infos)
from akg.tvm import build_module
binds, _ = build_module.get_binds(args)
from akg.utils.auto_tuning import get_features_from_stmts
feature = get_features_from_stmts(target=target, stmts=[stmt], binds=[binds], n_skip_cache=0)[0]
return feature
def _build_for_tuning(attrs, func, target, segment_tree, segment_infos):
def _setup_for_feature(segment_infos):
feature_segment_infos = segment_infos.copy()
if attrs.get("ret_mode") != ReturnType.FEAT:
feature_segment_infos = add_attrs_in_segment_infos(feature_segment_infos, "ret_mode", ReturnType.FEAT)
feature_segment_infos = get_stmt_for_tune(feature_segment_infos)
return feature_segment_infos
if attrs.get("ret_mode") == ReturnType.FEAT:
segment_infos = _setup_for_feature(segment_infos)
return _get_feature(target, segment_tree, segment_infos)
elif attrs.get("ret_mode") in [ReturnType.DEFAULT, ReturnType.MOD]:
return func(target, True, segment_tree, segment_infos)
elif attrs.get("ret_mode") == ReturnType.MOD_AND_FEAT:
# get both module and feature
feature_segment_infos = _setup_for_feature(segment_infos)
feature = _get_feature(target, segment_tree, feature_segment_infos)
segment_infos = add_attrs_in_segment_infos(segment_infos, "ret_mode", ReturnType.MOD)
mod = func(target, True, segment_tree, segment_infos)
return mod, feature
else:
raise ValueError("ret_mode gets a wrong value: {}, should be in DEFAULT, FEAT, MOD, MOD_AND_FEAT".
format(attrs.get("ret_mode")))
def _set_tiling_attrs(out_shape, attrs):
axis_len = len(out_shape)
if axis_len < 3:
return attrs
if all(map(lambda x: x == 1, list(out_shape[x] for x in range(axis_len - 2)))):
return attrs
if attrs.get('bind_block') in (None, ''):
i = 0
while out_shape[i] == 1:
i += 1
block_y = out_shape[i]
block_x = out_shape[i + 1] if i < axis_len - 3 else 1
attrs['bind_block'] = str(block_x) + ' ' + str(block_y)
if attrs.get('dim') in (None, ''):
batch_axis = 0
for i in range(axis_len - 2):
if out_shape[i] != 1:
batch_axis += 1
dim_list = [0, 0, 64, 64, 0, 0, 64, 64, 0, 0, 64, 4]
dim_list = [0, 0, 1, 1] * batch_axis + dim_list
i = 0
while i < (len(dim_list) // 4):
dim_list[i * 4 + 1] = i
i += 1
attrs['dim'] = ' '.join(str(x) for x in dim_list)
return attrs
def _update_target_info(desc_d, attr):
target_info = desc_d.get("target_info")
if not target_info:
return attr
process = desc_d.get("process")
if process == "cuda":
# auto-detect the proper GPU device type from the compute capability in the description
if target_info.get("compute_capability") == "8.0":
attr["device_type"] = "a100"
elif process == "cpu":
if target_info.get("feature"):
attr["feature"] = target_info.get("feature")
return attr
def _update_compile_attr(desc_d, attr):
# For user defined akg compile attr
attr = _update_target_info(desc_d, attr)
if desc_d.get('op_desc') is None:
return attr
for op in desc_d.get('op_desc'):
op_attrs = op.get("attr", {})
if not isinstance(op_attrs, Iterable):
continue
compile_attrs = list(item.get("value", "") for item in op_attrs if isinstance(
item, dict) and item.get("name", "") == "func_compile_attrs")
if compile_attrs:
attrs_dict = json.loads(compile_attrs[0])
for key in attrs_dict:
attr.update({key: attrs_dict[key]})
return attr
def update_tuned_attrs(desc_d, attrs):
"""update attrs from tuning to build process, like 'tuned_dim' -> 'dim'
"""
tuned_attrs_list = ["tuned_dim", "tuned_bind_block", "tuned_bind_thread"]
if not desc_d.get("op_desc", None):
return attrs
for op in desc_d.get("op_desc"):
if not op.get("attr", None):
continue
for a in op.get("attr"):
if a["name"] in tuned_attrs_list:
name = a["name"][6:] # remove 'tuned_'
attrs[name] = attrs.get(name, a["value"])
return attrs
def update_dynamic_batch_attrs(desc_d, attrs):
"""update attrs related to dynamic batch
"""
if "dynamic_input_index" in desc_d:
attrs["dynamic_input_index"] = desc_d["dynamic_input_index"]
else:
attrs["dynamic_input_index"] = ""
return attrs
def _set_attrs(desc_d, attrs, poly):
if "enable_atomic_add" not in attrs.keys():
attrs["enable_atomic_add"] = should_enable_attr(desc_d, "enable_atomic_add")
if not poly:
attrs["enable_atomic_add"] = False
if "is_csr" not in attrs.keys():
attrs["is_csr"] = should_enable_attr(desc_d, "is_csr")
if "enable_approximate_read" not in attrs.keys():
attrs["enable_approximate_read"] = should_enable_attr(desc_d, "enable_approximate_read")
if "enable_elementwise_flatten" not in attrs.keys():
attrs["enable_elementwise_flatten"] = False
attrs["enable_symbolic_tiling"] = is_symbolic_tiling(desc_d['op'])
attrs["process"] = desc_d["process"]
attrs = update_tuned_attrs(desc_d, attrs)
attrs = update_dynamic_batch_attrs(desc_d, attrs)
if desc_d["process"] == "cpu":
attrs["pack_matrix_b"] = False if should_enable_attr(desc_d, "pack_b") else True
return _update_compile_attr(desc_d, attrs)
def _get_online_tune_attr(desc_s, attrs, repo_path, use_new_space=True):
try:
import auto_tune
except ImportError:
raise ImportError("Import auto_tune fail, please install auto_tune using pip")
desc_d = json.loads(desc_s)
if "buffer_stitch" in desc_d:
best_config = auto_tune.tune_stitch_segment(desc_s,
repo_path=repo_path)
elif use_new_space:
task_options = auto_tune.TaskOptions(tune_level=attrs["online_tuning"],
use_new_space=use_new_space,
attrs=attrs,
generate_trait=generate_trait,
mode="online",
enable_transfer=True)
best_config = auto_tune.tune_composite_v2(desc_s,
task_options=task_options)
else:
from tests.prev_version_auto_tune.composite_tuner import tune_composite
best_config = tune_composite(desc_s,
tune_level=attrs["online_tuning"],
repo_path=repo_path,
skip_exist=True)
attrs.update(best_config)
pop_keys = ["online_tuning", "help_tiling", "tuning", "use_new_space"]
clean_attrs = {k: v for k, v in attrs.items() if k not in pop_keys}
return clean_attrs
def get_attr_from_dict(keys, repo, default=None):
"""
:param keys: [key1,key2,key3]
:param repo: {key1:{key2:{key3:attr}}}
:return: attr
"""
for key in keys:
repo = repo.get(key)
if not repo:
return default
return repo
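# --- Editorial addition: a hedged mini-example of the nested lookup above; keys are made up. ---
def _example_get_attr_from_dict():
    repo = {"2.Add12.2": {"4_4--": {"float32--": {"dim": "0 0 64 64"}}}}
    assert get_attr_from_dict(["2.Add12.2", "4_4--", "float32--", "dim"], repo) == "0 0 64 64"
    # Any missing key short-circuits to the default (None unless a default is given).
    assert get_attr_from_dict(["2.Add12.2", "missing", "dim"], repo, default={}) == {}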
def merge_attrs(attrs_a, attrs_b):
# merge an attribute from attrs_b into attrs_a only when it is not already set in attrs_a
attrs = attrs_a.copy()
for i in attrs_b:
if not attrs.get(i):
attrs[i] = attrs_b[i]
return attrs
def read_repo_file(repo_file):
if not os.path.exists(repo_file):
return {}
with open(repo_file, 'r') as f:
repo = json.loads(f.read())
return repo
def _get_default_repository_file(process):
filename = "repository.json" if process == "aicore" else "repository_%s.json" % process
# get the absolute path for a file in the current dir; the input is a file name like "a.json"
pwd = os.path.dirname(os.path.abspath(__file__))
path_str = pwd + "/" + filename
if not os.path.exists(path_str):
path_str = pwd + "/../config/" + filename
if not os.path.exists(path_str):
raise FileNotFoundError("Can not find {} in directory {} and {}".format(filename, pwd, pwd + "/../config"))
return path_str
def _get_repository(desc_d, attrs):
if os.getenv('MS_GRAPH_KERNEL_TILING'):
return read_repo_file(str(os.getenv('MS_GRAPH_KERNEL_TILING')))
if 'buffer_stitch' in desc_d and attrs.get("process") == 'cuda':
return {}
if "repository_path" in attrs:
filepath = os.path.join(os.path.realpath(attrs["repository_path"]), "repo_op_tiling.json")
if os.path.exists(filepath):
return read_repo_file(filepath)
process = attrs.get("process", "aicore")
return read_repo_file(_get_default_repository_file(process))
def _get_repo_attr(desc_d, compute, shape, dtype, repo, batchmatmul):
repo_attr = get_attr_from_dict([compute, shape, dtype, 'metadata', 'attrs'], repo, {})
if repo_attr and batchmatmul:
repo_attr = _set_tiling_attrs(desc_d['output_desc'][0]['shape'], repo_attr)
if not repo_attr:
repo_attr = get_attr_from_dict([compute, 'metadata', 'attrs'], repo, {})
return repo_attr
def _update_attrs_gpu(all_ops, attrs, poly):
if poly:
if any(i in all_ops for i in ['Argmax', 'Argmin']):
# disable auto_fuse and akg_reduce_lib for argmax and argmin
attrs["enable_akg_reduce_lib"] = False
attrs["enable_auto_fuse"] = False
elif "enable_akg_reduce_lib" not in attrs.keys():
attrs["enable_akg_reduce_lib"] = True
if "pragma_enable_matmul" not in attrs.keys() and any(
i in all_ops for i in ["BatchMatMul", "MatMul", "Conv2D"]):
attrs['pragma_enable_matmul'] = True
attrs['enable_auto_inline'] = False
if "pragma_enable_conv_tensor_core" not in attrs.keys() and "Conv2D" in all_ops:
attrs["pragma_enable_conv_tensor_core"] = True
attrs["enable_auto_fuse"] = False
# Disable general tot by default
enable_general_tot = False
if "has_tot_ops" not in attrs.keys() and any(i in all_ops for i in ["Gather", "TensorScatterAdd"]):
attrs["has_tot_ops"] = enable_general_tot
return attrs
def _update_attrs_cpu(all_ops, attrs, poly):
if not poly:
return attrs
if "pragma_enable_matmul" not in attrs.keys() and any(i in all_ops for i in ["BatchMatMul", "MatMul"]):
attrs['pragma_enable_matmul'] = True
attrs['enable_auto_inline'] = False
attrs['pragma_enable_schedule_maximize_coincidence'] = True
if any([i in all_ops for i in ["Conv2D"]]):
attrs["enable_auto_fuse"] = False
attrs["pragma_enable_conv2d_direct"] = True
if any([i in all_ops for i in ["Pool2D"]]):
attrs["enable_auto_fuse"] = False
if "feature" not in attrs.keys() and any([i in all_ops for i in ["BatchMatMul", "MatMul"]]):
attrs["feature"] = "avx"
return attrs
def _update_attrs_ascend(all_ops, attr):
attr["pragma_rmselfdep"] = all(i not in all_ops for i in ["BatchMatMul", "MatMul"])
# For MatMul/BatchMatMul with bias, inlining is necessary.
# For Ascend, turn 'enable_auto_inline' off for composite ops by default.
attr["enable_auto_inline"] = any(i in all_ops for i in ["BatchMatMul", "MatMul"])
attr["multicore_loop_switch_hoist"] = "UnsortedSegmentSum" not in all_ops
return attr
def _build_to_module(desc_s, desc_d, attrs=None, poly=True):
"""
build kernel with compute description in json format
Args:
desc_s : str of compute description
desc_d : dict of compute description
attrs : dict of build attributes
Returns:
Module.
"""
process = desc_d["process"]
file_name = "repository_" + process + ".json"
def _update_attr_by_repo(desc_s, attrs):
desc_d = json.loads(desc_s)
process = desc_d["process"]
attrs.update({"process": process})
repository = _get_repository(desc_d, attrs)
all_ops = set(op["name"] for op in desc_d["op_desc"])
if attrs is None:
attrs = {"dim": ""}
compute, shape, dtype = generate_trait(desc_d)
batchmatmul = "BatchMatMul" in all_ops
if batchmatmul:
shape = "any_shape"
repo_attr = _get_repo_attr(desc_d, compute, shape, dtype, repository, batchmatmul)
attrs = merge_attrs(attrs, repo_attr)
attr_list = ["dim", "bind_block", "bind_thread"] if process == "cuda" else ["dim"]
for item in attr_list:
if attrs.get(item) in (None, ""):
value = get_attr_from_dict([compute, shape, dtype, item], repository)
if value:
attrs[item] = value
if attrs.get("dim") in (None, "") and "online_tuning" in attrs:
attrs = _get_online_tune_attr(desc_s, attrs, _get_default_repository_file(process))
return desc_d, attrs
def _post_update_attr(desc_s, attrs, poly):
desc_d, attrs = _update_attr_by_repo(desc_s, attrs)
all_ops = set(op["name"] for op in desc_d["op_desc"])
if process == "cuda":
attrs = _update_attrs_gpu(all_ops, attrs, poly)
elif process == "cpu":
attrs = _update_attrs_cpu(all_ops, attrs, poly)
return attrs
def _common_postprocess(_, json_str_list, attrs_list, poly):
for i, (cur_json_str, cur_attr) in enumerate(zip(json_str_list, attrs_list)):
attrs_list[i] = _post_update_attr(cur_json_str, cur_attr, poly)
return json_str_list, attrs_list
def _get_stitch_repo(desc_d):
compute, shape, dtype = generate_trait(desc_d)
repo_attr = get_attr_from_dict([compute, shape, dtype], _get_repository(file_name, desc_d), {})
return repo_attr
def _stitch_postprocess(desc_d, json_str_list, attrs_list, _):
def _stitch_combine_attrs(common_attr, sub_attrs):
combine_attrs = []
for i, a in enumerate(sub_attrs):
new_sub_attrs = {}
for k, v in common_attr.items():
new_sub_attrs[k] = v
if a:
key = "sub_attr_" + str(i + 1)
new_sub_attrs[key] = {}
for k, v in a.items():
new_sub_attrs.get(key)[k] = v
combine_attrs.append(new_sub_attrs)
return combine_attrs
origin_stitch_attrs = attrs_list[0]
if origin_stitch_attrs.get("peeling") is None:
# Read buffer stitch attr from repo
stitch_repo = _get_stitch_repo(desc_d)
if stitch_repo.get("peeling") is not None:
origin_stitch_attrs.update(stitch_repo)
elif "online_tuning" in attrs:
# If buffer stitch attr not in repo, use online tuning
tuning_attr = _get_online_tune_attr(json.dumps(desc_d), origin_stitch_attrs,
_get_default_repository_file(process))
origin_stitch_attrs.update(tuning_attr)
# Update sub json attr
common_attr, stitch_sub_attrs = split_stitch_attr(origin_stitch_attrs, len(json_str_list))
# common_attr.update({'peeling': '0 1', 'fold_dim': False})
for i, cur_attr in enumerate(stitch_sub_attrs):
stitch_sub_attrs[i] = _post_update_attr(json.dumps(desc_d), cur_attr, poly)
stitch_attrs = _stitch_combine_attrs(common_attr, stitch_sub_attrs)
return json_str_list, stitch_attrs
post_funcs = {
ConstructType.PARALLEL: _common_postprocess,
ConstructType.STITCH: _stitch_postprocess,
ConstructType.NORMAL: _common_postprocess,
ConstructType.TOT: _common_postprocess,
ConstructType.CONCAT: _common_postprocess
}
segment_tree, segment_infos = get_construct_args(desc_s, attrs, post_funcs)
process = desc_d["process"]
func = tvm.get_global_func("lower_composite_to_module")
if "ret_mode" in attrs and poly:
return _build_for_tuning(attrs, func, process, segment_tree, segment_infos)
return func(process, poly, segment_tree, segment_infos)
def _build_to_module_ascend(desc_s_in, desc_d_in, attr, use_repo=True):
"""
build kernel with compute description in json format
Args:
desc_s_in : str of compute description
desc_d_in : dict of compute description
attr : dict of build attributes
Returns:
Module.
"""
repository = _get_repository(desc_d_in, attr)
def _update_attr_by_repo(desc_s, desc_d, attr, given_attrs=None, support_online_tuning=True):
def _auto_set_single_block(desc_d, attr):
if not attr.get("enable_multicore", None) and desc_d.get("extra", None):
if desc_d["extra"].get("BlockMode", "") == "single_block":
attr["enable_multicore"] = 0
return attr
if attr is None:
attr = {'dim': ''}
all_ops = set(op['name'] for op in desc_d['op_desc'])
attr = _update_attrs_ascend(all_ops, attr)
attr = _auto_set_single_block(desc_d, attr)
if given_attrs is not None:
for key, value in given_attrs.items():
if not attr.get(key):
attr[key] = value
elif use_repo:
compute, shape, dtype = generate_trait(desc_d)
repo_attr = _get_repo_attr(desc_d, compute, shape, dtype, repository, False)
attr = merge_attrs(attr, repo_attr)
if attr.get('dim') in (None, ''):
tiling = get_attr_from_dict([compute, shape, dtype, 'dim'], repository)
if tiling:
attr['dim'] = tiling
elif support_online_tuning and 'online_tuning' in attr:
attr = _get_online_tune_attr(desc_s, attr, _get_default_repository_file("aicore"))
_, desc_s = _set_compute_attrs(desc_d, attr)
return desc_s, attr
def _get_parallel_repo(desc_d):
compute, shape, dtype = generate_trait(desc_d)
repo_attr = get_attr_from_dict([compute, shape, dtype, 'BlockPlan'], repository, {})
return repo_attr
def _get_stitch_repo(desc_d):
compute, shape, dtype = generate_trait(desc_d)
repo_attr = get_attr_from_dict([compute, shape, dtype], repository, {})
return repo_attr
def _parallel_postprocess(desc_d, json_str_list, attrs_list, _):
parallel_repo = _get_parallel_repo(desc_d)
if parallel_repo:
# "BlockPlan" should be: [{"block_plan": x1, attr1: x2, attr2: x3}, ...]
for i, [cur_json, cur_attr, cur_plan] in enumerate(zip(json_str_list, attrs_list, parallel_repo)):
# When BlockPlan is active, the body should be run as a single block
cur_attr["enable_multicore"] = 0
json_str_list[i], attrs_list[i] = _update_attr_by_repo(cur_json, json.loads(cur_json), cur_attr,
cur_plan[ConstructKey.ATTRS], False)
else:
for i, [cur_json, cur_attr] in enumerate(zip(json_str_list, attrs_list)):
json_str_list[i], attrs_list[i] = _update_attr_by_repo(
cur_json, json.loads(cur_json), cur_attr, None, False)
return json_str_list, attrs_list
def _stitch_postprocess(desc_d, stitch_jsons, attrs_list, _):
def _stitch_combine_attrs(common_attr, sub_attrs):
combine_attrs = []
for i, a in enumerate(sub_attrs):
new_sub_attrs = {}
for k, v in common_attr.items():
new_sub_attrs[k] = v
if a:
key = "sub_attr_" + str(i + 1)
new_sub_attrs[key] = {}
for k, v in a.items():
new_sub_attrs.get(key)[k] = v
combine_attrs.append(new_sub_attrs)
return combine_attrs
origin_stitch_attrs = attrs_list[0]
if origin_stitch_attrs.get("peeling") is None:
# Read buffer stitch attr from repo
stitch_repo = _get_stitch_repo(desc_d)
if stitch_repo.get("peeling") is not None:
origin_stitch_attrs.update(stitch_repo)
elif "online_tuning" in attr:
# If buffer stitch attr not in repo, use online tuning
tuning_attr = _get_online_tune_attr(json.dumps(desc_d), origin_stitch_attrs,
_get_default_repository_file("aicore"))
origin_stitch_attrs.update(tuning_attr)
# Update sub json attr
common_attr, stitch_sub_attrs = split_stitch_attr(origin_stitch_attrs, len(stitch_jsons))
for i, cur_json_str in enumerate(stitch_jsons):
stitch_jsons[i], stitch_sub_attrs[i] = _update_attr_by_repo(
cur_json_str, json.loads(cur_json_str), stitch_sub_attrs[i], {})
stitch_attrs = _stitch_combine_attrs(common_attr, stitch_sub_attrs)
return stitch_jsons, stitch_attrs
def _normal_postprocess(desc_d, json_str_list, attrs_list, poly):
_ = (desc_d, poly) # For unused warning...
for i, (cur_json_str, cur_attr) in enumerate(zip(json_str_list, attrs_list)):
json_str_list[i], attrs_list[i] = _update_attr_by_repo(
cur_json_str, json.loads(cur_json_str), cur_attr)
return json_str_list, attrs_list
post_funcs = {
ConstructType.PARALLEL: _parallel_postprocess,
ConstructType.STITCH: _stitch_postprocess,
ConstructType.NORMAL: _normal_postprocess,
}
segment_tree, segment_infos = get_construct_args(desc_s_in, attr, post_funcs)
process = desc_d_in["process"]
func = tvm.get_global_func("lower_composite_to_module")
if "ret_mode" in attr:
return _build_for_tuning(attr, func, process, segment_tree, segment_infos)
return func(process, True, segment_tree, segment_infos)
def _set_backend(desc_d):
desc_d_process = desc_d
for i, op in enumerate(desc_d.get("op_desc")):
op_attrs = op.get("attr", [])
op_name = op.get("name", "")
if op_name != "UnsortedSegmentSum":
continue
op_attrs.append({'data_type': 'string', 'name': 'process', 'value': desc_d['process']})
op["attr"] = op_attrs
desc_d_process["op_desc"][i] = op
desc_s = json.dumps(desc_d_process)
return desc_s
def build(kernel_desc, attrs=None, poly=True, use_repo=True):
"""
build kernel with compute description in json format
Args:
kernel_desc : str or dict of compute description
attrs : dict of build attributes
Returns:
Module.
"""
if isinstance(kernel_desc, str):
desc_d = json.loads(kernel_desc)
else:
if not isinstance(kernel_desc, dict):
raise TypeError("kernel_desc should be a dict, but get a {}".format(type(kernel_desc)))
desc_d = kernel_desc
from akg.ms.info_version_adapt import InfoVersionAdapt
info_adapter = InfoVersionAdapt(desc_d)
ret = info_adapter.run()
if not ret:
raise RuntimeError(info_adapter.msg)
desc_s = _set_backend(desc_d)
if attrs is None:
attrs = dict()
backend = desc_d['process']
attrs = _set_attrs(desc_d, attrs, poly)
if backend == 'aicore':
return _build_to_module_ascend(desc_s, desc_d, attrs, use_repo)
else:
return _build_to_module(desc_s, desc_d, attrs, poly)
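# --- Editorial addition: a hedged sketch of calling build() with a serialized kernel description. ---
# The file path is a placeholder; the JSON is expected to be the composite-kernel description
# (with 'process', 'op', 'input_desc', 'op_desc', 'output_desc', ...) produced by the
# graph-kernel pipeline, not something hand-written for this sketch.
def _example_build_from_file(json_path="kernel_desc.json"):
    with open(json_path, 'r') as f:
        kernel_desc = f.read()
    # Defaults: polyhedral scheduling on, repository-provided tiling attributes allowed.
    return build(kernel_desc, attrs=None, poly=True, use_repo=True)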
def get_tiling_space(kernel_desc, level=1, attr=None):
"""
get tiling space of composite kernel
Args:
kernel_desc : str of compute description
level : info level
attr : dict of build attributes
Returns:
Module.
"""
if attr is None:
attr = {}
attr['help_tiling'] = level
attr['tuning'] = 'on'
desc_d = json.loads(kernel_desc)
backend = desc_d['process']
all_ops = set(op['name'] for op in desc_d['op_desc'])
if backend == "cuda":
attr = _update_attrs_gpu(all_ops, attr, True)
elif backend == "cpu":
attr = _update_attrs_cpu(all_ops, attr, True)
else:
attr = _update_attrs_ascend(all_ops, attr)
segment_tree, segment_infos = get_tune_construct_args(kernel_desc, attr)
tune_composite = tvm.get_global_func("tune_composite")
ret = tune_composite(backend, True, segment_tree, segment_infos)
spaces = {}
if attr.get("use_new_space", False):
spaces['tune_space'] = ret
else:
spaces['index'] = ret.index_table.asnumpy().tolist()
spaces['c1_range'] = ret.c1_tile_range_table.asnumpy().tolist()
spaces['c0_range'] = ret.c0_tile_range_table.asnumpy().tolist()
spaces['c1_mod'] = ret.c1_tile_mod_table.asnumpy().tolist()
spaces['c0_mod'] = ret.c0_tile_mod_table.asnumpy().tolist()
if level >= 2:
spaces['tuning_space'] = ret.tiling_candidate.asnumpy().tolist()
return spaces
|
PypiClean
|
/nni-3.0rc1-py3-none-macosx_10_9_x86_64.whl/nni_node/node_modules/yargs/build/lib/command.js
|
import { assertNotStrictEqual, } from './typings/common-types.js';
import { isPromise } from './utils/is-promise.js';
import { applyMiddleware, commandMiddlewareFactory, } from './middleware.js';
import { parseCommand } from './parse-command.js';
import { isYargsInstance, } from './yargs-factory.js';
import { maybeAsyncResult } from './utils/maybe-async-result.js';
import whichModule from './utils/which-module.js';
const DEFAULT_MARKER = /(^\*)|(^\$0)/;
export class CommandInstance {
constructor(usage, validation, globalMiddleware, shim) {
this.requireCache = new Set();
this.handlers = {};
this.aliasMap = {};
this.frozens = [];
this.shim = shim;
this.usage = usage;
this.globalMiddleware = globalMiddleware;
this.validation = validation;
}
addDirectory(dir, req, callerFile, opts) {
opts = opts || {};
if (typeof opts.recurse !== 'boolean')
opts.recurse = false;
if (!Array.isArray(opts.extensions))
opts.extensions = ['js'];
const parentVisit = typeof opts.visit === 'function' ? opts.visit : (o) => o;
opts.visit = (obj, joined, filename) => {
const visited = parentVisit(obj, joined, filename);
if (visited) {
if (this.requireCache.has(joined))
return visited;
else
this.requireCache.add(joined);
this.addHandler(visited);
}
return visited;
};
this.shim.requireDirectory({ require: req, filename: callerFile }, dir, opts);
}
addHandler(cmd, description, builder, handler, commandMiddleware, deprecated) {
let aliases = [];
const middlewares = commandMiddlewareFactory(commandMiddleware);
handler = handler || (() => { });
if (Array.isArray(cmd)) {
if (isCommandAndAliases(cmd)) {
[cmd, ...aliases] = cmd;
}
else {
for (const command of cmd) {
this.addHandler(command);
}
}
}
else if (isCommandHandlerDefinition(cmd)) {
let command = Array.isArray(cmd.command) || typeof cmd.command === 'string'
? cmd.command
: this.moduleName(cmd);
if (cmd.aliases)
command = [].concat(command).concat(cmd.aliases);
this.addHandler(command, this.extractDesc(cmd), cmd.builder, cmd.handler, cmd.middlewares, cmd.deprecated);
return;
}
else if (isCommandBuilderDefinition(builder)) {
this.addHandler([cmd].concat(aliases), description, builder.builder, builder.handler, builder.middlewares, builder.deprecated);
return;
}
if (typeof cmd === 'string') {
const parsedCommand = parseCommand(cmd);
aliases = aliases.map(alias => parseCommand(alias).cmd);
let isDefault = false;
const parsedAliases = [parsedCommand.cmd].concat(aliases).filter(c => {
if (DEFAULT_MARKER.test(c)) {
isDefault = true;
return false;
}
return true;
});
if (parsedAliases.length === 0 && isDefault)
parsedAliases.push('$0');
if (isDefault) {
parsedCommand.cmd = parsedAliases[0];
aliases = parsedAliases.slice(1);
cmd = cmd.replace(DEFAULT_MARKER, parsedCommand.cmd);
}
aliases.forEach(alias => {
this.aliasMap[alias] = parsedCommand.cmd;
});
if (description !== false) {
this.usage.command(cmd, description, isDefault, aliases, deprecated);
}
this.handlers[parsedCommand.cmd] = {
original: cmd,
description,
handler,
builder: builder || {},
middlewares,
deprecated,
demanded: parsedCommand.demanded,
optional: parsedCommand.optional,
};
if (isDefault)
this.defaultCommand = this.handlers[parsedCommand.cmd];
}
}
getCommandHandlers() {
return this.handlers;
}
getCommands() {
return Object.keys(this.handlers).concat(Object.keys(this.aliasMap));
}
hasDefaultCommand() {
return !!this.defaultCommand;
}
runCommand(command, yargs, parsed, commandIndex, helpOnly, helpOrVersionSet) {
const commandHandler = this.handlers[command] ||
this.handlers[this.aliasMap[command]] ||
this.defaultCommand;
const currentContext = yargs.getInternalMethods().getContext();
const parentCommands = currentContext.commands.slice();
const isDefaultCommand = !command;
if (command) {
currentContext.commands.push(command);
currentContext.fullCommands.push(commandHandler.original);
}
const builderResult = this.applyBuilderUpdateUsageAndParse(isDefaultCommand, commandHandler, yargs, parsed.aliases, parentCommands, commandIndex, helpOnly, helpOrVersionSet);
return isPromise(builderResult)
? builderResult.then(result => this.applyMiddlewareAndGetResult(isDefaultCommand, commandHandler, result.innerArgv, currentContext, helpOnly, result.aliases, yargs))
: this.applyMiddlewareAndGetResult(isDefaultCommand, commandHandler, builderResult.innerArgv, currentContext, helpOnly, builderResult.aliases, yargs);
}
applyBuilderUpdateUsageAndParse(isDefaultCommand, commandHandler, yargs, aliases, parentCommands, commandIndex, helpOnly, helpOrVersionSet) {
const builder = commandHandler.builder;
let innerYargs = yargs;
if (isCommandBuilderCallback(builder)) {
yargs.getInternalMethods().getUsageInstance().freeze();
const builderOutput = builder(yargs.getInternalMethods().reset(aliases), helpOrVersionSet);
if (isPromise(builderOutput)) {
return builderOutput.then(output => {
innerYargs = isYargsInstance(output) ? output : yargs;
return this.parseAndUpdateUsage(isDefaultCommand, commandHandler, innerYargs, parentCommands, commandIndex, helpOnly);
});
}
}
else if (isCommandBuilderOptionDefinitions(builder)) {
yargs.getInternalMethods().getUsageInstance().freeze();
innerYargs = yargs.getInternalMethods().reset(aliases);
Object.keys(commandHandler.builder).forEach(key => {
innerYargs.option(key, builder[key]);
});
}
return this.parseAndUpdateUsage(isDefaultCommand, commandHandler, innerYargs, parentCommands, commandIndex, helpOnly);
}
parseAndUpdateUsage(isDefaultCommand, commandHandler, innerYargs, parentCommands, commandIndex, helpOnly) {
if (isDefaultCommand)
innerYargs.getInternalMethods().getUsageInstance().unfreeze(true);
if (this.shouldUpdateUsage(innerYargs)) {
innerYargs
.getInternalMethods()
.getUsageInstance()
.usage(this.usageFromParentCommandsCommandHandler(parentCommands, commandHandler), commandHandler.description);
}
const innerArgv = innerYargs
.getInternalMethods()
.runYargsParserAndExecuteCommands(null, undefined, true, commandIndex, helpOnly);
return isPromise(innerArgv)
? innerArgv.then(argv => ({
aliases: innerYargs.parsed.aliases,
innerArgv: argv,
}))
: {
aliases: innerYargs.parsed.aliases,
innerArgv: innerArgv,
};
}
shouldUpdateUsage(yargs) {
return (!yargs.getInternalMethods().getUsageInstance().getUsageDisabled() &&
yargs.getInternalMethods().getUsageInstance().getUsage().length === 0);
}
usageFromParentCommandsCommandHandler(parentCommands, commandHandler) {
const c = DEFAULT_MARKER.test(commandHandler.original)
? commandHandler.original.replace(DEFAULT_MARKER, '').trim()
: commandHandler.original;
const pc = parentCommands.filter(c => {
return !DEFAULT_MARKER.test(c);
});
pc.push(c);
return `$0 ${pc.join(' ')}`;
}
handleValidationAndGetResult(isDefaultCommand, commandHandler, innerArgv, currentContext, aliases, yargs, middlewares, positionalMap) {
if (!yargs.getInternalMethods().getHasOutput()) {
const validation = yargs
.getInternalMethods()
.runValidation(aliases, positionalMap, yargs.parsed.error, isDefaultCommand);
innerArgv = maybeAsyncResult(innerArgv, result => {
validation(result);
return result;
});
}
if (commandHandler.handler && !yargs.getInternalMethods().getHasOutput()) {
yargs.getInternalMethods().setHasOutput();
const populateDoubleDash = !!yargs.getOptions().configuration['populate--'];
yargs
.getInternalMethods()
.postProcess(innerArgv, populateDoubleDash, false, false);
innerArgv = applyMiddleware(innerArgv, yargs, middlewares, false);
innerArgv = maybeAsyncResult(innerArgv, result => {
const handlerResult = commandHandler.handler(result);
return isPromise(handlerResult)
? handlerResult.then(() => result)
: result;
});
if (!isDefaultCommand) {
yargs.getInternalMethods().getUsageInstance().cacheHelpMessage();
}
if (isPromise(innerArgv) &&
!yargs.getInternalMethods().hasParseCallback()) {
innerArgv.catch(error => {
try {
yargs.getInternalMethods().getUsageInstance().fail(null, error);
}
catch (_err) {
}
});
}
}
if (!isDefaultCommand) {
currentContext.commands.pop();
currentContext.fullCommands.pop();
}
return innerArgv;
}
applyMiddlewareAndGetResult(isDefaultCommand, commandHandler, innerArgv, currentContext, helpOnly, aliases, yargs) {
let positionalMap = {};
if (helpOnly)
return innerArgv;
if (!yargs.getInternalMethods().getHasOutput()) {
positionalMap = this.populatePositionals(commandHandler, innerArgv, currentContext, yargs);
}
const middlewares = this.globalMiddleware
.getMiddleware()
.slice(0)
.concat(commandHandler.middlewares);
const maybePromiseArgv = applyMiddleware(innerArgv, yargs, middlewares, true);
return isPromise(maybePromiseArgv)
? maybePromiseArgv.then(resolvedInnerArgv => this.handleValidationAndGetResult(isDefaultCommand, commandHandler, resolvedInnerArgv, currentContext, aliases, yargs, middlewares, positionalMap))
: this.handleValidationAndGetResult(isDefaultCommand, commandHandler, maybePromiseArgv, currentContext, aliases, yargs, middlewares, positionalMap);
}
populatePositionals(commandHandler, argv, context, yargs) {
argv._ = argv._.slice(context.commands.length);
const demanded = commandHandler.demanded.slice(0);
const optional = commandHandler.optional.slice(0);
const positionalMap = {};
this.validation.positionalCount(demanded.length, argv._.length);
while (demanded.length) {
const demand = demanded.shift();
this.populatePositional(demand, argv, positionalMap);
}
while (optional.length) {
const maybe = optional.shift();
this.populatePositional(maybe, argv, positionalMap);
}
argv._ = context.commands.concat(argv._.map(a => '' + a));
this.postProcessPositionals(argv, positionalMap, this.cmdToParseOptions(commandHandler.original), yargs);
return positionalMap;
}
populatePositional(positional, argv, positionalMap) {
const cmd = positional.cmd[0];
if (positional.variadic) {
positionalMap[cmd] = argv._.splice(0).map(String);
}
else {
if (argv._.length)
positionalMap[cmd] = [String(argv._.shift())];
}
}
cmdToParseOptions(cmdString) {
const parseOptions = {
array: [],
default: {},
alias: {},
demand: {},
};
const parsed = parseCommand(cmdString);
parsed.demanded.forEach(d => {
const [cmd, ...aliases] = d.cmd;
if (d.variadic) {
parseOptions.array.push(cmd);
parseOptions.default[cmd] = [];
}
parseOptions.alias[cmd] = aliases;
parseOptions.demand[cmd] = true;
});
parsed.optional.forEach(o => {
const [cmd, ...aliases] = o.cmd;
if (o.variadic) {
parseOptions.array.push(cmd);
parseOptions.default[cmd] = [];
}
parseOptions.alias[cmd] = aliases;
});
return parseOptions;
}
postProcessPositionals(argv, positionalMap, parseOptions, yargs) {
const options = Object.assign({}, yargs.getOptions());
options.default = Object.assign(parseOptions.default, options.default);
for (const key of Object.keys(parseOptions.alias)) {
options.alias[key] = (options.alias[key] || []).concat(parseOptions.alias[key]);
}
options.array = options.array.concat(parseOptions.array);
options.config = {};
const unparsed = [];
Object.keys(positionalMap).forEach(key => {
positionalMap[key].map(value => {
if (options.configuration['unknown-options-as-args'])
options.key[key] = true;
unparsed.push(`--${key}`);
unparsed.push(value);
});
});
if (!unparsed.length)
return;
const config = Object.assign({}, options.configuration, {
'populate--': false,
});
const parsed = this.shim.Parser.detailed(unparsed, Object.assign({}, options, {
configuration: config,
}));
if (parsed.error) {
yargs
.getInternalMethods()
.getUsageInstance()
.fail(parsed.error.message, parsed.error);
}
else {
const positionalKeys = Object.keys(positionalMap);
Object.keys(positionalMap).forEach(key => {
positionalKeys.push(...parsed.aliases[key]);
});
Object.keys(parsed.argv).forEach(key => {
if (positionalKeys.includes(key)) {
if (!positionalMap[key])
positionalMap[key] = parsed.argv[key];
if (!this.isInConfigs(yargs, key) &&
!this.isDefaulted(yargs, key) &&
Object.prototype.hasOwnProperty.call(argv, key) &&
Object.prototype.hasOwnProperty.call(parsed.argv, key) &&
(Array.isArray(argv[key]) || Array.isArray(parsed.argv[key]))) {
argv[key] = [].concat(argv[key], parsed.argv[key]);
}
else {
argv[key] = parsed.argv[key];
}
}
});
}
}
isDefaulted(yargs, key) {
const { default: defaults } = yargs.getOptions();
return (Object.prototype.hasOwnProperty.call(defaults, key) ||
Object.prototype.hasOwnProperty.call(defaults, this.shim.Parser.camelCase(key)));
}
isInConfigs(yargs, key) {
const { configObjects } = yargs.getOptions();
return (configObjects.some(c => Object.prototype.hasOwnProperty.call(c, key)) ||
configObjects.some(c => Object.prototype.hasOwnProperty.call(c, this.shim.Parser.camelCase(key))));
}
runDefaultBuilderOn(yargs) {
if (!this.defaultCommand)
return;
if (this.shouldUpdateUsage(yargs)) {
const commandString = DEFAULT_MARKER.test(this.defaultCommand.original)
? this.defaultCommand.original
: this.defaultCommand.original.replace(/^[^[\]<>]*/, '$0 ');
yargs
.getInternalMethods()
.getUsageInstance()
.usage(commandString, this.defaultCommand.description);
}
const builder = this.defaultCommand.builder;
if (isCommandBuilderCallback(builder)) {
return builder(yargs, true);
}
else if (!isCommandBuilderDefinition(builder)) {
Object.keys(builder).forEach(key => {
yargs.option(key, builder[key]);
});
}
return undefined;
}
moduleName(obj) {
const mod = whichModule(obj);
if (!mod)
throw new Error(`No command name given for module: ${this.shim.inspect(obj)}`);
return this.commandFromFilename(mod.filename);
}
commandFromFilename(filename) {
return this.shim.path.basename(filename, this.shim.path.extname(filename));
}
extractDesc({ describe, description, desc }) {
for (const test of [describe, description, desc]) {
if (typeof test === 'string' || test === false)
return test;
assertNotStrictEqual(test, true, this.shim);
}
return false;
}
freeze() {
this.frozens.push({
handlers: this.handlers,
aliasMap: this.aliasMap,
defaultCommand: this.defaultCommand,
});
}
unfreeze() {
const frozen = this.frozens.pop();
assertNotStrictEqual(frozen, undefined, this.shim);
({
handlers: this.handlers,
aliasMap: this.aliasMap,
defaultCommand: this.defaultCommand,
} = frozen);
}
reset() {
this.handlers = {};
this.aliasMap = {};
this.defaultCommand = undefined;
this.requireCache = new Set();
return this;
}
}
export function command(usage, validation, globalMiddleware, shim) {
return new CommandInstance(usage, validation, globalMiddleware, shim);
}
export function isCommandBuilderDefinition(builder) {
return (typeof builder === 'object' &&
!!builder.builder &&
typeof builder.handler === 'function');
}
function isCommandAndAliases(cmd) {
return cmd.every(c => typeof c === 'string');
}
export function isCommandBuilderCallback(builder) {
return typeof builder === 'function';
}
function isCommandBuilderOptionDefinitions(builder) {
return typeof builder === 'object';
}
export function isCommandHandlerDefinition(cmd) {
return typeof cmd === 'object' && !Array.isArray(cmd);
}
|
PypiClean
|
/py-pure-client-1.38.0.tar.gz/py-pure-client-1.38.0/pypureclient/flasharray/FA_2_4/models/software_upgrade_plan.py
|
import pprint
import re
import six
import typing
from ....properties import Property
if typing.TYPE_CHECKING:
from pypureclient.flasharray.FA_2_4 import models
class SoftwareUpgradePlan(object):
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'step_name': 'str',
'description': 'str',
'hop_version': 'str'
}
attribute_map = {
'step_name': 'step_name',
'description': 'description',
'hop_version': 'hop_version'
}
required_args = {
}
def __init__(
self,
step_name=None, # type: str
description=None, # type: str
hop_version=None, # type: str
):
"""
Keyword args:
step_name (str): Name of the upgrade step.
description (str): Description of the upgrade step.
hop_version (str): The version to which the step is upgrading.
"""
if step_name is not None:
self.step_name = step_name
if description is not None:
self.description = description
if hop_version is not None:
self.hop_version = hop_version
def __setattr__(self, key, value):
if key not in self.attribute_map:
raise KeyError("Invalid key `{}` for `SoftwareUpgradePlan`".format(key))
self.__dict__[key] = value
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if isinstance(value, Property):
raise AttributeError
else:
return value
def __getitem__(self, key):
if key not in self.attribute_map:
raise KeyError("Invalid key `{}` for `SoftwareUpgradePlan`".format(key))
return object.__getattribute__(self, key)
def __setitem__(self, key, value):
if key not in self.attribute_map:
raise KeyError("Invalid key `{}` for `SoftwareUpgradePlan`".format(key))
object.__setattr__(self, key, value)
def __delitem__(self, key):
if key not in self.attribute_map:
raise KeyError("Invalid key `{}` for `SoftwareUpgradePlan`".format(key))
object.__delattr__(self, key)
def keys(self):
return self.attribute_map.keys()
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
if hasattr(self, attr):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(SoftwareUpgradePlan, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, SoftwareUpgradePlan):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
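# A minimal usage sketch (illustrative values only; in practice instances are
# populated from REST responses by the generated client code):
#
#   step = SoftwareUpgradePlan(step_name="pre-check",
#                              description="Verify array health before upgrading",
#                              hop_version="6.1.0")
#   print(step.to_dict())        # {'step_name': 'pre-check', ...}
#   print(step["hop_version"])   # dict-style access is also supported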
|
PypiClean
|
/biobb_vs-4.0.0.tar.gz/biobb_vs-4.0.0/biobb_vs/utils/box.py
|
"""Module containing the Box class and the command line interface."""
import argparse
from biobb_common.generic.biobb_object import BiobbObject
from biobb_common.configuration import settings
from biobb_common.tools import file_utils as fu
from biobb_common.tools.file_utils import launchlogger
from biobb_vs.utils.common import *
class Box(BiobbObject):
"""
| biobb_vs Box
| This class sets the center and the size of a rectangular parallelepiped box around a set of residues or a pocket.
| Sets the center and the size of a rectangular parallelepiped box around a set of residues from a given PDB or a pocket from a given PQR.
Args:
input_pdb_path (str): PDB file containing a selection of residue numbers or PQR file containing the pocket. File type: input. `Sample file <https://github.com/bioexcel/biobb_vs/raw/master/biobb_vs/test/data/utils/input_box.pqr>`_. Accepted formats: pdb (edam:format_1476), pqr (edam:format_1476).
output_pdb_path (str): PDB including the annotation of the box center and size as REMARKs. File type: output. `Sample file <https://github.com/bioexcel/biobb_vs/raw/master/biobb_vs/test/reference/utils/ref_output_box.pdb>`_. Accepted formats: pdb (edam:format_1476).
properties (dic - Python dictionary object containing the tool parameters, not input/output files):
* **offset** (*float*) - (2.0) [0.1~1000|0.1] Extra distance (Angstroms) between the last residue atom and the box boundary.
* **box_coordinates** (*bool*) - (False) Add box coordinates as 8 ATOM records.
* **remove_tmp** (*bool*) - (True) [WF property] Remove temporal files.
* **restart** (*bool*) - (False) [WF property] Do not execute if output files exist.
Examples:
This is a use example of how to use the building block from Python::
from biobb_vs.utils.box import box
prop = {
'offset': 2,
'box_coordinates': True
}
box(input_pdb_path='/path/to/myPocket.pqr',
output_pdb_path='/path/to/newBox.pdb',
properties=prop)
Info:
* wrapped_software:
* name: In house
* license: Apache-2.0
* ontology:
* name: EDAM
* schema: http://edamontology.org/EDAM.owl
"""
def __init__(self, input_pdb_path, output_pdb_path,
properties=None, **kwargs) -> None:
properties = properties or {}
# Call parent class constructor
super().__init__(properties)
self.locals_var_dict = locals().copy()
# Input/Output files
self.io_dict = {
"in": { "input_pdb_path": input_pdb_path },
"out": { "output_pdb_path": output_pdb_path }
}
# Properties specific for BB
self.offset = float(properties.get('offset', 2.0))
        self.box_coordinates = bool(properties.get('box_coordinates', False))
self.properties = properties
# Check the properties
self.check_properties(properties)
self.check_arguments()
def check_data_params(self, out_log, err_log):
""" Checks all the input/output paths and parameters """
self.io_dict["in"]["input_pdb_path"] = check_input_path(self.io_dict["in"]["input_pdb_path"],"input_pdb_path", self.out_log, self.__class__.__name__)
self.io_dict["out"]["output_pdb_path"] = check_output_path(self.io_dict["out"]["output_pdb_path"],"output_pdb_path", False, self.out_log, self.__class__.__name__)
@launchlogger
def launch(self) -> int:
"""Execute the :class:`Box <utils.box.Box>` utils.box.Box object."""
# check input/output paths and parameters
self.check_data_params(self.out_log, self.err_log)
# Setup Biobb
if self.check_restart(): return 0
self.stage_files()
# check if cavity (pdb) or pocket (pqr)
input_type = PurePath(self.io_dict["in"]["input_pdb_path"]).suffix.lstrip('.')
if input_type == 'pdb':
fu.log('Loading residue PDB selection from %s' % (self.io_dict["in"]["input_pdb_path"]), self.out_log, self.global_log)
else:
fu.log('Loading pocket PQR selection from %s' % (self.io_dict["in"]["input_pdb_path"]), self.out_log, self.global_log)
# get input_pdb_path atoms coordinates
selection_atoms_num = 0
x_coordslist = []
y_coordslist = []
z_coordslist = []
with open(self.io_dict["in"]["input_pdb_path"]) as infile:
for line in infile:
if line.startswith("HETATM") or line.startswith("ATOM"):
x_coordslist.append(float(line[31:38].strip()))
y_coordslist.append(float(line[39:46].strip()))
z_coordslist.append(float(line[47:54].strip()))
selection_atoms_num = selection_atoms_num + 1
## Compute binding site box size
# compute box center
selection_box_center = [np.average(x_coordslist), np.average(y_coordslist), np.average(z_coordslist)]
fu.log('Binding site center (Angstroms): %10.3f%10.3f%10.3f' % (selection_box_center[0],selection_box_center[1],selection_box_center[2]), self.out_log, self.global_log)
# compute box size
selection_coords_max = np.amax([x_coordslist, y_coordslist, z_coordslist],axis=1)
selection_box_size = selection_coords_max - selection_box_center
if self.offset:
fu.log('Adding %.1f Angstroms offset' % (self.offset), self.out_log, self.global_log)
selection_box_size = [c + self.offset for c in selection_box_size]
fu.log('Binding site size (Angstroms): %10.3f%10.3f%10.3f' % (selection_box_size[0],selection_box_size[1],selection_box_size[2]), self.out_log, self.global_log)
# compute volume
vol = np.prod(selection_box_size) * 2**3
fu.log('Volume (cubic Angstroms): %.0f' % (vol), self.out_log, self.global_log)
# add box details as PDB remarks
remarks = "REMARK BOX CENTER:%10.3f%10.3f%10.3f" % (selection_box_center[0],selection_box_center[1],selection_box_center[2])
remarks += " SIZE:%10.3f%10.3f%10.3f" % (selection_box_size[0],selection_box_size[1],selection_box_size[2])
selection_box_coords_txt = ""
# add (optional) box coordinates as 8 ATOM records
if self.box_coordinates:
fu.log('Adding box coordinates', self.out_log, self.global_log)
selection_box_coords_txt = get_box_coordinates(selection_box_center,selection_box_size)
with open(self.io_dict["out"]["output_pdb_path"], 'w') as f:
f.seek(0, 0)
f.write(remarks.rstrip('\r\n') + '\n' + selection_box_coords_txt)
fu.log('Saving output PDB file (with box setting annotations): %s' % (self.io_dict["out"]["output_pdb_path"]), self.out_log, self.global_log)
# Copy files to host
self.copy_to_host()
self.tmp_files.extend([
self.stage_io_dict.get("unique_dir")
])
self.remove_tmp_files()
self.check_arguments(output_files_created=True, raise_exception=False)
return 0
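# Worked example of the box computation above (hypothetical coordinates): with
# atoms at x = [1.0, 5.0], y = [2.0, 4.0], z = [0.0, 2.0] the center is
# (3.0, 3.0, 1.0) and the coordinate maxima are (5.0, 4.0, 2.0), so the
# half-size is (2.0, 1.0, 1.0); adding the default 2.0 Angstrom offset gives
# (4.0, 3.0, 3.0) and a volume of 4.0 * 3.0 * 3.0 * 8 = 288 cubic Angstroms.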
def box(input_pdb_path: str, output_pdb_path: str, properties: dict = None, **kwargs) -> int:
"""Execute the :class:`Box <utils.box.Box>` class and
execute the :meth:`launch() <utils.box.Box.launch>` method."""
return Box(input_pdb_path=input_pdb_path,
output_pdb_path=output_pdb_path,
properties=properties, **kwargs).launch()
def main():
"""Command line execution of this building block. Please check the command line documentation."""
parser = argparse.ArgumentParser(description="Sets the center and the size of a rectangular parallelepiped box around a set of residues from a given PDB or a pocket from a given PQR.", formatter_class=lambda prog: argparse.RawTextHelpFormatter(prog, width=99999))
parser.add_argument('--config', required=False, help='Configuration file')
# Specific args of each building block
required_args = parser.add_argument_group('required arguments')
required_args.add_argument('--input_pdb_path', required=True, help='PDB file containing a selection of residue numbers or PQR file containing the pocket. Accepted formats: pdb, pqr.')
required_args.add_argument('--output_pdb_path', required=True, help='PDB including the annotation of the box center and size as REMARKs. Accepted formats: pdb.')
args = parser.parse_args()
args.config = args.config or "{}"
properties = settings.ConfReader(config=args.config).get_prop_dic()
# Specific call of each building block
box(input_pdb_path=args.input_pdb_path,
output_pdb_path=args.output_pdb_path,
properties=properties)
if __name__ == '__main__':
main()
|
PypiClean
|
/tensorflow-gpu-macosx-1.8.1.tar.gz/tensorflow/contrib/input_pipeline/python/ops/input_pipeline_ops.py
|
"""Python wrapper for input_pipeline_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
from tensorflow.contrib.input_pipeline.ops import gen_input_pipeline_ops
from tensorflow.contrib.util import loader
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.platform import resource_loader
_input_pipeline_ops = loader.load_op_library(
resource_loader.get_path_to_datafile("_input_pipeline_ops.so"))
def obtain_next(string_list_tensor, counter):
"""Basic wrapper for the ObtainNextOp.
Args:
string_list_tensor: A tensor that is a list of strings
counter: an int64 ref tensor to keep track of which element is returned.
Returns:
An op that produces the element at counter + 1 in the list, round
robin style.
"""
return gen_input_pipeline_ops.obtain_next(string_list_tensor, counter)
def _maybe_randomize_list(string_list, shuffle):
if shuffle:
random.shuffle(string_list)
return string_list
def _create_list(string_list, shuffle, seed, num_epochs):
if shuffle and seed:
random.seed(seed)
expanded_list = _maybe_randomize_list(string_list, shuffle)[:]
if num_epochs:
for _ in range(num_epochs - 1):
expanded_list.extend(_maybe_randomize_list(string_list, shuffle))
return expanded_list
def seek_next(string_list, shuffle=False, seed=None, num_epochs=None):
"""Returns an op that seeks the next element in a list of strings.
Seeking happens in a round robin fashion. This op creates a variable called
obtain_next_counter that is initialized to -1 and is used to keep track of
which element in the list was returned, and a variable
obtain_next_expanded_list to hold the list. If num_epochs is not None, then we
limit the number of times we go around the string_list before OutOfRangeError
is thrown. It creates a variable to keep track of this.
Args:
string_list: A list of strings.
shuffle: If true, we shuffle the string_list differently for each epoch.
seed: Seed used for shuffling.
    num_epochs: Raises OutOfRangeError once string_list has been repeated
      num_epochs times. If unspecified then keeps on looping.
Returns:
An op that produces the next element in the provided list.
"""
expanded_list = _create_list(string_list, shuffle, seed, num_epochs)
with variable_scope.variable_scope("obtain_next"):
counter = variable_scope.get_variable(
name="obtain_next_counter",
initializer=constant_op.constant(
-1, dtype=dtypes.int64),
dtype=dtypes.int64,
trainable=False)
with ops.colocate_with(counter):
string_tensor = variable_scope.get_variable(
name="obtain_next_expanded_list",
initializer=constant_op.constant(expanded_list),
dtype=dtypes.string,
trainable=False)
if num_epochs:
filename_counter = variable_scope.get_variable(
name="obtain_next_filename_counter",
initializer=constant_op.constant(
0, dtype=dtypes.int64),
dtype=dtypes.int64,
trainable=False)
c = filename_counter.count_up_to(len(expanded_list))
with ops.control_dependencies([c]):
return obtain_next(string_tensor, counter)
else:
return obtain_next(string_tensor, counter)
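# A minimal usage sketch (graph-mode TF 1.x; the file names are hypothetical
# and `tf` stands for `import tensorflow as tf`, which this module does not
# import itself):
#
#   file_list = ["part-00000.tfrecord", "part-00001.tfrecord"]
#   next_file = seek_next(file_list, shuffle=True, seed=42, num_epochs=2)
#   with tf.Session() as sess:
#       sess.run(tf.global_variables_initializer())
#       sess.run(tf.local_variables_initializer())
#       try:
#           while True:
#               print(sess.run(next_file))
#       except tf.errors.OutOfRangeError:
#           pass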
|
PypiClean
|
/qcompute-qep-1.1.0.tar.gz/qcompute-qep-1.1.0/qcompute_qep/tomography/utils.py
|
import copy
import json
import math
from typing import List, Dict, Union, Iterable, Tuple
import numpy as np
from qcompute_qep.exceptions.QEPError import ArgumentError
from qcompute_qep.quantum.pauli import complete_pauli_basis
try:
from matplotlib import pyplot as plt
from matplotlib import rc
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pylab
HAS_MATPLOTLIB = True
except ImportError:
HAS_MATPLOTLIB = False
try:
import pandas
import seaborn
HAS_SEABORN = True
except ImportError:
HAS_SEABORN = False
def plot_process_ptm(ptm: np.ndarray,
show_labels: bool = False,
title: str = None,
fig_name: str = None,
                     show: bool = False) -> None:
r"""
Visualize the Pauli transfer matrix of the quantum process.
:param ptm: np.ndarray, a :math:`4^n \times 4^n` Pauli transfer matrix.
:param show_labels: bool, default to ``False``, indicator for adding labels to the x and y axes or not.
Notice that if ptm is very large (more than 5 qubits), then it is meaningless to add the labels.
:param title: str, default to None, a string that describes the data in @ptm
:param fig_name: str, default to None, the file name for saving
:param show: bool, default to ``False``, indicates whether the plotted figure should be shown or not
**Examples**
>>> import QCompute
>>> import qcompute_qep.tomography as tomography
>>> qp = QCompute.QEnv()
>>> qp.Q.createList(2)
>>> QCompute.CZ(qp.Q[1], qp.Q[0])
>>> qc = QCompute.BackendName.LocalBaiduSim2
>>> st = tomography.ProcessTomography()
>>> noisy_ptm = st.fit(qp, qc, prep_basis='Pauli', meas_basis='Pauli', method='inverse', shots=4096, ptm=True)
>>> tomography.plot_process_ptm(ptm=noisy_ptm.data, show_labels=True, title='LocalBaiduSim2')
"""
if not HAS_MATPLOTLIB:
raise ImportError('Function "plot_process_ptm" requires matplotlib. Please run "pip install matplotlib" first.')
# Enforce the Pauli transfer matrix to be a real matrix
ptm = np.real(ptm)
# Compute the number of qubits
n = int(math.log(ptm.shape[0], 4))
cpb = complete_pauli_basis(n)
# Create the label list
labels = [pauli.name for pauli in cpb]
fig, ax = plt.subplots(figsize=(12, 8))
# Visualize the matrix
im = ax.imshow(ptm, vmin=-1, vmax=1, cmap='RdBu')
# Add the colorbar
fig.colorbar(im, ax=ax)
# Add ticklabels
if show_labels:
# We want to show all ticks and label them with the respective list entries
ax.set_xticks(np.arange(len(labels)))
ax.set_yticks(np.arange(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticklabels(labels)
size = 'small' if n <= 2 else 'xx-small'
ax.tick_params(axis='both', labelsize=size)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
else:
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
# Add minor ticks and use them to visualize gridlines
ax.set_xticks(np.arange(-0.5, len(labels), 0.5), minor=True)
ax.set_yticks(np.arange(-0.5, len(labels), 0.5), minor=True)
ax.grid(which='minor', color='w', linestyle='-', linewidth=1)
if title is not None: # set figure title
ax.set_title(title, fontsize='medium')
if fig_name is not None: # save figure
plt.savefig(fig_name, format='png', dpi=600, bbox_inches='tight', pad_inches=0.1)
if show:
plt.show()
def compare_process_ptm(ptms: List[np.ndarray],
titles: List[str] = None,
show_labels: bool = False,
fig_name: str = None,
                        show: bool = False) -> None:
r"""
Compare the Pauli transfer matrices of the quantum process, maybe obtained via different methods.
:param ptms: List[np.ndarray], a list of Pauli transfer matrices of size :math:`4^n \times 4^n`
:param titles: List[str], default to None, a list of strings that describes the data in @ptms
    :param show_labels: bool, default to ``False``, indicator for adding labels to the x and y axes or not.
Notice that if ptm is very large (more than 5 qubits), then it is meaningless to add the labels.
:param fig_name: str, default to None, the file name for saving
:param show: bool, default to ``False``, indicates whether the plotted figure should be shown or not
**Examples**
>>> import QCompute
>>> import qcompute_qep.tomography as tomography
>>> from qcompute_qep.utils.circuit import circuit_to_unitary
>>> import qcompute_qep.quantum.channel as channel
>>> import qcompute_qep.utils.types as typing
>>> qp = QCompute.QEnv()
>>> qp.Q.createList(2)
>>> QCompute.CZ(qp.Q[1], qp.Q[0])
>>> ideal_cnot = circuit_to_unitary(qp)
>>> ideal_ptm = channel.unitary_to_ptm(ideal_cnot).data
>>> qc = QCompute.BackendName.LocalBaiduSim2
>>> qc_name = typing.get_qc_name(qc)
>>> st = tomography.ProcessTomography()
>>> noisy_ptm = st.fit(qp, qc, prep_basis='Pauli', meas_basis='Pauli', method='inverse', shots=4096, ptm=True)
>>> diff_ptm = ideal_ptm - noisy_ptm.data
>>> tomography.compare_process_ptm(ptms=[ideal_ptm, noisy_ptm.data, diff_ptm])
"""
if not HAS_MATPLOTLIB:
raise ImportError('Function "compare_process_ptm" requires matplotlib. '
'Please run "pip install matplotlib" first.')
# Compute the number of qubits
n = int(math.log(ptms[0].shape[0], 4))
cpb = complete_pauli_basis(n)
# Create the label list
labels = [pauli.name for pauli in cpb]
if (titles is not None) and (len(ptms) != len(titles)):
        raise ArgumentError("in compare_process_ptm(): the number of matrices and titles must be the same!")
# Visualize the PTM matrices
fig, axs = plt.subplots(nrows=1, ncols=len(ptms), figsize=(12, 8))
fontsize = 8 if n <= 2 else 3
im = None
for i, ptm in enumerate(ptms):
# Enforce the Pauli transfer matrix to be a real matrix
ptm = np.real(ptm)
im = axs[i].imshow(ptm, vmin=-1, vmax=1, cmap='RdBu')
if titles is not None:
axs[i].set_title(titles[i], fontsize='medium')
# Add ticklabels
if show_labels:
# We want to show all ticks and label them with the respective list entries
axs[i].set_xticks(np.arange(len(labels)))
axs[i].set_xticklabels(labels)
axs[i].set_yticks(np.arange(len(labels)))
axs[i].set_yticklabels(labels)
axs[i].tick_params(axis='both', labelsize='small')
# Rotate the tick labels and set their alignment.
plt.setp(axs[i].get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
else:
axs[i].axes.get_xaxis().set_visible(False)
axs[i].axes.get_yaxis().set_visible(False)
# Add minor ticks and use them to visualize gridlines
axs[i].set_xticks(np.arange(-0.5, len(labels), 0.5), minor=True)
axs[i].set_yticks(np.arange(-0.5, len(labels), 0.5), minor=True)
axs[i].grid(which='minor', color='w', linestyle='-', linewidth=1)
# Add the colorbar. Create new axes according to image position
cax = fig.add_axes([axs[-1].get_position().x1+0.02,
axs[-1].get_position().y0,
0.02,
axs[-1].get_position().height])
plt.colorbar(im, cax=cax)
# Save the figure if needed
if fig_name is not None:
plt.savefig(fig_name, format='png', dpi=600, bbox_inches='tight', pad_inches=0.1)
if show:
plt.show()
|
PypiClean
|
/Spire.XLS_for_Python-13.5.0-py3-none-any.whl/spire/xls/HTMLOptions.py
|
from enum import Enum
from plum import dispatch
from typing import TypeVar,Union,Generic,List,Tuple
from spire.common import *
from spire.xls import *
from ctypes import *
import abc
class HTMLOptions (SpireObject) :
"""
"""
@dispatch
def __init__(self):
GetDllLibXls().HTMLOptions_Create.restype = c_void_p
intPtr = GetDllLibXls().HTMLOptions_Create()
super(HTMLOptions, self).__init__(intPtr)
@property
def ImagePath(self)->str:
"""
"""
GetDllLibXls().HTMLOptions_get_ImagePath.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_ImagePath.restype=c_wchar_p
ret = GetDllLibXls().HTMLOptions_get_ImagePath(self.Ptr)
return ret
@ImagePath.setter
def ImagePath(self, value:str):
GetDllLibXls().HTMLOptions_set_ImagePath.argtypes=[c_void_p, c_wchar_p]
GetDllLibXls().HTMLOptions_set_ImagePath(self.Ptr, value)
@property
def TextMode(self)->'GetText':
"""
"""
GetDllLibXls().HTMLOptions_get_TextMode.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_TextMode.restype=c_int
ret = GetDllLibXls().HTMLOptions_get_TextMode(self.Ptr)
objwraped = GetText(ret)
return objwraped
@TextMode.setter
def TextMode(self, value:'GetText'):
GetDllLibXls().HTMLOptions_set_TextMode.argtypes=[c_void_p, c_int]
GetDllLibXls().HTMLOptions_set_TextMode(self.Ptr, value.value)
@property
def ImageLocationType(self)->'ImageLocationTypes':
"""
<summary>
Gets or sets the Image Location type.
GlobalAbsolute or Relative to Table
</summary>
"""
GetDllLibXls().HTMLOptions_get_ImageLocationType.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_ImageLocationType.restype=c_int
ret = GetDllLibXls().HTMLOptions_get_ImageLocationType(self.Ptr)
objwraped = ImageLocationTypes(ret)
return objwraped
@ImageLocationType.setter
def ImageLocationType(self, value:'ImageLocationTypes'):
GetDllLibXls().HTMLOptions_set_ImageLocationType.argtypes=[c_void_p, c_int]
GetDllLibXls().HTMLOptions_set_ImageLocationType(self.Ptr, value.value)
@property
def ImageEmbedded(self)->bool:
"""
<summary>
        If false, the image is exported as a single separate file;
        if true, the image is embedded into the html code using the Data URI scheme.
        The default value is false.
Note: Internet Explorer 8 limits data URIs to a maximum length of 32KB.
</summary>
<value>The value of the HTML export image style sheet.</value>
"""
GetDllLibXls().HTMLOptions_get_ImageEmbedded.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_ImageEmbedded.restype=c_bool
ret = GetDllLibXls().HTMLOptions_get_ImageEmbedded(self.Ptr)
return ret
@ImageEmbedded.setter
def ImageEmbedded(self, value:bool):
GetDllLibXls().HTMLOptions_set_ImageEmbedded.argtypes=[c_void_p, c_bool]
GetDllLibXls().HTMLOptions_set_ImageEmbedded(self.Ptr, value)
@property
def StyleDefine(self)->'StyleDefineType':
"""
<summary>
        Where the style is defined; default: head
</summary>
"""
GetDllLibXls().HTMLOptions_get_StyleDefine.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_StyleDefine.restype=c_int
ret = GetDllLibXls().HTMLOptions_get_StyleDefine(self.Ptr)
objwraped = StyleDefineType(ret)
return objwraped
@StyleDefine.setter
def StyleDefine(self, value:'StyleDefineType'):
GetDllLibXls().HTMLOptions_set_StyleDefine.argtypes=[c_void_p, c_int]
GetDllLibXls().HTMLOptions_set_StyleDefine(self.Ptr, value.value)
@property
def IsFixedTableColWidth(self)->bool:
"""
<summary>
        Gets or sets whether the width of td is fixed:
        If true, the width of td is fixed, the same as the column width in the Excel view.
        If false, the width of td is not fixed.
        Default is false.
</summary>
"""
GetDllLibXls().HTMLOptions_get_IsFixedTableColWidth.argtypes=[c_void_p]
GetDllLibXls().HTMLOptions_get_IsFixedTableColWidth.restype=c_bool
ret = GetDllLibXls().HTMLOptions_get_IsFixedTableColWidth(self.Ptr)
return ret
@IsFixedTableColWidth.setter
def IsFixedTableColWidth(self, value:bool):
GetDllLibXls().HTMLOptions_set_IsFixedTableColWidth.argtypes=[c_void_p, c_bool]
GetDllLibXls().HTMLOptions_set_IsFixedTableColWidth(self.Ptr, value)
@staticmethod
def Default()->'HTMLOptions':
"""
"""
#GetDllLibXls().HTMLOptions_Default.argtypes=[]
GetDllLibXls().HTMLOptions_Default.restype=c_void_p
intPtr = GetDllLibXls().HTMLOptions_Default()
ret = None if intPtr==None else HTMLOptions(intPtr)
return ret
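# A minimal usage sketch (assumes the native Spire.XLS library is available to
# GetDllLibXls(); the option values are only illustrative):
#
#   options = HTMLOptions()
#   options.ImageEmbedded = True
#   options.IsFixedTableColWidth = True
#   options.ImagePath = "images/"
#   # the configured options object is then passed to the workbook/worksheet
#   # HTML export call defined elsewhere in spire.xls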
|
PypiClean
|
/well_being_diary-0.0.2-py3-none-any.whl/wbd/wbd_global.py
|
import logging
import enum
import os
import datetime
import subprocess
import io
import typing
from PySide6 import QtCore
import PIL.Image
import PIL.ExifTags
import configparser
# Directory and file names
IMAGES_DIR_STR = "thumbnail_images"
ICONS_DIR_STR = "icons"
BACKUP_DIR_STR = "backups"
EXPORTED_DIR_STR = "exported"
LOGS_DIR_STR = "logs"
PUBLIC_DIR_STR = "public"
USER_IMAGES_DIR_STR = "images"
ITERDUMP_DIR_STR = "iterdump"
LOG_FILE_NAME_STR = "wbd.log"
DATABASE_FILE_STR = "wbd_database.sqlite"
DB_IN_MEMORY_STR = ":memory:"
db_file_exists_at_application_startup_bl = False
testing_bool = False
database_debugging_bool = False
WBD_APPLICATION_VERSION_STR = "prototype 5"
WBD_APPLICATION_NAME_STR = "Well-Being Diary"
DIARY_ENTRIES_PER_PAGE_INT = 10
TMP_EMAIL_ATTACHMENTS_DIR_STR = "tmp_email_attachments"
NO_ACTIVE_ROW_INT = -1
NO_ACTIVE_QUESTION_INT = -1
########NO_ACTIVE_FILTER_PRESET_INT = -1
NO_ACTIVE_TAG_INT = -1
NO_DIARY_ENTRY_EDITING_INT = -1
NO_VIEW_ACTIVE_INT = -1
NO_GROUP_ACTIVE_INT = -1
DATETIME_NOT_SET_STR = ""
QT_NO_ROW_SELECTED_INT = -1
# Image related constants
ORIENTATION_EXIF_TAG_NAME_STR = "Orientation"
DATETIME_ORIGINAL_EXIF_TAG_NAME_STR = "DateTimeOriginal"
SIZE_TE = (512, 512)
JPEG_FORMAT_STR = "JPEG"
# Datetime formats
# Python: https://docs.python.org/3/library/datetime.html#datetime.datetime.isoformat
# SQLite: https://www.sqlite.org/lang_datefunc.html
# Qt: http://doc.qt.io/qt-5/qdatetime.html#fromString-1
# Camera EXIF: https://www.awaresystems.be/imaging/tiff/tifftags/privateifd/exif/datetimeoriginal.html
PY_DATETIME_FORMAT_STR = "%Y-%m-%dT%H:%M:%S"
PY_DATE_ONLY_FORMAT_STR = "%Y-%m-%d"
PY_DATETIME_FILENAME_FORMAT_STR = "%Y-%m-%dT%H-%M-%S"
QT_EXIF_DATETIME_FORMAT_STR = "yyyy:MM:dd HH:mm:ss"
# -please note the colons instead of dashes in the date
QT_DATETIME_FORMAT_STR = "yyyy-MM-ddTHH:mm:ss"
QT_DATE_ONLY_FORMAT_STR = "yyyy-MM-dd"
class EventSource(enum.Enum):
application_start = enum.auto()
page_changed = enum.auto()
filters_changed = enum.auto()
question_activated = enum.auto()
tags_changed = enum.auto()
entry_edit = enum.auto()
entry_delete = enum.auto()
importing = enum.auto()
diary_view_activated = enum.auto()
entry_area_close = enum.auto()
image_import = enum.auto()
collection_changed = enum.auto()
view_tags_tag_changed = enum.auto()
all_tags_tag_changed = enum.auto()
tag_deleted = enum.auto()
tag_added = enum.auto()
add_tag_to_view = enum.auto()
remove_tag_from_view = enum.auto()
tag_edited = enum.auto()
all_tags_sort_type_changed = enum.auto()
view_added = enum.auto()
entry_selected_tag_changed = enum.auto()
tag_added_for_entry = enum.auto()
question_row_changed = enum.auto()
entry_suggested_tags_tag_changed = enum.auto()
# -please note that unlike the other _tag_changed enums this one is sent outside of the tags class
APPLICATION_NAME = "well-being-diary"
SETTINGS_FILE_STR = "settings.ini"
SETTINGS_GENERAL_STR = "general"
SETTINGS_USER_DIR_STR = "user_dir_str"
SETTINGS_DIARY_FONT_SIZE_STR = "diary_font_size"
DEFAULT_DIARY_FONT_SIZE_INT = 13
SETTINGS_ENTRY_FONT_SIZE_STR = "entry_font_size"
DEFAULT_ENTRY_FONT_SIZE_INT = 14
DEFAULT_USER_DIR_STR = "/home/sunyata/PycharmProjects/well-being-diary/user_files"
class MoveDirectionEnum(enum.Enum):
up = 1
down = 2
class SortType(enum.Enum):
sort_by_default_db_order = -2
sort_by_custom_order = -1
sort_by_name = 0
sort_by_frequency = 1
sort_by_time = 2
class Filters:
def __init__(self):
self.reset()
def reset(self):
self.tag_active_bool = False
self.tag_id_int = NO_ACTIVE_TAG_INT
self.search_active_bool = False
self.search_term_str = ""
self.rating_active_bool = False
self.rating_int = 0
self.datetime_active_bool = False
self.start_datetime_string = DATETIME_NOT_SET_STR
self.end_datetime_string = DATETIME_NOT_SET_STR
def get_config_path(*args) -> str:
config_dir = QtCore.QStandardPaths.standardLocations(QtCore.QStandardPaths.ConfigLocation)[0]
if APPLICATION_NAME not in config_dir:
# There is a bug in Qt: For Windows, the application name is included in
# QStandardPaths.ConfigLocation (for Linux, it's not included)
config_dir = os.path.join(config_dir, APPLICATION_NAME)
full_path_str = config_dir
for arg in args:
full_path_str = os.path.join(full_path_str, arg)
os.makedirs(os.path.dirname(full_path_str), exist_ok=True)
return full_path_str
class ApplicationState:
def __init__(self):
self.current_page_number_int = 1
self.filters = Filters()
self.question_id: int = NO_ACTIVE_QUESTION_INT
self.edit_entry_id_list: list = []
self.collection_id: int = NO_VIEW_ACTIVE_INT
self.tag_id: int = NO_ACTIVE_TAG_INT
#Please note that we can use the view_id to see if the focus is on the left or right tags list
"""
self.selected_friends_id_list = []
def add_selected_friend(self, i_new_friend_id: int) -> None:
self.selected_friends_id_list.append(i_new_friend_id)
def remove_selected_friend(self, i_new_friend_id: int) -> None:
self.selected_friends_id_list.remove(i_new_friend_id)
def clear_selected_friends_list(self) -> None:
self.selected_friends_id_list.clear()
"""
def add_entry(self, new_entry_id: int) -> None:
if new_entry_id not in self.edit_entry_id_list:
self.edit_entry_id_list.append(new_entry_id)
def last_entry(self):
if len(self.edit_entry_id_list) > 0:
return self.edit_entry_id_list[-1]
else:
return False
def is_entries_empty(self) -> bool:
if len(self.edit_entry_id_list) > 0:
return False
else:
return True
def clear_entries(self):
self.edit_entry_id_list.clear()
def remove_last_entry(self) -> bool:
# -bool is unused at the time of writing
if len(self.edit_entry_id_list) > 0:
del self.edit_entry_id_list[-1]
return True
else:
return False
active_state = ApplicationState()
def get_base_dir_path(*args, i_file_name: str="") -> str:
first_str = os.path.abspath(__file__)
# -__file__ is the file that started the application, in other words mindfulness-at-the-computer.py
second_str = os.path.dirname(first_str)
base_dir_str = os.path.dirname(second_str)
ret_path = base_dir_str
for arg in args:
ret_path = os.path.join(ret_path, arg)
os.makedirs(ret_path, exist_ok=True)
if i_file_name:
ret_path = os.path.join(ret_path, i_file_name)
return ret_path
def get_diary_font_size() -> int:
config = configparser.ConfigParser()
config.read(get_config_path(SETTINGS_FILE_STR))
ret_font_size_int = 0
ret_font_size_int = config.getint(
SETTINGS_GENERAL_STR,
SETTINGS_DIARY_FONT_SIZE_STR,
fallback=DEFAULT_DIARY_FONT_SIZE_INT
)
return ret_font_size_int
def get_entry_font_size() -> int:
config = configparser.ConfigParser()
config.read(get_config_path(SETTINGS_FILE_STR))
ret_font_size_int = 0
try:
ret_font_size_int = config.getint(SETTINGS_GENERAL_STR, SETTINGS_ENTRY_FONT_SIZE_STR)
    except (configparser.NoSectionError, configparser.NoOptionError):
ret_font_size_int = DEFAULT_ENTRY_FONT_SIZE_INT
return ret_font_size_int
def get_user_dir_path(*args, i_file_name: str="") -> str:
ret_path = ""
if testing_bool:
# ret_path = "/home/sunyata/PycharmProjects/my-gtd/example"
ret_path = DEFAULT_USER_DIR_STR
else:
config = configparser.ConfigParser()
settings_file_path = get_config_path(SETTINGS_FILE_STR)
config.read(settings_file_path)
try:
ret_path = config[SETTINGS_GENERAL_STR][SETTINGS_USER_DIR_STR]
except KeyError:
ret_path = DEFAULT_USER_DIR_STR
# -using the application dir if the settings can't be read
"""
if not ret_path:
raise Exception("No path has been set as the base dir")
"""
for arg in args:
ret_path = os.path.join(ret_path, arg)
os.makedirs(ret_path, exist_ok=True)
if i_file_name:
ret_path = os.path.join(ret_path, i_file_name)
return ret_path
def get_icon_path(i_file_name: str) -> str:
ret_icon_path_str = get_base_dir_path(ICONS_DIR_STR, i_file_name=i_file_name)
return ret_icon_path_str
def get_database_filename() -> str:
# i_backup_timestamp: str = ""
# if i_backup_timestamp:
# database_filename_str = i_backup_timestamp + "_" + DATABASE_FILE_STR
if testing_bool:
return DB_IN_MEMORY_STR
else:
# ret_path_str = os.path.join(get_base_dir(), DEFAULT_USER_DIR_STR, DATABASE_FILE_STR)
ret_path_str = get_user_dir_path(i_file_name=DATABASE_FILE_STR)
return ret_path_str
def get_user_logs_path(i_file_name: str = "") -> str:
log_files_path_str = get_base_dir_path(LOGS_DIR_STR)
if i_file_name:
log_files_path_str = os.path.join(log_files_path_str, i_file_name)
return log_files_path_str
def get_public_path(i_file_name: str = "") -> str:
public_files_path_str = get_base_dir_path(PUBLIC_DIR_STR)
if i_file_name:
public_files_path_str = os.path.join(public_files_path_str, i_file_name)
return public_files_path_str
def get_user_backup_path(i_file_name: str = "") -> str:
# file_or_dir_path_str = os.path.join(get_base_dir(), DEFAULT_USER_DIR_STR, BACKUP_DIR_STR)
file_or_dir_path_str = get_user_dir_path(BACKUP_DIR_STR, i_file_name=i_file_name)
return file_or_dir_path_str
def get_user_tmp_email_attachments_path(i_file_name: str = "") -> str:
user_files_path_str = get_user_dir_path(TMP_EMAIL_ATTACHMENTS_DIR_STR, i_file_name=i_file_name)
return user_files_path_str
def get_user_iterdump_path(i_file_name: str = "") -> str:
user_files_path_str = get_user_dir_path(ITERDUMP_DIR_STR, i_file_name=i_file_name)
return user_files_path_str
def get_user_exported_path(i_file_name: str = "") -> str:
file_path_str = get_user_dir_path(EXPORTED_DIR_STR, i_file_name=i_file_name)
return file_path_str
def get_user_exported_images_path(i_file_name: str = "") -> str:
path_str = get_user_dir_path(EXPORTED_DIR_STR, USER_IMAGES_DIR_STR, i_file_name=i_file_name)
return path_str
def get_testing_images_path(i_file_name: str="") -> str:
testing_images_path_str = get_base_dir_path("varia", "unprocessed_image_files_for_testing", i_file_name=i_file_name)
return testing_images_path_str
RGB_STR = 'RGB'
BW_STR = 'L'
def process_image(i_file_path: str) -> typing.Tuple[bytes, QtCore.QDateTime]:
    image_pi: PIL.Image.Image = PIL.Image.open(i_file_path)
# Time when the photo was taken
time_photo_taken_qdatetime = get_datetime_image_taken(image_pi)
# -important that this is done before rotating since rotating removes exif data
# Rotating
rotation_degrees_int = get_rotation_degrees(image_pi)
if rotation_degrees_int != 0:
image_pi = image_pi.rotate(rotation_degrees_int, expand=True)
# -Warning: Rotating removes exif data (unknown why)
if image_pi.mode != RGB_STR:
image_pi = image_pi.convert(RGB_STR)
# -How to check is described in this answer: https://stackoverflow.com/a/43259180/2525237
image_pi.thumbnail(SIZE_TE, PIL.Image.ANTIALIAS)
# -Please note that exif metadata is removed. If we want to
# keep exif metadata: https://stackoverflow.com/q/17042602/2525237
image_byte_stream = io.BytesIO()
image_pi.save(image_byte_stream, format=JPEG_FORMAT_STR)
image_bytes = image_byte_stream.getvalue()
return (image_bytes, time_photo_taken_qdatetime)
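# A minimal usage sketch (the photo path is hypothetical; assumes the JPEG
# still carries its EXIF orientation and date tags):
#
#   thumb_bytes, taken_qdt = process_image("/path/to/IMG_0001.jpg")
#   thumb_path = get_user_dir_path(IMAGES_DIR_STR, i_file_name="IMG_0001_thumb.jpg")
#   with open(thumb_path, "wb") as f:
#       f.write(thumb_bytes)
#   if taken_qdt is not None:
#       logging.debug("photo taken: " + taken_qdt.toString(QT_DATETIME_FORMAT_STR))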
def jpg_image_file_to_bytes(i_file_path: str) -> bytes:
    image_pi: PIL.Image.Image = PIL.Image.open(i_file_path)
image_byte_stream = io.BytesIO()
image_pi.save(image_byte_stream, format=JPEG_FORMAT_STR)
image_bytes = image_byte_stream.getvalue()
return image_bytes
def get_rotation_degrees(i_image_pi: PIL.Image.Image, i_switch_direction: bool=False) -> int:
# Inspiration for this function:
# https://stackoverflow.com/questions/4228530/pil-thumbnail-is-rotating-my-image
# https://coderwall.com/p/nax6gg/fix-jpeg-s-unexpectedly-rotating-when-saved-with-pil
ret_degrees_int = 0
orientation_tag_key_str = ""
for tag_key_str in PIL.ExifTags.TAGS.keys():
if PIL.ExifTags.TAGS[tag_key_str] == ORIENTATION_EXIF_TAG_NAME_STR:
orientation_tag_key_str = tag_key_str
break
if orientation_tag_key_str == "":
logging.warning("get_rotation_degrees - exif tag not found")
ret_degrees_int = 0
try:
exif_data_dict = dict(i_image_pi._getexif().items())
if exif_data_dict[orientation_tag_key_str] == 3:
ret_degrees_int = 180
elif exif_data_dict[orientation_tag_key_str] == 6:
ret_degrees_int = 270
elif exif_data_dict[orientation_tag_key_str] == 8:
ret_degrees_int = 90
if i_switch_direction:
ret_degrees_int = -ret_degrees_int
except AttributeError:
# -A strange problem: If we use hasattr(i_image_pi, "_getexif") this will return True,
# so instead we use this exception handling
logging.warning(
"get_rotation_degrees - Image doesn't have exif data. This may be because it has already been processed by an application"
)
return ret_degrees_int
def get_datetime_image_taken(i_image_pi: PIL.Image.Image) -> QtCore.QDateTime:
# Please note that usually (always?) the time we get from the camera is in the UTC time zone:
# https://photo.stackexchange.com/questions/82166/is-it-possible-to-get-the-time-a-photo-was-taken-timezone-aware
# So we need to convert the time that we get
ret_datetime_qdt = None
datetime_original_tag_key_str = ""
for tag_key_str in PIL.ExifTags.TAGS.keys():
if PIL.ExifTags.TAGS[tag_key_str] == DATETIME_ORIGINAL_EXIF_TAG_NAME_STR:
datetime_original_tag_key_str = tag_key_str
break
if datetime_original_tag_key_str == "":
logging.warning("get_datetime_image_taken - exif tag not found")
try:
exif_data_dict = dict(i_image_pi._getexif().items())
# -Good to be aware that _getexif() is an experimental function:
# https://stackoverflow.com/a/48428533/2525237
datetime_exif_string = exif_data_dict[datetime_original_tag_key_str]
logging.debug("datetime_exif_string = " + datetime_exif_string)
from_camera_qdt = QtCore.QDateTime.fromString(datetime_exif_string, QT_EXIF_DATETIME_FORMAT_STR)
from_camera_qdt.setTimeSpec(QtCore.Qt.UTC)
ret_datetime_qdt = from_camera_qdt.toLocalTime()
logging.debug("from_camera_qdt.toString = " + ret_datetime_qdt.toString(QT_DATETIME_FORMAT_STR))
except AttributeError:
# -A strange problem: If we use hasattr(i_image_pi, "_getexif") this will return True,
# so instead we use this exception handling
logging.warning("get_datetime_image_taken - Image doesn't have exif data. This may be because it has already been processed by an application")
return ret_datetime_qdt
def get_today_datetime_string() -> str:
now_pdt = datetime.datetime.now()
time_as_iso_str = now_pdt.strftime(PY_DATE_ONLY_FORMAT_STR)
return time_as_iso_str
def get_now_datetime_string() -> str:
now_pdt = datetime.datetime.now()
time_as_iso_str = now_pdt.strftime(PY_DATETIME_FORMAT_STR)
return time_as_iso_str
def clear_widget_and_layout_children(qlayout_or_qwidget):
if qlayout_or_qwidget.widget():
qlayout_or_qwidget.widget().deleteLater()
elif qlayout_or_qwidget.layout():
while qlayout_or_qwidget.layout().count():
child_qlayoutitem = qlayout_or_qwidget.takeAt(0)
clear_widget_and_layout_children(child_qlayoutitem) # Recursive call
def removing_oldest_files(directory_path: str, i_suffix: str, i_nr_of_files_to_keep: int):
# Removing the oldest files
filtered_files_list = [
fn for fn in os.listdir(directory_path) if fn.endswith(i_suffix)
]
sorted_and_filtered_files_list = sorted(filtered_files_list)
# logging.debug("sorted_and_filtered_files_list = " + str(sorted_and_filtered_files_list))
for file_name_str in sorted_and_filtered_files_list[:-i_nr_of_files_to_keep]:
file_path_str = os.path.join(directory_path, file_name_str)
os.remove(file_path_str)
logging.debug("Old backup file " + file_name_str + " was removed")
def open_directory(i_directory_path: str):
try:
# noinspection PyUnresolvedReferences
os.startfile(i_directory_path)
# -only available on windows
    except Exception:
subprocess.Popen(["xdg-open", i_directory_path])
|
PypiClean
|
/Flask-MDEditor-0.1.4.tar.gz/Flask-MDEditor-0.1.4/flask_mdeditor/static/mdeditor/js/lib/codemirror/mode/ntriples/ntriples.js
|
/*
The following expression defines the defined ASF grammar transitions.
pre_subject ->
{
( writing_subject_uri | writing_bnode_uri )
-> pre_predicate
-> writing_predicate_uri
-> pre_object
-> writing_object_uri | writing_object_bnode |
(
writing_object_literal
-> writing_literal_lang | writing_literal_type
)
-> post_object
-> BEGIN
} otherwise {
-> ERROR
}
*/
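/*
  Illustrative example only (not part of the mode itself): scanning the
  statement below walks the transitions roughly as follows.

    <http://example.org/s> <http://example.org/p> "chat"@en .

    pre_subject -> writing_subject_uri -> pre_predicate -> writing_predicate_uri
      -> pre_object -> writing_object_literal -> writing_literal_lang
      -> post_object -> ('.') -> pre_subject
*/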
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../../lib/codemirror"));
else if (typeof define == "function" && define.amd) // AMD
define(["../../lib/codemirror"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
"use strict";
CodeMirror.defineMode("ntriples", function() {
var Location = {
PRE_SUBJECT : 0,
WRITING_SUB_URI : 1,
WRITING_BNODE_URI : 2,
PRE_PRED : 3,
WRITING_PRED_URI : 4,
PRE_OBJ : 5,
WRITING_OBJ_URI : 6,
WRITING_OBJ_BNODE : 7,
WRITING_OBJ_LITERAL : 8,
WRITING_LIT_LANG : 9,
WRITING_LIT_TYPE : 10,
POST_OBJ : 11,
ERROR : 12
};
function transitState(currState, c) {
var currLocation = currState.location;
var ret;
// Opening.
if (currLocation == Location.PRE_SUBJECT && c == '<') ret = Location.WRITING_SUB_URI;
else if(currLocation == Location.PRE_SUBJECT && c == '_') ret = Location.WRITING_BNODE_URI;
else if(currLocation == Location.PRE_PRED && c == '<') ret = Location.WRITING_PRED_URI;
else if(currLocation == Location.PRE_OBJ && c == '<') ret = Location.WRITING_OBJ_URI;
else if(currLocation == Location.PRE_OBJ && c == '_') ret = Location.WRITING_OBJ_BNODE;
else if(currLocation == Location.PRE_OBJ && c == '"') ret = Location.WRITING_OBJ_LITERAL;
// Closing.
else if(currLocation == Location.WRITING_SUB_URI && c == '>') ret = Location.PRE_PRED;
else if(currLocation == Location.WRITING_BNODE_URI && c == ' ') ret = Location.PRE_PRED;
else if(currLocation == Location.WRITING_PRED_URI && c == '>') ret = Location.PRE_OBJ;
else if(currLocation == Location.WRITING_OBJ_URI && c == '>') ret = Location.POST_OBJ;
else if(currLocation == Location.WRITING_OBJ_BNODE && c == ' ') ret = Location.POST_OBJ;
else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '"') ret = Location.POST_OBJ;
else if(currLocation == Location.WRITING_LIT_LANG && c == ' ') ret = Location.POST_OBJ;
else if(currLocation == Location.WRITING_LIT_TYPE && c == '>') ret = Location.POST_OBJ;
// Closing typed and language literal.
else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '@') ret = Location.WRITING_LIT_LANG;
else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '^') ret = Location.WRITING_LIT_TYPE;
// Spaces.
else if( c == ' ' &&
(
currLocation == Location.PRE_SUBJECT ||
currLocation == Location.PRE_PRED ||
currLocation == Location.PRE_OBJ ||
currLocation == Location.POST_OBJ
)
) ret = currLocation;
// Reset.
else if(currLocation == Location.POST_OBJ && c == '.') ret = Location.PRE_SUBJECT;
// Error
else ret = Location.ERROR;
currState.location=ret;
}
return {
startState: function() {
return {
location : Location.PRE_SUBJECT,
uris : [],
anchors : [],
bnodes : [],
langs : [],
types : []
};
},
token: function(stream, state) {
var ch = stream.next();
if(ch == '<') {
transitState(state, ch);
var parsedURI = '';
stream.eatWhile( function(c) { if( c != '#' && c != '>' ) { parsedURI += c; return true; } return false;} );
state.uris.push(parsedURI);
if( stream.match('#', false) ) return 'variable';
stream.next();
transitState(state, '>');
return 'variable';
}
if(ch == '#') {
var parsedAnchor = '';
stream.eatWhile(function(c) { if(c != '>' && c != ' ') { parsedAnchor+= c; return true; } return false;});
state.anchors.push(parsedAnchor);
return 'variable-2';
}
if(ch == '>') {
transitState(state, '>');
return 'variable';
}
if(ch == '_') {
transitState(state, ch);
var parsedBNode = '';
stream.eatWhile(function(c) { if( c != ' ' ) { parsedBNode += c; return true; } return false;});
state.bnodes.push(parsedBNode);
stream.next();
transitState(state, ' ');
return 'builtin';
}
if(ch == '"') {
transitState(state, ch);
stream.eatWhile( function(c) { return c != '"'; } );
stream.next();
if( stream.peek() != '@' && stream.peek() != '^' ) {
transitState(state, '"');
}
return 'string';
}
if( ch == '@' ) {
transitState(state, '@');
var parsedLang = '';
stream.eatWhile(function(c) { if( c != ' ' ) { parsedLang += c; return true; } return false;});
state.langs.push(parsedLang);
stream.next();
transitState(state, ' ');
return 'string-2';
}
if( ch == '^' ) {
stream.next();
transitState(state, '^');
var parsedType = '';
stream.eatWhile(function(c) { if( c != '>' ) { parsedType += c; return true; } return false;} );
state.types.push(parsedType);
stream.next();
transitState(state, '>');
return 'variable';
}
if( ch == ' ' ) {
transitState(state, ch);
}
if( ch == '.' ) {
transitState(state, ch);
}
}
};
});
CodeMirror.defineMIME("text/n-triples", "ntriples");
});
|
PypiClean
|
/salt-ssh-9000.tar.gz/salt-ssh-9000/salt/modules/dnsmasq.py
|
import logging
import os
import salt.utils.files
import salt.utils.platform
import salt.utils.stringutils
from salt.exceptions import CommandExecutionError
log = logging.getLogger(__name__)
def __virtual__():
"""
Only work on POSIX-like systems.
"""
if salt.utils.platform.is_windows():
return (
False,
"dnsmasq execution module cannot be loaded: only works on "
"non-Windows systems.",
)
return True
def version():
"""
Shows installed version of dnsmasq.
CLI Example:
.. code-block:: bash
salt '*' dnsmasq.version
"""
cmd = "dnsmasq -v"
out = __salt__["cmd.run"](cmd).splitlines()
comps = out[0].split()
return comps[2]
def fullversion():
"""
Shows installed version of dnsmasq and compile options.
CLI Example:
.. code-block:: bash
salt '*' dnsmasq.fullversion
"""
cmd = "dnsmasq -v"
out = __salt__["cmd.run"](cmd).splitlines()
comps = out[0].split()
version_num = comps[2]
comps = out[1].split()
return {"version": version_num, "compile options": comps[3:]}
def set_config(config_file="/etc/dnsmasq.conf", follow=True, **kwargs):
"""
Sets a value or a set of values in the specified file. By default, if
conf-dir is configured in this file, salt will attempt to set the option
in any file inside the conf-dir where it has already been enabled. If it
does not find it inside any files, it will append it to the main config
file. Setting follow to False will turn off this behavior.
If a config option currently appears multiple times (such as dhcp-host,
which is specified at least once per host), the new option will be added
to the end of the main config file (and not to any includes). If you need
an option added to a specific include file, specify it as the config_file.
:param string config_file: config file where settings should be updated / added.
:param bool follow: attempt to set the config option inside any file within
the ``conf-dir`` where it has already been enabled.
:param kwargs: key value pairs that contain the configuration settings that you
want set.
CLI Examples:
.. code-block:: bash
salt '*' dnsmasq.set_config domain=mydomain.com
salt '*' dnsmasq.set_config follow=False domain=mydomain.com
salt '*' dnsmasq.set_config config_file=/etc/dnsmasq.conf domain=mydomain.com
"""
dnsopts = get_config(config_file)
includes = [config_file]
if follow is True and "conf-dir" in dnsopts:
for filename in os.listdir(dnsopts["conf-dir"]):
if filename.startswith("."):
continue
if filename.endswith("~"):
continue
if filename.endswith("bak"):
continue
if filename.startswith("#") and filename.endswith("#"):
continue
includes.append("{}/{}".format(dnsopts["conf-dir"], filename))
ret_kwargs = {}
for key in kwargs:
# Filter out __pub keys as they should not be added to the config file
# See Issue #34263 for more information
if key.startswith("__"):
continue
ret_kwargs[key] = kwargs[key]
if key in dnsopts:
if isinstance(dnsopts[key], str):
for config in includes:
__salt__["file.sed"](
path=config,
before="^{}=.*".format(key),
after="{}={}".format(key, kwargs[key]),
)
else:
__salt__["file.append"](config_file, "{}={}".format(key, kwargs[key]))
else:
__salt__["file.append"](config_file, "{}={}".format(key, kwargs[key]))
return ret_kwargs
def get_config(config_file="/etc/dnsmasq.conf"):
"""
Dumps all options from the config file.
config_file
The location of the config file from which to obtain contents.
Defaults to ``/etc/dnsmasq.conf``.
CLI Examples:
.. code-block:: bash
salt '*' dnsmasq.get_config
salt '*' dnsmasq.get_config config_file=/etc/dnsmasq.conf
"""
dnsopts = _parse_dnamasq(config_file)
if "conf-dir" in dnsopts:
for filename in os.listdir(dnsopts["conf-dir"]):
if filename.startswith("."):
continue
if filename.endswith("~"):
continue
if filename.startswith("#") and filename.endswith("#"):
continue
dnsopts.update(
_parse_dnamasq("{}/{}".format(dnsopts["conf-dir"], filename))
)
return dnsopts
def _parse_dnamasq(filename):
"""
Generic function for parsing dnsmasq files including includes.
"""
fileopts = {}
if not os.path.isfile(filename):
raise CommandExecutionError("Error: No such file '{}'".format(filename))
with salt.utils.files.fopen(filename, "r") as fp_:
for line in fp_:
line = salt.utils.stringutils.to_unicode(line)
if not line.strip():
continue
if line.startswith("#"):
continue
if "=" in line:
comps = line.split("=")
if comps[0] in fileopts:
if isinstance(fileopts[comps[0]], str):
temp = fileopts[comps[0]]
fileopts[comps[0]] = [temp]
fileopts[comps[0]].append(comps[1].strip())
else:
fileopts[comps[0]] = comps[1].strip()
else:
if "unparsed" not in fileopts:
fileopts["unparsed"] = []
fileopts["unparsed"].append(line)
return fileopts
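# ---------------------------------------------------------------------------
# Usage sketch (not part of upstream Salt): a hedged, minimal illustration of
# how get_config() flattens a dnsmasq.conf. The temporary file and the option
# values below are purely illustrative assumptions.
if __name__ == "__main__":
    import tempfile

    sample = "domain=mydomain.com\ndhcp-host=host1,192.168.0.10\ndhcp-host=host2,192.168.0.11\n"
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tmp:
        tmp.write(sample)
    # Single-valued keys come back as strings, repeated keys (dhcp-host) as lists:
    # {'domain': 'mydomain.com',
    #  'dhcp-host': ['host1,192.168.0.10', 'host2,192.168.0.11']}
    print(get_config(tmp.name))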
|
PypiClean
|
/monk_keras_cuda100-0.0.1.tar.gz/monk_keras_cuda100-0.0.1/monk/pytorch/finetune/level_7_aux_main.py
|
from monk.pytorch.finetune.imports import *
from monk.system.imports import *
from monk.pytorch.finetune.level_6_params_main import prototype_params
class prototype_aux(prototype_params):
'''
Main class for all auxiliary functions - EDA, Estimate Training Time, Resetting params, switching modes, & debugging
Args:
verbose (int): Set verbosity levels
0 - Print Nothing
1 - Print desired details
'''
@accepts("self", verbose=int, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def __init__(self, verbose=1):
super().__init__(verbose=verbose);
###############################################################################################################################################
@accepts("self", show_img=bool, save_img=bool, check_missing=bool, check_corrupt=bool, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def EDA(self, show_img=False, save_img=False, check_missing=False, check_corrupt=False):
'''
Exploratory Data Analysis (EDA)
- Finding number of images in each class
- Check missing images in case of csv type dataset
- Find all corrupt images
Args:
show_img (bool): If True, displays bar graph for images per class
save_img (bool): If True, saves bar graph for images per class
check_missing (bool): If True, checks for missing images in csv type dataset
check_corrupt (bool): If True, checks for corrupted images in foldered and csv dataset
Returns:
None
'''
if(not self.system_dict["dataset"]["train_path"]):
msg = "Dataset train path not set. Cannot run EDA";
raise ConstraintError(msg);
classes_folder, classes_folder_strength = class_imbalance(self.system_dict, show_img, save_img);
missing_images_train, missing_images_val, corrupt_images_train, corrupt_images_val = corrupted_missing_images(self.system_dict, check_missing, check_corrupt);
self.custom_print("EDA: Class imbalance")
for i in range(len(classes_folder)):
self.custom_print(" {}. Class: {}, Number: {}".format(i+1, classes_folder[i], classes_folder_strength[i]));
self.custom_print("");
if(check_missing):
self.custom_print("EDA: Check Missing");
if("csv" in self.system_dict["dataset"]["dataset_type"]):
if(missing_images_train):
self.custom_print(" Missing Images in folder {}".format(self.system_dict["dataset"]["train_path"]));
for i in range(len(missing_images_train)):
self.custom_print(" {}. {}".format(i+1, missing_images_train[i]));
self.custom_print("");
else:
self.custom_print(" All images present in train dir.");
self.custom_print("");
if(missing_images_val):
self.custom_print(" Missing Images in folder {}".format(self.system_dict["dataset"]["val_path"]));
for i in range(len(missing_images_val)):
self.custom_print(" {}. {}".format(i+1, missing_images_val[i]));
self.custom_print("");
else:
self.custom_print(" All images present in val dir.");
self.custom_print("");
else:
self.custom_print(" Missing check not required for foldered dataset");
self.custom_print("");
if(check_corrupt):
self.custom_print("EDA: Check Corrupt");
if(corrupt_images_train):
self.custom_print(" Corrupt Images in folder {}".format(self.system_dict["dataset"]["train_path"]));
for i in range(len(corrupt_images_train)):
self.custom_print(" {}. {}".format(i+1, corrupt_images_train[i]));
self.custom_print("");
else:
self.custom_print(" No corrupt image found in train dir.");
self.custom_print("");
if(corrupt_images_val):
self.custom_print(" Corrupt Images in folder {}".format(self.system_dict["dataset"]["val_path"]));
for i in range(len(corrupt_images_val)):
self.custom_print(" {}. {}".format(i+1, corrupt_images_val[i]));
self.custom_print("");
else:
self.custom_print(" No corrupt image found in val dir.");
self.custom_print("");
###############################################################################################################################################
###############################################################################################################################################
@warning_checks(None, num_epochs=["lt", 1000], post_trace=False)
@error_checks(None, num_epochs=["gt", 0], post_trace=False)
@accepts("self", num_epochs=[int, bool], post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def Estimate_Train_Time(self, num_epochs=False):
'''
Estimate training time before running training
Args:
num_epochs (int): Number of epochs to train for and to estimate the training time of.
Returns:
None
'''
total_time_per_epoch = self.get_training_estimate();
self.custom_print("Training time estimate");
if(not num_epochs):
total_time = total_time_per_epoch*self.system_dict["hyper-parameters"]["num_epochs"];
self.custom_print(" {} Epochs: Approx. {} Min".format(self.system_dict["hyper-parameters"]["num_epochs"], int(total_time//60)+1));
self.custom_print("");
else:
total_time = total_time_per_epoch*num_epochs;
self.custom_print(" {} Epochs: Approx. {} Min".format(num_epochs, int(total_time//60)+1));
self.custom_print("");
###############################################################################################################################################
###############################################################################################################################################
@error_checks(None, num=["gte", 0], post_trace=False)
@accepts("self", num=int, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def Freeze_Layers(self, num=10):
'''
Freeze first "n" trainable layers in the network
Args:
num (int): Number of layers to freeze
Returns:
None
'''
self.num_freeze = num;
self.system_dict = freeze_layers(num, self.system_dict);
self.custom_print("Model params post freezing");
self.custom_print(" Num trainable layers: {}".format(self.system_dict["model"]["params"]["num_params_to_update"]));
self.custom_print("");
save(self.system_dict);
###############################################################################################################################################
##########################################################################################################################################################
@accepts("self", post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def Reload(self):
'''
Function to actuate all the updates in the update and expert modes
Args:
None
Returns:
None
'''
if(self.system_dict["states"]["eval_infer"]):
del self.system_dict["local"]["data_loaders"];
self.system_dict["local"]["data_loaders"] = {};
self.Dataset();
del self.system_dict["local"]["model"];
self.system_dict["local"]["model"] = False;
self.Model();
else:
if(not self.system_dict["states"]["copy_from"]):
self.system_dict["local"]["model"].to(torch.device("cpu"));
del self.system_dict["local"]["model"];
self.system_dict["local"]["model"] = False;
del self.system_dict["local"]["data_loaders"];
self.system_dict["local"]["data_loaders"] = {};
self.Dataset();
if(not self.system_dict["states"]["copy_from"]):
self.Model();
self.system_dict = load_optimizer(self.system_dict);
self.system_dict = load_scheduler(self.system_dict);
self.system_dict = load_loss(self.system_dict);
if(self.system_dict["model"]["params"]["num_freeze"]):
self.system_dict = freeze_layers(self.system_dict["model"]["params"]["num_freeze"], self.system_dict);
self.custom_print("Model params post freezing");
self.custom_print(" Num trainable layers: {}".format(self.system_dict["model"]["params"]["num_params_to_update"]));
self.custom_print("");
save(self.system_dict);
##########################################################################################################################################################
##########################################################################################################################################################
@accepts("self", test=bool, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def reset_transforms(self, test=False):
'''
Reset transforms to change them.
Args:
test (bool): If True, the test transforms are reset;
else, the train and validation transforms are reset.
Returns:
None
'''
if(self.system_dict["states"]["eval_infer"] or test):
self.system_dict["local"]["transforms_test"] = [];
self.system_dict["local"]["normalize"] = False;
self.system_dict["dataset"]["transforms"]["test"] = [];
else:
self.system_dict["local"]["transforms_train"] = [];
self.system_dict["local"]["transforms_val"] = [];
self.system_dict["local"]["normalize"] = False;
self.system_dict["dataset"]["transforms"]["train"] = [];
self.system_dict["dataset"]["transforms"]["val"] = [];
save(self.system_dict);
##########################################################################################################################################################
##########################################################################################################################################################
@accepts("self", post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def reset_model(self):
'''
Reset model to update and reload it with custom weights.
Args:
None
Returns:
None
'''
if(self.system_dict["states"]["copy_from"]):
msg = "Cannot reset model in Copy-From mode.\n";
raise ConstraintError(msg)
self.system_dict["model"]["custom_network"] = [];
self.system_dict["model"]["final_layer"] = None;
##########################################################################################################################################################
##########################################################################################################################################################
@accepts("self", train=bool, eval_infer=bool, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def Switch_Mode(self, train=False, eval_infer=False):
'''
Switch modes between training and inference without reloading the experiment
Args:
train (bool): If True, switches to training mode
eval_infer (bool): If True, switches to validation and inferencing mode
Returns:
None
'''
if(eval_infer):
self.system_dict["states"]["eval_infer"] = True;
elif(train):
self.system_dict["states"]["eval_infer"] = False;
##########################################################################################################################################################
##########################################################################################################################################################
@accepts("self", list, post_trace=False)
#@TraceFunction(trace_args=True, trace_rv=True)
def debug_custom_model_design(self, network_list):
'''
Debug model while creating it.
Saves image as graph.png which is displayed
Args:
network_list (list): List containing network design
Returns:
None
'''
debug_create_network(network_list);
if(not isnotebook()):
self.custom_print("If not using notebooks check file generated graph.png");
##########################################################################################################################################################
##########################################################################################################################################################
def Visualize_Kernels(self, store_images_if_notebook=False):
'''
Visualize kernel weights of model
Args:
store_images_if_notebook (bool): If True, images are stored to disk instead of being shown in an IPython widget
when running in a notebook. Not applicable to other environments.
Returns:
IPython widget displaying kernel weights if used inside a notebook.
Else stores the maps in the visualization directory.
'''
is_notebook = isnotebook()
visualizer = CNNVisualizer(self.system_dict["local"]["model"], is_notebook)
if(not is_notebook) or (store_images_if_notebook):
self.custom_print("The images will be stored in the visualization directory of the experiment");
from monk.system.common import create_dir
create_dir(self.system_dict["visualization"]["base"])
create_dir(self.system_dict["visualization"]["kernels_dir"])
visualizer.visualize_kernels(self.system_dict["visualization"]["kernels_dir"])
else:
visualizer.visualize_kernels()
##########################################################################################################################################################
##########################################################################################################################################################
def Visualize_Feature_Maps(self, image_path, store_images_if_notebook=False):
'''
Visualize feature maps generated by model on an image
Args:
image_path (str): Path to the image
store_images_if_notebook (bool): If True, images are stored to disk instead of being shown in an IPython widget
when running in a notebook. Not applicable to other environments.
Returns:
IPython widget displaying feature maps if used inside a notebook.
Else stores the maps in the visualization directory.
'''
is_notebook = isnotebook()
visualizer = CNNVisualizer(self.system_dict["local"]["model"], is_notebook)
if(self.system_dict["model"]["params"]["use_gpu"]):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
else:
device = torch.device('cpu')
if(not is_notebook) or (store_images_if_notebook):
self.custom_print("The images will be stored in the visualization directory of the experiment");
from monk.system.common import create_dir
create_dir(self.system_dict["visualization"]["base"])
create_dir(self.system_dict["visualization"]["feature_maps_dir"])
img_name = "".join(image_path.split("/")[-1].split(".")[0:-1])
img_dir = self.system_dict["visualization"]["feature_maps_dir"] + img_name + '/'
create_dir(img_dir)
visualizer.visualize_feature_maps(image_path, device=device, store_path=img_dir)
else:
visualizer.visualize_feature_maps(image_path, device=device)
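# ---------------------------------------------------------------------------
# Usage sketch (not from the Monk sources): a hedged outline of how the
# auxiliary helpers above are normally reached. It assumes an experiment
# created through Monk's pytorch prototype workflow (the prototype class
# inherits from prototype_aux); the project/experiment names, dataset path and
# backbone name below are hypothetical placeholders.
#
#   from monk.pytorch_prototype import prototype
#
#   gtf = prototype(verbose=1)
#   gtf.Prototype("sample-project", "sample-experiment")
#   gtf.Default(dataset_path="./dataset", model_name="resnet18", num_epochs=5)
#
#   gtf.EDA(check_missing=True, check_corrupt=True)   # dataset sanity checks
#   gtf.Estimate_Train_Time(num_epochs=5)             # rough wall-clock estimate
#   gtf.Freeze_Layers(num=10)                         # freeze the first 10 layers
# ---------------------------------------------------------------------------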
|
PypiClean
|
/djangoldp_needle-0.1.102.tar.gz/djangoldp_needle-0.1.102/djangoldp_needle/models/annotation.py
|
from django.conf import settings
from django.db import models
from djangoldp.models import Model
from djangoldp.permissions import LDPPermissions
from djangoldp.serializers import LDPSerializer, ContainerSerializer
from djangoldp.views import LDPViewSet
from .annotation_target import AnnotationTarget
from . import Tag
from ..permissions import AnnotationPermissions
class AnnotationSerializer(LDPSerializer):
annotationTargetSerializer = None
def to_representation(self, obj):
rep = super().to_representation(obj)
if hasattr(obj, "local_intersection_after"):
rep['local_intersection_after'] = obj.local_intersection_after
if hasattr(obj, "local_intersection_before"):
rep['local_intersection_before'] = obj.local_intersection_before
target = obj.target
if target is not None:
if self.annotationTargetSerializer is None: # Force depth 1 serialization for target only
serializer_generator = LDPViewSet(model=AnnotationTarget,
lookup_field=Model.get_meta(AnnotationTarget, 'lookup_field', 'pk'),
permission_classes=Model.get_meta(AnnotationTarget,
'permission_classes',
[LDPPermissions]),
)
self.annotationTargetSerializer = serializer_generator.build_read_serializer()(context=self.context)
rep['target'] = self.annotationTargetSerializer.to_representation(target)
return rep
class Annotation(Model):
creator = models.ForeignKey(settings.AUTH_USER_MODEL,
related_name='yarn',
null=True,
on_delete=models.SET_NULL
)
creation_date = models.DateTimeField(auto_now_add=True)
annotation_date = models.DateTimeField(null=True)
target = models.ForeignKey(AnnotationTarget, null=True, on_delete=models.SET_NULL, related_name='annotations')
tags = models.ManyToManyField(Tag, blank=True)
description = models.TextField(null=True)
@classmethod
def get_serializer_class(cls):
return AnnotationSerializer
class Meta(Model.Meta):
rdf_type = 'hd:annotation'
#rdf_context = 'https://www.w3.org/ns/anno.jsonld'
authenticated_perms = ['add', 'view']
auto_author = 'creator'
owner_field = 'creator'
owner_perms = ['view', 'delete', 'change']
serializer_fields = ['@id', 'creator', 'creation_date', 'annotation_date', 'target', 'tags', 'description', 'booklets']
permission_classes = [AnnotationPermissions]
|
PypiClean
|
/NNBuilder-0.3.7.tar.gz/NNBuilder-0.3.7/nnbuilder/extensions/saveload.py
|
import os
from basic import *
class SaveLoad(ExtensionBase):
def __init__(self):
ExtensionBase.__init__(self)
self.max = 3
self.freq = 10000
self.save = True
self.load = True
self.epoch = False
self.overwrite = True
self.loadfile = None
def init(self):
self.path = './'+self.config.name+'/save/'
def before_train(self):
if self.load:
self.mainloop_load(self.model, '')
def after_iteration(self):
if (self.train_history['n_iter']) % self.freq == 0:
savename = '{}.npz'.format(self.train_history['n_iter'])
self.mainloop_save(self.model, '', savename, self.max, self.overwrite)
def after_epoch(self):
if self.epoch:
savename = '{}.npz'.format(self.train_history['n_epoch'])
self.mainloop_save(self.model, 'epoch/', savename, self.max, self.overwrite)
def after_train(self):
self.mainloop_save(self.model, 'final/', 'final.npz', self.max, self.overwrite)
def mainloop_save(self, model, path, file, max=1, overwrite=True):
filepath = self.path + path + file
np.savez(filepath,
parameter=SaveLoad.get_params(model),
train_history=SaveLoad.get_train_history(self.train_history),
extensions=SaveLoad.get_extensions_dict(self.extensions),
optimizer=SaveLoad.get_optimizer_dict(self.model.optimizer))
if self.is_log_detail():
self.logger("")
self.logger("Save Sucessfully At File : [{}]".format(filepath), 1)
# delete old files
if overwrite:
filelist = [self.path + path + name for name in os.listdir(self.path + path) if name.endswith('.npz')]
filelist.sort(SaveLoad.compare_timestamp)
for i in range(len(filelist) - max):
os.remove(filelist[i])
if self.is_log_detail():
self.logger("Deleted Old File : [{}]".format(filelist[i]), 1)
if self.is_log_detail():
self.logger("")
def mainloop_load(self, model, file):
self.logger('Loading saved model from checkpoint:', 1, 1)
# prepare loading
if os.path.isfile(file):
file = self.path + file
else:
filelist = [self.path + filename for filename in os.listdir(self.path + file) if filename.endswith('.npz')]
if filelist == []:
self.logger('Checkpoint not found, exit loading', 2)
return
filelist.sort(SaveLoad.compare_timestamp)
file = filelist[-1]
self.logger('Checkpoint found : [{}]'.format(file), 2)
# load params
SaveLoad.load_params(model, file)
SaveLoad.load_train_history(self.train_history, file)
SaveLoad.load_extensions(self.extensions, file)
SaveLoad.load_optimizer(self.model.optimizer, file)
self.logger('Loaded successfully', 2)
self.logger('', 2)
@staticmethod
def get_params(model):
params = OrderedDict()
for name, param in model.params.items():
params[name] = param().get()
return params
@staticmethod
def get_train_history(train_history):
return train_history
@staticmethod
def get_extensions_dict(extensions):
extensions_dict = OrderedDict()
for ex in extensions:
extensions_dict[ex.__class__.__name__] = ex.save_(OrderedDict())
return extensions_dict
@staticmethod
def get_optimizer_dict(optimizer):
optimizer_dict = OrderedDict()
optimizer_dict[optimizer.__class__.__name__] = optimizer.save_(OrderedDict())
return optimizer_dict
@staticmethod
def load_params(model, file):
params = np.load(file)['parameter'].tolist()
for name, param in params.items():
model.params[name]().set(param)
@staticmethod
def load_train_history(train_history, file):
loaded_train_history = np.load(file)['train_history'].tolist()
for key, value in loaded_train_history.items():
train_history[key] = value
@staticmethod
def load_extensions(extensions, file):
loaded_extensions = np.load(file)['extensions'].tolist()
for ex in extensions:
if ex.__class__.__name__ in loaded_extensions:
ex.load_(loaded_extensions[ex.__class__.__name__])
@staticmethod
def load_optimizer(optimizer, file):
loaded_optimizer = np.load(file)['optimizer'].tolist()
if optimizer.__class__.__name__ in loaded_optimizer:
optimizer.load_(loaded_optimizer[optimizer.__class__.__name__])
@staticmethod
def save_file(model, file):
params = SaveLoad.get_params(model)
np.savez(file, parameter=params)
@staticmethod
def compare_timestamp(x, y):
xt = os.stat(x)
yt = os.stat(y)
if xt.st_mtime > yt.st_mtime:
return 1
else:
return -1
saveload = SaveLoad()
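# ---------------------------------------------------------------------------
# Usage sketch (not part of the package): checkpoints written by mainloop_save()
# are plain .npz archives, so they can be inspected outside the training loop.
# The path below is a hypothetical example following the './<name>/save/' layout;
# on newer NumPy versions allow_pickle=True is needed because the stored arrays
# hold Python objects.
#
#   import numpy as np
#   data = np.load('./myexp/save/10000.npz', allow_pickle=True)
#   params = data['parameter'].tolist()        # OrderedDict: name -> weight array
#   history = data['train_history'].tolist()   # training counters (n_iter, ...)
#   extensions = data['extensions'].tolist()   # per-extension state dicts
# ---------------------------------------------------------------------------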
|
PypiClean
|
/django-andablog-3.2.0.tar.gz/django-andablog-3.2.0/HISTORY.rst
|
.. :changelog:
History
-------
3.2.0 (2020-02-21)
------------------
Django 2.2 support, maintaining Django 2.0 support
3.1.0 (2019-04-27)
------------------
Django 2.1 support, drops Django 1.11 (along with Python2.7) support
3.0.0 (2019-03-15)
------------------
Django 2.0 support, drops Django 1.10 support.
* Drops use of the no-longer-maintained django-markitup dependency in favor of django-markupfield.
* Database migrations support the conversion of all entry content and previews.
* Removes live preview in admin. See the django-markupfield project for additional usage.
* Maintains markdown support. Removes Textile support in favor of RST.
**If you previously used Textile you will have to write your own migration.** See the django-markupfield docs for assistance in this.
2.4.0 (2017-06-09)
------------------
New feature: Optional preview_content (markdown) and preview_image fields for direct control of appearance of item in listing.
2.3.0 (2017-06-09)
------------------
Django 1.11 support, drops Django 1.9 support
2.2.0 (2016-09-17)
------------------
Django 1.10 support, drops Django 1.8 support
2.1.1 (2016-01-17)
------------------
Fixes an issue with saving entries in Django 1.9 caused by a previously faulty version of django-markitup.
2.1.0 (2015-12-07)
------------------
Django 1.9 support, drops Django 1.7 support
2.0.0 (2015-10-18)
------------------
Adds support for titles and slugs up to 255 characters in length. **Major: Migration will auto-truncate existing titles that are > 255 characters**
* Thanks Federico (fedejaure) for the fork that inspired the change.
* Thanks Brad Montgomery for design input, fix and feature change.
1.4.2 (2015-09-17)
------------------
Fixed unicode support for models
* Thanks Samuel Mendes for the report and fix.
1.4.1 (2015-09-11)
------------------
Fixed a missing migration bug
* Thanks bradmontgomery for the report and fix.
* CI tests now include a missing migration check.
1.4.0 (2015-05-07)
------------------
Support for Django 1.7.x - Django 1.8.x
* Adds support for Django 1.8
* Drops support for Django 1.6 and therefore south_migrations
1.3.0 (2015-03-10)
------------------
Authors are now able to see 'draft' (unpublished) versions of their blog entries.
Upgraded taggit to address an issue that was locking us to an older Django 1.7 version.
1.2.2 (2014-12-04)
------------------
Fixed a bug where the Django 1.7.x migration for recent DB changes was somehow missed.
1.2.1 (2014-12-02)
------------------
The author is now selectable when editing entries in the admin.
* The list is limited to superusers and anyone with an andablog Entry permission.
* The initial value is the current user.
1.1.1 (2014-12-02)
------------------
Fixed a bug where the tags field was required in the admin.
1.1.0 (2014-12-01)
------------------
Blog entries can now have tags
* The entry model now supports tags by way of the django-taggit package.
* This affects the model only, there are no template examples or tags.
1.0.0 (2014-11-20)
------------------
**Backwards Incompatible with 0.1.0.**
This release includes a rename of the django app package from djangoandablog to andablog to better follow
community conventions. This of course is a very large breaking change, which is why the version is 1.0.
As this is only the second version and we have been out for such a short time, my hope is that few if any people
are using this app yet. If you are, please submit an issue on GitHub and I will try to help you migrate away.
0.1.0 (2014-11-16)
------------------
* First release on PyPI.
|
PypiClean
|
/flufl.bounce-4.0.tar.gz/flufl.bounce-4.0/flufl/bounce/_detectors/postfix.py
|
import re
from enum import Enum
from flufl.bounce.interfaces import (
IBounceDetector, NoFailures, NoTemporaryFailures)
from io import BytesIO
from public import public
from zope.interface import implementer
# Are these heuristics correct or guaranteed?
pcre = re.compile(
b'[ \\t]*the\\s*(bns)?\\s*(postfix|keftamail|smtp_gateway)',
re.IGNORECASE)
rcre = re.compile(b'failure reason:$', re.IGNORECASE)
acre = re.compile(b'<(?P<addr>[^>]*)>:')
REPORT_TYPES = ('multipart/mixed', 'multipart/report')
class ParseState(Enum):
start = 0
salutation_found = 1
def flatten(msg, leaves):
# Give us all the leaf (non-multipart) subparts.
if msg.is_multipart():
for part in msg.get_payload():
flatten(part, leaves)
else:
leaves.append(msg)
def findaddr(msg):
addresses = set()
body = BytesIO(msg.get_payload(decode=True))
state = ParseState.start
for line in body:
# Preserve leading whitespace.
line = line.rstrip()
# Yes, use match() to match at beginning of string.
if state is ParseState.start and (
pcre.match(line) or rcre.match(line)):
# Then...
state = ParseState.salutation_found
elif state is ParseState.salutation_found and line:
mo = acre.search(line)
if mo:
addresses.add(mo.group('addr'))
# Probably a continuation line.
return addresses
@public
@implementer(IBounceDetector)
class Postfix:
"""Parse bounce messages generated by Postfix."""
def process(self, msg):
"""See `IBounceDetector`."""
if msg.get_content_type() not in REPORT_TYPES:
return NoFailures
# We're looking for the plain/text subpart with a Content-Description:
# of 'notification'.
leaves = []
flatten(msg, leaves)
for subpart in leaves:
content_type = subpart.get_content_type()
content_desc = subpart.get('content-description', '').lower()
if content_type == 'text/plain' and content_desc == 'notification':
return NoTemporaryFailures, set(findaddr(subpart))
return NoFailures
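# ---------------------------------------------------------------------------
# Usage sketch (not part of flufl.bounce): the detector expects a parsed
# email.message.Message whose content type is multipart/report or
# multipart/mixed. A hedged example with a hypothetical saved bounce file:
#
#   from email import message_from_binary_file
#
#   with open('bounce.eml', 'rb') as fp:
#       msg = message_from_binary_file(fp)
#   temporary, permanent = Postfix().process(msg)
#   # `permanent` is a set of recipient addresses (bytes) pulled from the
#   # text/plain part whose Content-Description is "notification"; both sets
#   # are empty when nothing matches.
# ---------------------------------------------------------------------------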
|
PypiClean
|
/wmb-0.1.36.tar.gz/wmb-0.1.36/analysis/integration/mc_rna/aibs_tenx/Summary/all_integroup.ipynb
|
```
from ALLCools.mcds import MCDS
from ALLCools.plot import *
from ALLCools.integration import confusion_matrix_clustering
from wmb import cemba, aibs, broad, brain
import pandas as pd
import numpy as np
import anndata
import matplotlib.pyplot as plt
import seaborn as sns
import pathlib
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
# Parameters
dataset = "AIBS_TENX"
mc_annot = cemba.get_mc_annot()
if dataset == 'AIBS_SMART':
rna_annot = aibs.get_smart_annot()
elif dataset == 'AIBS_TENX':
rna_annot = aibs.get_tenx_annot()
else:
rna_annot = broad.get_tenx_annot()
def get_pre_data(integroup, category_key):
#get adata
adata_merge = anndata.read_h5ad(f'../{category_key}/{integroup}/final_with_coords.h5ad')
rna_adata = adata_merge[adata_merge.obs['Modality'] == 'RNA'].copy()
mc_adata = adata_merge[adata_merge.obs['Modality'] == 'mC'].copy()
rna_meta = adata_merge.obs[adata_merge.obs['Modality'] == 'RNA'].copy()
mc_meta = adata_merge.obs[adata_merge.obs['Modality'] == 'mC'].copy()
#add L1 annot
mc_adata.obs['L1_annot'] = mc_annot['L1_annot'].to_pandas()
rna_adata.obs['L1_annot'] = rna_annot['L1_annot'].to_pandas()
#get integroup
rna_integroup = pd.read_csv(f'../{category_key}/{integroup}/rna_integration_group.csv.gz', index_col = 'cell').squeeze()
mc_integroup = pd.read_csv(f'../{category_key}/{integroup}/mc_integration_group.csv.gz', index_col = 'cell').squeeze()
rna_adata.obs[f'{category_key}_InteGroup'] = rna_adata.obs.index.map(rna_integroup)
rna_adata.obs[f'{category_key}_InteGroup'].value_counts()
mc_adata.obs[f'{category_key}_InteGroup'] = mc_adata.obs.index.map(mc_integroup)
mc_adata.obs[f'{category_key}_InteGroup'].value_counts()
return rna_adata, mc_adata
def plot_clustering(category_key):
from ALLCools.plot.color import level_one_palette
inte_group_palette = level_one_palette(
pd.concat([rna_adata.obs[f'{category_key}_InteGroup'], mc_adata.obs[f'{category_key}_InteGroup']]),
palette='tab20'
)
fig, axes = plt.subplots(figsize=(10, 15), ncols=2, nrows=3, dpi=200)
ax = axes[0, 0]
categorical_scatter(ax=ax,
data=rna_adata,
coord_base='tsne',
hue=f'{category_key}_InteGroup',
text_anno=f'{category_key}_InteGroup',
palette=inte_group_palette,
max_points=None)
ax.set(title='RNA Inte. Group')
ax = axes[0, 1]
categorical_scatter(ax=ax,
data=mc_adata,
coord_base='tsne',
hue=f'{category_key}_InteGroup',
text_anno=f'{category_key}_InteGroup',
palette=inte_group_palette,
max_points=None)
ax.set(title='mC Inte. Group')
ax = axes[1, 0]
categorical_scatter(ax=ax,
data=rna_adata,
coord_base='tsne',
palette='tab20',
hue='L1_annot',
text_anno='L1_annot',
max_points=None)
ax.set(title=f'RNA L1 annot')
ax = axes[1, 1]
categorical_scatter(ax=ax,
data=mc_adata,
coord_base='tsne',
palette='tab20',
hue='L1_annot',
text_anno='L1_annot',
max_points=None)
ax.set(title=f'mC L1 annot')
ax = axes[2, 0]
categorical_scatter(ax=ax,
data=rna_adata,
coord_base='tsne',
hue='DissectionRegion',
text_anno='DissectionRegion',
palette='tab20',
max_points=None)
ax.set(title='RNA DissectionRegion')
ax = axes[2, 1]
categorical_scatter(ax=ax,
data=mc_adata,
coord_base='tsne',
palette='tab20',
hue='DissectionRegion',
text_anno='DissectionRegion',
max_points=None)
ax.set(title=f'mC DissectionRegion')
```
# plot
```
# by changing the category_key, you can see the clustering results at each level
category_key = "L4"
integroups = []
for i in pathlib.Path(f'../{category_key}').glob('InteGroup*'):
integroups.append(str(i).split('/')[-1])
with PdfPages(f'{category_key}_clusters.pdf') as pdf:
for integroup in integroups:
rna_adata, mc_adata = get_pre_data(integroup, category_key)
plot_clustering(category_key)
pdf.savefig() # saves the current figure into a pdf page
plt.close()
```
|
PypiClean
|
/django_voting-1.1.0-py3-none-any.whl/django_voting-1.1.0.dist-info/LICENSE.rst
|
django-voting
-------------
Copyright (c) 2007, Jonathan Buchanan
Copyright (c) 2012, Jannis Leidel
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
CheeseRater
-----------
Copyright (c) 2007, Jacob Kaplan-Moss
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of Django nor the names of its contributors may be used
to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
PypiClean
|
/windows-rbs-parser-0.0.1.tar.gz/windows-rbs-parser-0.0.1/README.md
|
Windows RBS Parser
==================
Parses Windows diagnostics RBS files.
Currently supported are version 3 and 5 files (starting with UTCRBES3 and UTCRBES5).
Not all parts of the file structure are currently known, but the content of each
item can be extracted.
Example
-------
```python
from rbs_parser import RBSFile
with RBSFile("events10.rbs") as rbs:
for item in rbs:
print('#####################')
print("Offset: 0x{0:x} {0}".format(item.offset))
print("Size: 0x{0:x} {0}".format(item.size))
print("Data:", item.uncompressed.decode())
```
Dependencies
------------
Tested and developed with Python 3.
Depends on my [helperlib](https://pypi.python.org/pypi/helperlib/0.4.1).
|
PypiClean
|
/aliyun-log-python-sdk-0.8.8.tar.gz/aliyun-log-python-sdk-0.8.8/aliyun/log/ext/jupyter_magic.py
|
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic)
import pandas as pd
from IPython.display import display, clear_output
import re, time, threading, datetime
from pandas import DataFrame
from aliyun.log import LogClient, LogException
from concurrent.futures import ThreadPoolExecutor as PoolExecutor, as_completed
import multiprocessing
import six
import six.moves.configparser as configparser
import os
import sys
def can_use_widgets():
""" Expanded from from http://stackoverflow.com/a/34092072/1958900
"""
if 'IPython' not in sys.modules:
# IPython hasn't been imported, definitely not
return False
from IPython import get_ipython
# check for `kernel` attribute on the IPython instance
if getattr(get_ipython(), 'kernel', None) is None:
return False
try:
import ipywidgets as ipy
import traitlets
except ImportError:
return False
if int(ipy.__version__.split('.')[0]) < 6:
print('WARNING: widgets require ipywidgets 6.0 or later')
return False
return True
__CAN_USE_WIDGET__ = can_use_widgets()
CLI_CONFIG_FILENAME = "%s/.aliyunlogcli" % os.path.expanduser('~')
MAGIC_SECTION = "__jupyter_magic__"
DEFAULT_DF_NAME = 'log_df'
DEFAULT_TMP_DF_NAME = 'log_df_part'
result = None
detail = None
def _load_config():
global g_default_region, g_default_ak_id, g_default_ak_key, g_default_project, g_default_logstore
def _get_section_option(config, section_name, option_name, default=None):
if six.PY3:
return config.get(section_name, option_name, fallback=default)
else:
return config.get(section_name, option_name) if config.has_option(section_name, option_name) else default
config = configparser.ConfigParser()
config.read(CLI_CONFIG_FILENAME)
g_default_region = _get_section_option(config, MAGIC_SECTION, "region-endpoint", "")
g_default_ak_id = _get_section_option(config, MAGIC_SECTION, "access-id", "")
g_default_ak_key = _get_section_option(config, MAGIC_SECTION, "access-key", "")
g_default_project = _get_section_option(config, MAGIC_SECTION, "project", "")
g_default_logstore = _get_section_option(config, MAGIC_SECTION, "logstore", "")
def _save_config(region, ak_id, ak_key, project, logstore):
global g_default_region, g_default_ak_id, g_default_ak_key, g_default_project, g_default_logstore
config = configparser.ConfigParser()
config.read(CLI_CONFIG_FILENAME)
if not config.has_section(MAGIC_SECTION):
config.add_section(MAGIC_SECTION)
config.set(MAGIC_SECTION, "region-endpoint", region)
config.set(MAGIC_SECTION, "access-id", ak_id)
config.set(MAGIC_SECTION, "access-key", ak_key)
config.set(MAGIC_SECTION, "project", project)
config.set(MAGIC_SECTION, "logstore", logstore)
# save to disk
with open(CLI_CONFIG_FILENAME, 'w') as configfile:
config.write(configfile)
# save to memory
g_default_region, g_default_ak_id, g_default_ak_key, g_default_project, g_default_logstore = region, ak_id, ak_key, project, logstore
_load_config()
def parse_timestamp(tm):
return datetime.datetime.fromtimestamp(int(tm)).isoformat()
@magics_class
class MyMagics(Magics):
logclient = None
@staticmethod
def pull_worker(client, project_name, logstore_name, from_time, to_time, shard_id):
res = client.pull_log(project_name, logstore_name, shard_id, from_time, to_time)
result = []
next_cursor = 'as from_time configured'
try:
for data in res:
result.extend(data.get_flatten_logs_json(decode_bytes=True))
next_cursor = data.next_cursor
except Exception as ex:
print("dump log failed: task info {0} failed to copy data to target, next cursor: {1} detail: {2}".
format(
(project_name, logstore_name, shard_id, from_time, to_time),
next_cursor, ex))
return result
@staticmethod
def pull_log_all(client, project_name, logstore_name, from_time, to_time):
cpu_count = multiprocessing.cpu_count() * 2
shards = client.list_shards(project_name, logstore_name).get_shards_info()
current_shards = [str(shard['shardID']) for shard in shards]
target_shards = current_shards
worker_size = min(cpu_count, len(target_shards))
result = []
with PoolExecutor(max_workers=worker_size) as pool:
futures = [pool.submit(MyMagics.pull_worker, client, project_name, logstore_name, from_time, to_time,
shard_id=shard)
for shard in target_shards]
try:
for future in as_completed(futures):
data = future.result()
result.extend(data)
return True, result
except KeyboardInterrupt as ex:
clear_output()
print(u"正在取消当前获取……")
for future in futures:
if not future.done():
future.cancel()
return False, result
def client(self, reset=False):
if self.logclient is None or reset:
self.logclient = LogClient(g_default_region, g_default_ak_id, g_default_ak_key)
return self.logclient
def verify_sls_connection(self, region, ak_id, ak_key, project, logstore):
logclient = LogClient(region, ak_id, ak_key)
try:
res = logclient.get_logstore(project, logstore)
return True, res.body
except LogException as ex:
return False, str(ex)
except Exception as ex:
return False, str(ex)
@staticmethod
def _get_log_param(line, cell):
to_time = time.time()
from_time = to_time - 60*15
if cell is None:
query = line
else:
line = line.strip()
ret = [line, '']
if '~' in line:
ret = line.split('~')
elif '-' in line:
ret = line.split('-')
from_time = ret[0].strip() or from_time
to_time = ret[1].strip() or to_time if len(ret) > 1 else to_time
query = cell
return from_time, to_time, query
def log_imp(self, line, cell=None):
from_time, to_time, query = self._get_log_param(line, cell)
print(u"根据查询统计语句,从日志服务查询数据(时间范围:{0} ~ {1}),结果将保存到变量{2}中,请稍等……".format(from_time, to_time, DEFAULT_DF_NAME))
res = self.client().get_log_all(g_default_project, g_default_logstore, from_time=from_time, to_time=to_time,
query=query)
is_complete = True
logs = []
try:
for data in res:
if not data.is_completed():
is_complete = False
logs.extend(data.body)
except Exception as ex:
print(ex)
return
df1 = pd.DataFrame(logs)
is_stat = re.match(r'.+\|\s+select\s.+|^\s*select\s+.+', query.lower(), re.I) is not None
if is_stat:
if "__time__" in df1:
del df1["__time__"]
if "__source__" in df1:
del df1["__source__"]
else:
# change time to date time
if "__time__" in df1:
df1['__time__'] = pd.to_datetime(df1["__time__"].apply(parse_timestamp))
df1.set_index('__time__', inplace=True)
clear_output()
if is_complete:
print(u"变量名:{0}".format(DEFAULT_DF_NAME))
elif is_stat:
print(u"变量名:{0},结果非完整精确结果。".format(DEFAULT_DF_NAME))
else:
print(u"变量名:{0},部分结果非完整精确结果。".format(DEFAULT_DF_NAME))
self.shell.user_ns[DEFAULT_DF_NAME] = df1
return df1
def fetch_log_imp(self, line):
from_time, to_time, _ = self._get_log_param(line, "")
print(u"从日志服务拉取数据(日志插入时间:{0} ~ {1}),结果将保存到变量{2}中,请稍等……".format(from_time, to_time, DEFAULT_DF_NAME))
result, logs = self.pull_log_all(self.client(), g_default_project, g_default_logstore, from_time, to_time)
df1 = pd.DataFrame(logs)
# change time to date time
if "__time__" in df1:
df1['__time__'] = pd.to_datetime(df1["__time__"].apply(parse_timestamp))
df1.set_index('__time__', inplace=True)
clear_output()
if not result:
print(u"获取被取消,显示部分结果,变量名:{0}".format(DEFAULT_TMP_DF_NAME))
self.shell.user_ns[DEFAULT_TMP_DF_NAME] = df1
else:
print(u"变量名:{0}".format(DEFAULT_DF_NAME))
self.shell.user_ns[DEFAULT_DF_NAME] = df1
return df1
@line_cell_magic
def log(self, line, cell=None):
return self.log_imp(line, cell)
@line_cell_magic
def fetch(self, line, cell=None):
return self.fetch_log_imp(line)
@line_magic
def manage_log(self, line):
line = line or ""
if line or not __CAN_USE_WIDGET__:
params = line.split(" ")
if len(params) == 5:
print(u"连接中...")
endpoint, key_id, key_val, project, logstore = params
result, detail = self.verify_sls_connection(endpoint, key_id, key_val, project, logstore)
if result:
clear_output()
print(u"连接成功.")
_save_config(endpoint, key_id, key_val, project, logstore)
self.client(reset=True)
else:
print(detail)
else:
print(u"参数错误,请使用GUI配置(无参)或遵循格式:%manage_log <endpoint> <ak_id> <ak_key> <project> <logstore>")
return
import ipywidgets as widgets
w_1 = widgets.ToggleButtons( options=[u'Basic settings', u"Advanced settings"] )
w_endpoint = widgets.Text( description=u'Endpoint', value=g_default_region)
w_key_id = widgets.Password( description=u'AccessKey ID', value=g_default_ak_id)
w_key_val = widgets.Password( description=u'AccessKey Secret', value=g_default_ak_key)
w_project = widgets.Text( description=u'Default project', value=g_default_project)
w_logstore = widgets.Text( description=u'Default logstore', value=g_default_logstore)
w_confirm = widgets.Button(
description=u'Modify' if g_default_region else u'Confirm',
button_style='info',
icon='confirm'
)
w_result = widgets.Label(value='')
hide_layout = widgets.Layout(height="0px")
show_layout = widgets.Layout(height='auto')
progress = widgets.FloatProgress(description="", value=0.0, min=0.0, max=1.0, layout=hide_layout)
def work(progress):
global result
total = 100
for i in range(total):
time.sleep(0.2)
if result is None:
progress.value = float(i+1)/total
else:
progress.value = 100
progress.description = u"完成" if result else u"失败"
break
def on_button_clicked(b):
global result, detail
progress.layout = show_layout
progress.description = u"连接中..."
progress.value = 0
w_result.value = ""
result = None
detail = ""
thread = threading.Thread(target=work, args=(progress,))
thread.start()
result, detail = self.verify_sls_connection(w_endpoint.value, w_key_id.value, w_key_val.value, w_project.value, w_logstore.value)
if result:
w_result.value = u"连接成功."
_save_config(w_endpoint.value, w_key_id.value, w_key_val.value, w_project.value, w_logstore.value)
self.client(reset=True)
else:
w_result.value = str(detail)
w_confirm.on_click(on_button_clicked)
p = widgets.VBox(children=[w_1, w_endpoint, w_key_id, w_key_val, w_project, w_logstore, w_confirm, progress, w_result])
return p
def df_html(df1):
if not __CAN_USE_WIDGET__:
return df1._repr_html_()
try:
import odps
if len(df1.columns) > 1:
df2 = odps.DataFrame(df1)
return df2._repr_html_()
return df1._repr_html_()
except Exception as ex:
print(ex)
return df1._repr_html_()
def load_ipython_extension(ipython):
ipython.register_magics(MyMagics)
html_formatter = ipython.display_formatter.formatters['text/html']
html_formatter.for_type(DataFrame, df_html)
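# ---------------------------------------------------------------------------
# Usage sketch (not part of the SDK sources): the magics above are intended to
# be used from a Jupyter notebook. Endpoint, credentials, project and logstore
# are placeholders, and the time strings are only an assumed readable format.
#
#   %load_ext aliyun.log.ext.jupyter_magic
#   %manage_log <endpoint> <ak_id> <ak_key> <project> <logstore>
#
#   # query/analysis over the last 15 minutes; the result lands in `log_df`
#   %%log
#   * | select status, count(1) as cnt group by status
#
#   # pull raw logs for an explicit "from ~ to" time range into `log_df`
#   %fetch 2021-01-01 10:00:00 ~ 2021-01-01 10:10:00
# ---------------------------------------------------------------------------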
|
PypiClean
|
/napari-tracing-1.0.1.tar.gz/napari-tracing-1.0.1/src/napari_tracing/_trace_saver.py
|
import csv
from typing import List
from ._segment_model import Segment
class TraceSaver:
def __init__(self, filename: str, segments: List[Segment]) -> None:
self.filename = filename
self.segments = segments
def save_trace(self):
with open(self.filename, "w") as f:
writer = csv.writer(f)
column_headers = ["idx", "x", "y", "z", "prevIdx"]
writer.writerow(column_headers)
for row in self._get_row_values_for_saving_trace():
writer.writerow(row)
def _get_row_values_for_saving_trace(self) -> List:
rows = []
idx = 0
merged_segments = self._merge_segments()
for segment in merged_segments:
prevIdx = -1
result = segment.tracing_result
for point in result:
if len(point) == 2: # (y, x)
rows.append(
# column order matches the CSV header: idx, x, y, z (empty for 2D), prevIdx
[
idx,
point[1],
point[0],
"",
prevIdx,
] # idx, x, y, z, prevIdx
)
else: # (z, y, x)
rows.append(
# column order matches the CSV header: idx, x, y, z, prevIdx
[
idx,
point[2],
point[1],
point[0],
prevIdx,
] # idx, x, y, z, prevIdx
)
idx += 1
prevIdx += 1
return rows
def _merge_segments(self) -> List:
"""
1. Merges segments that form a continuous tracing
(like A->B, B->C merged into A->C)
2. otherwise returns disjoint segments
(like A->B, B->C, X->Y, Y->Z merged into A->C is merged,
X->Y remains same since that is disjoint)
"""
if len(self.segments) == 1:
return self.segments
merged_segments = []
# we can assume that our list is always sorted
# i,e, (1, 5), (5, 6) not (5, 6), (1, 5)
prev_segment = self.segments[0]
prev_start = prev_segment.start_point
prev_goal = prev_segment.goal_point
merged_segments.append(prev_segment)
for curr_segment in self.segments[1:]:
curr_start = curr_segment.start_point
curr_goal = curr_segment.goal_point
if prev_goal == curr_start:
extended_tracing_result = (
prev_segment.tracing_result + curr_segment.tracing_result
)
new_segment = Segment(
segment_ID=prev_segment.segment_ID,
start_point=prev_start,
goal_point=curr_goal,
tracing_result=extended_tracing_result,
)
merged_segments[
-1
] = new_segment # we merged a segment into the last segment
prev_segment = new_segment
prev_start = prev_segment.start_point
prev_goal = prev_segment.goal_point
else:
# no merge, move ahead
merged_segments.append(curr_segment)
prev_segment = curr_segment
prev_start = curr_start
prev_goal = curr_goal
return merged_segments
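# ---------------------------------------------------------------------------
# Usage sketch (not part of the plugin): two contiguous segments A->B and B->C
# are merged into a single A->C trace by _merge_segments() before writing the
# CSV. Segment IDs and points are hypothetical; 2D points are (y, x) tuples,
# as assumed by _get_row_values_for_saving_trace().
if __name__ == "__main__":
    seg_ab = Segment(
        segment_ID=0,
        start_point=(0, 0),
        goal_point=(2, 2),
        tracing_result=[(0, 0), (1, 1), (2, 2)],
    )
    seg_bc = Segment(
        segment_ID=1,
        start_point=(2, 2),
        goal_point=(4, 4),
        tracing_result=[(2, 2), (3, 3), (4, 4)],
    )
    # Writes rows idx,x,y,z,prevIdx to trace.csv; z stays empty for 2D points.
    TraceSaver("trace.csv", [seg_ab, seg_bc]).save_trace()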
|
PypiClean
|
/django-static-ace-builds-1.4.12.0.tar.gz/django-static-ace-builds-1.4.12.0/django_static_ace_builds/static/ace-builds/mode-latex.js
|
ace.define("ace/mode/latex_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"],function(e,t,n){"use strict";var r=e("../lib/oop"),i=e("./text_highlight_rules").TextHighlightRules,s=function(){this.$rules={start:[{token:"comment",regex:"%.*$"},{token:["keyword","lparen","variable.parameter","rparen","lparen","storage.type","rparen"],regex:"(\\\\(?:documentclass|usepackage|input))(?:(\\[)([^\\]]*)(\\]))?({)([^}]*)(})"},{token:["keyword","lparen","variable.parameter","rparen"],regex:"(\\\\(?:label|v?ref|cite(?:[^{]*)))(?:({)([^}]*)(}))?"},{token:["storage.type","lparen","variable.parameter","rparen"],regex:"(\\\\begin)({)(verbatim)(})",next:"verbatim"},{token:["storage.type","lparen","variable.parameter","rparen"],regex:"(\\\\begin)({)(lstlisting)(})",next:"lstlisting"},{token:["storage.type","lparen","variable.parameter","rparen"],regex:"(\\\\(?:begin|end))({)([\\w*]*)(})"},{token:"storage.type",regex:/\\verb\b\*?/,next:[{token:["keyword.operator","string","keyword.operator"],regex:"(.)(.*?)(\\1|$)|",next:"start"}]},{token:"storage.type",regex:"\\\\[a-zA-Z]+"},{token:"lparen",regex:"[[({]"},{token:"rparen",regex:"[\\])}]"},{token:"constant.character.escape",regex:"\\\\[^a-zA-Z]?"},{token:"string",regex:"\\${1,2}",next:"equation"}],equation:[{token:"comment",regex:"%.*$"},{token:"string",regex:"\\${1,2}",next:"start"},{token:"constant.character.escape",regex:"\\\\(?:[^a-zA-Z]|[a-zA-Z]+)"},{token:"error",regex:"^\\s*$",next:"start"},{defaultToken:"string"}],verbatim:[{token:["storage.type","lparen","variable.parameter","rparen"],regex:"(\\\\end)({)(verbatim)(})",next:"start"},{defaultToken:"text"}],lstlisting:[{token:["storage.type","lparen","variable.parameter","rparen"],regex:"(\\\\end)({)(lstlisting)(})",next:"start"},{defaultToken:"text"}]},this.normalizeRules()};r.inherits(s,i),t.LatexHighlightRules=s}),ace.define("ace/mode/folding/latex",["require","exports","module","ace/lib/oop","ace/mode/folding/fold_mode","ace/range","ace/token_iterator"],function(e,t,n){"use strict";var r=e("../../lib/oop"),i=e("./fold_mode").FoldMode,s=e("../../range").Range,o=e("../../token_iterator").TokenIterator,u={"\\subparagraph":1,"\\paragraph":2,"\\subsubsubsection":3,"\\subsubsection":4,"\\subsection":5,"\\section":6,"\\chapter":7,"\\part":8,"\\begin":9,"\\end":10},a=t.FoldMode=function(){};r.inherits(a,i),function(){this.foldingStartMarker=/^\s*\\(begin)|\s*\\(part|chapter|(?:sub)*(?:section|paragraph))\b|{\s*$/,this.foldingStopMarker=/^\s*\\(end)\b|^\s*}/,this.getFoldWidgetRange=function(e,t,n){var r=e.doc.getLine(n),i=this.foldingStartMarker.exec(r);if(i)return i[1]?this.latexBlock(e,n,i[0].length-1):i[2]?this.latexSection(e,n,i[0].length-1):this.openingBracketBlock(e,"{",n,i.index);var i=this.foldingStopMarker.exec(r);if(i)return i[1]?this.latexBlock(e,n,i[0].length-1):this.closingBracketBlock(e,"}",n,i.index+i[0].length)},this.latexBlock=function(e,t,n,r){var i={"\\begin":1,"\\end":-1},u=new o(e,t,n),a=u.getCurrentToken();if(!a||a.type!="storage.type"&&a.type!="constant.character.escape")return;var f=a.value,l=i[f],c=function(){var e=u.stepForward(),t=e.type=="lparen"?u.stepForward().value:"";return l===-1&&(u.stepBackward(),t&&u.stepBackward()),t},h=[c()],p=l===-1?u.getCurrentTokenColumn():e.getLine(t).length,d=t;u.step=l===-1?u.stepBackward:u.stepForward;while(a=u.step()){if(!a||a.type!="storage.type"&&a.type!="constant.character.escape")continue;var v=i[a.value];if(!v)continue;var m=c();if(v===l)h.unshift(m);else 
if(h.shift()!==m||!h.length)break}if(h.length)return;l==1&&(u.stepBackward(),u.stepBackward());if(r)return u.getCurrentTokenRange();var t=u.getCurrentTokenRow();return l===-1?new s(t,e.getLine(t).length,d,p):new s(d,p,t,u.getCurrentTokenColumn())},this.latexSection=function(e,t,n){var r=new o(e,t,n),i=r.getCurrentToken();if(!i||i.type!="storage.type")return;var a=u[i.value]||0,f=0,l=t;while(i=r.stepForward()){if(i.type!=="storage.type")continue;var c=u[i.value]||0;if(c>=9){f||(l=r.getCurrentTokenRow()-1),f+=c==9?1:-1;if(f<0)break}else if(c>=a)break}f||(l=r.getCurrentTokenRow()-1);while(l>t&&!/\S/.test(e.getLine(l)))l--;return new s(t,e.getLine(t).length,l,e.getLine(l).length)}}.call(a.prototype)}),ace.define("ace/mode/latex",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/latex_highlight_rules","ace/mode/behaviour/cstyle","ace/mode/folding/latex"],function(e,t,n){"use strict";var r=e("../lib/oop"),i=e("./text").Mode,s=e("./latex_highlight_rules").LatexHighlightRules,o=e("./behaviour/cstyle").CstyleBehaviour,u=e("./folding/latex").FoldMode,a=function(){this.HighlightRules=s,this.foldingRules=new u,this.$behaviour=new o({braces:!0})};r.inherits(a,i),function(){this.type="text",this.lineCommentStart="%",this.$id="ace/mode/latex",this.getMatching=function(e,t,n){t==undefined&&(t=e.selection.lead),typeof t=="object"&&(n=t.column,t=t.row);var r=e.getTokenAt(t,n);if(!r)return;if(r.value=="\\begin"||r.value=="\\end")return this.foldingRules.latexBlock(e,t,n,!0)}}.call(a.prototype),t.Mode=a}); (function() {
ace.require(["ace/mode/latex"], function(m) {
if (typeof module == "object" && typeof exports == "object" && module) {
module.exports = m;
}
});
})();
|
PypiClean
|
/igorpy-0.0.5-py3-none-any.whl/pygor3/IgorIO.py
|
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import numpy as np
import pandas as pd
import xarray as xr
import networkx as nx
from .IgorDictionaries import *
from .IgorDefaults import *
from .IgorSqliteDB import *
from .IgorSqliteDBBestScenarios import *
### load IGoR sequences database
from pygor3 import rcParams
import subprocess
from .utils import *
from .IgorSQL import *
import collections
### GENERIC FUNCTIONS
# Generation of label for a simple identification of genomic template sequence.
def genLabel(strName):
aaa = strName.split("|")
if len(aaa) > 1 :
return aaa[1]
else:
return strName
v_genLabel = np.vectorize(genLabel)
def command_from_dict_options(dicto:dict):
""" Return igor options from dictionary"""
cmd = ''
print()
for key in dicto.keys():
if dicto[key]['active']:
cmd = cmd + " " + key + " " + dicto[key]['value']
if dicto[key]['dict_options'] is not None:
#print(key, dicto[key]['dict_options'])
cmd = cmd + " " + command_from_dict_options(dicto[key]['dict_options'])
return cmd
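# Commented sketch of the option-dictionary layout expected by command_from_dict_options.
# The keys and values below are hypothetical; the real dictionaries (igor_align_dict_options,
# igor_evaluate_dict_options, ...) are defined in IgorDictionaries.
#
#   opts = {
#       "--scenarios": {"active": True,  "value": "10", "dict_options": None},
#       "--Pgen":      {"active": False, "value": "",   "dict_options": None},
#   }
#   command_from_dict_options(opts)   # -> " --scenarios 10"  (note the leading space)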
def run_command(cmd):
"""from http://blog.kagesenshi.org/2008/02/teeing-python-subprocesspopen-output.html
"""
# print(cmd)
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = []
while True:
line = p.stdout.readline()
line = line.decode("utf-8")
stdout.append(line)
#print (line, end='')
if line == '' and p.poll() is not None:
break
return ''.join(stdout)
def execute_command_generator(cmd):
popen = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, universal_newlines=True)
for stdout_line in iter(popen.stdout.readline, ""):
yield stdout_line
popen.stdout.close()
return_code = popen.wait()
if return_code:
raise subprocess.CalledProcessError(return_code, cmd)
def run_command_print(cmd):
std_output_str = ""
for path in execute_command_generator(cmd):
print(path, end="")
std_output_str = std_output_str + path  # accumulate the streamed output
return std_output_str
def run_command_no_output(cmd):
"""from http://blog.kagesenshi.org/2008/02/teeing-python-subprocesspopen-output.html
"""
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
return p
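# Commented usage sketch of the shell helpers above (the commands are illustrative only):
#
#   out = run_command("igor -getdatadir")          # capture stdout as one string
#   run_command_print("igor -run_demo")            # stream stdout to the console while capturing it
#   p = run_command_no_output("rm -r foo_output")  # fire-and-forget, returns the Popen handle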
# FIXME: WOULD IT BE BETTER TO USE PROPERTIES FOR VARIABLES LIKE igor_batchname AND UPDATE THE DEPENDENT FILENAMES AUTOMATICALLY?
class IgorTask:
"""
This class encapsulates the input parameters and output files of an IGoR run.
"""
def __init__(self, igor_exec_path=None, igor_datadir=None,
igor_models_root_path=None, igor_species=None, igor_chain=None,
igor_model_dir_path=None,
igor_path_ref_genome=None, fln_genomicVs=None, fln_genomicDs=None, fln_genomicJs=None, fln_V_gene_CDR3_anchors=None, fln_J_gene_CDR3_anchors=None,
igor_wd=None, igor_batchname=None,
igor_model_parms_file=None, igor_model_marginals_file=None,
igor_read_seqs=None,
igor_threads=None,
igor_fln_indexed_sequences=None,
igor_fln_indexed_CDR3=None,
igor_fln_align_V_alignments=None,
igor_fln_align_D_alignments=None,
igor_fln_align_J_alignments=None,
igor_fln_infer_final_marginals=None,
igor_fln_infer_final_parms=None,
igor_fln_evaluate_final_marginals=None,
igor_fln_evaluate_final_parms=None,
igor_fln_output_pgen=None,
igor_fln_output_scenarios=None,
igor_fln_output_coverage=None,
igor_fln_generated_realizations_werr=None,
igor_fln_generated_seqs_werr=None,
igor_fln_generation_info=None,
igor_fln_db=None
):
# To execute IGoR externally
self.igor_exec_path = igor_exec_path
self.igor_datadir = igor_datadir
# To load default models and genomic templates
self.igor_models_root_path = igor_models_root_path # igor models paths where all species and chains are stored.
self.igor_species = igor_species
self.igor_chain = igor_chain
self.igor_model_dir_path = igor_model_dir_path
# genome references alignments
self.igor_path_ref_genome = igor_path_ref_genome
self.fln_genomicVs = fln_genomicVs
self.fln_genomicDs = fln_genomicDs
self.fln_genomicJs = fln_genomicJs
self.fln_V_gene_CDR3_anchors = fln_V_gene_CDR3_anchors
self.fln_J_gene_CDR3_anchors = fln_J_gene_CDR3_anchors
self.igor_wd = igor_wd
self.igor_batchname = igor_batchname
self.igor_model_parms_file = igor_model_parms_file
self.igor_model_marginals_file = igor_model_marginals_file
self.igor_read_seqs = igor_read_seqs
self.igor_threads = igor_threads
# read
self.igor_fln_indexed_sequences = igor_fln_indexed_sequences
# aligns
self.igor_fln_indexed_CDR3 = igor_fln_indexed_CDR3
self.igor_fln_align_V_alignments = igor_fln_align_V_alignments
self.igor_fln_align_J_alignments = igor_fln_align_J_alignments
self.igor_fln_align_D_alignments = igor_fln_align_D_alignments
# inference
self.igor_fln_infer_final_marginals = igor_fln_infer_final_marginals
self.igor_fln_infer_final_parms = igor_fln_infer_final_parms
# evaluate
self.igor_fln_evaluate_final_marginals = igor_fln_evaluate_final_marginals
self.igor_fln_evaluate_final_parms = igor_fln_evaluate_final_parms
# output
self.igor_fln_output_pgen = igor_fln_output_pgen
self.igor_fln_output_scenarios = igor_fln_output_scenarios
self.igor_fln_output_coverage = igor_fln_output_coverage
# TODO: NO DATABASE FIELDS FOR THESE FILES YET
self.igor_fln_generated_realizations_werr = igor_fln_generated_realizations_werr
self.igor_fln_generated_seqs_werr = igor_fln_generated_seqs_werr
self.igor_fln_generation_info = igor_fln_generation_info
self.igor_fln_db = igor_fln_db
# TODO: experimental dictionary to check status of igor batch associated files
# almost each of these files correspond to a sql table
self.batch_data = igor_batch_dict
self.igor_db = IgorSqliteDB()
self.igor_db_bs = None
self.b_read_seqs = False
self.b_align = False
self.b_infer = False
self.b_evaluate = False
self.b_generate = False
self.mdl = IgorModel()
self.genomes = IgorRefGenome() #{ 'V' : IgorRefGenome(), 'D' : IgorRefGenome(), 'J' : IgorRefGenome() }
self.igor_align_dict_options = igor_align_dict_options
self.igor_evaluate_dict_options = igor_evaluate_dict_options
self.igor_output_dict_options = igor_output_dict_options
try:
if self.igor_batchname is None:
self.gen_random_batchname()
except Exception as e:
print(e)
raise e
try:
if self.igor_wd is None:
self.gen_igor_wd()
except Exception as e:
print(e)
raise e
try:
if self.igor_exec_path is None:
p = subprocess.Popen("which igor", shell=True, stdout=subprocess.PIPE)
line = p.stdout.readline()
self.igor_exec_path = line.decode("utf-8").replace('\n', '')
except Exception as e:
print(e)
raise e
try:
if self.igor_datadir is None:
self.run_datadir()
except Exception as e:
print(e)
raise e
def __repr__(self):
str_repr = ""
for key, value in self.to_dict().items():
str_repr = str_repr + str(key) +" = "+ str(value) + "\n"
return str_repr
def to_dict(self):
dicto = {
"igor_species": self.igor_species,
"igor_chain": self.igor_chain,
"igor_model_dir_path": self.igor_model_dir_path,
"igor_path_ref_genome": self.igor_path_ref_genome,
"fln_genomicVs": self.fln_genomicVs,
"fln_genomicDs": self.fln_genomicDs,
"fln_genomicJs": self.fln_genomicJs,
"fln_V_gene_CDR3_anchors": self.fln_V_gene_CDR3_anchors,
"fln_J_gene_CDR3_anchors": self.fln_J_gene_CDR3_anchors,
"igor_wd": self.igor_wd,
"igor_batchname": self.igor_batchname,
"igor_model_parms_file": self.igor_model_parms_file,
"igor_model_marginals_file": self.igor_model_marginals_file,
"igor_read_seqs": self.igor_read_seqs,
"igor_threads": self.igor_threads,
"igor_fln_indexed_sequences": self.igor_fln_indexed_sequences,
"igor_fln_indexed_CDR3": self.igor_fln_indexed_CDR3,
"igor_fln_align_V_alignments": self.igor_fln_align_V_alignments,
"igor_fln_align_J_alignments": self.igor_fln_align_J_alignments,
"igor_fln_align_D_alignments": self.igor_fln_align_D_alignments,
"igor_fln_infer_final_marginals": self.igor_fln_infer_final_marginals,
"igor_fln_infer_final_parms": self.igor_fln_infer_final_parms,
"igor_fln_evaluate_final_marginals": self.igor_fln_evaluate_final_marginals,
"igor_fln_evaluate_final_parms": self.igor_fln_evaluate_final_parms,
"igor_fln_output_pgen": self.igor_fln_output_pgen,
"igor_fln_output_scenarios": self.igor_fln_output_scenarios,
"igor_fln_output_coverage": self.igor_fln_output_coverage,
"igor_fln_generated_realizations_werr": self.igor_fln_generated_realizations_werr,
"igor_fln_generated_seqs_werr": self.igor_fln_generated_seqs_werr,
"igor_fln_generation_info": self.igor_fln_generation_info,
"igor_fln_db": self.igor_fln_db,
"b_read_seqs": self.b_read_seqs,
"b_align" : self.b_align,
"b_infer": self.b_infer,
"b_evaluate": self.b_evaluate,
"b_generate": self.b_generate
}
return dicto
def load_IgorRefGenome(self, igor_path_ref_genome=None):
# FIXME: THERE ARE 2 OPTIONS HERE:
# 1. From template directory self.igor_path_ref_genome
if igor_path_ref_genome is not None:
self.igor_path_ref_genome = igor_path_ref_genome
self.genomes = IgorRefGenome.load_from_path(self.igor_path_ref_genome)
# TODO: FIND A BETTER WAY TO SYNCHRONIZE NAMES (FORWARD AND BACKWARD)
self.fln_genomicVs = self.genomes.fln_genomicVs
self.fln_genomicDs = self.genomes.fln_genomicDs
self.fln_genomicJs = self.genomes.fln_genomicJs
self.fln_V_gene_CDR3_anchors = self.genomes.fln_V_gene_CDR3_anchors
self.fln_J_gene_CDR3_anchors = self.genomes.fln_J_gene_CDR3_anchors
# # 2. Or directly from files.
# self.genomes = IgorRefGenome()
# self.genomes.fln_genomicVs = self.fln_genomicVs
# self.genomes.fln_genomicDs = self.fln_genomicDs
# self.genomes.fln_genomicJs = self.fln_genomicJs
def make_model_default_VJ_from_genomes(self, igor_path_ref_genome=None):
try:
self.load_IgorRefGenome(igor_path_ref_genome=igor_path_ref_genome)
mdl_parms = IgorModel_Parms.make_default_VJ(self.genomes.df_genomicVs, self.genomes.df_genomicJs)
mdl_marginals = IgorModel_Marginals.make_uniform_from_parms(mdl_parms)
self.mdl = IgorModel.load_from_parms_marginals_object(mdl_parms, mdl_marginals)
except Exception as e:
print("ERROR: ", e)
def make_model_default_VDJ_from_genomes(self, igor_path_ref_genome=None):
try:
self.load_IgorRefGenome(igor_path_ref_genome=igor_path_ref_genome)
mdl_parms = IgorModel_Parms.make_default_VDJ(self.genomes.df_genomicVs, self.genomes.df_genomicDs, self.genomes.df_genomicJs)
mdl_marginals = IgorModel_Marginals.make_uniform_from_parms(mdl_parms)
self.mdl = IgorModel.load_from_parms_marginals_object(mdl_parms, mdl_marginals)
except Exception as e:
print("ERROR: ", e)
def load_IgorModel(self):
if ( (self.igor_species is None ) or (self.igor_chain is None)):
self.mdl = IgorModel(model_parms_file = self.igor_model_parms_file, model_marginals_file=self.igor_model_marginals_file)
else :
self.mdl = IgorModel.load_default(self.igor_species, igor_option_path_dict[self.igor_chain])
def load_IgorModel_from_infer_files(self):
try:
self.mdl = IgorModel(model_parms_file = self.igor_fln_infer_final_parms, model_marginals_file=self.igor_fln_infer_final_marginals)
except Exception as e:
print("ERROR: IgorTask.load_IgorModel_inferred:")
print(e)
@classmethod
def default_model(cls, specie, chain, model_parms_file=None, model_marginals_file=None):
"""Return an IgorTask object"""
cls = IgorTask()
cls.igor_species = specie
cls.igor_chain = chain
#cls.igor_modeldirpath = model_parms_file
cls.run_datadir()
cls.igor_model_dir_path = cls.igor_models_root_path +"/" + cls.igor_species + "/" + igor_option_path_dict[cls.igor_chain]
if model_parms_file is None:
cls.igor_model_parms_file = cls.igor_model_dir_path+ "/models/model_parms.txt"
cls.igor_model_marginals_file = cls.igor_model_dir_path + "/models/model_marginals.txt"
cls.mdl = IgorModel(model_parms_file=cls.igor_model_parms_file, model_marginals_file=cls.igor_model_marginals_file)
return cls
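# Commented usage sketch (assumes a local IGoR installation so run_datadir() can find the
# default models, and that the chain string is a key of igor_option_path_dict, e.g. "beta"):
#
#   task = IgorTask.default_model("human", "beta")
#   task.igor_model_dir_path   # <igor_datadir>/models/human/<mapped chain dir>
#   task.mdl                   # IgorModel loaded from model_parms.txt / model_marginals.txt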
def gen_igor_wd(self):
p = subprocess.Popen("pwd", shell=True, stdout=subprocess.PIPE)
line = p.stdout.readline()
self.igor_wd = line.decode("utf-8").replace('\n', '')
def gen_random_batchname(self):
p = subprocess.Popen("head /dev/urandom | tr -dc A-Za-z0-9 | head -c10", shell=True, stdout=subprocess.PIPE)
line = p.stdout.readline()
self.igor_batchname = "dataIGoR" + line.decode("utf-8").replace('\n', '')
def update_model_filenames(self, model_path=None):
try:
# if model_path is None, use self.igor_model_dir_path
if model_path is None:
# use previously defined igor_model_dir_path
if self.igor_model_dir_path is None:
# if wasn't defined use the current directory
model_path = "."
if (not (self.igor_species is None) ) and (not (self.igor_chain is None)):
self.run_datadir()
self.igor_model_dir_path = self.igor_models_root_path + "/" + self.igor_species + "/" + \
igor_option_path_dict[self.igor_chain]
else:
# if a model_path is provided then override it
self.igor_model_dir_path = model_path
self.igor_model_parms_file = self.igor_model_dir_path + "/models/model_parms.txt"
self.igor_model_marginals_file = self.igor_model_dir_path + "/models/model_marginals.txt"
self.igor_path_ref_genome = self.igor_model_dir_path + "/ref_genome/"
except Exception as e:
print("WARNING: IgorTask.update_model_filenames", e, self.igor_model_dir_path)
def update_ref_genome(self, igor_path_ref_genome=None):
if igor_path_ref_genome is not None:
self.igor_path_ref_genome = igor_path_ref_genome
self.genomes.update_fln_names(path_ref_genome=self.igor_path_ref_genome) #fln_genomicVs = self.igor_path_ref_genome + ""
self.fln_genomicVs = self.genomes.fln_genomicVs
self.fln_genomicJs = self.genomes.fln_genomicJs
self.fln_genomicDs = self.genomes.fln_genomicDs
self.fln_V_gene_CDR3_anchors = self.genomes.fln_V_gene_CDR3_anchors
self.fln_J_gene_CDR3_anchors = self.genomes.fln_J_gene_CDR3_anchors
def update_batch_filenames(self):
# reads
if self.igor_wd is None:
self.gen_igor_wd()
if self.igor_batchname is None:
self.gen_random_batchname()
self.igor_fln_indexed_sequences = self.igor_wd + "/aligns/" + self.igor_batchname + "_indexed_sequences.csv"
# aligns
self.igor_fln_indexed_CDR3 = self.igor_wd + "/aligns/" + self.igor_batchname + "_indexed_CDR3s.csv"
self.igor_fln_align_V_alignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_V_alignments.csv"
self.igor_fln_align_J_alignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_J_alignments.csv"
self.igor_fln_align_D_alignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_D_alignments.csv"
# inference
tmpstr = self.igor_wd + "/" + self.igor_batchname + "_inference/"
self.igor_fln_infer_final_parms = tmpstr + "final_parms.txt"
self.igor_fln_infer_final_marginals = tmpstr + "final_marginals.txt"
# evaluate
tmpstr = self.igor_wd + "/" + self.igor_batchname + "_evaluate/"
self.igor_fln_evaluate_final_parms = tmpstr + "final_parms.txt"
self.igor_fln_evaluate_final_marginals = tmpstr + "final_marginals.txt"
# output
tmpstr = self.igor_wd + "/" + self.igor_batchname + "_output/"
self.igor_fln_output_pgen = tmpstr + "Pgen_counts.csv"
self.igor_fln_output_scenarios = tmpstr + "best_scenarios_counts.csv"
self.igor_fln_output_coverage = tmpstr + "coverage.csv"
# Record whether each batch file already exists on disk
# FIXME: the batch_data filenames are (re)assigned below; consider moving this check after they are set
import os.path
for file_id in igor_file_id_list:
self.batch_data[file_id]['status'] = os.path.isfile(self.batch_data[file_id]['filename'])
# database
self.igor_fln_db = self.igor_wd + "/" + self.igor_batchname+".db"
tmp_prefix_aligns = self.igor_wd + "/aligns/" + self.igor_batchname
self.batch_data['indexed_sequences']['filename'] = tmp_prefix_aligns + "_indexed_sequences.csv"
self.batch_data['indexed_CDR3']['filename'] = tmp_prefix_aligns + "_indexed_CDR3.csv"
self.batch_data['aligns_V_alignments']['filename'] = tmp_prefix_aligns + "_V_alignments.csv"
self.batch_data['aligns_D_alignments']['filename'] = tmp_prefix_aligns + "_D_alignments.csv"
self.batch_data['aligns_J_alignments']['filename'] = tmp_prefix_aligns + "_J_alignments.csv"
tmp_prefix = self.igor_wd + "/" + self.igor_batchname
self.batch_data['infer_final_parms']['filename'] = tmp_prefix + "_inference/" + "final_parms.txt"
self.batch_data['infer_final_marginals']['filename'] = tmp_prefix + "_inference/" + "final_marginals.txt"
self.batch_data['evaluate_final_parms']['filename'] = tmp_prefix + "_evaluate/" + "final_parms.txt"
self.batch_data['evaluate_final_marginals']['filename'] = tmp_prefix + "_evaluate/" + "final_marginals.txt"
self.batch_data['output_pgen']['filename'] = tmp_prefix + "_output/" + "Pgen_counts.csv"
self.batch_data['output_scenarios']['filename'] = tmp_prefix + "_output/" + "best_scenarios_counts.csv"
self.batch_data['output_coverage']['filename'] = tmp_prefix + "_output/" + "coverage.csv"
# def update_igor_filenames_by_modeldirpath(self, modeldirpath=None):
# if modeldirpath is None:
# modeldirpath = self.igor_modelspath + "/" + self.igor_specie + "/" + igor_option_path_dict[self.igor_chain]
#
# self.batch_data['genomicVs']['filename'] = tmp_prefix
# self.batch_data['genomicDs']['filename'] = tmp_prefix
# self.batch_data['genomicJs']['filename'] = tmp_prefix
#
# self.batch_data['V_gene_CDR3_anchors']['filename'] = tmp_prefix
# self.batch_data['J_gene_CDR3_anchors']['filename'] = tmp_prefix
#
# self.batch_data['model_parms']['filename'] = tmp_prefix
# self.batch_data['model_marginals']['filename'] = tmp_prefix
def update_batchname(self, batchname):
self.igor_batchname = batchname
self.update_batch_filenames()
@classmethod
def load_from_batchname(cls, batchname, wd=None):
cls = IgorTask()
if wd is None:
cls.igor_wd = "."
else:
cls.igor_wd = wd
cls.update_batchname(batchname)
try:
cls.run_datadir()
except Exception as e:
print(e)
raise e
return cls
def run_demo(self):
cmd = self.igor_exec_path+ " -run_demo"
return run_command(cmd)
def run_datadir(self):
cmd = self.igor_exec_path+ " -getdatadir"
self.igor_datadir = run_command(cmd).replace('\n','')
self.igor_models_root_path = self.igor_datadir + "/models/"
def run_read_seqs(self, igor_read_seqs=None):
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
"igor -set_wd $WDPATH -batch foo -read_seqs ../demo/murugan_naive1_noncoding_demo_seqs.txt"
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
cmd = cmd + " -read_seqs " + self.igor_read_seqs
# TODO: if self.igor_read_seqs has a fastq extension, convert it to csv and create the file in aligns/. Overwrite if necessary
print(cmd)
cmd_stdout = run_command(cmd)
self.b_read_seqs = True # FIXME: If run_command success then True
return cmd_stdout
def run_align(self, igor_read_seqs=None):
#"igor -set_wd ${tmp_dir} -batch ${randomBatch} -species
# ${species} -chain ${chain} -align --all"
import os.path
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
if self.b_read_seqs is False:
self.run_read_seqs(igor_read_seqs=igor_read_seqs)
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
# TODO: USE CUSTOM MODEL OR SPECIFIED SPECIES?
# Using the provided genomic templates is probably the safest option.
# FIXME: CHANGE TO CUSTOM GENOMICS
cmd = cmd + " -set_genomic "
if os.path.isfile(self.genomes.fln_genomicVs):
cmd = cmd + " --V " + self.genomes.fln_genomicVs
if os.path.isfile(self.genomes.fln_genomicDs):
cmd = cmd + " --D " + self.genomes.fln_genomicDs
if os.path.isfile(self.genomes.fln_genomicJs):
cmd = cmd + " --J " + self.genomes.fln_genomicJs
cmd = cmd + " -set_CDR3_anchors "
if os.path.isfile(self.genomes.fln_V_gene_CDR3_anchors):
cmd = cmd + " --V " + self.genomes.fln_V_gene_CDR3_anchors
if os.path.isfile(self.genomes.fln_J_gene_CDR3_anchors):
cmd = cmd + " --J " + self.genomes.fln_J_gene_CDR3_anchors
cmd = cmd + " -align " + command_from_dict_options(self.igor_align_dict_options)
#return cmd
print(cmd)
cmd_stdout = run_command_print(cmd)
#run_command_no_output(cmd)
self.b_align = True # FIXME: If run_command success then True
return cmd_stdout
def run_evaluate(self, igor_read_seqs=None, N_scenarios=None):
# "igor -set_wd $WDPATH -batch foo -species human -chain beta
# -evaluate -output --scenarios 10"
print(self.to_dict())
import os.path
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
if self.b_align is False:
self.run_align(igor_read_seqs=self.igor_read_seqs)
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
# TODO: USE CUSTOM MODEL OR SPECIFIED SPECIES?
# Using the custom model files is probably the safest option.
# cmd = cmd + " -species " + self.igor_species
# cmd = cmd + " -chain " + self.igor_chain
cmd = cmd + " -set_custom_model " + self.igor_model_parms_file + " " + self.igor_model_marginals_file
# here the evaluation
self.igor_output_dict_options["--scenarios"]['active'] = True
if N_scenarios is not None:
self.igor_output_dict_options["--scenarios"]['value'] = str(N_scenarios)
self.igor_output_dict_options["--Pgen"]['active'] = True
cmd = cmd + " -evaluate " + command_from_dict_options(self.igor_evaluate_dict_options)
cmd = cmd + " -output " + command_from_dict_options(self.igor_output_dict_options)
# return cmd
print(cmd)
# FIXME: REALLY BIG FLAW USE DICTIONARY FOR THE SPECIE AND CHAIN
# self.mdl = IgorModel.load_default(self.igor_species, igor_option_path_dict[self.igor_chain], modelpath=self.igor_models_root_path)
run_command(cmd)
# run_command_no_output(cmd)
# self.b_evaluate = True # FIXME: set to True only if run_command succeeds
def run_pgen(self, igor_read_seqs=None):
# "igor -set_wd $WDPATH -batch foo -species human -chain beta
# -evaluate -output --scenarios 10"
print(self.to_dict())
import os.path
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
if self.b_align is False:
self.run_align(igor_read_seqs=self.igor_read_seqs)
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
# TODO: USE CUSTOM MODEL OR SPECIFIED SPECIES?
# Using the custom model files is probably the safest option.
cmd = cmd + " -set_custom_model " + self.igor_model_parms_file + " " + self.igor_model_marginals_file
# here the evaluation
self.igor_output_dict_options["--scenarios"]['active'] = False
self.igor_output_dict_options["--Pgen"]['active'] = True
cmd = cmd + " -evaluate " + command_from_dict_options(self.igor_evaluate_dict_options)
cmd = cmd + " -output " + command_from_dict_options(self.igor_output_dict_options)
# return cmd
print(cmd)
# FIXME: REALLY BIG FLAW USE DICTIONARY FOR THE SPECIE AND CHAIN
# self.mdl = IgorModel.load_default(self.igor_species, igor_option_path_dict[self.igor_chain], modelpath=self.igor_models_root_path)
run_command(cmd)
# run_command_no_output(cmd)
# self.b_evaluate = True # FIXME: set to True only if run_command succeeds
def run_scenarios(self, igor_read_seqs=None, N_scenarios=None):
#"igor -set_wd $WDPATH -batch foo -species human -chain beta
# -evaluate -output --scenarios 10"
print(self.to_dict())
import os.path
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
if self.b_align is False:
self.run_align(igor_read_seqs=self.igor_read_seqs)
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
# TODO: USE CUSTOM MODEL OR SPECIFIED SPECIES?
# Using the custom model files is probably the safest option.
cmd = cmd + " -set_custom_model " + self.igor_model_parms_file + " " + self.igor_model_marginals_file
# here the evaluation
self.igor_output_dict_options["--scenarios"]['active'] = True
if N_scenarios is not None:
self.igor_output_dict_options["--scenarios"]['value'] = str(N_scenarios)
self.igor_output_dict_options["--Pgen"]['active'] = False
cmd = cmd + " -evaluate " + command_from_dict_options(self.igor_evaluate_dict_options)
cmd = cmd + " -output " + command_from_dict_options(self.igor_output_dict_options)
#return cmd
print(cmd)
# FIXME: REALLY BIG FLAW USE DICTIONARY FOR THE SPECIE AND CHAIN
# self.mdl = IgorModel.load_default(self.igor_species, igor_option_path_dict[self.igor_chain], modelpath=self.igor_models_root_path)
# run_command(cmd)
run_command_print(cmd)
def run_infer(self, igor_read_seqs=None):
#"igor -set_wd $WDPATH -batch foo -species human -chain beta
# -evaluate -output --scenarios 10"
if igor_read_seqs is not None:
self.igor_read_seqs = igor_read_seqs
if self.b_align is False:
self.run_align(igor_read_seqs=igor_read_seqs)
print("Alignment finished!")
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
# TODO: USE CUSTOM MODEL OR SPECIFIED SPECIES?
# Using the custom model files is probably the safest option.
# cmd = cmd + " -species " + self.igor_species
# cmd = cmd + " -chain " + self.igor_chain
cmd = cmd + " -set_custom_model " + self.igor_model_parms_file + " " + self.igor_model_marginals_file
# here the evaluation
cmd = cmd + " -infer " #+ command_from_dict_options(self.igor_output_dict_options)
#return cmd
print(cmd)
# FIXME: REALLY BIG FLAW USE DICTIONARY FOR THE SPECIE AND CHAIN
# self.mdl = IgorModel.load_default(self.igor_species, igor_option_path_dict[self.igor_chain], modelpath=self.igor_models_root_path)
self.mdl = IgorModel(model_parms_file=self.igor_model_parms_file, model_marginals_file=self.igor_model_marginals_file)
# output = run_command(cmd)
output = run_command_print(cmd)
#run_command_no_output(cmd)
self.b_infer = True # FIXME: If run_command success then True
return output
def run_generate(self, N_seqs=None):
cmd = self.igor_exec_path
cmd = cmd + " -set_wd " + self.igor_wd
cmd = cmd + " -batch " + self.igor_batchname
cmd = cmd + " -set_custom_model " + self.igor_model_parms_file + " " + self.igor_model_marginals_file
if N_seqs is not None:
cmd = cmd + " -generate " + str(N_seqs)
else:
cmd = cmd + " -generate "
print(cmd)
# run_command(cmd)
run_command_print(cmd)
path_generated = self.igor_wd + "/" + self.igor_batchname + "_generated/"
self.igor_fln_generated_realizations_werr = path_generated + "generated_realizations_werr.csv"
self.igor_fln_generated_seqs_werr = path_generated + "generated_seqs_werr.csv"
self.igor_fln_generation_info = path_generated + "generated_seqs_werr.out"
self.b_generate = True
# FIXME: LOAD TO DATABASE CREATE PROPER TABLES FOR THIS
# import pandas as pd
df = pd.read_csv(self.igor_fln_generated_seqs_werr, delimiter=';').set_index('seq_index')
return df
def run_generate_to_dataframe(self, N):
self.run_generate(N_seqs=N)
# FIXME: LOAD TO DATABASE CREATE PROPER TABLES FOR THIS
# import pandas as pd
df = pd.read_csv(self.igor_fln_generated_seqs_werr, delimiter=';').set_index('seq_index')
return df
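# Commented usage sketch (requires the igor executable and a model on disk; the generated
# files are the ones written by IGoR's -generate mode under <wd>/<batch>_generated/):
#
#   task = IgorTask.default_model("human", "beta")
#   df = task.run_generate(N_seqs=100)   # pandas DataFrame of generated sequences, indexed by seq_index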
def run_clean_batch(self):
cmd = "rm -r " + self.igor_wd + "/" + self.igor_batchname + "_evaluate"
run_command_no_output(cmd)
cmd = "rm -r " + self.igor_wd + "/" + self.igor_batchname + "_output"
run_command_no_output(cmd)
cmd = "rm " + self.igor_wd + "/aligns/" + self.igor_batchname + "*.csv"
run_command_no_output(cmd)
cmd = "rm -r " + self.igor_wd + "/" + self.igor_batchname + "_generated"
run_command_no_output(cmd)
def create_db(self, igor_fln_db=None):
if igor_fln_db is not None:
self.igor_fln_db = igor_fln_db
self.igor_db = IgorSqliteDB.create_db(self.igor_fln_db)
def load_db_from_indexed_sequences(self):
self.igor_db.load_IgorIndexedSeq_FromCSV(self.igor_fln_indexed_sequences)
# load genome templates from fasta and csv files.
def load_db_from_genomes(self):
print("Loading Gene templates ...")
self.igor_db.load_IgorGeneTemplate_FromFASTA("V", self.genomes.fln_genomicVs)
self.igor_db.load_IgorGeneTemplate_FromFASTA("J", self.genomes.fln_genomicJs)
try:
self.igor_db.load_IgorGeneTemplate_FromFASTA("D", self.genomes.fln_genomicDs)
except Exception as e:
print(e)
print("No D gene template found in batch files structure")
pass
# load
print("loading Anchors data ...")
# try:
self.igor_db.load_IgorGeneAnchors_FromCSV("V", self.genomes.fln_V_gene_CDR3_anchors)
self.igor_db.load_IgorGeneAnchors_FromCSV("J", self.genomes.fln_J_gene_CDR3_anchors)
# except Exception as e:
# print("ERROR : ", e)
def load_db_from_alignments(self):
print(self.igor_fln_align_V_alignments)
self.igor_db.load_IgorAlignments_FromCSV("V", self.igor_fln_align_V_alignments)
self.igor_db.load_IgorAlignments_FromCSV("J", self.igor_fln_align_J_alignments)
try:
self.igor_db.load_IgorAlignments_FromCSV("D", self.igor_fln_align_D_alignments)
except Exception as e:
print(e)
print("Couldn't load D gene alignments!")
pass
print("Alignments loaded in database in "+str(self.igor_fln_db))
def load_db_from_models(self, mdl=None):
# self.load_IgorModel()
try:
if self.igor_db.Q_model_in_db():
print("WARNING: Overwriting previous model in database ", self.igor_fln_db)
self.igor_db.delete_IgorModel_Tables()
if mdl is None:
self.igor_db.load_IgorModel(self.mdl)
else:
self.igor_db.load_IgorModel(mdl)
except Exception as e:
print("Couldn't load model to database from IgorModel object")
print("ERROR: ", e)
def load_db_from_inferred_model(self):
self.load_IgorModel_from_infer_files()
try:
self.igor_db.load_IgorModel(self.mdl)
except Exception as e:
print("Couldn't load model to database from IgorModel object")
print("ERROR: ", e)
def load_db_from_indexed_cdr3(self):
print(self.igor_fln_indexed_CDR3)
self.igor_db.load_IgorIndexedCDR3_FromCSV(self.igor_fln_indexed_CDR3)
def load_db_from_bestscenarios(self):
print(self.igor_fln_output_scenarios)
self.igor_db.load_IgorBestScenarios_FromCSV(self.igor_fln_output_scenarios, self.mdl)
def load_db_from_pgen(self):
print(self.igor_fln_output_pgen)
self.igor_db.load_IgorPgen_FromCSV(self.igor_fln_output_pgen)
def load_mdl_from_db(self):
try:
self.mdl = self.igor_db.get_IgorModel()
except Exception as e:
print("WARNING: Igor Model was not found in ", self.igor_fln_db)
pass
# return self.mdl
# def get_IgorModel_from_db(self):
# self.mdl = self.igor_db.get_IgorModel()
# return self.mdl
# FIXME: this method should be deprecated!!!
def load_VDJ_database(self, flnIgorSQL):
self.flnIgorSQL = flnIgorSQL
self.igor_db = IgorSqliteDB(flnIgorSQL)
# FIXME :EVERYTHING
flnIgorIndexedSeq = self.igor_wd+"/aligns/"+self.igor_batchname+"_indexed_sequences.csv"
# FIXME PATH AND OPTIONS NEED TO BE CONSISTENT
IgorModelPath = self.igor_models_root_path + self.igor_species + "/" \
+ igor_option_path_dict[self.igor_chain] + "/"
IgorRefGenomePath = IgorModelPath + "ref_genome/"
flnVGeneTemplate = IgorRefGenomePath + "genomicVs.fasta"
flnDGeneTemplate = IgorRefGenomePath + "genomicDs.fasta"
flnJGeneTemplate = IgorRefGenomePath + "genomicJs.fasta"
flnVGeneCDR3Anchors = IgorRefGenomePath + "V_gene_CDR3_anchors.csv"
flnJGeneCDR3Anchors = IgorRefGenomePath + "J_gene_CDR3_anchors.csv"
### IGoR Alignments files
flnVAlignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_V_alignments.csv"
flnDAlignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_D_alignments.csv"
flnJAlignments = self.igor_wd + "/aligns/" + self.igor_batchname + "_J_alignments.csv"
### IGoR output files
flnModelParms = IgorModelPath + "models/model_parms.txt"
flnModelMargs = IgorModelPath + "models/model_marginals.txt"
flnIgorBestScenarios = self.igor_wd + self.igor_batchname + "_output/best_scenarios_counts.csv"
flnIgorDB = self.igor_batchname+".db"
self.igor_db.createSqliteDB(flnIgorDB)
self.igor_db.load_VDJ_Database(flnIgorIndexedSeq, \
flnVGeneTemplate, flnDGeneTemplate, flnJGeneTemplate, \
flnVAlignments, flnDAlignments, flnJAlignments)
# ### load IGoR model parms and marginals.
# # FIXME: THIS IS REDUNDANT IN SOME PLACE check it out.
# self.mdl = IgorModel(model_parms_file=flnModelParms, model_marginals_file=flnModelMargs)
# mdlParms = IgorModel.Model_Parms(flnModelParms) # mdl.parms
# mdlMargs = IgorModel.Model_Marginals(flnModelMargs) # mdl.marginals
#
# # load IGoR best scenarios file.
# db_bs = IgorSqliteDBBestScenarios.IgorSqliteDBBestScenariosVDJ()
# db_bs.createSqliteDB("chicagoMouse_bs.db")
# db_bs.load_IgorBestScenariosVDJ_FromCSV(flnIgorBestScenarios)
def load_VDJ_BS_database(self, flnIgorBSSQL):
flnIgorBestScenarios = self.igor_wd+"/"+self.igor_batchname+"_output/best_scenarios_counts.csv"
self.igor_db_bs = IgorSqliteDBBestScenariosVDJ(flnIgorBSSQL) #IgorDBBestScenariosVDJ.sql
self.igor_db_bs.createSqliteDB(self.igor_batchname+"_bs.db")
self.igor_db_bs.load_IgorBestScenariosVDJ_FromCSV(flnIgorBestScenarios)
def get_pgen_pd(self):
#load pgen file
import pandas as pd
df = pd.read_csv(self.igor_fln_output_pgen, sep=';')
df = df.set_index('seq_index')
df = df.sort_index()
df_seq = pd.read_csv(self.igor_fln_indexed_sequences, sep=';')
df_seq = df_seq.set_index('seq_index').sort_index()
df_cdr3 = pd.read_csv(self.igor_fln_indexed_CDR3, sep=';')
df_cdr3 = df_cdr3.set_index('seq_index').sort_index()
df = df.merge(df_seq, left_index=True, right_index=True)
df = df.merge(df_cdr3, left_index=True, right_index=True)
return df
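# Commented usage sketch: get_pgen_pd merges the Pgen output with the indexed sequences and
# indexed CDR3 files on seq_index, giving one row per evaluated sequence (the three CSV files
# produced by -evaluate/-output must exist):
#
#   df = task.get_pgen_pd()
#   df.head()   # Pgen estimate plus sequence and CDR3 columns, as named by IGoR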
def from_db_get_naive_align_dict_by_seq_index(self, seq_index):
indexed_sequence = self.igor_db.get_IgorIndexedSeq_By_seq_index(seq_index)
indexed_sequence.offset = 0
best_v_align_data = self.igor_db.get_best_IgorAlignment_data_By_seq_index('V', indexed_sequence.seq_index)
best_j_align_data = self.igor_db.get_best_IgorAlignment_data_By_seq_index('J', indexed_sequence.seq_index)
try:
best_d_align_data = self.igor_db.get_best_IgorAlignment_data_By_seq_index('D', indexed_sequence.seq_index)
vdj_naive_alignment = {'V': best_v_align_data,
'D': best_d_align_data,
'J': best_j_align_data}
v_align_data_list = self.igor_db.get_IgorAlignment_data_list_By_seq_index('V', indexed_sequence.seq_index)
# print('V', len(v_align_data_list), [ii.score for ii in v_align_data_list])
d_align_data_list = self.igor_db.get_IgorAlignment_data_list_By_seq_index('D', indexed_sequence.seq_index)
# print('D', len(d_align_data_list), [ii.score for ii in d_align_data_list])
j_align_data_list = self.igor_db.get_IgorAlignment_data_list_By_seq_index('J', indexed_sequence.seq_index)
# print('J', len(j_align_data_list), [ii.score for ii in j_align_data_list])
# 1. Choose the highest-scoring D alignment, then check whether it falls in the expected range
# (between the best V and the best J alignment). If it overlaps with V or J, the score without
# the overlap would need to be recomputed and the highest remaining score kept.
for i, d_align_data in enumerate(d_align_data_list):
# Check if D is btwn V and J position
if (best_v_align_data.offset_3_p <= d_align_data.offset_5_p) and (
d_align_data.offset_3_p <= best_j_align_data.offset_5_p):
# vdj_naive_alignment['D'+str(i)] = d_align_data
vdj_naive_alignment['D'] = d_align_data
break
except Exception as e:
print(e)
print("No d gene alignments found!")
vdj_naive_alignment = {'V': best_v_align_data,
'J': best_j_align_data}
v_align_data_list = self.igor_db.get_IgorAlignment_data_list_By_seq_index('V', indexed_sequence.seq_index)
print('V', len(v_align_data_list), [ii.score for ii in v_align_data_list])
j_align_data_list = self.igor_db.get_IgorAlignment_data_list_By_seq_index('J', indexed_sequence.seq_index)
print('J', len(j_align_data_list), [ii.score for ii in j_align_data_list])
pass
return indexed_sequence, vdj_naive_alignment
def from_db_str_fasta_naive_align_by_seq_index(self, seq_index):
""" Given an Sequence index and the corresponding alignments vj/ vdj
return a string with considering only offset"""
fasta_list = list()
indexed_sequence, vdj_alignments_dict = self.from_db_get_naive_align_dict_by_seq_index(seq_index)
indexed_sequence.sequence = indexed_sequence.sequence.lower()
# mark mismatch positions in the sequence (uppercase).
s = list(indexed_sequence.sequence)
for key_align in vdj_alignments_dict.keys():
for pos_mis in vdj_alignments_dict[key_align].mismatches:
s[pos_mis] = s[pos_mis].upper()
indexed_sequence.sequence = "".join(s)
str_fasta = ""
min_offset_key = min(vdj_alignments_dict.keys(), key=lambda x: vdj_alignments_dict[x].offset) # .offset
min_offset = vdj_alignments_dict[min_offset_key].offset
min_offset = min(indexed_sequence.offset, min_offset)
delta_offset = indexed_sequence.offset - min_offset
str_prefix = '-' * (delta_offset)
str_fasta_sequence = str_prefix + indexed_sequence.sequence
# print(str_fasta_sequence)
str_fasta = str_fasta + "> " + str(indexed_sequence.seq_index)
str_fasta_description = str_fasta
str_fasta = str_fasta + "\n"
str_fasta = str_fasta + str_fasta_sequence + "\n"
fasta_list.append([str_fasta_description, str_fasta_sequence])
for key in vdj_alignments_dict.keys():
vdj_alignments_dict[key].strGene_seq = vdj_alignments_dict[key].strGene_seq.lower()
delta_offset = vdj_alignments_dict[key].offset - min_offset
str_prefix = '-' * (delta_offset)
str_fasta_sequence = str_prefix + vdj_alignments_dict[key].strGene_seq
# print(str_fasta_sequence)
str_fasta_description = "> " + key + ", " + vdj_alignments_dict[key].strGene_name
str_fasta = str_fasta + str_fasta_description + "\n"
str_fasta = str_fasta + str_fasta_sequence + "\n"
fasta_list.append([str_fasta_description, str_fasta_sequence])
offset_5_p = vdj_alignments_dict[key].offset_5_p - min_offset
offset_3_p = vdj_alignments_dict[key].offset_3_p - min_offset
# print("delta_offset : ", delta_offset)
# print("offset_5_p : ", vdj_alignments_dict[key].offset_5_p, offset_5_p)
# print("offset_3_p : ", vdj_alignments_dict[key].offset_3_p, offset_3_p)
str_prefix_2 = '-' * (offset_5_p + 1)
str_fasta_sequence2 = str_prefix_2 + str_fasta_sequence[offset_5_p + 1:offset_3_p + 1]
str_fasta_description2 = "> " + vdj_alignments_dict[key].strGene_name + ", score : " + str(vdj_alignments_dict[key].score)
str_fasta = str_fasta + str_fasta_description2 + "\n"
str_fasta = str_fasta + str_fasta_sequence2 + "\n"
fasta_list.append([str_fasta_description2, str_fasta_sequence2])
# TODO ADD MISMATCHES
# align = vdj_alignments_dict[key]
# alignment mismatches are given in indexed-sequence coordinates; convert them to gene
# coordinates using the alignment offset (align.offset):
# pos_in_gene = pos_in_seq - align.offset
# pos_in_gene = cdr3 - align.offset
# FIXME: make a list of tuples [(description_0, sequence_0), ..., (description_i, sequence_i), ..., (description_N, sequence_N)]
sequence_len_list = list(map(lambda x: len(x[1]), fasta_list))
max_seq_len = max(sequence_len_list)
for fasta_rec in fasta_list:
len_fasta_rec_seq = len(fasta_rec[1])
if len_fasta_rec_seq < max_seq_len:
# print(fasta_rec)
ngaps = max_seq_len - len_fasta_rec_seq
str_ngaps = str(ngaps * '-')
fasta_rec[1] = fasta_rec[1] + str_ngaps
str_fasta = ""
str_fasta = '\n'.join( [fasta_rec[0]+"\n"+fasta_rec[1] for fasta_rec in fasta_list] )
return str_fasta #, fasta_list
def from_db_plot_naive_align_by_seq_index(self, seq_index):
import Bio.AlignIO
import io
aaa = self.from_db_str_fasta_naive_align_by_seq_index(seq_index)
aln = Bio.AlignIO.read(io.StringIO(aaa), 'fasta')
view_alignment(aln)
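# Commented usage sketch (requires a task whose database already holds the alignments for the
# given seq_index):
#
#   print(task.from_db_str_fasta_naive_align_by_seq_index(0))   # gapped FASTA of the read and best V/(D)/J hits
#   task.from_db_plot_naive_align_by_seq_index(0)                # same alignment rendered through view_alignment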
def export_to_igorfiles(self):
print("Export: ")
#--- 1. Indexed Sequences
if self.igor_db.Q_sequences_in_db() and not (self.igor_fln_indexed_sequences is None):
try:
self.igor_db.write_IgorIndexedSeq_to_CSV(self.igor_fln_indexed_sequences)
except Exception as e:
print("ERROR: write_IgorIndexedSeq_to_CSV", e)
else:
print("No IgorIndexedSeq Table not exported")
#--- 2. Gene Templates
if self.igor_db.Q_ref_genome_in_db_by_gene("V") and not (self.fln_genomicVs is None):
try:
self.igor_db.write_IgorGeneTemplate_to_fasta("V", self.fln_genomicVs)
except Exception as e:
print("ERROR: write_IgorGeneTemplate_to_fasta V", e)
else:
print("No IgorGeneTemplate V Table")
if self.igor_db.Q_ref_genome_in_db_by_gene("J") and not (self.fln_genomicJs is None):
try:
self.igor_db.write_IgorGeneTemplate_to_fasta("J", self.fln_genomicJs)
except Exception as e:
print("ERROR: write_IgorGeneTemplate_to_fasta J", e)
else:
print("No IgorGeneTemplate J Table")
if self.igor_db.Q_ref_genome_in_db_by_gene("D") and not (self.fln_genomicDs is None):
try:
self.igor_db.write_IgorGeneTemplate_to_fasta("D", self.fln_genomicDs)
except Exception as e:
print("ERROR: write_IgorGeneTemplate_to_fasta D", e)
else:
print("No IgorGeneTemplate D Table")
if self.igor_db.Q_CDR3_Anchors_in_db("V") and not (self.fln_V_gene_CDR3_anchors is None):
try:
self.igor_db.write_IgorGeneAnchors_to_CSV("V", self.fln_V_gene_CDR3_anchors)
except Exception as e:
print("ERROR: write_IgorGeneAnchors_to_CSV V", e)
else:
print("No IgorGeneAnchors V Table")
if self.igor_db.Q_CDR3_Anchors_in_db("J") and not (self.fln_J_gene_CDR3_anchors is None):
try:
self.igor_db.write_IgorGeneAnchors_to_CSV("J", self.fln_J_gene_CDR3_anchors)
except Exception as e:
print("ERROR: write_IgorGeneAnchors_to_CSV J", e)
else:
print("No IgorGeneAnchors J Table")
#--- 3. Alignments
if self.igor_db.Q_align_in_db():
# b_igor_alignments
if self.igor_db.Q_align_in_db_by_gene("V") and not (self.igor_fln_align_V_alignments is None):
try:
self.igor_db.write_IgorAlignments_to_CSV("V", self.igor_fln_align_V_alignments)
except Exception as e:
print("ERROR: write_IgorAlignments_to_CSV V", e)
if self.igor_db.Q_align_in_db_by_gene("J") and not (self.igor_fln_align_J_alignments is None):
try:
self.igor_db.write_IgorAlignments_to_CSV("J", self.igor_fln_align_J_alignments)
except Exception as e:
print("ERROR: write_IgorAlignments_to_CSV J", e)
if self.igor_db.Q_align_in_db_by_gene("D") and not (self.igor_fln_align_D_alignments is None):
try:
self.igor_db.write_IgorAlignments_to_CSV("D", self.igor_fln_align_D_alignments)
except Exception as e:
print("ERROR: write_IgorAlignments_to_CSV D", e)
try:
self.igor_db.write_IgorIndexedCDR3_to_CSV(self.igor_fln_indexed_CDR3)
except Exception as e:
print("WARNING: No indexed CDR3 files found", self.igor_fln_indexed_CDR3)
print(e)
pass
# --- 4. Export Igor Model
if self.igor_db.Q_model_in_db():
if (not (self.igor_model_parms_file is None)) and (not (self.igor_model_marginals_file is None)):
try:
self.igor_db.write_IgorModel_to_TXT(self.igor_model_parms_file, self.igor_model_marginals_file)
except Exception as e:
print("ERROR: write_IgorModel_to_TXT ",e)
else:
print("ERROR: igor_model_parms_file or igor_model_marginals_file not specified.")
else:
print("No Models Tables")
# --- 5. Export IGoR Pgen
if self.igor_db.Q_IgorPgen_in_db() and not (self.igor_fln_output_pgen is None):
try:
self.igor_db.write_IgorPgen_to_CSV(self.igor_fln_output_pgen)
except Exception as e:
print("ERROR: write_IgorPgen_to_CSV ", e)
# --- 6. Export IGoR best scenarios
# b_igor_scenarios
if self.igor_db.Q_IgorBestScenarios_in_db() and not (self.igor_fln_output_scenarios is None):
try:
self.igor_db.write_IgorBestScenarios_to_CSV(self.igor_fln_output_scenarios)
except Exception as e:
print("ERROR: write_IgorBestScenarios_to_CSV ", e)
# 1.1 if Alignments
#### AIRR methods ###
def parse_scenarios_to_airr(self, igor_fln_output_scenarios, airr_fln_output_scenarios):
# TODO: 1. read the header of igor_fln_output_scenarios and make a list of its event columns
# TODO: 2. map each scenario record to AIRR rearrangement fields and write airr_fln_output_scenarios
pass
### IGOR INPUT SEQUENCES ####
class IgorIndexedSequence:
"""
Represents an IGoR indexed input sequence (seq_index, sequence).
"""
def __init__(self, seq_index=-1, sequence=''):
self.seq_index = seq_index
self.sequence = sequence
def __str__(self):
return str(self.to_dict())
def to_dict(self):
"""
Return a IgorIndexedSequence instance as a python dictionary.
"""
dictIndexedSequence = {
"seq_index" : self.seq_index , \
"sequence" : self.sequence
}
return dictIndexedSequence
@classmethod
def load(cls, seq_index, sequence):
cls = IgorIndexedSequence()
try:
cls.seq_index = seq_index
cls.sequence = sequence
except Exception as e:
print(e)
raise e
return cls
@classmethod
def load_FromCSVline(cls, csvline, delimiter=";"):
"""
Return an IgorIndexedSequence instance from a line of an IGoR indexed_sequences.csv file.
:param csvline: String line of a csv file.
:param delimiter: Field delimiter of the csv file.
:return: IgorIndexedSequence object
"""
cls = IgorIndexedSequence()
csvsplit = csvline.replace("\n", "").split(delimiter)
try:
cls.seq_index = int (csvsplit[0])
cls.sequence = str(csvsplit[1])
except Exception as e:
print(e)
raise e
return cls
@classmethod
def load_FromSQLRecord(cls, sqlRecord):
"""
Return an IgorIndexedSequence instance from a database record, according to the database specification.
:param sqlRecord: sqlite record of one entry.
:return: IgorIndexedSequence object.
"""
cls = IgorIndexedSequence()
try:
cls.seq_index = int(sqlRecord[0])
cls.sequence = str(sqlRecord[1]).replace('\n', '')
except Exception as e:
print(e)
raise e
return cls
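# Commented usage sketch for IgorIndexedSequence; the line below mimics IGoR's
# <batch>_indexed_sequences.csv format (seq_index;sequence):
#
#   rec = IgorIndexedSequence.load_FromCSVline("42;ACGTACGT")
#   rec.seq_index, rec.sequence   # -> (42, "ACGTACGT")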
### IGOR ALIGNMENTS ####
class IgorRefGenome:
def __init__(self):
# FIXME: find a better way to add a default value for this and also the "/" separator
self.path_ref_genome = "."
self.fln_genomicVs = None # "genomicVs.fasta"
self.fln_genomicDs = None # "genomicDs.fasta"
self.fln_genomicJs = None # "genomicJs.fasta"
self.fln_V_gene_CDR3_anchors = None # "V_gene_CDR3_anchors.csv"
self.fln_J_gene_CDR3_anchors = None # "J_gene_CDR3_anchors.csv"
self.df_genomicVs = None
self.df_genomicDs = None
self.df_genomicJs = None
self.dict_genomicVs = None #(self.df_genomicVs.set_index('name').to_dict())['value']
self.dict_genomicDs = None
self.dict_genomicJs = None
self.df_V_ref_genome = None
self.df_J_ref_genome = None
@classmethod
def load_FromSQLRecord_list(cls, sqlrecords_genomicVs = None, sqlrecords_genomicDs = None, sqlrecords_genomicJs = None,
sqlrecords_V_gene_CDR3_anchors = None, sqlrecords_J_gene_CDR3_anchors = None):
cls = IgorRefGenome()
# TODO: make query to database
cls.df_genomicVs = pd.DataFrame.from_records(sqlrecords_genomicVs, columns=['id', 'name', 'value']).set_index('id')
# Fasta to dataframe
try:
# df_V_anchors = pd.read_csv(self.fln_V_gene_CDR3_anchors, sep=';')
df_V_anchors = pd.DataFrame.from_records(sqlrecords_V_gene_CDR3_anchors, columns=['id', 'gene', 'anchor_index']).set_index(('id'))
cls.df_V_ref_genome = cls.df_genomicVs.set_index('name').join(df_V_anchors.set_index('gene')).reset_index()
cls.dict_genomicVs = (cls.df_genomicVs.set_index('name').to_dict())['value']
except Exception as e:
print('No V genes were found.')
print(e)
pass
# J genes
cls.df_genomicJs = pd.DataFrame.from_records(sqlrecords_genomicJs, columns=['id', 'name', 'value']).set_index('id')
try:
df_J_anchors = pd.DataFrame.from_records(sqlrecords_J_gene_CDR3_anchors, columns=['id', 'gene', 'anchor_index']).set_index(('id'))
cls.df_J_ref_genome = cls.df_genomicJs.set_index('name').join(df_J_anchors.set_index('gene')).reset_index()
cls.dict_genomicJs = (cls.df_genomicJs.set_index('name').to_dict())['value']
except Exception as e:
print('No J genes were found.')
print(e)
pass
# D genes
try:
cls.df_genomicDs = pd.DataFrame.from_records(sqlrecords_genomicDs,
columns=['id', 'name', 'value']).set_index('id')
cls.dict_genomicDs = (cls.df_genomicDs.set_index('name').to_dict())['value']
# TODO: is a separate df_D_ref_genome needed, or is df_genomicDs enough here?
# self.df_D_ref_genome
except Exception as e:
print('No D genes were found.')
print(e)
pass
return cls
@classmethod
def load_from_path(cls, path_ref_genome):
cls = IgorRefGenome()
cls.path_ref_genome = path_ref_genome
cls.update_fln_names()
cls.load_dataframes()
return cls
def update_fln_names(self, path_ref_genome=None, fln_genomicVs=None, fln_genomicDs=None, fln_genomicJs=None, fln_V_gene_CDR3_anchors=None, fln_J_gene_CDR3_anchors=None):
if path_ref_genome is not None:
self.path_ref_genome = path_ref_genome
if fln_genomicVs is None:
self.fln_genomicVs = self.path_ref_genome + "/" + "genomicVs.fasta"
else:
self.fln_genomicVs = fln_genomicVs
if fln_genomicDs is None:
self.fln_genomicDs = self.path_ref_genome + "/" + "genomicDs.fasta"
else:
self.fln_genomicDs = fln_genomicDs
if fln_genomicJs is None:
self.fln_genomicJs = self.path_ref_genome + "/" + "genomicJs.fasta"
else:
self.fln_genomicJs = fln_genomicJs
if fln_V_gene_CDR3_anchors is None:
self.fln_V_gene_CDR3_anchors = self.path_ref_genome + "/" + "V_gene_CDR3_anchors.csv"
else:
self.fln_V_gene_CDR3_anchors = fln_V_gene_CDR3_anchors
if fln_J_gene_CDR3_anchors is None:
self.fln_J_gene_CDR3_anchors = self.path_ref_genome + "/" + "J_gene_CDR3_anchors.csv"
else:
self.fln_J_gene_CDR3_anchors = fln_J_gene_CDR3_anchors
# TODO: LOAD INSTANCE FROM THE GIVEN FILES; what is the difference between this and load_dataframes?
def load_from_files(self, fln_genomicVs=None, fln_genomicDs=None, fln_genomicJs=None,
fln_V_gene_CDR3_anchors=None, fln_J_gene_CDR3_anchors=None):
self.fln_genomicVs = fln_genomicVs
self.fln_genomicDs = fln_genomicDs
self.fln_genomicJs = fln_genomicJs
self.fln_V_gene_CDR3_anchors = fln_V_gene_CDR3_anchors
self.fln_J_gene_CDR3_anchors = fln_J_gene_CDR3_anchors
self.load_dataframes()
def load_dataframes(self):
# Fasta to dataframe
# V genes
self.df_genomicVs = from_fasta_to_dataframe(self.fln_genomicVs)
try:
df_V_anchors = pd.read_csv(self.fln_V_gene_CDR3_anchors, sep=';')
self.df_V_ref_genome = self.df_genomicVs.set_index('name').join(df_V_anchors.set_index('gene')).reset_index()
self.dict_genomicVs = (self.df_genomicVs.set_index('name').to_dict())['value']
except Exception as e:
print('No V genes were found.')
print(e)
pass
# J genes
self.df_genomicJs = from_fasta_to_dataframe(self.fln_genomicJs)
try:
df_J_anchors = pd.read_csv(self.fln_J_gene_CDR3_anchors, sep=';')
self.df_J_ref_genome = self.df_genomicJs.set_index('name').join(df_J_anchors.set_index('gene')).reset_index()
self.dict_genomicJs = (self.df_genomicJs.set_index('name').to_dict())['value']
except Exception as e:
print('No J genes were found.')
print(e)
pass
# D genes
try:
self.df_genomicDs = from_fasta_to_dataframe(self.fln_genomicDs)
self.dict_genomicDs = (self.df_genomicDs.set_index('name').to_dict())['value']
# TODO: is a separate df_D_ref_genome needed, or is df_genomicDs enough here?
# self.df_D_ref_genome
except Exception as e:
print('No D genes were found.')
print(e)
pass
#return df_V_ref_genome, df_J_ref_genome
def get_anchors_dict(self):
dict_anchor_index = dict()
dict_anchor_index['V'] = self.df_V_ref_genome.set_index('name')['anchor_index'].to_dict()
dict_anchor_index['J'] = self.df_J_ref_genome.set_index('name')['anchor_index'].to_dict()
return dict_anchor_index
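# Commented usage sketch (assumes a ref_genome directory laid out with IGoR's standard file
# names: genomicVs.fasta, genomicJs.fasta, V_gene_CDR3_anchors.csv, ...):
#
#   genomes = IgorRefGenome.load_from_path("models/human/tcr_beta/ref_genome")
#   genomes.df_V_ref_genome.head()        # gene name, sequence and CDR3 anchor_index per V gene
#   anchors = genomes.get_anchors_dict()  # {'V': {gene_name: anchor_index, ...}, 'J': {...}}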
class IgorAlignment_data:
def __init__(self):
self.seq_index = -1
self.gene_id = -1
self.score = -1
self.offset = 0
self.insertions = list()
self.deletions = list()
self.mismatches = list()
self.length = 0
self.offset_5_p = 0
self.offset_3_p = 0
self.strGene_name = ""
self.strGene_class = ""
self.strGene_seq = ""
self.anchor_in_read = None
def __str__(self):
return str(self.to_dict())
def to_dict(self):
dictAlignment_data = {
"seq_index" : self.seq_index , \
"gene_id" : self.gene_id , \
"score" : self.score , \
"offset" : self.offset , \
"insertions" : self.insertions , \
"deletions" : self.deletions , \
"mismatches" : self.mismatches , \
"length" : self.length , \
"offset_5_p" : self.offset_5_p , \
"offset_3_p" : self.offset_3_p , \
"strGene_name" : self.strGene_name , \
"strGene_class" : self.strGene_class , \
"strGene_seq" : self.strGene_seq
}
return dictAlignment_data
@classmethod
def load_FromCSVLine(cls, csvline, strGene_name="", delimiter=";"):
#seq_index;gene_name;score;offset;insertions;deletions;mismatches;length;5_p_align_offset;3_p_align_offset
cls = IgorAlignment_data()
csvsplit = csvline.replace("\n", "").split(delimiter)
try:
cls.seq_index = int (csvsplit[0])
cls.strGene_name = str (csvsplit[1])
cls.score = float(csvsplit[2])
cls.offset = int (csvsplit[3])
cls.insertions = eval (csvsplit[4].replace("{","[").replace("}","]"))
cls.deletions = eval (csvsplit[5].replace("{","[").replace("}","]"))
cls.mismatches = eval (csvsplit[6].replace("{","[").replace("}","]"))
cls.length = int (csvsplit[7])
cls.offset_5_p = int (csvsplit[8])
cls.offset_3_p = int (csvsplit[9])
except Exception as e:
print(e)
raise e
return cls
@classmethod
def load_FromSQLRecord(cls, sqlRecordAlign, strGene_name=""):
"""
Return an IgorAlignment_data instance from an IGoR sql record.
:param sqlRecordAlign: record of a sql database table.
:param strGene_name: gene_name associated to the record.
:return: IgorAlignment_data instance
"""
cls = IgorAlignment_data()
try:
cls.seq_index = int (sqlRecordAlign[0])
cls.gene_id = int (sqlRecordAlign[1])
cls.score = float(sqlRecordAlign[2])
cls.offset = int (sqlRecordAlign[3])
cls.insertions = eval (sqlRecordAlign[4])
cls.deletions = eval (sqlRecordAlign[5])
cls.mismatches = eval (sqlRecordAlign[6])
cls.length = int (sqlRecordAlign[7])
cls.offset_5_p = int (sqlRecordAlign[8])
cls.offset_3_p = int (sqlRecordAlign[9])
# TODO: best way to retrieve the gene name?
if strGene_name is None:
cls.strGene_name = str(cls.gene_id)
else:
cls.strGene_name = strGene_name
return cls
except Exception as e:
print(e)
raise e
class IgorGeneTemplate:
def __init__(self):
self.flnGene = None
self.dataframe = None
def get_sequence(self, gene_name):
# TODO: use dataframe return sequence
sequence = ""
return sequence
### IGOR MODEL ####
class IgorModel:
def __init__(self, model_parms_file=None, model_marginals_file=None):
self.parms = IgorModel_Parms()
self.marginals = IgorModel_Marginals()
self.genomic_dataframe_dict = dict()
self.xdata = dict()
self.factors = list()
self.metadata = dict()
self.specie = ""
self.chain = ""
self.Pmarginal = dict()
# FIXME: But since DB is in refactor keep it for the moment
self.BestScenariosHeaderList = list() # This is a ordered list store the nicknames of events in the header of the file
# should be only necessary if no database present
# check input files
flag_parms = (model_parms_file is not None)
flag_marginals = (model_marginals_file is not None)
flag_xdata = (flag_parms and flag_marginals)
if flag_parms:
self.parms.read_model_parms(model_parms_file)
if flag_marginals:
self.marginals.read_model_marginals(model_marginals_file)
if flag_xdata:
self.generate_xdata()
self.sequence_construction_event_list = list()
def __getitem__(self, key):
return self.xdata[key]
def __str__(self):
return ".xdata" + str(self.get_events_nicknames_list())
# TODO: finish this method to load model with default installed igor.
@classmethod
def load_default(cls, IgorSpecie, IgorChain, modelpath=None): #rcParams['paths.igor_models']):
"""
:return: IgorModel loaded from the default model location for the given species and chain
"""
# IGoR run parameters
#IgorSpecie = specie #"mouse"
#IgorChain = chain #"tcr_beta"
if modelpath is None:
try:
modelpath = run_igor_datadir() + "/models"
except Exception as e:
print("ERROR: getting default igor datadir.", e)
IgorModelPath = modelpath+"/"+IgorSpecie+"/"+IgorChain+"/"
print("Loading default IGoR model from path : ", IgorModelPath)
# FIXME: FIND A WAY TO GENERALIZE THIS WITH SOME KIND OF STANDARD NAME
flnModelParms = IgorModelPath + "models/model_parms.txt"
flnModelMargs = IgorModelPath + "models/model_marginals.txt"
print("Parms filename: ", flnModelParms)
print("Margs filename: ", flnModelMargs)
print("-"*50)
# IgorRefGenomePath = IgorModelPath+"ref_genome/"
# flnVGeneTemplate = IgorRefGenomePath+"genomicVs.fasta"
# flnDGeneTemplate = IgorRefGenomePath+"genomicDs.fasta"
# flnJGeneTemplate = IgorRefGenomePath+"genomicJs.fasta"
#
# flnVGeneCDR3Anchors = IgorRefGenomePath+"V_gene_CDR3_anchors.csv"
# flnJGeneCDR3Anchors = IgorRefGenomePath+"J_gene_CDR3_anchors.csv"
cls = IgorModel(model_parms_file=flnModelParms, model_marginals_file=flnModelMargs)
cls.specie = IgorSpecie
cls.chain = IgorChain
return cls
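# Commented usage sketch (assumes IGoR's default models are installed and the species/chain
# strings match the directory layout under <igor_datadir>/models):
#
#   mdl = IgorModel.load_default("human", "tcr_beta")
#   mdl.xdata['v_choice']      # xarray.DataArray with the v_choice event probabilities
#   mdl.Pmarginal['v_choice']  # marginal P(v_choice) computed by variable elimination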
@classmethod
def load_from_parms_marginals_object(cls, mdl_parms, mdl_marginals):
cls = IgorModel()
cls.parms = mdl_parms
cls.marginals = mdl_marginals
cls.generate_xdata()
return cls
# FIXME:
@classmethod
def load_from_networkx(cls, IgorSpecie, IgorChain):
"""
:return: IgorModel built from a networkx graph (not implemented yet)
"""
# FIXME: flnModelParms and flnModelMargs are not defined here; this loader is still a stub.
cls = IgorModel(model_parms_file=flnModelParms, model_marginals_file=flnModelMargs)
return cls
def generate_xdata(self):
# TODO: CHANGE TO
Event_Genechoice_List = ['v_choice', 'j_choice', 'd_gene']
Event_Dinucl_List = ['vd_dinucl', 'dj_dinucl', 'vj_dinucl']
Event_Insertion_List = ['vd_ins', 'dj_ins', 'vj_ins']
Event_Deletion_List = ['v_3_del', 'j_5_del', 'd_3_del', 'd_5_del']
for key in self.marginals.marginals_dict:
event = self.parms.get_Event(key)
if event.event_type == 'DinucMarkov':
#if key in Event_Dinucl_List:
self.xdata[key] = xr.DataArray(self.marginals.marginals_dict[key].reshape(4,4), \
dims=('x', 'y'))
labels = self.parms.Event_dict[key]['value'].values
strDim = 'x'
self.xdata[key][strDim] = range(len(self.xdata[key][strDim]))
strCoord = 'lbl__' + strDim
self.xdata[key][strCoord] = (strDim, labels)
strDim = 'y'
self.xdata[key][strDim] = range(len(self.xdata[key][strDim]))
strCoord = 'lbl__' + strDim
self.xdata[key][strCoord] = (strDim, labels)
else:
self.xdata[key] = xr.DataArray(self.marginals.marginals_dict[key], \
dims=tuple(self.marginals.network_dict[key]))
#print "key: ", key, self.xdata[key].dims
for strDim in self.xdata[key].dims:
self.xdata[key][strDim] = range(len(self.xdata[key][strDim]))
if strDim in Event_Genechoice_List:
#print strDim
#labels = self.parms.Event_dict[strDim]['name'].map(genLabel).values # FIXME: use the exact name defined in model_parms
labels = self.parms.Event_dict[strDim]['name'].values
strCoord = 'lbl__'+strDim
self.xdata[key][strCoord] = (strDim, labels) # range(len(self.xdata[key][coord]))
sequences = self.parms.Event_dict[strDim]['value'].values
strCoord = 'seq__' + strDim
self.xdata[key][strCoord] = (strDim, sequences)
elif not (strDim in Event_Dinucl_List):
labels = self.parms.Event_dict[strDim]['value'].values
strCoord = 'lbl__'+strDim
self.xdata[key][strCoord] = (strDim, labels) # range(len(self.xdata[key][coord]))
# event = self.parms.get_Event(key)
# print(event.event_type)
# self.xdata[key].attrs["event_type"] = event.event_type
# self.xdata[key].attrs["seq_type"] = event.seq_type
# self.xdata[key].attrs["seq_side"] = event.seq_side
# Event attributes
self.xdata[key].attrs["nickname"] = event.nickname
self.xdata[key].attrs["event_type"] = event.event_type
self.xdata[key].attrs["seq_type"] = event.seq_type
self.xdata[key].attrs["seq_side"] = event.seq_side
self.xdata[key].attrs["priority"] = event.priority
self.xdata[key].attrs["parents"] = list(self.parms.G.predecessors(key))
self.xdata[key].attrs["childs"] = list(self.parms.G.successors(key))
self.generate_Pmarginals()
def get_zero_xarray_from_list(self, strEvents_list:list):
#strEvents_list = ['v_choice', 'j_choice']
strEvents_tuple = tuple(strEvents_list)
# Use model parms to create xarray with values
da_shape_list = [len(self.parms.Event_dict[str_event_nickname]) for str_event_nickname in strEvents_list]
da_shape_tuple = tuple(da_shape_list)
da = xr.DataArray(np.zeros(da_shape_tuple), dims=strEvents_tuple)
for event_nickname in strEvents_list:
da[event_nickname] = self.parms.Event_dict[event_nickname].index.values
labels = self.parms.Event_dict[event_nickname]['name'].values
strCoord = 'lbl__' + event_nickname
da[strCoord] = (event_nickname, labels)
return da
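# Usage sketch (illustrative): an all-zeros DataArray indexed by the chosen events,
# carrying the same integer ids and 'lbl__' label coordinates as the model parms:
#   da0 = mdl.get_zero_xarray_from_list(['v_choice', 'j_choice'])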
def VE_get_Pmarginals_initial_factors(self):
factors = list()
for da in self.xdata.values():
if da.attrs["event_type"] == 'DinucMarkov':
sarray = da.stack(z=('x', 'y'))
sarray = sarray.rename({"z": da.attrs["nickname"]})
# FIXME: I'm removing DinucMarkov in the factors because
# P(vd_dinucl) = P(y|x) and we don't have P(x)
# So we can't marginalize P(y) neither P(x,y)
else:
factors.append(da)
return factors
# Doesn't need to be a self method, but ...
def VE_get_factors_by_sum_out_variable(self, var_to_eliminate, factors):
# var_to_eliminate = 'j_choice'
factors_to_sum_out = list()
_factors = list()
# separate factors to sum-out
for factor in factors:
# if factor.attrs["event_type"] == 'DinucMarkov':
# lista = [factor.attrs["nickname"]] + factor.attrs["parents"]
# var_intersection = {var_to_eliminate}.intersection(set(lista))
# else:
var_intersection = {var_to_eliminate}.intersection(set(factor.dims))
# print("var_intersection : ", var_intersection)
if len(var_intersection) > 0:
# print(var_intersection)
factors_to_sum_out.append(factor)
else:
_factors.append(factor)
# sum-out-variables
da_sum_over_event = 1
for factor in factors_to_sum_out:
da_sum_over_event = da_sum_over_event * factor
new_factor = da_sum_over_event.sum(dim=var_to_eliminate)
factors = _factors + [new_factor]
return factors
def VE_get_Pmarginal_of_event(self, strEvent):
# FIXME: use xdata instead of self.parms
sorted_events = self.parms.get_Event_list_sorted()
sorted_events_to_marginalize = [event for event in sorted_events if not event.event_type == "DinucMarkov"]
sorted_events_to_marginalize_without_VE = [event for event in sorted_events_to_marginalize if
not event.nickname == strEvent]
# Start eliminating events
factors = self.VE_get_Pmarginals_initial_factors()
for event_to_eliminate_VE in sorted_events_to_marginalize_without_VE:
factors = self.VE_get_factors_by_sum_out_variable(event_to_eliminate_VE.nickname, factors)
# Now multiply the remaining factors to get the marginal.
Pmarginal = 1
for factor in factors:
Pmarginal = Pmarginal * factor
return Pmarginal
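# Variable-elimination sketch: starting from all non-DinucMarkov factors, every event
# except strEvent is summed out in turn (multiply the factors containing it, then sum
# over its dimension); the product of the remaining factors is the marginal P(strEvent).
# Example (illustrative, assuming a VDJ model): Pd = mdl.VE_get_Pmarginal_of_event('d_gene')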
def generate_Pmarginals(self):
# Apply Variable elimination method for this
# 1. Get a list of event sorted by high priority and less number of parents
self.Pmarginal = dict()
# Get the marginal for each event
for key, darray in self.xdata.items():
# print(key, darray)
if darray.attrs["event_type"] == "DinucMarkov":
self.Pmarginal[key] = darray
else:
self.Pmarginal[key] = self.VE_get_Pmarginal_of_event(key)
# # FIXME: MAKE IT GENERAL
# def generate_Pmarginals(self):
# # FIXME: GENERALIZE FOR ANY NETWORK NOT ONLY FOR VDJ AND VJ
# strEvent = 'v_choice'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# strEvent = 'j_choice'
# strEventParent01 = 'v_choice'
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(self.xdata[strEventParent01])
#
# strEvent = 'v_3_del'
# strEventParent01 = 'v_choice'
# Pjoint_aux = self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# strEvent = 'j_5_del'
# strEventParent01 = 'j_choice'
# Pjoint_aux = self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# if 'd_gene' in self.xdata.keys():
# strEvent = 'd_gene'
# strEventParent01 = 'v_choice'
# strEventParent02 = 'j_choice'
# Pjoint_aux = self.xdata[strEventParent02] * self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# strEvent = 'd_gene'
# strEventParent01 = 'v_choice'
# strEventParent02 = 'j_choice'
# Pjoint_aux = self.xdata[strEventParent02] * self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# strEvent = 'd_5_del'
# strEventParent01 = 'd_gene'
# Pjoint_aux = self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# strEvent = 'd_3_del'
# strEventParent01 = 'd_gene'
# strEventParent02 = 'd_5_del'
# Pjoint_aux = self.xdata[strEventParent02] * self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
#
# strEvent = 'vd_ins'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# strEvent = 'vd_dinucl'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# strEvent = 'dj_ins'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# strEvent = 'dj_dinucl'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# else: # VJ NETWORK
# strEvent = 'vj_ins'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
#
# strEvent = 'vj_dinucl'
# self.Pmarginal[strEvent] = self.xdata[strEvent]
def export_csv(self, fln_prefix, sep=';'):
# FIXME: TEMPORARY SOLUTION FOR VERY PARTICULAR CASES.
#################################################################################
strEvent = 'v_choice'
da = self.xdata[strEvent]
# print(list(self.parms.G.predecessors(strEvent)))
evento = self.parms.get_Event(strEvent)
df = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label=evento.seq_type, sep=sep)
### v_3_del
strEvent = 'v_3_del'
da = self.xdata[strEvent]
# print(list(self.parms.G.predecessors(strEvent)))
parents = list(self.parms.G.predecessors(strEvent))
evento = self.parms.get_Event(strEvent)
dependencias = list(self.xdata[strEvent].dims)
# print("********", dependencias, strEvent)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
if len(parents) == 1:
df = pd.DataFrame(data=da.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + strEvent].values)
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + ".csv"
df.to_csv(lbl_file, sep=sep) # , index_label=evento.seq_type)
#################################################################################
strEvent = 'j_choice'
da = self.xdata[strEvent]
parents = list(self.parms.G.predecessors(strEvent))
evento = self.parms.get_Event(strEvent)
dependencias = list(self.xdata[strEvent].dims)
#print("********", dependencias, strEvent)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
if len(parents) == 0:
df = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label=evento.seq_type, sep=sep)
elif len(parents) == 1:
df = pd.DataFrame(data=da.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + strEvent].values)
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + ".csv"
df.to_csv(lbl_file, sep=sep) #, index_label=evento.seq_type)
else:
print("Recombination event "+strEvent+" has an export problem!")
### j_5_del
strEvent = 'j_5_del'
da = self.xdata[strEvent]
# print(list(self.parms.G.predecessors(strEvent)))
parents = list(self.parms.G.predecessors(strEvent))
evento = self.parms.get_Event(strEvent)
dependencias = list(self.xdata[strEvent].dims)
# print("********", dependencias, strEvent)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
if len(parents) == 1:
df = pd.DataFrame(data=da.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + strEvent].values)
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + ".csv"
df.to_csv(lbl_file, sep=sep) # , index_label=evento.seq_type)
#################################################################################
if 'd_gene' in self.xdata.keys():
strEvent = 'd_gene'
da = self.xdata[strEvent]
parents = list(self.parms.G.predecessors(strEvent))
print(parents)
evento = self.parms.get_Event(strEvent)
print(evento.event_type)
print(evento.seq_type)
dependencias = list(self.xdata[strEvent].dims)
print("********", dependencias, strEvent)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
if len(parents) == 0:
df = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label=evento.seq_type, sep=sep)
elif len(parents) == 1:
df = pd.DataFrame(data=da.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + strEvent].values)
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + ".csv"
df.to_csv(lbl_file, sep=sep) # , index_label=evento.seq_type)
elif len(parents) == 2:
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + "__" + dependencias[1] + ".csv"
with open(lbl_file, 'w') as ofile:
for ii in da[strEvent].values:
title = "P(" + da["lbl__"+strEvent].values[ii] +"| "+dependencias[0] + "," + dependencias[1]+")"
ofile.write("\n"+title+"\n")
da_ii = da[{strEvent: ii}]
df = pd.DataFrame(data=da_ii.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + dependencias[1]].values)
df.to_csv(ofile, mode='a', sep=sep) # , index_label=evento.seq_type)
else:
print("Recombination event " + strEvent + " has an export problem!")
strEvent = 'd_gene'
da = self.xdata[strEvent]
parents = list(self.parms.G.predecessors(strEvent))
evento = self.parms.get_Event(strEvent)
dependencias = list(self.xdata[strEvent].dims)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
if len(parents) == 0:
df = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label=evento.seq_type, sep=sep)
elif len(parents) == 1:
df = pd.DataFrame(data=da.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + strEvent].values)
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + ".csv"
df.to_csv(lbl_file, sep=sep) # , index_label=evento.seq_type)
elif len(parents) == 2:
lbl_file = fln_prefix + "P__" + strEvent + "__G__" + dependencias[0] + "__" + dependencias[1] + ".csv"
with open(lbl_file, 'w') as ofile:
for ii in da[strEvent].values:
title = "P(" + da["lbl__" + strEvent].values[ii] + "| " + dependencias[0] + "," + dependencias[1] + ")"
ofile.write("\n" + title + "\n")
da_ii = da[{strEvent: ii}]
df = pd.DataFrame(data=da_ii.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + dependencias[1]].values)
df.to_csv(ofile, mode='a', sep=sep) # , index_label=evento.seq_type)
else:
print("Recombination event " + strEvent + " has an export problem!")
#return df
## P(D3, D5 | D) = P( D3| D5,D) x P (D5,D)
#### Deletions in D
da = self.xdata['d_3_del']*self.xdata['d_5_del']
### DELETIONS
strEvent = 'd_gene'
# NOTE: keep the product computed above; reassigning da = self.xdata[strEvent] here
# would overwrite P(d_3_del, d_5_del | d_gene) and export the wrong table.
dependencias = list(da.dims)
print("********", dependencias, strEvent)
dependencias.remove(strEvent)
dependencias_dim = [da[dep].shape[0] for dep in dependencias]
lbl_file = fln_prefix + "P__" + strEvent + "__deletions" + ".csv"
with open(lbl_file, 'w') as ofile:
for ii in da[strEvent].values:
da_ii = da[{strEvent: ii}]
lbl_event_realization = da['lbl__' + strEvent].values[ii]
title = "_P(" + dependencias[0] + "," + dependencias[1] + "| " + strEvent + " = " + lbl_event_realization + ")"
ofile.write(title + "\n")
df = pd.DataFrame(data=da_ii.values, index=da['lbl__' + dependencias[0]].values,
columns=da['lbl__' + dependencias[1]].values)
df.to_csv(ofile, mode='a', sep=sep) # , index_label=evento.seq_type)
ofile.write("\n")
# self.xdata['d_3_del'] # P( D3| D5,D)
### INSERTIONS
strEvent = 'vd_ins'
da = self.xdata[strEvent]
df_vd = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P("+strEvent+")"]) # da['lbl__' + strEvent].values
strEvent = 'dj_ins'
da = self.xdata[strEvent]
df_dj = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P("+strEvent+")"]) # da['lbl__' + strEvent].values
df = df_vd.merge(df_dj, left_index=True, right_index=True)
lbl_file = fln_prefix + "P__" + "insertions" + ".csv"
df.to_csv(lbl_file, index_label="Insertions", sep=sep)
### DINUCL
strEvent = 'vd_dinucl'
da = self.xdata[strEvent]
print(da)
df = pd.DataFrame(data=da.values, index=da['lbl__x'].values,
columns=da['lbl__y'].values)
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label="From\\To", sep=sep)
strEvent = 'dj_dinucl'
da = self.xdata[strEvent]
print(da)
df = pd.DataFrame(data=da.values, index=da['lbl__x'].values,
columns=da['lbl__y'].values)
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label="From\\To", sep=sep)
else:
### INSERTIONS
strEvent = 'vj_ins'
da = self.xdata[strEvent]
df = pd.DataFrame(data=da.values, index=da['lbl__' + strEvent].values,
columns=["P(" + strEvent + ")"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + "insertions" + ".csv"
df.to_csv(lbl_file, index_label="Insertions", sep=sep)
### DINUCL
strEvent = 'vj_dinucl'
da = self.xdata[strEvent]
print(da)
df = pd.DataFrame(data=da.values, index=da['lbl__x'].values,
columns=da['lbl__y'].values)
lbl_file = fln_prefix + "P__" + strEvent + ".csv"
df.to_csv(lbl_file, index_label="From\\To", sep=sep)
#################################################################################
# strEvent = 'j_choice'
# strEventParent01 = 'v_choice'
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(self.xdata[strEventParent01])
#
# strEvent = 'v_3_del'
# strEventParent01 = 'v_choice'
# Pjoint_aux = self.Pmarginal[strEventParent01]
# self.Pmarginal[strEvent] = self.xdata[strEvent].dot(Pjoint_aux)
# FIXME: THIS METHOD IS NOT FINISHED!!
def export_event_to_csv(self, event_nickname, fln_prefix, sep=';'):
# if kwargs.get('sep') is None:
# kwargs['sep'] = ';'
da = self.xdata[event_nickname]
event = self.parms.get_Event(event_nickname)
if da.event_type == 'GeneChoice':
# print(list(self.parms.G.predecessors(strEvent)))
df = pd.DataFrame(data=da.values, index=da['lbl__' + event_nickname].values,
columns=["P"]) # da['lbl__' + strEvent].values
lbl_file = fln_prefix + "P__" + event_nickname + ".csv"
df.to_csv(lbl_file, index_label=event.seq_type, sep=sep)
def export_Pmarginal_to_csv(self, event_nickname:str, *args, **kwargs):
if kwargs.get('sep') is None:
kwargs['sep'] = ';'
event = self.parms.get_Event(event_nickname, by_nickname=True)
da = self.xdata[event_nickname]
if event.event_type == 'GeneChoice' :
df = da.to_dataframe(name="P") #.drop('priority', 1)
df.to_csv(*args, **kwargs)
elif event.event_type == 'Insertion':
df = da.to_dataframe(name="P") # .drop('priority', 1)
df.to_csv(*args, **kwargs)
elif event.event_type == 'Deletion':
df = da.to_dataframe(name="P") # .drop('priority', 1)
df.to_csv(*args, **kwargs)
elif event.event_type == 'DinucMarkov':
# FIXME: generalize; DinucMarkov marginals are exported as a 4x4 From/To matrix
df = pd.DataFrame(data=da.values, index=da['lbl__x'].values,
columns=da['lbl__y'].values)
kwargs['index_label'] = "From\\To"
df.to_csv(*args, **kwargs)
else:
print("Event type "+str(event.event_type)+" of event "+event_nickname+" is not supported by this method.")
print("Accepted Events nicknames are : "+str(self.get_events_nicknames_list()))
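# Usage sketch (illustrative file name): write a single marginal to CSV with
#   mdl.export_Pmarginal_to_csv('v_choice', 'P_v_choice.csv', sep=';')
# GeneChoice/Insertion/Deletion marginals go through DataArray.to_dataframe,
# DinucMarkov tables are written as a From/To matrix.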
# FIXME: CHANGE EVENT MARGINAL!!!
def get_Event_Marginal(self, event_nickname: str):
"""Returns an xarray with the marginal probability of the event given the nickname"""
# FIXME: add new way to make the recursion.
# FIXME: FIRST without recursion, return
if event_nickname in self.parms.get_EventsNickname_list():
da_event = self.xdata[event_nickname]
dependencies = self.parms.Edges_dict[event_nickname]
event_parents = list( self.parms.G.predecessors(event_nickname) )
#1. Sort the dependencies by priority, then by dependencie
# if queue is not empty:
# self.get_Event_Marginal(nicki)
if event_nickname == 'v_choice':
da_marginal = da_event
return da_marginal
elif event_nickname == 'j_choice':
print(event_parents)
strEventParent = 'v_choice'
da_event_parent = self.xdata[strEventParent]
da_marginal = da_event_parent.dot(da_event)
da_parents = 1
for parent in event_parents:
da_parents = da_parents * self.xdata[parent]
print('&'*20)
print(da_event.dot(da_parents))
return da_marginal
# return da_event, da_event_parent, (da_event_parent.dot(da_event)), da_marginal.sum()
elif event_nickname == 'd_gene':
print(event_parents)
strEventParent = 'v_choice'
strEventParent2 = 'j_choice'
da_event_parent = self.xdata[strEventParent]
da_event_parent2 = self.xdata[strEventParent2]
# da_marginal = da_event.dot()
da_marginal = da_event_parent*da_event_parent2
return da_marginal
else:
print("Event nickname : " + event_nickname + " is not an event in this IGoR model.")
return list()
def plot_event_GeneChoice(self, event_nickname:str, **kwargs):
""" Return GeneChoice plot """
# Default values in plot
import numpy as np
v_genLabel = np.vectorize(genLabel)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
da = self.xdata[event_nickname]
for parent_nickname in da.dims: # attrs['parents']
da["lbl__" + parent_nickname].values = v_genLabel(da["lbl__" + parent_nickname].values)
parents_list = da.attrs['parents']
if len(parents_list) == 0:
# ONE DIMENSIONAL PLOT
titulo = "$P($" + event_nickname + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
ax.bar(XX, YY, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_xticks(XX)
ax.set_xticklabels(v_genLabel(lbl_XX), rotation=90)
ax.set_title(titulo)
return fig, ax
elif len(parents_list) == 1:
if not 'cmap' in kwargs.keys():
kwargs['cmap'] = 'gnuplot2_r'
lbl_parents = ",".join(da.attrs['parents'])
titulo = "$P($" + event_nickname + "$|$" + lbl_parents + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
da.plot(ax=ax, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_title(titulo)
ax.set_aspect('equal')
return fig, ax
elif len(parents_list) == 2:
if not 'cmap' in kwargs.keys():
kwargs['cmap'] = 'gnuplot2_r'
# da = self.xdata[event_nickname]
fig, ax = plt.subplots(*da[event_nickname].shape, figsize=(10, 20))
for ii, ev_realiz in enumerate(da[event_nickname]):
# print(ev_realiz.values)
da[{event_nickname: ev_realiz.values}].plot(ax=ax[ii], cmap='gnuplot2_r')
lbl_ev_realiz = str( ev_realiz["lbl__" + event_nickname].values )
lbl_parents = str( ",".join(da.attrs['parents']) )
titulo = "$P($" + event_nickname + "$ = $ " + lbl_ev_realiz + " $|$" + lbl_parents + "$)$"
ax[ii].set_title(titulo)
return fig, ax
else:
fig, ax = plt.subplots()
ax.set_title("Dimensionality not supportted for event : ", event_nickname)
return fig, ax
def plot_event_Insertion(self, event_nickname:str, **kwargs):
""" Return Insertion plot """
import numpy as np
v_genLabel = np.vectorize(genLabel)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
da = self.xdata[event_nickname]
parents_list = da.attrs['parents']
if len(parents_list) == 0:
# ONE DIMENSIONAL PLOT
titulo = "$P($" + event_nickname + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
ax.bar(XX, YY, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_xticks(XX)
ax.set_xticklabels(lbl_XX, rotation=90)
ax.set_title(titulo)
return fig, ax
elif len(parents_list) == 1:
titulo = "$P($" + event_nickname + "$|$" + ",".join(da.attrs['parents']) + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
da.plot(ax=ax, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_title(titulo)
ax.set_aspect('equal')
return fig, ax
elif len(parents_list) == 2:
# da = self.xdata[event_nickname]
fig, ax = plt.subplots(*da[event_nickname].shape, figsize=(10, 20))
for ii, ev_realiz in enumerate(da[event_nickname]):
# print(ev_realiz.values)
da[{event_nickname: ev_realiz.values}].plot(ax=ax[ii], cmap='gnuplot2_r')
lbl_ev_realiz = str(ev_realiz["lbl__" + event_nickname].values)
lbl_parents = str(",".join(da.attrs['parents']))
print(lbl_ev_realiz, lbl_parents)
titulo = "$P($" + event_nickname + "$ = $ " + lbl_ev_realiz + " $|$" + lbl_parents + "$)$"
ax[ii].set_title(titulo)
return fig, ax
else:
fig, ax = plt.subplots()
ax.set_title("Dimensionality not supportted for event : ", event_nickname)
return fig, ax
def plot_event_Deletion(self, event_nickname:str, **kwargs):
""" Return GeneChoice plot """
import numpy as np
v_genLabel = np.vectorize(genLabel)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
da = self.xdata[event_nickname]
parents_list = da.attrs['parents']
if len(parents_list) == 0:
# ONE DIMENSIONAL PLOT
titulo = "$P($" + event_nickname + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
ax.bar(XX, YY, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_xticks(XX)
ax.set_xticklabels(lbl_XX, rotation=90)
ax.set_title(titulo)
return fig, ax
elif len(parents_list) == 1:
if not 'cmap' in kwargs.keys():
kwargs['cmap'] = 'gnuplot2_r'
titulo = "$P($" + event_nickname + "$|$" + ",".join(da.attrs['parents']) + "$)$"
fig, ax = plt.subplots(figsize=(18, 15))
XX = da[event_nickname].values
YY = da.values
da.plot(ax=ax, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_title(titulo)
ax.set_aspect('equal')
return fig, ax
elif len(parents_list) == 2:
if not 'cmap' in kwargs.keys():
kwargs['cmap'] = 'gnuplot2_r'
# da = self.xdata[event_nickname]
fig, ax = plt.subplots(*da[event_nickname].shape, figsize=(10, 50))
for ii, ev_realiz in enumerate(da[event_nickname]):
# print(ev_realiz.values)
lbl_ev_realiz = str(ev_realiz["lbl__" + event_nickname].values)
lbl_parents = str(",".join(da.attrs['parents']))
da[{event_nickname: ev_realiz.values}].plot(ax=ax[ii], cmap='gnuplot2_r')
titulo = "$P($" + event_nickname + "$ = $ " + lbl_ev_realiz + " $|$" + lbl_parents + "$)$"
ax[ii].set_title(titulo)
return fig, ax
else:
fig, ax = plt.subplots()
ax.set_title("Dimensionality not supportted for event : ", event_nickname)
return fig, ax
def plot_event_DinucMarkov(self, event_nickname:str, **kwargs):
""" Return GeneChoice plot """
# Default values in plot
if not 'cmap' in kwargs.keys():
kwargs['cmap'] = 'gnuplot2_r'
import numpy as np
import matplotlib.pyplot as plt
da = self.xdata[event_nickname]
lblEvent = event_nickname.replace("_", " ")
xEtiqueta = lblEvent
yEtiqueta = "P"
fig, ax = plt.subplots()
XX = da['x'].values
YY = da['y'].values
lbl__XX = da['lbl__' + 'x'].values
lbl__YY = da['lbl__' + 'y'].values
ZZ = da.values
da.plot(ax=ax, x='x', y='y', vmin=0, vmax=1, **kwargs)
ax.set_xlabel('From')
ax.set_xticks(XX)
ax.set_xticklabels(lbl__XX, rotation=0)
ax.set_ylabel('To')
ax.set_yticks(YY)
ax.set_yticklabels(lbl__YY, rotation=0)
ax.set_title(lblEvent)
ax.set_aspect('equal')
for i, j in zip(*ZZ.nonzero()):
ax.text(j, i, ZZ[i, j], color='white', ha='center', va='center')
return fig, ax
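# Usage sketch (illustrative): every plot_event_* method returns (fig, ax), e.g.
#   fig, ax = mdl.plot_event_DinucMarkov('vd_dinucl')
#   fig.savefig('vd_dinucl.png')   # output file name is illustrative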
def export_plot_events(self, outfilename_prefix):
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
with PdfPages(outfilename_prefix + ".pdf") as pdf_file:
fig, ax = plt.subplots()
self.parms.plot_Graph(ax=ax)
fig.tight_layout()
pdf_file.savefig(fig)
# GeneChoice, Insertion, Deletion, DinucMarkov
for event_nickname in self.xdata.keys():
event = self.parms.get_Event(event_nickname)
if event.event_type == 'GeneChoice':
fig, ax = self.plot_event_GeneChoice(event_nickname)
fig.tight_layout()
pdf_file.savefig(fig)
del fig
elif event.event_type == 'Insertion':
fig, ax = self.plot_event_Insertion(event_nickname)
fig.tight_layout()
pdf_file.savefig(fig)
del fig
elif event.event_type == 'Deletion':
fig, ax = self.plot_event_Deletion(event_nickname)
fig.tight_layout()
pdf_file.savefig(fig)
del fig
elif event.event_type == 'DinucMarkov':
fig, ax = self.plot_event_DinucMarkov(event_nickname)
fig.tight_layout()
pdf_file.savefig(fig)
del fig
else:
print("ERROR: EVENT NOT RECOGNIZE", event_nickname)
def export_plot_Pmarginals(self, outfilename_prefix):
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
with PdfPages(outfilename_prefix+".pdf") as pdf_file:
fig, ax = plt.subplots()
self.parms.plot_Graph(ax=ax)
fig.tight_layout()
pdf_file.savefig(fig)
for event_nickname in self.Pmarginal.keys():
fig, ax = plt.subplots(figsize=(20, 10))
self.plot_Event_Marginal(event_nickname, ax=ax)
fig.tight_layout()
# flnOutput = flnPrefix + "_" + event_nickname + ".pdf"
pdf_file.savefig(fig)
# fig.savefig(flnOutput)
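# Usage sketch: both exporters write one multi-page PDF, first the Bayesian network
# graph and then one page per event (file prefixes are illustrative):
#   mdl.export_plot_events('mdl_events')         # -> mdl_events.pdf
#   mdl.export_plot_Pmarginals('mdl_Pmarginals') # -> mdl_Pmarginals.pdf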
def plot_Event_Marginal(self, event_nickname:str, ax=None, **kwargs):
"""
Plot marginals of model events by nickname
"""
event = self.parms.get_Event(event_nickname, by_nickname=True)
da = self.Pmarginal[event_nickname] # real marginal DataArray
lblEvent = event_nickname.replace("_", " ")
xEtiqueta = lblEvent
yEtiqueta = "P"
if ax is None:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_xlabel(xEtiqueta)
ax.set_ylabel(yEtiqueta, rotation=0)
if event.event_type == 'GeneChoice' :
# Bar plot
XX = da[event_nickname].values
YY = da.values
ax.bar(XX, YY, **kwargs)
lbl_XX = da['lbl__' + event_nickname].values
ax.set_xticks(XX)
ax.set_xticklabels(v_genLabel(lbl_XX), rotation=90)
#return ax
elif event.event_type == 'Insertion':
# Use labels as a coordinate.
# Insertions are in principle independent of the other events;
# FIXME: handle the case where they are not.
XX = da['lbl__' + event_nickname].values
YY = da.values
if not 'marker' in kwargs.keys():
kwargs['marker'] = 'o'
ax.plot(XX, YY, **kwargs)
elif event.event_type == 'Deletion':
#YY = self.xdata[event_nickname].values
#XX = self.xdata[event_nickname]['lbl__' + event_nickname].values
#ax.plot(XX, YY)
XX = da['lbl__' + event_nickname].values
YY = da.values
if not 'marker' in kwargs.keys():
kwargs['marker'] = 's'
ax.plot(XX, YY, **kwargs)
elif event.event_type == 'DinucMarkov':
XX = da['x'].values
YY = da['y'].values
lbl__XX = da['lbl__' + 'x'].values
lbl__YY = da['lbl__' + 'y'].values
ZZ = da.values
da.plot(ax=ax, x='x', y='y', vmin=0, vmax=1, cmap='gnuplot2_r', **kwargs)
ax.set_xlabel('From')
ax.set_xticks(XX)
ax.set_xticklabels(lbl__XX, rotation=0)
ax.set_ylabel('To')
ax.set_yticks(YY)
ax.set_yticklabels(lbl__YY, rotation=0)
ax.set_title(lblEvent)
ax.set_aspect('equal')
for i, j in zip(*ZZ.nonzero()):
ax.text(j, i, ZZ[i, j], color='white', ha='center', va='center')
else:
print("Event type "+str(event.event_type)+" of event "+event_nickname+" is not supported for plotting.")
print("Accepted Events nicknames are : "+str(self.get_events_nicknames_list()))
#return self.get_Event_Marginal(nickname)
# ax.set_title(lblEvent)
return ax
def get_events_types_list(self):
"Return list of event types in current model"
# The event list should be extracted from the Event_list
events_set = set()
for event in self.parms.Event_list:
events_set.add(event.event_type)
return list(events_set)
def get_events_nicknames_list(self):
"Return list of event nicknames in current model"
# The event list should be extracted from the Event_list
events_set = set()
for event in self.parms.Event_list:
events_set.add(event.nickname)
return list(events_set)
# PLOTS:
def plot_Bayes_network(self, filename=None):
if filename is None:
return self.parms.plot_Graph()
else:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax_ = self.parms.plot_Graph(ax=ax)
fig.savefig(filename)
return ax_
def plot(self, event_nickname:str, ax=None):
if ax is None:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
da = self.xdata[event_nickname]
return ax
# def infer(self, batchname=None, iterations=5):
# import subprocess
# igor_exec = rcParams['paths.igor_exec']
# wd = "."
# cmd = igor_exec +" -set_wd " + wd + " -set_custom_model " + self.parms.model_parms_file + " -infer --N_iter "+str(iterations)
# print(cmd)
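# NOTE: the following definition shadows the export_event_to_csv(event_nickname, fln_prefix, sep)
# method defined earlier in this class; only this later definition is effective at runtime.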
def export_event_to_csv(self, strEvent, *args, **kargs):
# if path_or_buf is None:
# path_or_buf = 'event__'+strEvent+".csv"
# strEvent = 'd_3_del'
df = self.xdata[strEvent].to_dataframe(name="Prob").drop('priority', 1)
df.to_csv(*args, **kargs)
def plot_dumm_report(self, strEvent):
# strEvent = 'd_gene'
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
dependencias = list(self.xdata[strEvent].dims)
dependencias.remove(strEvent)
dependencias_dim = [self.xdata[strEvent][dep].shape[0] for dep in dependencias]
# eventos = eventos.remove(strEvent)
lista = list()
import numpy as np
# np.ndindex()
for index in np.ndindex(*dependencias_dim):
dictionary = dict(zip(dependencias, index))
# TODO: PLOT EACH FIGURE SEPARATELY
self.xdata[strEvent][dictionary].plot()
aaa = [str(key) + "__" + str(dictionary[key]) for key in dictionary.keys()]
lbl_file = "___".join(aaa)
df = self.xdata[strEvent][dictionary].to_dataframe("P").drop('priority', 1)
df.plot.bar(x="lbl__"+strEvent, y='P', ax=ax)
print("*"*10)
print(lbl_file)
print(df)
#df.to_csv(lbl_file+".csv")
#fig.savefig(lbl_file+".png")
#ax.clear()
return fig
def set_genomic_dataframe_dict(self, dataframe_dict):
self.genomic_dataframe_dict = dataframe_dict
def scenario_from_database(self, scenarios_list):
scen = scenarios_list[0]
scen_dict = scen.realizations_ids_dict
for event in self.parms.Event_list:
if not (event.event_type == 'DinucMarkov'):
scen_dict[event.nickname] = scen_dict.pop('id_' + event.nickname)
# FIXME:
def export_model(self, model_parms_file=None, model_marginals_file=None):
self.parms.write_model_parms(filename=model_parms_file)
# TODO: also export the marginals; see write_model for a complete export.
print("Exporting model parms to ", model_parms_file)
def get_event_realizations_DataFrame(self, event_nickname):
return self.parms.Event_dict[event_nickname]
def get_event_realization_of_event(self, event_nickname, event_id):
if type(event_id) is list:
return list( map( lambda x: self.parms.get_Event(event_nickname).realizations[x], event_id) )
else:
return self.parms.get_Event(event_nickname).realizations[event_id]
def get_realizations_dict_from_scenario_dict(self, scenario_realization_dict:dict):
realization_dict = dict()
# print(scenario_realization_dict)
for event_nickname, event_id in scenario_realization_dict.items():
if not ( event_nickname == 'mismatcheslen' or event_nickname == 'mismatches' or event_nickname == 'errors') :
realization_dict[event_nickname] = self.get_event_realization_of_event(event_nickname, event_id)
return realization_dict
def set_realization_event_from_DataFrame(self, event_nickname, new_df):
self.parms.set_event_realizations_from_DataFrame(event_nickname, new_df)
self.marginals.initialize_uniform_from_model_parms(self.parms)
self.generate_xdata()
def write_model(self, fln_model_parms, fln_model_marginals):
self.parms.write_model_parms(filename=fln_model_parms)
self.marginals.write_model_marginals(filename=fln_model_marginals, model_parms=self.parms)
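# Round-trip sketch (file names are illustrative):
#   mdl.write_model('model_parms.txt', 'model_marginals.txt')
#   mdl2 = IgorModel(model_parms_file='model_parms.txt', model_marginals_file='model_marginals.txt')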
# FIXME: DEPRECATED
# FIXME: Find a better way to get the order to construct a sequence.
def generate_sequence_construction_list(self):
""" Generate the list of events to reconstruct a sequence from an scenario self.sequence_construction_event_list """
sequence_arrengement_dict = dict()
# 1. 'V_gene'
V_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'V_gene']
# 2. 'D_gene'
D_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'D_gene']
# 3. 'J_gene'
J_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'J_gene']
V_gene_list = sorted(V_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
J_gene_list = sorted(J_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
sequence_arrengement_dict['V_gene'] = V_gene_list
sequence_arrengement_dict['J_gene'] = J_gene_list
# since d_3_del and d_5_del have the same priority then
arrengement_list = list()
if len(D_gene_list) == 0:
VJ_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'VJ_gene']
VJ_gene_list = sorted(VJ_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
sequence_arrengement_dict['VJ_gene'] = VJ_gene_list
arrengement_list = V_gene_list + VJ_gene_list + J_gene_list
else:
D_gene_list = sorted(D_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
sequence_arrengement_dict['D_gene'] = D_gene_list
VD_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'VD_genes']
VD_gene_list = sorted(VD_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
sequence_arrengement_dict['VD_gene'] = VD_gene_list
DJ_gene_list = [event for event in self.parms.Event_list if event.seq_type == 'DJ_gene']
DJ_gene_list = sorted(DJ_gene_list,
key=lambda event: 100 * event.priority - len(self.xdata[event.nickname].attrs['parents']),
reverse=True)
sequence_arrengement_dict['DJ_gene'] = DJ_gene_list
arrengement_list = V_gene_list + VD_gene_list + D_gene_list + DJ_gene_list + J_gene_list
self.sequence_construction_event_list = arrengement_list
return sequence_arrengement_dict # arrengement_list
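# The sort key 100 * priority - number_of_parents (with reverse=True) orders events by
# IGoR priority first and, for equal priority, places events with fewer parents earlier.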
def construct_sequence_VDJ_from_realization_dict(self, scen_realization_dict):
"""return VDJ gene segment, which are the gene with the deletions of palindromic insertions"""
# print("scen_realization_dict : ", scen_realization_dict)
V_segment_dict = get_gene_segment(scen_realization_dict['v_choice'].value,
int_gene_3_del=scen_realization_dict['v_3_del'].value)
D_segment_dict = get_gene_segment(scen_realization_dict['d_gene'].value,
int_gene_5_del=scen_realization_dict['d_5_del'].value,
int_gene_3_del=scen_realization_dict['d_3_del'].value)
J_segment_dict = get_gene_segment(scen_realization_dict['j_choice'].value,
int_gene_5_del=scen_realization_dict['j_5_del'].value)
VD_segment_dict = collections.OrderedDict()
DJ_segment_dict = collections.OrderedDict()
VD_segment_dict['gene_segment'] = "".join([realiz.value for realiz in scen_realization_dict['vd_dinucl']])
DJ_segment_dict['gene_segment'] = "".join([realiz.value for realiz in scen_realization_dict['dj_dinucl']][::-1])
return V_segment_dict, VD_segment_dict, D_segment_dict, DJ_segment_dict, J_segment_dict
def construct_sequence_VJ_from_realization_dict(self, scen_realization_dict):
"""return VJ sequence, which are the gene with the deletions of palindromic insertions"""
# print("scen_realization_dict : ", scen_realization_dict)
V_segment_dict = get_gene_segment(scen_realization_dict['v_choice'].value,
int_gene_3_del=scen_realization_dict['v_3_del'].value)
J_segment_dict = get_gene_segment(scen_realization_dict['j_choice'].value,
int_gene_5_del=scen_realization_dict['j_5_del'].value)
VJ_segment_dict = collections.OrderedDict()
VJ_segment_dict['gene_segment'] = "".join([realiz.value for realiz in scen_realization_dict['vj_dinucl']])
return V_segment_dict, VJ_segment_dict, J_segment_dict
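# Sketch: a rearranged sequence can be reassembled by concatenating the returned
# segments' 'gene_segment' strings in order, e.g. for the VDJ case:
#   v, vd, d, dj, j = mdl.construct_sequence_VDJ_from_realization_dict(realization_dict)
#   seq = v['gene_segment'] + vd['gene_segment'] + d['gene_segment'] + dj['gene_segment'] + j['gene_segment']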
# TODO: MAKE A METHOD TO EXPORT A LINE FROM A SCENARIO
def get_AIRR_VDJ_rearragement_dict_from_scenario(self, scenario, str_sequence, v_offset=0, pgen=None, junction=None, junction_aa=None):
# get_AIRR_VDJ_rearragement_dict_from_scenario(scenario, indexed_seq.seq_index, indexed_seq.sequence)
# airr_dict = dict()
from .AIRR import AIRR_VDJ_rearrangement
realizations_ids_dict = scenario.realizations_ids_dict
realization_dict = self.get_realizations_dict_from_scenario_dict(realizations_ids_dict)
v_segment, vd_segment, d_segment, dj_segment, j_segment = self.construct_sequence_VDJ_from_realization_dict(realization_dict)
airr_vdj = AIRR_VDJ_rearrangement(sequence_id=scenario.seq_index, sequence=str_sequence)
airr_vdj.v_data.call = realization_dict['v_choice'].name
airr_vdj.d_data.call = realization_dict['d_gene'].name
airr_vdj.j_data.call = realization_dict['j_choice'].name
airr_vdj.sequence_alignment = v_segment['gene_segment'] + vd_segment['gene_segment'] + d_segment['gene_segment'] + dj_segment['gene_segment'] + j_segment['gene_segment']
airr_vdj.np1 = v_segment['palindrome_3_end'] + vd_segment['gene_segment']
airr_vdj.np2 = dj_segment['gene_segment'] + j_segment['palindrome_5_end']
airr_vdj.pgen = pgen
airr_vdj.junction = junction
airr_vdj.junction_aa = junction_aa
airr_vdj.rev_comp = False
# FIXME: CORRECT CIGAR FORMAT TEMPORARY SOLUTION
airr_vdj.v_data.cigar = str(len(v_segment['gene_cut']))+"M"
airr_vdj.d_data.cigar = str(len(d_segment['gene_cut'])) + "M"
airr_vdj.j_data.cigar = str(len(j_segment['gene_cut'])) + "M"
airr_vdj.v_data.score = 5 * len(v_segment['gene_cut'])
airr_vdj.d_data.score = 5 * len(d_segment['gene_cut'])
airr_vdj.j_data.score = 5 * len(j_segment['gene_cut'])
# V
airr_vdj.v_data.sequence_start = 1
airr_vdj.v_data.sequence_end = len(v_segment['palindrome_5_end']) + len(v_segment['gene_cut'])
airr_vdj.v_data.germline_start = airr_vdj.v_data.sequence_start - v_offset - 1
airr_vdj.v_data.germline_end = airr_vdj.v_data.sequence_end - airr_vdj.v_data.sequence_start - 1
# = airr_vdj.v_data.germline_start + len(v_segment['palindrome_5_end']) + len(v_segment['gene_cut'])
airr_vdj.p3v_length = len(v_segment['palindrome_3_end'])
# VD
airr_vdj.n1_length = realization_dict['vd_ins'].value
airr_vdj.np1_length = airr_vdj.p3v_length + airr_vdj.n1_length
airr_vdj.np1 = vd_segment['gene_segment'] # This includes the palindromic insertions
# D
airr_vdj.p5d_length = len(d_segment['palindrome_5_end'])
airr_vdj.d_data.germline_start = d_segment['gene_ini'] + 1
airr_vdj.d_data.germline_end = d_segment['gene_end'] + 1
airr_vdj.d_data.sequence_start = airr_vdj.np1_length + (airr_vdj.v_data.sequence_end - airr_vdj.v_data.sequence_start - 1 )
airr_vdj.d_data.sequence_end = airr_vdj.d_data.sequence_start + len(d_segment['gene_cut']) - 1
airr_vdj.p3d_length = len(d_segment['palindrome_3_end'])
# DJ
airr_vdj.n2_length = realization_dict['dj_ins'].value
airr_vdj.np2_length = airr_vdj.p5d_length + airr_vdj.n2_length + airr_vdj.p3d_length
airr_vdj.np2 = dj_segment['gene_segment'] # This includes the palindromic insertions
# J
airr_vdj.p5j_length = len(j_segment['palindrome_5_end'])
airr_vdj.j_data.germline_start = j_segment['gene_ini'] + 1
airr_vdj.j_data.germline_end = j_segment['gene_end'] + 1
airr_vdj.j_data.sequence_start = airr_vdj.np2_length + (airr_vdj.d_data.sequence_end - airr_vdj.d_data.sequence_start - 1)
airr_vdj.j_data.sequence_end = airr_vdj.j_data.sequence_start + len(j_segment['gene_cut']) - 1
return airr_vdj.to_dict()
def get_AIRR_VJ_rearragement_dict_from_scenario(self, scenario, str_sequence, v_offset=0, pgen=None, junction=None, junction_aa=None):
"""
Return an AIRR rearrangement dictionary built from a scenario.
"""
# get_AIRR_VDJ_rearragement_dict_from_scenario(scenario, indexed_seq.seq_index, indexed_seq.sequence)
# airr_dict = dict()
from .AIRR import AIRR_VDJ_rearrangement
realizations_ids_dict = scenario.realizations_ids_dict
realization_dict = self.get_realizations_dict_from_scenario_dict(realizations_ids_dict)
# FIXME: HERE
v_segment, vj_segment, j_segment = self.construct_sequence_VJ_from_realization_dict(realization_dict)
airr_vj = AIRR_VDJ_rearrangement(sequence_id=scenario.seq_index, sequence=str_sequence)
airr_vj.v_data.call = realization_dict['v_choice'].name
airr_vj.d_data.call = None #realization_dict['d_gene'].name
airr_vj.j_data.call = realization_dict['j_choice'].name
airr_vj.sequence_alignment = v_segment['gene_segment'] + vj_segment['gene_segment'] + j_segment['gene_segment']
airr_vj.np1 = v_segment['palindrome_3_end'] + vj_segment['gene_segment'] + j_segment['palindrome_5_end']
airr_vj.np2 = None
airr_vj.pgen = pgen
airr_vj.junction = junction
airr_vj.junction_aa = junction_aa
airr_vj.rev_comp = False
# FIXME: CORRECT CIGAR FORMAT TEMPORARY SOLUTION
airr_vj.v_data.cigar = str(len(v_segment['gene_cut']))+"M"
airr_vj.d_data.cigar = None #str(len(d_segment['gene_cut'])) + "M"
airr_vj.j_data.cigar = str(len(j_segment['gene_cut'])) + "M"
airr_vj.v_data.score = 5 * len(v_segment['gene_cut'])
airr_vj.d_data.score = None #5 * len(d_segment['gene_cut'])
airr_vj.j_data.score = 5 * len(j_segment['gene_cut'])
# V
airr_vj.v_data.sequence_start = 1
airr_vj.v_data.sequence_end = len(v_segment['palindrome_5_end']) + len(v_segment['gene_cut'])
airr_vj.v_data.germline_start = airr_vj.v_data.sequence_start - v_offset - 1
airr_vj.v_data.germline_end = airr_vj.v_data.sequence_end - airr_vj.v_data.sequence_start - 1
# = airr_vdj.v_data.germline_start + len(v_segment['palindrome_5_end']) + len(v_segment['gene_cut'])
airr_vj.p3v_length = len(v_segment['palindrome_3_end'])
# FIXME: WHY DO I NEED TO PUT IT FIRST?
airr_vj.p5j_length = len(j_segment['palindrome_5_end'])
# VJ
airr_vj.n1_length = realization_dict['vj_ins'].value
airr_vj.np1_length = airr_vj.p3v_length + airr_vj.n1_length + airr_vj.p5j_length
airr_vj.np1 = vj_segment['gene_segment'] # This includes the palindromic insertions
# J
airr_vj.j_data.germline_start = j_segment['gene_ini'] + 1
airr_vj.j_data.germline_end = j_segment['gene_end'] + 1
airr_vj.j_data.sequence_start = airr_vj.np1_length + (
airr_vj.v_data.sequence_end - airr_vj.v_data.sequence_start - 1)
airr_vj.j_data.sequence_end = airr_vj.j_data.sequence_start + len(j_segment['gene_cut']) - 1
return airr_vj.to_dict()
class IgorModel_Parms:
"""
Class to get a list of Events directly from the *_parms.txt
:param model_parms_file: Igor parms file path.
"""
def __init__(self, model_parms_file=None):
## Parms file representation
self.Event_list = list() # list of Rec_event
self.Edges = list()
self.ErrorRate_dict = dict()
## pygor definitions
self.Event_dict = dict()
self.Edges_dict = dict()
self.dictNameNickname = dict()
self.dictNicknameName = dict()
self.G = nx.DiGraph()
self.preMarginalDF = pd.DataFrame()
self.model_parms_file = ""
if model_parms_file is not None:
print(model_parms_file)
self.read_model_parms(model_parms_file)
#self.get_EventDict_DataFrame()
def __str__(self):
tmpstr = "{ 'len Event_list': " + str(len(self.Event_list)) \
+", 'len Egdes': " + str(len(self.Edges)) \
+", 'len ErrorRate': " + str(len(self.ErrorRate_dict)) + " }"
return tmpstr
#return "{ Event_list, Egdes, ErrorRate}"
@classmethod
def from_network_dict(cls, network_dict:dict):
# outfile << event_type<< ";" <<
# SingleErrorRate
cls = IgorModel_Parms()
# 1. Create Event_list
cls.Event_list = list()
for nickname in network_dict.keys():
# FIXME: Use default values
# Create a default event by nickname
dict_IgorRec_Event = IgorRec_Event_default_dict[nickname]
event = IgorRec_Event.from_dict(dict_IgorRec_Event)
try:
cls.Event_list.append(event)
print("New event has been added: ", dict_IgorRec_Event)
except Exception as e:
raise e
# 2. Fill Events with realizations
# 2.1. if event.event_type == 'GeneChoice':
# load file
#mdl0.parms.Event_list[0].realizations[0])
# load events from default dictionary.
#
return cls
@classmethod
def from_database(cls, db):
print("Loading Model Parms from database.")
@classmethod
def make_default_VJ(cls, df_genomicVs, df_genomicJs, lims_deletions=None, lims_insertions=None):
"""Create a default VJ model from V and J genes dataframes
lims_deletions tuple with min and maximum value for deletions, e.g. (-4,20)
lims_insertions tuple with min and maximum value for deletions, e.g. (0,30)
"""
cls = IgorModel_Parms()
if lims_deletions is None:
lims_deletions = (-4, 17)
if lims_insertions is None:
lims_insertions = (0, 41)
# Add events to Event_list
for event_nickname in Igor_VJ_default_nickname_list:
event_dict = IgorRec_Event_default_dict[event_nickname]
if event_nickname == 'j_choice':
event_dict["priority"] = 6
event = IgorRec_Event.from_dict(event_dict)
cls.Event_list.append(event)
if event.event_type == 'DinucMarkov':
value_list = ['A', 'C', 'G', 'T']
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'Deletion':
value_list = list(range(*lims_deletions))
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'Insertion':
value_list = list(range(*lims_insertions))
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'GeneChoice':
if event_nickname == 'v_choice':
cls.set_event_realizations_from_DataFrame(event_nickname, df_genomicVs)
elif event_nickname == 'j_choice':
cls.set_event_realizations_from_DataFrame(event_nickname, df_genomicJs)
else:
print("Unrecognized type of event. There are only 4 types of events:")
print(" - GeneChoice")
print(" - Deletions")
print(" - Insertions")
print(" - DinucMarkov")
# Event names are refreshed in gen_NameNickname_dict when realizations are set.
# Now edges
cls.set_Edges_from_dict(Igor_VJ_default_parents_dict)
# Error Rate
cls.ErrorRate_dict = {'error_type': 'SingleErrorRate', 'error_values': '0.000396072'}
return cls
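# Usage sketch (hedged): df_genomicVs / df_genomicJs are gene DataFrames in the same
# 'name'/'value' layout consumed by set_event_realizations_from_DataFrame, e.g.
#   parms = IgorModel_Parms.make_default_VJ(df_genomicVs, df_genomicJs, lims_deletions=(-4, 17))
# Marginals can then be initialized uniformly from these parms (see
# IgorModel.set_realization_event_from_DataFrame for that pattern).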
@classmethod
def make_default_VDJ(cls, df_genomicVs, df_genomicDs, df_genomicJs, lims_deletions=None, lims_insertions=None):
"""Create a default VJ model from V and J genes dataframes
lims_deletions tuple with min and maximum value for deletions, e.g. (-4,20)
lims_insertions tuple with min and maximum value for deletions, e.g. (0,30)
"""
cls = IgorModel_Parms()
if lims_deletions is None:
lims_deletions = (-4, 17)
if lims_insertions is None:
lims_insertions = (0, 41)
for event_nickname in Igor_VDJ_default_nickname_list:
event_dict = IgorRec_Event_default_dict[event_nickname]
event = IgorRec_Event.from_dict(event_dict)
cls.Event_list.append(event)
if event.event_type == 'DinucMarkov':
value_list = ['A', 'C', 'G', 'T']
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'Deletion':
value_list = list(range(*lims_deletions))
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'Insertion':
value_list = list(range(*lims_insertions))
name_list = ['' for val in value_list]
event_df = pd.DataFrame.from_dict({'name': name_list, 'value': value_list})
event_df.index.name = 'id'
cls.set_event_realizations_from_DataFrame(event_nickname, event_df)
elif event.event_type == 'GeneChoice':
if event_nickname == 'v_choice':
cls.set_event_realizations_from_DataFrame(event_nickname, df_genomicVs)
elif event_nickname == 'd_gene':
cls.set_event_realizations_from_DataFrame(event_nickname, df_genomicDs)
elif event_nickname == 'j_choice':
cls.set_event_realizations_from_DataFrame(event_nickname, df_genomicJs)
else:
print("ERROR: GeneChoice event "+event.nickname+" is not a default nickname.")
else:
print("ERROR: Unrecognized type of event. There are only 4 types of events:")
print(" - GeneChoice")
print(" - Deletions")
print(" - Insertions")
print(" - DinucMarkov")
cls.update_events_name()
# Now edges
cls.set_Edges_from_dict(Igor_VDJ_default_parents_dict)
# Error Rate
cls.ErrorRate_dict = {'error_type': 'SingleErrorRate', 'error_values': '0.000396072'}
return cls
def load_events_from_dict(self, dicto):
print(dicto)
# TODO: Check how the imgt functions return data
def load_GeneChoice_realizations_by_nickname(self, event_nickname:str, flnGenomic):
event = self.get_Event(event_nickname)
from Bio import SeqIO
if event.event_type == 'GeneChoice':
for index, record in enumerate(SeqIO.parse(flnGenomic, "fasta")):
event_realization = IgorEvent_realization()
event_realization.id = index
event_realization.value = record.seq
event_realization.name = record.description
event.add_realization(event_realization)
print(event_nickname, " from file : ", flnGenomic)
def load_Deletion_realizations_by_nickname(self, event_nickname: str, limits=(-4, 20)):
event = self.get_Event(event_nickname)
if event.event_type == 'Deletion':
start, end = limits
for index, ndels in enumerate(range(start, end)):
event_realization = IgorEvent_realization()
event_realization.id = index
event_realization.value = ndels
event.add_realization(event_realization)
print(event_nickname, " limits : ", limits)
def load_Insertion_realizations_by_nickname(self, event_nickname: str, limits=(0, 24)):
event = self.get_Event(event_nickname)
if event.event_type == 'Insertion':
start, end = limits
# FIXME: VALIDATE FOR POSITIVE VALUES
for index, nins in enumerate(range(start, end)):
event_realization = IgorEvent_realization()
event_realization.id = index
event_realization.value = nins
event.add_realization(event_realization)
print(event_nickname, " limits : ", limits)
def load_DinucMarkov_realizations_by_nickname(self, event_nickname: str):
event = self.get_Event(event_nickname)
if event.event_type == 'DinucMarkov':
for index, nt_char in enumerate(['A', 'C', 'G', 'T']):
event_realization = IgorEvent_realization()
event_realization.id = index
event_realization.value = nt_char
event.add_realization(event_realization)
def read_model_parms(self, filename):
"""Reads a model graph structure from a model params file.
Note that for now this method does not read the error rate information.
"""
with open(filename, "r") as ofile:
# dictionary containing recarrays?
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
if strip_line == "@Event_list":
self.read_Event_list(ofile)
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
if strip_line == "@Edges":
self.read_Edges(ofile)
# FIXME: ErrorRate added
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
if strip_line == "@ErrorRate" :
self.read_ErrorRate(ofile)
self.model_parms_file = filename
self.gen_EventDict_DataFrame()
# save in Event_list
def read_Event_list(self, ofile):
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
#event = Rec_Event()
while strip_line[0] == '#':
# get the metadata of the event list
event_metadata = strip_line[1:].split(";") #GeneChoice;V_gene;Undefined_side;7;v_choice
event_metadata[3] = int(event_metadata[3]) # change priority to integer
event = IgorRec_Event(*event_metadata)
#self.G.add_node(event.nickname)
# Now read the realizations (or possibilities)
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
while strip_line[0] == '%':
realization = IgorEvent_realization()
realizData = strip_line[1:].split(";")
if event.event_type == "GeneChoice":
realization.name = realizData[0]
realization.value = realizData[1]
realization.id = int(realizData[2])
elif event.event_type == "DinucMarkov":
realization.value = realizData[0]
realization.id = int(realizData[1])
else:
realization.value = int(realizData[0])
realization.id = int(realizData[1])
event.add_realization(realization)
# next line
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
self.Event_list.append(event)
ofile.seek(lastPos)
def read_Edges(self, ofile):
#print "read_Edges"
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
while strip_line[0] == '%':
edge = strip_line[1:].split(';')
self.Edges.append(edge)
#read nextline
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
ofile.seek(lastPos)
def add_Egde(self, parent_nickname, child_nickname):
try:
parent_name = self.dictNicknameName[parent_nickname]
child_name = self.dictNicknameName[child_nickname]
# TODO: CHECK IF EDGE exist!
self.Edges.append([parent_name, child_name])
self.getBayesGraph()
#self.Edges_dict[child_nickname].append(parent_nickname)
except Exception as e:
print("Edge : ", parent_nickname, child_nickname, " couldn't be added.")
print(e)
pass
def set_Edges_from_dict(self, parents_dict):
try:
for child_nickname, parents in parents_dict.items():
for parent_nickname in parents:
parent_name = self.dictNicknameName[parent_nickname]
child_name = self.dictNicknameName[child_nickname]
self.Edges.append([parent_name, child_name])
self.getBayesGraph()
except Exception as e:
print("set_Edges_from_dict : ", parent_nickname, child_nickname, " couldn't be added.")
print(e)
pass
def remove_Edge(self, parent_nickname, child_nickname):
try:
parent_name = self.dictNicknameName[parent_nickname]
child_name = self.dictNicknameName[child_nickname]
# TODO: CHECK IF EDGE exist!
new_Edges = [edge for edge in self.Edges if not (parent_name == edge[0] and child_name == edge[1])]
self.Edges = new_Edges
self.getBayesGraph()
#self.Edges_dict[child_nickname].append(parent_nickname)
except Exception as e:
print("Edge : ", parent_nickname, child_nickname, " couldn't be added.")
print(e)
pass
def set_event_realizations_from_DataFrame(self, event_nickname, df):
# FIXME: unnecessary copy, find a better way.
new_Event_list = list()
for event in self.Event_list:
if event.nickname == event_nickname:
event.update_realizations_from_dataframe(df)
new_Event_list.append(event)
self.Event_list = new_Event_list
self.gen_EventDict_DataFrame()
def read_ErrorRate(self, ofile):
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n') # Remove end of line character
strip_line = strip_line.rstrip('\r') # Remove carriage return character (if needed)
while strip_line[0] == '#':
# TODO: SAVE THE FOLLOWING TEXT AFTER # AS ERROR TYPE
self.ErrorRate_dict = dict()
self.ErrorRate_dict['error_type'] = strip_line[1:]
lastPos = ofile.tell()
line = ofile.readline()
strip_line = line.rstrip('\n').rstrip()
error = strip_line
self.ErrorRate_dict['error_values'] = error
# if 'SingleErrorRate' == strip_line[1:] :
# lastPos = ofile.tell()
# line = ofile.readline()
# strip_line = line.rstrip('\n').rstrip()
# error = strip_line
# self.ErrorRate = {"SingleErrorRate" : error }
ofile.seek(lastPos)
# FIXME: FINISH THIS METHOD
def write_model_parms(self, filename=None):
"""Writes a model graph structure from a model params object.
Note that for now this method does not read the error rate information.
"""
if filename is None:
filename = "tmp_mdl_parms.txt"
# Sort events in list with the your specific preference.
# FIXME: FIND ANOTHER WAY TO WRITE IN A CORRECT ORDER
# igor_nickname_list = ["v_choice", "j_choice", "d_gene", "v_3_del"]
# self.get_Event(nicknameList)
# self.Event_list
strSepChar = ";"
try:
import os
#print("AAAAAAAAAAAAAAA:", os.path.dirname(filename), filename)
os.makedirs(os.path.dirname(filename), exist_ok=True)
except Exception as e:
print("WARNING: write_model_parms path ", e)
print("Writing model parms in file ", filename)
with open(filename, "w") as ofile:
# 1. Write events
self.write_Event_list(ofile, delimiter=strSepChar)
# 2. Write Edges
self.write_Edges(ofile, delimiter=strSepChar)
# 3. Write ErrorRate
self.write_ErrorRate(ofile, delimiter=strSepChar)
def write_Event_list(self, ofile, delimiter=None):
if delimiter is None:
strSepChar = ';'
else:
strSepChar = delimiter
ofile.write("@Event_list\n")
# for event in self.Event_list:
# for nickname in Igor_nickname_list: # Igor_nicknameList is in IgorDefaults.py
for event in self.Event_list:
try:
## self.write_event(ofile, event:IgorRec_Event)
# event = self.get_Event(nickname)
strLine = "#" + \
str(event.event_type) + strSepChar + \
str(event.seq_type) + strSepChar + \
str(event.seq_side) + strSepChar + \
str(event.priority) + strSepChar + \
str(event.nickname) + "\n"
ofile.write(strLine)
# WRITE THE LIST OF REALIZATIONS adding character '%'
df = event.get_realization_DataFrame()
str_df = df.to_csv(sep=strSepChar, header=False)
str_realization_list = ""
for strLine in str_df.split("\n"):
# What we have: ['id', 'value', 'name']
strLine_list = strLine.split(strSepChar)
# print(strLine, strLine_list)
if len(strLine_list) > 1:
if event.event_type == "GeneChoice":
str_id = strLine_list[0]
str_value = strLine_list[1]
str_name = strLine_list[2]
# What we want: ['name', 'value', 'id']
str_realization = str_name + strSepChar + str_value + strSepChar + str_id
else:
str_id = strLine_list[0]
str_value = strLine_list[1]
# What we want: ['value', 'id']
str_realization = str_value + strSepChar + str_id
str_realization_list = str_realization_list + "%" + str_realization + "\n"
ofile.write(str_realization_list)
except Exception as e:
print("ERROR: write_Event_list, ", event.nickname)
print(e)
pass
def write_Edges(self, ofile, delimiter=None):
if delimiter is None:
strSepChar = ';'
else:
strSepChar = delimiter
ofile.write("@Edges\n")
try:
for edge in self.Edges:
ofile.write("%"+edge[0]+strSepChar+edge[1]+"\n")
except Exception as e:
print("ERROR: write_Edges")
print(e)
pass
def write_ErrorRate(self, ofile, delimiter=None):
if delimiter is None:
strSepChar = ';'
else:
strSepChar = delimiter
ofile.write("@ErrorRate\n")
ofile.write("#"+self.ErrorRate_dict['error_type']+"\n")
ofile.write(self.ErrorRate_dict['error_values']+"\n")
def get_EventsNickname_list(self):
return [event.nickname for event in self.Event_list]
def get_EventsName_list(self):
return [event.name for event in self.Event_list]
def get_Event(self, event_nickname_or_name, by_nickname=True):
"""Returns the RecEvent with corresponding name or nickname."""
if by_nickname:
for ev in self.Event_list:
if ev.nickname == event_nickname_or_name:
return ev
raise Exception(
'RecEvent with nickname \"' + event_nickname_or_name + "\" not found.")
else:
for ev in self.Event_list:
if ev.name == event_nickname_or_name:
return ev
raise Exception(
'RecEvent with name \"' + event_nickname_or_name + "\" not found.")
def gen_EventDict_DataFrame(self):
self.Event_dict = dict()
#dictio = dict()
for event in self.Event_list:
#dictio[event.nickname] = event.get_realization_DataFrame()
self.Event_dict[event.nickname] = event.get_realization_DataFrame()
self.gen_NameNickname_dict()
self.getBayesGraph()
def gen_NameNickname_dict(self):
self.dictNameNickname = dict()
for event in self.Event_list:
event.update_name()
self.dictNameNickname[event.name] = event.nickname
# return dictio
self.dictNicknameName = {v: k for k, v in self.dictNameNickname.items()}
def get_event_dict(self, str_key, str_value):
"""
Return a python dictionary of the event_dict, like ('nickname', 'priority')
{'v_choice:7, 'd_gene':6, ...}
"""
dicto = dict()
for event in self.Event_list:
event_dict = event.to_dict()
dicto[event_dict[str_key]] = event_dict[str_value]
return dicto
def getBayesGraph(self):
self.G = nx.DiGraph()
for rec_event in self.Event_list:
self.G.add_node(rec_event.nickname)
self.Edges_dict[rec_event.nickname] = list()
self.dictNameNickname[rec_event.name] = rec_event.nickname
for edge in self.Edges:
            # Graph to get the dependencies
self.G.add_edge(self.dictNameNickname[edge[0]], self.dictNameNickname[edge[1]])
self.Edges_dict[self.dictNameNickname[edge[1]]].append(self.dictNameNickname[edge[0]])
#self.G = self.G.reverse()
def genPreMarginalDF(self):
data=[]
for event in self.Event_list:
#print (parms.dictNameNickname[event.name])
#parms.Edges
#tmpDict = dict()
lista = []
for edge in self.Edges:
if edge[1] == event.name:
#print(parms.dictNameNickname[edge[0]])
#print(edge[0])
lista.append(self.dictNameNickname[edge[0]])
tmpDict = {'event': event.nickname, 'priority': event.priority, 'Edges': lista }
data.append(tmpDict)
self.preMarginalDF = pd.DataFrame(data) #.set_index('event')
self.preMarginalDF['nEdges'] = self.preMarginalDF['Edges'].map(len)
self.preMarginalDF.sort_values(['priority', 'nEdges'], ascending=[False,True])
def genMarginalFile(self, model_marginals_file=None):
self.genPreMarginalDF()
#self.preMarginalDF
        if model_marginals_file is None:
model_marginals_file = "model_marginals.txt"
ofile = open(model_marginals_file, "w")
for index, row in self.preMarginalDF.iterrows():
nickname = row['event']
ofile.write("@"+nickname+"\n")
#DimEvent = len(parms.Event_dict[event.nickname])
#DimEdges = len(parms.Edges_dict[event.nickname])
DimEvent = len(self.Event_dict[nickname])
strDimLine = "$Dim["
DimList = []
if row['nEdges'] == 0:
strDimLine = strDimLine +str(DimEvent)
strDimLine = strDimLine +"]"
else:
for evNick in row['Edges']: #parms.Edges_dict[event.nickname]:
Dim = len(self.Event_dict[evNick])
strDimLine = strDimLine +str(Dim)+","
DimList.append(Dim)
strDimLine = strDimLine + str(DimEvent)
strDimLine = strDimLine +"]"
ofile.write(strDimLine+"\n")
lista = row['Edges'] # self.Event_dict[nickname]
for indices in np.ndindex(tuple(DimList)):
#print indices
strTmp = "#"
for ii in range(len(lista)):
strTmp = strTmp+"["+lista[ii]+","+str(indices[ii])+"]"
if not (ii == len(lista)-1):
strTmp = strTmp + ","
ofile.write(strTmp+"\n")
ofile.write("%")
unifProb = (1./DimEvent)
for jj in range(DimEvent):
ofile.write(str(unifProb))
if not (jj == DimEvent-1):
ofile.write(",")
ofile.write("\n")
ofile.close()
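    # Usage sketch (illustrative only, assuming `parms` is an already loaded
    # IgorModel_Parms instance): generate a uniform-probability marginals file
    # whose dimensions match the current events and edges.
    #
    #   parms.genMarginalFile("model_marginals.txt")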
def plot_Graph(self, ax=None, **kwargs): # FIXME: ALLOW the possibility to pass an ax like ax=None):
"""Return a plot of the bayesian network """
#if ax is None:
pos = nx.spring_layout(self.G)
# priorities up
prio_dict = dict()
for event in self.Event_list:
if not (event.priority in prio_dict):
prio_dict[event.priority] = list()
prio_dict[event.priority].append(event)
#print(str(prio_dict))
xwidth = 240
yfactor = 40
for key in prio_dict:
lenKey = len(prio_dict[key])
if lenKey == 1:
pos[prio_dict[key][0].nickname] = np.array([float(xwidth) / 2.0, float(key) * yfactor])
else:
xx = np.linspace(0, xwidth, lenKey)
for ii, ev in enumerate(prio_dict[key]):
xpos = xx[ii] # float(xwidth)*float(ii)/float(lenKey)
pos[ev.nickname] = np.array([xpos, float(key) * yfactor])
try:
import hvplot.networkx as hvnx
print("hvplot")
graph = hvnx.draw(self.G, with_labels=True, FontSize=10, pos=pos, alpha=0.5,
arrowstyle='fancy', arrowsize=2000, node_size=1000, width=400, height=400)
##, arrows=True, arrowsize=20, node_size=800, font_size=10, font_weight='bold')
return graph
except ImportError as e:
try:
if ax is None:
#print("matplotlib")
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_aspect('equal')
nx.draw(self.G, pos=pos, ax=ax, with_labels=True, arrows=True, arrowsize=20,
node_size=800, font_size=10, font_weight='bold') # FIXME: make a better plot: cutting edges.
return ax
except Exception as e:
print(e)
raise
def get_Event_dependencies(self, strEvent):
print(strEvent)
return list(self.G.predecessors(strEvent))
def get_Event_list_sorted(self):
# FIXME: GENERALIZE THIS PROCESS with the parents priority
# Order events by priority.
events_list_with_parents = list()
for event in self.Event_list:
events_list_with_parents.append((event, list(self.G.predecessors(event.nickname))))
# print(events_list_with_parents)
sorted_events_list_with_parents = sorted(events_list_with_parents,
key=lambda tupla: (tupla[0].priority, -len(tupla[1])), reverse=True)
return [sorted_event_parent[0] for sorted_event_parent in sorted_events_list_with_parents]
def from_scenario(self, scenario, strEvent):
return self.Event_dict[strEvent].loc[scenario[strEvent]]
# TODO: GIVEN A SCENARIO A DICT WITH REALIZATIONS
def realiz_dict_from_scenario(self, scenario): #IgorScenario):
realizations_dict = dict()
for nickname_key in scenario.realizations_ids_dict:
if nickname_key == 'mismatches':
realizations_dict[nickname_key] = scenario[nickname_key]
elif nickname_key == 'mismatcheslen':
realizations_dict[nickname_key] = scenario[nickname_key]
else:
event = self.get_Event(nickname_key)
print(nickname_key, scenario[nickname_key], event.event_type)
if event.event_type == 'DinucMarkov':
realizations_dict[nickname_key] = list()
for realiz_id in scenario[nickname_key]:
realizations_dict[nickname_key].append(event.realizations[realiz_id])
else:
realizations_dict[nickname_key] = event.realizations[ scenario[nickname_key] ]
# except Exception as e:
# print("ERROR: ", nickname_key, " while parsing to realizations.")
# print(e)
return realizations_dict
def update_events_name(self):
for event in self.Event_list:
event.update_name()
self.gen_NameNickname_dict()
# FIXME: SCENARIO FROM CSV LINE IN GENERATED SEQUENCES
def get_scenario_from_line_CSV(self, str_line, file_header_list, sep=';'):
dicto = dict()
str_line_list = str_line.split(sep)
for str_header in file_header_list:
if str_header == 'seq_index':
pass
return dicto
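# Usage sketch (illustrative only; the file path is a placeholder and it is assumed
# the constructor parses the file and builds the event dictionary and graph):
#
#   parms = IgorModel_Parms(model_parms_file="model_parms.txt")
#   print(parms.get_EventsNickname_list())
#   print(list(parms.G.edges()))  # dependencies between event nicknames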
class IgorRec_Event:
"""Recombination event class containing event's name, type, realizations,
etc... Similar to IGoR's C++ RecEvent class.
"""
def __init__(self, event_type, seq_type, seq_side, priority,
nickname):
self.event_type = event_type
self.seq_type = seq_type
self.seq_side = seq_side
self.priority = priority
self.realizations = list()
self.name = ""
self.nickname = nickname
# if nickname is not None:
# self.nickname = nickname
self.update_name()
def __getitem__(self, item):
return self.realizations[item]
def __str__(self):
return str(self.to_dict())
def __lt__(self, other):
        # TODO: among equal priorities, the event with fewer parents should come first
return self.priority < other.priority
# if ( self.priority < other.priority ):
# return True
# elif ( self.priority == other.priority ):
# # FIXME: less dependencies should be on top
def __gt__(self, other):
        # TODO: among equal priorities, the event with fewer parents should come first
return self.priority > other.priority
def to_dict(self):
dictIgorRec_Event = {
"event_type": self.event_type, \
"seq_type": self.seq_type, \
"seq_side": self.seq_side, \
"priority": self.priority, \
"realizations": self.realizations, \
"name": self.name, \
"nickname": self.nickname
}
return dictIgorRec_Event
def add_realization(self):
realization = IgorEvent_realization()
self.realizations.append(realization)
self.realizations = sorted(self.realizations)
self.update_name()
@classmethod
def from_dict(cls, dict_IgorRec_Event:dict):
"""Returns a IgorRec_Event based on dictionary
"""
cls = IgorRec_Event(dict_IgorRec_Event["event_type"], dict_IgorRec_Event["seq_type"],
dict_IgorRec_Event["seq_side"], dict_IgorRec_Event["priority"], dict_IgorRec_Event["nickname"])
# 'event_type', 'seq_type', 'seq_side', 'priority', and 'nickname'
# FIXME: Is better to make this class as an extension of a dictionary container?
# Given the nickname complete the events with
#cls.nickname = dict_IgorRec_Event["nickname"]
#cls.event_type = dict_IgorRec_Event["event_type"]
#cls.seq_type = dict_IgorRec_Event["seq_type"]
#cls.seq_side = dict_IgorRec_Event["seq_side"]
#cls.priority = dict_IgorRec_Event["priority"]
cls.realizations = dict_IgorRec_Event["realizations"] # TODO: CREATE FUNCTION TO GENERATE realizations vector
cls.update_name()
return cls
def update_realizations_from_fasta(self, flnGenomic):
from Bio import SeqIO
if self.event_type == 'GeneChoice':
for index, record in enumerate(SeqIO.parse(flnGenomic, "fasta")):
event_realization = IgorEvent_realization()
event_realization.id = index
event_realization.value = record.seq
event_realization.name = record.description
self.add_realization(event_realization)
def export_realizations_to_fasta(self, flnGenomic):
        from Bio import SeqIO
        from Bio.SeqRecord import SeqRecord
        from Bio.Seq import Seq
sequences_list = list()
for realization in self.realizations:
record = SeqRecord(Seq(realization.value), realization.name, '', '')
sequences_list.append(record)
SeqIO.write(sequences_list, flnGenomic, "fasta")
def update_realizations_from_dataframe(self, dataframe):
"""
Update realizations with a dataframe (index, value, name)
"""
self.realizations = list()
for index, row in dataframe.iterrows():
dict_realiz = row.to_dict()
# print(index, dict_realiz)
dict_realiz['index'] = index
realiz = IgorEvent_realization.from_dict(dict_realiz)
self.realizations.append(realiz)
self.realizations = sorted(self.realizations)
self.update_name()
@classmethod
def from_default_nickname(cls, nickname:str):
        cls = IgorRec_Event.from_dict(IgorRec_Event_default_dict[nickname])
return cls
# TODO:
def add_realization(self, realization):
"""Add a realization to the RecEvent realizations list."""
self.realizations.append(realization)
self.realizations = sorted(self.realizations)
self.update_name()
def update_name(self):
"""Updates the name of the event (will have no effect if the RecEvent
has not been modified since the last call).
"""
if self.event_type == "DinucMarkov":
self.name = self.event_type + "_" + self.seq_type + "_" + \
self.seq_side + "_prio" + \
str(self.priority) + "_size" + \
str(len(self.realizations) ** 2)
else:
self.name = self.event_type + "_" + self.seq_type + "_" + \
self.seq_side + "_prio" + \
str(self.priority) + "_size" + \
str(len(self.realizations))
# TODO: Create a realization vector from a fasta file
def set_realization_vector(self):
if self.event_type == 'GeneChoice':
print('GeneChoice')
def set_realization_vector_GeneChoice(self, flnGenomic:str):
"""
Sets a realization vector from a filename
:param flnGenomic: fasta file with the genomic template IMGT or other template.
"""
#FIXME: FINISH IT
# TODO: Add realizations from fasta file.
from Bio import SeqIO
#for record in list(SeqIO.parse(flnGenomic, "fasta")):
def get_realization_vector(self):
"""This methods returns the event realizations sorted by the
realization index as a list.
"""
if self.event_type == 'GeneChoice':
tmp = [""] * len(self.realizations) # empty(, dtype = str)
else:
tmp = np.empty(len(self.realizations),
dtype=type(self.realizations[0].value))
# print("Unfinished method get realization vector")
processed_real_indices = []
for real in self.realizations:
            if processed_real_indices.count(real.id) == 0:
                if real.name != "":
                    tmp[real.id] = real.name
                else:
                    tmp[real.id] = real.value
                processed_real_indices.append(real.id)
else:
print("REALIZATION INDICES ARE DEGENERATE")
return tmp
def get_realization_DataFrame(self):
""" return an Event realizations as a pandas DataFrame to manipulate it.
"""
return pd.DataFrame.from_records([realiz.to_dict() for realiz in self.realizations], index='id').sort_index()
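# Construction sketch (illustrative only; the gene name and sequence below are
# made-up placeholders):
#
#   ev = IgorRec_Event("GeneChoice", "V_gene", "Undefined_side", 7, "v_choice")
#   ev.add_realization(IgorEvent_realization.from_tuple(0, "ACGT", name="TRBV1*01"))
#   print(ev.name)  # e.g. GeneChoice_V_gene_Undefined_side_prio7_size1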
class IgorEvent_realization:
"""A small class storing for each RecEvent realization its name, value and
corresponding index.
"""
__slots__ = ('id', 'name', 'value')
def __init__(self):
self.id = "" #index
self.name = "" #name
self.value = "" #value
def __lt__(self, other):
return self.id < other.id
def __gt__(self, other):
return self.id > other.id
def __str__(self):
if self.name == "":
return "{value};{id}".format(value=self.value, id=self.id)
# return str(self.value)+";"+str(self.id)
else:
return "{name};{value};{id}".format(name=self.name, value=self.value, id=self.id)
# return self.name+";"+str(self.value)+";"+str(self.id)
def __repr__(self):
return "Event_realization(" + str(self.id) + ")"
def to_dict(self):
return {
'id': self.id,
'value': self.value,
'name': self.name
}
@classmethod
def from_tuple(cls, id, value, name=""):
cls = IgorEvent_realization()
cls.id = id
cls.value = value
cls.name = name
return cls
@classmethod
    def from_dict(cls, event_dict: dict):
cls = IgorEvent_realization()
cls.id = event_dict['index']
cls.value = event_dict['value']
cls.name = event_dict['name']
return cls
# def __eq__(self, other):
# if isinstance(self, other.__class__):
# return ( (self.event_type == other.event_type) \
# and (self.seq_type == other.seq_type) \
# and (self.seq_side == other.seq_side) \
# and (self.priority == other.priority) \
# and (self.realizations == other.realizations) \
# and (self.name == other.name) \
# and (self.nickname == other.nickname) \
# )
# else:
# return NotImplemented
#
# def __hash__(self):
# return hash((self.event_type, \
# self.seq_type, \
# self.seq_side, \
# self.priority, \
# self.realizations, \
# self.name, \
# self.nickname))
# def __str__(self):
# return self.event_type+";"+self.seq_type+";"+self.seq_side+";"+self.priority+";"+self.nickname
#
# def __repr__(self):
# return "Rec_event(" + self.nickname + ")"
### FIXME:
# @recombinationEvent
# $Dim
# #Indices of the realizations of the parent events
# %1d probability array.
class IgorModel_Marginals:
"""
Class to get a list of Events directly from the *_parms.txt
:param model_marginals_file: Igor marginals file.
"""
def __init__(self, model_marginals_file=None):
# self.Event_list = list() # list of Rec_event
# self.Edges = list()
# self.error_rate = list()
self.marginals_dict = {}
self.network_dict = {}
self.model_marginals_file = ""
if model_marginals_file is not None:
self.read_model_marginals(model_marginals_file)
# @d_3_del
# $Dim[3,21,21]
# #[d_gene,0],[d_5_del,0]
# %0,0,0,1.6468e-08,0.00482319,1.08101e-09,0.0195311,0.0210679,0.0359338,0.0328678,2.25686e-05,4.97463e-07,0,9.31048e-08,1.01642e-05,0.000536761,0.0260845,0.0391021,0.319224,0.289631,0.211165
# #[d_gene,0],[d_5_del,1]
# %0,0,6.86291e-08,2.00464e-09,0.00163832,2.02919e-06,0.0306066,0.0126832,0.000872623,0.016518,0.00495292,0.000776747,4.45576e-05,0.000667902,0.00274004,0.00435049,0.300943,0.182499,0.13817,0.302534,0
@classmethod
def make_uniform_from_parms(cls, parms:IgorModel_Parms):
cls = IgorModel_Marginals()
cls.initialize_uniform_from_model_parms(parms)
return cls
def read_model_marginals(self, filename, dim_names=False):
"""Reads a model marginals file. Returns a tuple containing a dict
containing the individual events probabilities indexed by the events
nicknames and a dict containing the list of dimension names/ordering for
each event.
"""
with open(filename, "r") as ofile:
            # Model parameters are stored inside a dictionary of ndarrays
# marginals_dict = {}
# network_dict = {}
element_name = ""
first = True
first_dim_line = False
element_marginal_array = []
indices_array = []
for line in ofile:
strip_line = line.rstrip("\n") # Remove end of line character
if strip_line[0] == "@":
first_dim_line = True
if not first:
                        # Add the previous event to the dictionary
self.marginals_dict[element_name] = element_marginal_array
else:
first = False
element_name = strip_line[1:]
# print element_name
if strip_line[0] == "$":
# define array dimensions
coma_index = strip_line.find(",")
dimensions = []
# Get rid of $Dim[
previous_coma_index = 4
while coma_index != -1:
dimensions.append(
int(strip_line[previous_coma_index + 1:coma_index]))
previous_coma_index = coma_index
coma_index = strip_line.find(",", coma_index + 1)
# Add last dimension and get rid of the closing bracket
dimensions.append(int(strip_line[previous_coma_index + 1:-1]))
element_marginal_array = np.ndarray(shape=dimensions)
if strip_line[0] == "#":
if first_dim_line:
dimensions_names = []
if len(dimensions) > 1:
comma_index = strip_line.find(",")
opening_bracket_index = strip_line.find("[")
while opening_bracket_index != -1:
dimensions_names.append(
strip_line[
opening_bracket_index + 1:comma_index])
opening_bracket_index = strip_line.find(
"[", comma_index)
comma_index = strip_line.find(
",", opening_bracket_index)
first_dim_line = False
dimensions_names.append(element_name)
self.network_dict[element_name] = dimensions_names
# update indices
indices_array = []
if len(dimensions) > 1:
comma_index = strip_line.find(",")
closing_brack_index = strip_line.find("]")
while closing_brack_index != -1:
indices_array.append(int(
strip_line[comma_index + 1:closing_brack_index]))
opening_bracket_index = strip_line.find(
"[", closing_brack_index)
comma_index = strip_line.find(
",", opening_bracket_index)
closing_brack_index = strip_line.find(
"]", closing_brack_index + 1)
if strip_line[0] == "%":
# read doubles
coma_index = strip_line.find(",")
marginals_values = []
# Get rid of the %
previous_coma_index = 0
while coma_index != -1:
marginals_values.append(
float(strip_line[previous_coma_index + 1:coma_index]))
previous_coma_index = coma_index
coma_index = strip_line.find(",", coma_index + 1)
# Add last dimension and get rid of the closing bracket
marginals_values.append(
float(strip_line[previous_coma_index + 1:]))
                    if len(marginals_values) != dimensions[-1]:
                        print("WARNING: number of marginal values does not match the last dimension for event", element_name)
element_marginal_array[tuple(indices_array)] = marginals_values
self.marginals_dict[element_name] = element_marginal_array
self.model_marginals_file = filename
#return marginals_dict, network_dict
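    # Usage sketch (illustrative only, the file path is a placeholder): after reading,
    # each event nickname maps to a numpy array whose last axis is the event itself and
    # whose leading axes follow network_dict[nickname] (the parent events).
    #
    #   marg = IgorModel_Marginals(model_marginals_file="model_marginals.txt")
    #   print(marg.network_dict['d_gene'])         # e.g. ['j_choice', 'd_gene']
    #   print(marg.marginals_dict['d_gene'].shape)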
def initialize_uniform_event_from_model_parms(self, event_nickname, parms:IgorModel_Parms):
event = parms.get_Event(event_nickname)
if event.event_type == 'DinucMarkov':
            # DinucMarkov: flat transition matrix of size dim x dim with uniform entries
dimension = len(parms.get_Event(event.nickname).realizations)
narr = np.ones(dimension * dimension) / (dimension * dimension)
self.marginals_dict[event.nickname] = narr
else:
dimensions = [len(parms.get_Event(strEvent).realizations) for strEvent in
self.network_dict[event.nickname]]
# print(dimensions[-1])
narr = np.ones(dimensions) / dimensions[-1]
self.marginals_dict[event.nickname] = narr
def initialize_uniform_from_model_parms(self, parms:IgorModel_Parms):
self.network_dict = dict()
for key, value in parms.Edges_dict.items():
self.network_dict[key] = value + [key]
        # Create a marginal array for each event
self.marginals_dict = dict()
for event in parms.Event_list:
self.initialize_uniform_event_from_model_parms(event.nickname, parms)
# if event.event_type == 'DinucMarkov':
# # do something
# dimension = len(parms.get_Event(event.nickname).realizations)
# narr = np.ones(dimension*dimension) / (dimension*dimension)
# self.marginals_dict[event.nickname] = narr
# else:
# dimensions = [len(parms.get_Event(strEvent).realizations) for strEvent in
# self.network_dict[event.nickname]]
# # print(dimensions[-1])
# narr = np.ones(dimensions) / dimensions[-1]
#
# self.marginals_dict[event.nickname] = narr
def write_model_marginals(self, filename=None, model_parms=None):
# self.marginals_dict = {}
# self.network_dict = {}
        if filename is None:
            filename = "tmp_mdl_marginals.txt"
        if model_parms is None:
            raise ValueError("write_model_marginals requires a model_parms (IgorModel_Parms) instance.")
        parms = model_parms  # IgorModel_Parms(model_parms_file=model_parms_file)
try:
import os
os.makedirs(os.path.dirname(filename), exist_ok=True)
except Exception as e:
print("WARNING: IgorModel_Marginals.write_model_marginals path ", e)
print("Writing model marginals in file ", filename)
with open(filename, "w") as fw:
for event in parms.Event_list:
strEvent = event.nickname
# strEvent = "v_choice"
self.write_event_probabilities(fw, strEvent)
def write_event_probabilities(self, ofile, event_nickname):
import itertools
np_array = self.marginals_dict[event_nickname]
parents_list = self.network_dict[event_nickname]
parents_to_write = parents_list[:-1]
# dims_list = tuple( map( lambda x: list(range(x)), array_to_write.shape) )
dims_list = tuple(map(lambda x: list(range(x)), np_array.shape[:-1]))
ofile.write("@" + event_nickname + "\n")
str_shape = str(list(np_array.shape)).replace(" ", "").replace("[", "").replace("]", "")
strDim = "$Dim[" + str_shape + "]\n"
ofile.write(strDim)
for elem in itertools.product(*dims_list):
title = ""
# print(parents_to_write)
title = str(list(zip(parents_to_write, elem)))
title = title.replace("[", "#")
title = title.replace("]", "\n")
title = title.replace(" ", "")
title = title.replace("(", "[")
title = title.replace(")", "]")
title = title.replace("\'", "")
ofile.write(title)
slice_index = tuple(list(elem) + [None])
linea = str(list(np_array[slice_index].flat))
linea = linea.replace(" ", "")
linea = linea.replace(" ", "")
linea = linea.replace("[", "%")
linea = linea.replace("]", "\n")
ofile.write(linea)
# ofile.write()
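# Round-trip sketch (illustrative only, assuming `parms` is an already loaded
# IgorModel_Parms instance): build uniform marginals from the parms object and
# write them back out in IGoR's @/$Dim/#/% marginals format.
#
#   marg = IgorModel_Marginals.make_uniform_from_parms(parms)
#   marg.write_model_marginals("models/uniform_marginals.txt", model_parms=parms)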
class IgorAnchors:
def __init__(self, flnVanchors, flnJanchors):
self.flnVanchors = flnVanchors
self.flnJanchors = flnJanchors
self.df_Vanchors = pd.read_csv(flnVanchors, sep=';')
self.df_Janchors = pd.read_csv(flnJanchors, sep=';')
# rename indices.
class IgorScenario:
def __init__(self):
self.seq_index = -1
self.scenario_rank = -1
self.scenario_proba_cond_seq = -1
#self.events_ordered_list = list()
self.realizations_ids_dict = dict()
# given a templated list with ids
self.mdl = None
# self.mdl.parms.Event_dict[strEv].loc[self.id_d_gene]['name']
def __getitem__(self, key):
return self.realizations_ids_dict[key]
def to_dict(self):
dictScenario = dict()
dictScenario['seq_index'] = self.seq_index
dictScenario['scenario_rank'] = self.scenario_rank
dictScenario['scenario_proba_cond_seq'] = self.scenario_proba_cond_seq
dictScenario.update(self.realizations_ids_dict)
return dictScenario
# TODO: This method should return a scenario in a fasta format with corresponding ID and events
def get_scenario_fasta(self, mdl:IgorModel):
str_fasta = ""
        # TODO: sort the events (mdl.parms.Event_list / mdl.xdata keys) and reconstruct
        # the fasta sequence from the realization ids in self.realizations_ids_dict.
return str_fasta
def set_model(self, mdl:IgorModel):
""" Initiate scenario dictionary with a IgorModel """
for key in mdl.xdata.keys():
self.realizations_ids_dict[key] = -1
# TODO: in DEV - FINISH THIS METHOD
def set_model_from_headers(self, header_line:str):
# seq_index;scenario_rank;scenario_proba_cond_seq;GeneChoice_V_gene_Undefined_side_prio7_size35;GeneChoice_J_gene_Undefined_side_prio7_size14;GeneChoice_D_gene_Undefined_side_prio6_size2;Deletion_V_gene_Three_prime_prio5_size21;Deletion_D_gene_Five_prime_prio5_size21;Deletion_D_gene_Three_prime_prio5_size21;Deletion_J_gene_Five_prime_prio5_size23;Insertion_VD_genes_Undefined_side_prio4_size31;DinucMarkov_VD_genes_Undefined_side_prio3_size16;Insertion_DJ_gene_Undefined_side_prio2_size31;DinucMarkov_DJ_gene_Undefined_side_prio1_size16;Mismatches
header_line = "seq_index;scenario_rank;scenario_proba_cond_seq;GeneChoice_V_gene_Undefined_side_prio7_size35;GeneChoice_J_gene_Undefined_side_prio7_size14;GeneChoice_D_gene_Undefined_side_prio6_size2;Deletion_V_gene_Three_prime_prio5_size21;Deletion_D_gene_Five_prime_prio5_size21;Deletion_D_gene_Three_prime_prio5_size21;Deletion_J_gene_Five_prime_prio5_size23;Insertion_VD_genes_Undefined_side_prio4_size31;DinucMarkov_VD_genes_Undefined_side_prio3_size16;Insertion_DJ_gene_Undefined_side_prio2_size31;DinucMarkov_DJ_gene_Undefined_side_prio1_size16;Mismatches"
header_fields = header_line.split(";")
events_list = header_fields[3:]
print("hoajs")
# FIXME:
@classmethod
def load_FromLineBestScenario(cls, line, delimiter=";"):
#seq_index;scenario_rank;scenario_proba_cond_seq;GeneChoice_V_gene_Undefined_side_prio7_size35;GeneChoice_J_gene_Undefined_side_prio7_size14;GeneChoice_D_gene_Undefined_side_prio6_size2;Deletion_V_gene_Three_prime_prio5_size21;Deletion_D_gene_Five_prime_prio5_size21;Deletion_D_gene_Three_prime_prio5_size21;Deletion_J_gene_Five_prime_prio5_size23;Insertion_VD_genes_Undefined_side_prio4_size31;DinucMarkov_VD_genes_Undefined_side_prio3_size16;Insertion_DJ_gene_Undefined_side_prio2_size31;DinucMarkov_DJ_gene_Undefined_side_prio1_size16;Mismatches
cls = IgorScenario()
linesplit = line.split(delimiter)
        for ii in range(len(linesplit)):
            # TODO: find a better way to do this; if it is a list keep it as a list
            if ii not in [11, 13, 14]:
                linesplit[ii] = linesplit[ii].replace("(", "").replace(")", "")
        return cls
@classmethod
def load_FromSQLRecord(cls, sqlRecordScenario:list, sql_scenario_name_type_list:list):
cls = IgorScenario()
for ii, (col_name, tipo) in enumerate(sql_scenario_name_type_list):
if col_name == 'seq_index':
cls.seq_index = int(sqlRecordScenario[ii])
elif col_name == 'scenario_rank':
cls.scenario_rank = int(sqlRecordScenario[ii])
elif col_name == 'scenario_proba_cond_seq':
cls.scenario_proba_cond_seq = float(sqlRecordScenario[ii])
else:
if tipo == 'integer':
cls.realizations_ids_dict[col_name] = int(sqlRecordScenario[ii])
else:
cls.realizations_ids_dict[col_name] = eval(sqlRecordScenario[ii])
return cls
def export_to_AIRR_line(self, scenario_col_list:list, sep='\t'):
str_line = ""
self.seq_index = -1
self.scenario_rank = -1
self.scenario_proba_cond_seq = -1
# n_d_5_del = self.mdlParms.Event_dict[strEv].loc[self.id_d_5_del]['value']
# name_D = self.mdlParms.Event_dict[strEv].loc[self.id_d_gene]['name']
# header_list=['sequence_id', 'sequence', 'v_call', 'd_call', 'j_call', 'v_score', 'd_score', 'j_score'])
# sequence_id sequence rev_comp productive v_call d_call j_call c_call sequence_alignment germline_alignment junction junction_aa v_score v_cigar d_score d_cigar j_score j_cigar c_score c_cigar vj_in_frame stop_codon v_identity v_evalue d_identity d_evalue j_identity j_evalue v_sequence_start v_sequence_end v_germline_start v_germline_end d_sequence_start d_sequence_end d_germline_start d_germline_end j_sequence_start j_sequence_end j_germline_start j_germline_end junction_length np1_length np2_length duplicate_count consensus_count
airr_header_list = ["sequence_id", "sequence", "rev_comp", "productive", "v_call", "d_call", "j_call", "c_call",
"sequence_alignment", "germline_alignment", "junction", "junction_aa", "v_score", "v_cigar", "d_score", "d_cigar", "j_score", "j_cigar", "c_score", "c_cigar", "vj_in_frame", "stop_codon", "v_identity", "v_evalue", "d_identity", "d_evalue", "j_identity", "j_evalue", "v_sequence_start", "v_sequence_end", "v_germline_start", "v_germline_end", "d_sequence_start", "d_sequence_end", "d_germline_start", "d_germline_end", "j_sequence_start", "j_sequence_end", "j_germline_start", "j_germline_end", "junction_length", "np1_length", "np2_length", "duplicate_count", "consensus_count"]
from pygor3 import IgorModel_Parms
mdl_parms = IgorModel_Parms()
# mdl_parms = self.mdl.parms
# TODO: No general way, just select between VJ OR VDJ, SO RECHECK IN MODEL IF 'd_gene' is present and make arrangement.
airr_line_list = list()
for event_nickname in scenario_col_list:
event_realization_id = self.realizations_ids_dict[event_nickname]
event_realization_value = mdl_parms.Event_dict[event_nickname].loc[event_realization_id]['value']
event_realization_name = mdl_parms.Event_dict[event_nickname].loc[event_realization_id]['name']
airr_line_list.append(str(self.seq_index))
            # TODO: resolve the realization via mdl_parms.realiz_dict_from_scenario(self)
            # and fill the corresponding AIRR columns for GeneChoice, Deletions,
            # Insertions and DinucMarkov events (v_call, d_call, j_call, np1_length, ...).
        str_line = sep.join([str(self.seq_index), str(self.scenario_rank), str(self.scenario_proba_cond_seq)])
return str_line
### IGOR BEST SCENARIOS VDJ ###
class IgorBestScenariosVDJ:
def __init__(self):
self.seq_index = -1
self.scenario_rank = -1
self.scenario_proba_cond_seq = -1
self.id_v_choice = -1
self.id_j_choice = -1
self.id_d_gene = -1
self.id_v_3_del = -1
self.id_d_5_del = -1
self.id_d_3_del = -1
self.id_j_5_del = -1
self.id_vd_ins = -1
self.vd_dinucl = list()
self.id_dj_ins = -1
self.dj_dinucl = list()
self.mismatches = list()
self.mismatcheslen = -1
# Case of use of the IgorModel.Model_Parms class.
self.flnModelParms = ""
self.mdlParms = ""
self.mdl = ""
# self.IgorModelParms
self.strSeq_index = ""
        # # To fetch data from a connection to a database
# self.IgorDB = ""
def __str__(self):
return str(self.to_dict())
def setModel_Parms(self, flnModelParms):
self.flnModelParms = flnModelParms
self.mdlParms = IgorModel_Parms(model_parms_file=self.flnModelParms)
def to_dict(self):
dictBestScenario = {
"seq_index": self.seq_index, \
"scenario_rank": self.scenario_rank, \
"scenario_proba_cond_seq": self.scenario_proba_cond_seq, \
"v_choice": self.id_v_choice, \
"j_choice": self.id_j_choice, \
"d_gene": self.id_d_gene, \
"v_3_del": self.id_v_3_del, \
"d_5_del": self.id_d_5_del, \
"d_3_del": self.id_d_3_del, \
"j_5_del": self.id_j_5_del, \
"vd_ins": self.id_vd_ins, \
"vd_dinucl": self.vd_dinucl, \
"dj_ins": self.id_dj_ins, \
"dj_dinucl": self.dj_dinucl, \
"mismatches": self.mismatches, \
"mismatcheslen": self.mismatcheslen
}
return dictBestScenario
def to_dict_names(self):
dictBestScenario = {
"seq_index": self.seq_index, \
"scenario_rank": self.scenario_rank, \
"scenario_proba_cond_seq": self.scenario_proba_cond_seq, \
"v_choice": self.getV_gene_name(), \
"j_choice": self.getJ_gene_name(), \
"d_gene": self.getD_gene_name(), \
"v_3_del": self.getV_3_dels(), \
"d_5_del": self.getD_5_dels(), \
"d_3_del": self.getD_3_dels(), \
"j_5_del": self.getJ_5_dels(), \
"vd_ins": self.getVD_ins(), \
"vd_dinucl": self.vd_dinucl, \
"dj_ins": self.getDJ_ins(), \
"dj_dinucl": self.dj_dinucl, \
"mismatches": self.mismatches, \
"mismatcheslen": self.mismatcheslen
}
return dictBestScenario
def to_dict_ntsequences(self):
dictBestScenario = {
"seq_index": self.seq_index, \
"scenario_rank": self.scenario_rank, \
"scenario_proba_cond_seq": self.scenario_proba_cond_seq, \
"v_choice": self.getV_ntsequence(), \
"j_choice": self.getJ_ntsequence(), \
"d_gene": self.getD_ntsequence(), \
"v_3_del": self.getV_3_dels(), \
"d_5_del": self.getD_5_dels(), \
"d_3_del": self.getD_3_dels(), \
"j_5_del": self.getJ_5_dels(), \
"vd_ins": self.getVD_ins(), \
"vd_dinucl": self.getVD_Region(), \
"dj_ins": self.getDJ_ins(), \
"dj_dinucl": self.getDJ_Region(), \
"mismatches": self.mismatches, \
"mismatcheslen": self.mismatcheslen
}
return dictBestScenario
@classmethod
def load_FromLineBestScenario(cls, line, delimiter=";"):
# seq_index;scenario_rank;scenario_proba_cond_seq;GeneChoice_V_gene_Undefined_side_prio7_size35;GeneChoice_J_gene_Undefined_side_prio7_size14;GeneChoice_D_gene_Undefined_side_prio6_size2;Deletion_V_gene_Three_prime_prio5_size21;Deletion_D_gene_Five_prime_prio5_size21;Deletion_D_gene_Three_prime_prio5_size21;Deletion_J_gene_Five_prime_prio5_size23;Insertion_VD_genes_Undefined_side_prio4_size31;DinucMarkov_VD_genes_Undefined_side_prio3_size16;Insertion_DJ_gene_Undefined_side_prio2_size31;DinucMarkov_DJ_gene_Undefined_side_prio1_size16;Mismatches
cls = IgorBestScenariosVDJ()
        linesplit = line.split(delimiter)
        for ii in range(len(linesplit)):
            # TODO: find a better way to do this; if it is a list keep it as a list
            if ii not in [11, 13, 14]:
                linesplit[ii] = linesplit[ii].replace("(", "").replace(")", "")
try:
# 1;1;0.123596;(14);(9);(1);(4);(8);(7);(11);(0);();(9);(0,2,0,1,2,3,2,0,0);(122,123,124)
cls.seq_index = int(linesplit[0])
cls.scenario_rank = int(linesplit[1])
cls.scenario_proba_cond_seq = float(linesplit[2])
print(linesplit[3], type(linesplit[3]), len(linesplit[3]))
cls.id_v_choice = int(linesplit[3])
cls.id_j_choice = int(linesplit[4])
cls.id_d_gene = int(linesplit[5])
cls.id_v_3_del = int(linesplit[6])
cls.id_d_5_del = int(linesplit[7])
cls.id_d_3_del = int(linesplit[8])
cls.id_j_5_del = int(linesplit[9])
cls.id_vd_ins = int(linesplit[10])
cls.vd_dinucl = eval(linesplit[11])
cls.id_dj_ins = int(linesplit[12])
cls.dj_dinucl = eval(linesplit[13])
cls.mismatches = eval(linesplit[14])
cls.mismatcheslen = int(len(cls.mismatches))
return cls
except Exception as e:
print(e)
raise e
@classmethod
def load_FromDict(cls, dictBestScenarios):
"""
Return a IgorBestScenariosVDJ instance from a IgorSqlRecord.
:param sqlRecordAlign: record of a sql database table.
:param strGene_name: gene_name associated to the record.
:return: IgorAlignment_data instance
"""
cls = IgorBestScenariosVDJ()
try:
# cls.seq_index = int(sqlRecordBestScenarios[ 0])
# cls.scenario_rank = int(sqlRecordBestScenarios[ 1])
# cls.scenario_proba_cond_seq = float(sqlRecordBestScenarios[2])
cls.id_v_choice = int(dictBestScenarios["v_choice"])
cls.id_j_choice = int(dictBestScenarios["j_choice"])
cls.id_d_gene = int(dictBestScenarios["d_gene"])
cls.id_v_3_del = int(dictBestScenarios["v_3_del"])
cls.id_d_5_del = int(dictBestScenarios["d_5_del"])
cls.id_d_3_del = int(dictBestScenarios["d_3_del"])
cls.id_j_5_del = int(dictBestScenarios["j_5_del"])
cls.id_vd_ins = int(dictBestScenarios["vd_ins"])
cls.vd_dinucl = eval(dictBestScenarios["vd_dinucl"])
cls.id_dj_ins = int(dictBestScenarios["dj_ins"])
cls.dj_dinucl = eval(dictBestScenarios["dj_dinucl"])
# cls.mismatches = eval(dictBestScenarios["mismatches"])
# cls.mismatcheslen = int(len(cls.mismatches) )
return cls
except Exception as e:
print(e)
raise e
@classmethod
def load_FromSQLRecord(cls, sqlRecordBestScenarios):
"""
Return a IgorBestScenariosVDJ instance from a IgorSqlRecord.
:param sqlRecordAlign: record of a sql database table.
:param strGene_name: gene_name associated to the record.
:return: IgorAlignment_data instance
"""
cls = IgorBestScenariosVDJ()
try:
cls.seq_index = int(sqlRecordBestScenarios[0])
cls.scenario_rank = int(sqlRecordBestScenarios[1])
cls.scenario_proba_cond_seq = float(sqlRecordBestScenarios[2])
cls.id_v_choice = int(sqlRecordBestScenarios[3])
cls.id_j_choice = int(sqlRecordBestScenarios[4])
cls.id_d_gene = int(sqlRecordBestScenarios[5])
cls.id_v_3_del = int(sqlRecordBestScenarios[6])
cls.id_d_5_del = int(sqlRecordBestScenarios[7])
cls.id_d_3_del = int(sqlRecordBestScenarios[8])
cls.id_j_5_del = int(sqlRecordBestScenarios[9])
cls.id_vd_ins = int(sqlRecordBestScenarios[10])
cls.vd_dinucl = eval(sqlRecordBestScenarios[11])
cls.id_dj_ins = int(sqlRecordBestScenarios[12])
cls.dj_dinucl = eval(sqlRecordBestScenarios[13])
cls.mismatches = eval(sqlRecordBestScenarios[14])
cls.mismatcheslen = int(sqlRecordBestScenarios[15])
return cls
except Exception as e:
print(e)
raise e
# TODO: finish this class
@classmethod
def load_FromEventNameValues(cls, mdl, seq_index, strSeq_index, scenario_dict):
# v_3_del = 2
# d_5_del = 6
# d_3_del = 1
# vd_ins = 1 and should be a "C"
# dj_ins = 3 and should be a "TCT"
# FIXME: CORRECT ERROR MESSAGES and ADD documentation and VALIDATIONS OF EVENTS
"""
Return a IgorBestScenariosVDJ instance from a dict of names or values.
:param strGene_name: gene_name associated to the record.
:return: IgorAlignment_data instance
"""
##### The event I think is the best one
cls = IgorBestScenariosVDJ() # .load_FromSQLRecord(record_bs[ii])
# cls.setModel_Parms(flnModelParms)
cls.mdl = mdl
cls.seq_index = seq_index # 59
cls.strSeq_index = strSeq_index # db.fetch_IgorIndexedSeq_By_seq_index(seq_index)[1]
Event_GeneChoice = ['v_choice', 'j_choice', 'd_gene']
Event_Deletions = ['v_3_del', 'd_5_del', 'd_3_del', 'j_5_del']
Event_Insertions = ['vd_ins', 'dj_ins']
Event_Dinucl = ['vd_dinucl', 'dj_dinucl']
for event_nickname in scenario_dict.keys():
if event_nickname in Event_GeneChoice:
pd_event = cls.mdl.parms.Event_dict[event_nickname]
gene_name = scenario_dict[event_nickname] # 'TRBV17*01'
gene_id = pd_event.loc[pd_event['name'] == gene_name].index.values[0]
if event_nickname == 'v_choice':
cls.id_v_choice = gene_id
elif event_nickname == 'j_choice':
cls.id_j_choice = gene_id
elif event_nickname == 'd_gene':
cls.id_d_gene = gene_id
else:
print("Something vey bad happen with " + str(scenario_dict[event_nickname]))
elif event_nickname in Event_Deletions:
pd_event = cls.mdl.parms.Event_dict[event_nickname]
realiz_name = scenario_dict[event_nickname]
realiz_id = pd_event.loc[pd_event['value'] == realiz_name].index.values[0]
if event_nickname == 'v_3_del':
cls.id_v_3_del = realiz_id
elif event_nickname == 'd_5_del':
cls.id_d_5_del = realiz_id
elif event_nickname == 'd_3_del':
cls.id_d_3_del = realiz_id
elif event_nickname == 'j_5_del':
cls.id_j_5_del = realiz_id
else:
print("Something vey bad happen with " + str(scenario_dict[event_nickname]))
elif event_nickname in Event_Insertions:
pd_event = cls.mdl.parms.Event_dict[event_nickname]
realiz_name = scenario_dict[event_nickname]
realiz_id = pd_event.loc[pd_event['value'] == realiz_name].index.values[0]
                if event_nickname == 'vd_ins':
                    cls.id_vd_ins = realiz_id
                elif event_nickname == 'dj_ins':
                    cls.id_dj_ins = realiz_id
                else:
                    print("Something went wrong with " + str(scenario_dict[event_nickname]))
elif event_nickname in Event_Dinucl:
if event_nickname == 'vd_dinucl':
pd_event = cls.mdl.parms.Event_dict[event_nickname]
str_sequence = scenario_dict[event_nickname]
list_id_seq = list()
for str_nt in str_sequence:
realiz_name = str_nt
realiz_id = pd_event.loc[pd_event['value'] == realiz_name].index.values[0]
list_id_seq.append(realiz_id)
cls.vd_dinucl = list_id_seq
elif event_nickname == 'dj_dinucl':
pd_event = cls.mdl.parms.Event_dict[event_nickname]
str_sequence = scenario_dict[event_nickname]
list_id_seq = list()
for str_nt in str_sequence:
realiz_name = str_nt
realiz_id = pd_event.loc[pd_event['value'] == realiz_name].index.values[0]
list_id_seq.append(realiz_id)
cls.dj_dinucl = list_id_seq
else:
print("Something wrong with " + str(event_nickname))
elif event_nickname in ['mismatches']:
if isinstance(scenario_dict[event_nickname], list):
cls.mismatches = scenario_dict[event_nickname]
cls.mismatcheslen = len(cls.mismatches)
else:
print("mismatches aren't list")
else:
print("Something bad happen!")
return cls
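    # Input sketch (illustrative only; gene names and counts are placeholders):
    # scenario_dict maps event nicknames to realization names (GeneChoice) or
    # values (deletions, insertions, dinucleotide strings, mismatch lists).
    #
    #   scenario_dict = {'v_choice': 'TRBV17*01', 'j_choice': 'TRBJ2-3*01',
    #                    'd_gene': 'TRBD1*01', 'v_3_del': 2, 'd_5_del': 6,
    #                    'd_3_del': 1, 'j_5_del': 3, 'vd_ins': 1, 'dj_ins': 3,
    #                    'vd_dinucl': 'C', 'dj_dinucl': 'TCT', 'mismatches': []}
    #   bs = IgorBestScenariosVDJ.load_FromEventNameValues(mdl, 59, str_seq, scenario_dict)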
def save_scenario_fasta(self, outfilename):
ofileScen = open(outfilename, "w")
ofileScen.write(self.str_scenario_fasta())
# ofileScen.write("> "+str(self.seq_index)+", rank: "+str(self.scenario_rank)+ ", prob: "+str(self.scenario_proba_cond_seq)+"\n")
# ofileScen.write( self.strSeq_index + "\n" )
# ofileScen.write( self.getV_fasta() + "\n" )
# ofileScen.write( self.getVD_fasta() + "\n" )
# ofileScen.write( self.getD_fasta() + "\n" )
# ofileScen.write( self.getDJ_fasta() + "\n" )
# ofileScen.write( self.getJ_fasta() + "\n" )
ofileScen.close()
def str_scenario_fasta(self):
strScenarioFasta = ""
strScenarioFasta = strScenarioFasta + ">" + str(self.seq_index) + ", rank: " + str(
self.scenario_rank) + ", prob: " + str(self.scenario_proba_cond_seq) + "\n"
strScenarioFasta = strScenarioFasta + self.strSeq_index + "\n"
strScenarioFasta = strScenarioFasta + self.getV_fasta() + "\n"
strScenarioFasta = strScenarioFasta + self.getVD_fasta() + "\n"
strScenarioFasta = strScenarioFasta + self.getD_fasta() + "\n"
strScenarioFasta = strScenarioFasta + self.getDJ_fasta() + "\n"
strScenarioFasta = strScenarioFasta + self.getJ_fasta() + "\n"
# FIXME: TEMPORAL
strScenarioFasta = strScenarioFasta + "> v_choice\n"
strScenarioFasta = strScenarioFasta + self.getV_ntsequence() + "\n"
strScenarioFasta = strScenarioFasta + "> d_gene\n"
strScenarioFasta = strScenarioFasta + self.getD_ntsequence() + "\n"
strScenarioFasta = strScenarioFasta + "> j_choice\n"
strScenarioFasta = strScenarioFasta + self.getJ_ntsequence() + "\n"
return strScenarioFasta
#### V region methods
def getV_fasta(self):
strV_fasta = ""
strV_fasta = strV_fasta + ">" + str(self.id_v_choice) + ": " + self.getV_gene_name() + ", dels 3' = " + str(
self.getV_3_dels()) + "\n"
strV_fasta = strV_fasta + self.getV_Region() + "\n"
return strV_fasta
def getV_gene_name(self):
strEv = 'v_choice'
name_V = self.mdlParms.Event_dict[strEv].loc[self.id_v_choice]['name']
return name_V
def getV_ntsequence(self):
strEv = 'v_choice'
seq_V = self.mdlParms.Event_dict[strEv].loc[self.id_v_choice]['value']
return seq_V
def getV_3_dels(self):
strEv = 'v_3_del'
n_v_3_del = self.mdlParms.Event_dict[strEv].loc[self.id_v_3_del]['value']
return n_v_3_del
def getV_Region(self):
# seq_id=59
strEv = 'v_choice'
seq_V = self.mdlParms.Event_dict[strEv].loc[self.id_v_choice]['value']
n_v_3_del = self.getV_3_dels()
if n_v_3_del == 0:
return (seq_V)
# FIXME: ADD palindromic insertions
elif n_v_3_del < 0:
            return (seq_V + 'X' * (-n_v_3_del))
else:
return (seq_V[:-n_v_3_del])
#### J region methods
def getJ_fasta(self):
strJ_fasta = ""
strJ_fasta = strJ_fasta + ">" + str(self.id_j_choice) + ": " + self.getJ_gene_name() + ", dels 5' = " + str(
self.getJ_5_dels()) + "\n"
strJ_fasta = strJ_fasta + self.getJ_Region() + "\n"
return strJ_fasta
def getJ_gene_name(self):
strEv = 'j_choice'
name_J = self.mdlParms.Event_dict[strEv].loc[self.id_j_choice]['name']
return name_J
def getJ_ntsequence(self):
strEv = 'j_choice'
seq_J = self.mdlParms.Event_dict[strEv].loc[self.id_j_choice]['value']
return seq_J
def getJ_5_dels(self):
strEv = 'j_5_del'
n_j_5_del = self.mdlParms.Event_dict[strEv].loc[self.id_j_5_del]['value']
return n_j_5_del
def getJ_Region(self):
# seq_id=59
strEv = 'j_choice'
seq_J = self.mdlParms.Event_dict[strEv].loc[self.id_j_choice]['value'] # .values
n_j_5_del = self.getJ_5_dels()
if n_j_5_del == 0:
return (seq_J)
# FIXME: ADD palindromic insertions
elif n_j_5_del < 0:
            return ('X' * (-n_j_5_del) + seq_J)
else:
return (seq_J[n_j_5_del:])
#### D region methods
def getD_fasta(self):
strD_fasta = ""
strD_fasta = strD_fasta + ">" + str(self.id_d_gene) + ": " + self.getD_gene_name() \
+ ", " + str(self.id_d_5_del) + " dels 5' = " + str(self.getD_5_dels()) \
+ ", " + str(self.id_d_3_del) + " dels 3' = " + str(self.getD_3_dels()) + "\n"
strD_fasta = strD_fasta + self.getD_Region() + "\n"
return strD_fasta
def getD_gene_name(self):
strEv = 'd_gene'
name_D = self.mdlParms.Event_dict[strEv].loc[self.id_d_gene]['name']
return name_D
def getD_ntsequence(self):
strEv = 'd_gene'
seq_D = self.mdlParms.Event_dict[strEv].loc[self.id_d_gene]['value']
return seq_D
def getD_5_dels(self):
strEv = 'd_5_del'
n_d_5_del = self.mdlParms.Event_dict[strEv].loc[self.id_d_5_del]['value']
return n_d_5_del
def getD_3_dels(self):
strEv = 'd_3_del'
n_d_3_del = self.mdlParms.Event_dict[strEv].loc[self.id_d_3_del]['value']
return n_d_3_del
def getD_Region(self):
strEv = 'd_gene'
seq_D = self.mdlParms.Event_dict[strEv].loc[self.id_d_gene]['value'] # .values
n_d_5_del = self.getD_5_dels()
n_d_3_del = self.getD_3_dels()
if n_d_3_del == 0:
return (seq_D[n_d_5_del:])
# FIXME: ADD palindromic insertions
elif n_d_3_del < 0:
            return (seq_D[n_d_5_del:] + 'X' * (-n_d_3_del))
else:
return (seq_D[n_d_5_del:-n_d_3_del])
#### VD region methods
def getVD_fasta(self):
strVD_fasta = ""
strVD_fasta = strVD_fasta + ">" + str(self.id_vd_ins) + ", VD insertions = " + str(self.getVD_ins()) + "\n"
strVD_fasta = strVD_fasta + self.getVD_Region() + "\n"
return strVD_fasta
def getVD_ins(self):
strEv = 'vd_ins'
n_vd_ins = self.mdlParms.Event_dict[strEv].loc[self.id_vd_ins]['value']
return n_vd_ins
def getVD_Region(self):
strEv = 'vd_dinucl'
seq_VD_dinucl = self.mdlParms.Event_dict[strEv].loc[self.vd_dinucl]['value'].values
return (''.join(seq_VD_dinucl.tolist()))
#### DJ region methods
def getDJ_fasta(self):
strDJ_fasta = ""
strDJ_fasta = strDJ_fasta + ">" + str(self.id_dj_ins) + ", DJ insertions = " + str(self.getDJ_ins()) + "\n"
strDJ_fasta = strDJ_fasta + self.getDJ_Region() + "\n"
return strDJ_fasta
def getDJ_ins(self):
strEv = 'dj_ins'
n_dj_ins = self.mdlParms.Event_dict[strEv].loc[self.id_dj_ins]['value']
return n_dj_ins
def getDJ_Region(self):
strEv = 'dj_dinucl'
seq_DJ_dinucl = self.mdlParms.Event_dict[strEv].loc[self.dj_dinucl]['value'].values
return (''.join(seq_DJ_dinucl.tolist()))
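    # Reconstruction sketch (illustrative only): the scenario nucleotide sequence is
    # the concatenation of the region getters defined above, in the same order used
    # by str_scenario_fasta:
    #
    #   scenario_nt = (bs.getV_Region() + bs.getVD_Region() + bs.getD_Region()
    #                  + bs.getDJ_Region() + bs.getJ_Region())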
# FIXME: CHANGE NAME TO get_ScenarioProb.
def get_EventProb(self):
###### v_choice
strEvent = 'v_choice'
da_event = self.mdl.xdata[strEvent]
p_V = da_event[{strEvent: self.id_v_choice}]
# print("p_V = ", p_V)
# p_V = da_event.where(da_event['lbl__'+strEvent] == bs.getV_gene_name() ,drop=True)
###### j_choice
strEvent = 'j_choice'
da_event = self.mdl.xdata[strEvent]
p_J = da_event[{strEvent: self.id_j_choice}]
# print("p_J = ", p_J)
# p_J = da_event.where(da_event['lbl__'+strEvent] == bs.getJ_gene_name() ,drop=True)
###### d_gene
strEvent = 'd_gene'
da_event = self.mdl.xdata[strEvent]
p_DgJ = da_event[{'d_gene': self.id_d_gene, 'j_choice': self.id_j_choice}]
# print("p_DgJ = ", p_DgJ)
# p_DgJ
###### v_3_del
strEvent = 'v_3_del'
da_event = self.mdl.xdata[strEvent]
p_V_3_del = da_event[{'v_3_del': self.id_v_3_del, 'v_choice': self.id_v_choice}]
# print("p_V_3_del = ", p_V_3_del)
# p_V_3_del
###### j_5_del
strEvent = 'j_5_del'
da_event = self.mdl.xdata[strEvent]
p_J_5_del = da_event[{'j_5_del': self.id_j_5_del, 'j_choice': self.id_j_choice}]
# print("p_J_5_del = ", p_J_5_del)
# p_J_5_del
###### d_5_del
strEvent = 'd_5_del'
da_event = self.mdl.xdata[strEvent]
p_D_5_del = da_event[{'d_5_del': self.id_d_5_del, 'd_gene': self.id_d_gene}]
# print("p_D_5_del = ", p_D_5_del)
# p_D_5_del
###### d_3_del
strEvent = 'd_3_del'
da_event = self.mdl.xdata[strEvent]
p_D_3_del = da_event[{'d_3_del': self.id_d_3_del, 'd_5_del': self.id_d_5_del, 'd_gene': self.id_d_gene}]
# print("p_D_3_del = ", p_D_3_del)
# p_D_3_del
###### vd_ins
strEvent = 'vd_ins'
da_event = self.mdl.xdata[strEvent]
p_VD_ins = da_event[{'vd_ins': self.id_vd_ins}]
# print("p_VD_ins = ", p_VD_ins)
###### vd_dinucl
strEvent = 'vd_dinucl'
da_event = self.mdl.xdata[strEvent]
# Get the last nucleotide of V region (after deletions)
str_prev_nt = self.getV_Region()[-1]
pd_tmp = self.mdl.parms.Event_dict[strEvent]
prev_nt = pd_tmp.loc[pd_tmp['value'] == str_prev_nt].index.values[0]
# for each nucleotide on inserted list
Num_nt = 4 # 4 nucleotides A, C, G, T
p_VD_dinucl = 1
for curr_nt in self.vd_dinucl:
id_dinucl = prev_nt * Num_nt + curr_nt
prob_tmp = da_event[{'vd_dinucl': id_dinucl}]
p_VD_dinucl = p_VD_dinucl * prob_tmp
# print(prev_nt, curr_nt, id_dinucl, prob_tmp, p_VD_dinucl)
prev_nt = curr_nt
#
# print("p_VD_dinucl = ", p_VD_dinucl)
###### dj_ins
strEvent = 'dj_ins'
da_event = self.mdl.xdata[strEvent]
p_DJ_ins = da_event[{'dj_ins': self.id_dj_ins}]
# print("p_DJ_ins = ", p_DJ_ins)
###### dj_dinucl
strEvent = 'dj_dinucl'
da_event = self.mdl.xdata[strEvent]
# Get the last nucleotide of V region (after deletions)
# str_prev_nt = (self.getV_Region() + self.getVD_Region() + self.getD_Region() )[-1]
str_prev_nt = (self.getJ_Region())[0]
pd_tmp = self.mdl.parms.Event_dict[strEvent]
prev_nt = pd_tmp.loc[pd_tmp['value'] == str_prev_nt].index.values[0]
# print("prev_nt : ", prev_nt)
# for each nucleotide on inserted list
Num_nt = 4 # 4 nucleotides A, C, G, T
p_DJ_dinucl = 1
# self.dj_dinucl = self.dj_dinucl[::-1]
for curr_nt in self.dj_dinucl:
id_dinucl = prev_nt * Num_nt + curr_nt
prob_tmp = da_event[{'dj_dinucl': id_dinucl}]
p_DJ_dinucl = p_DJ_dinucl * prob_tmp
# print(prev_nt, curr_nt, id_dinucl, prob_tmp, p_DJ_dinucl)
prev_nt = curr_nt
# print("p_DJ_dinucl = ", p_DJ_dinucl)
p_vecE = p_V * p_J * p_DgJ * p_V_3_del * p_J_5_del * p_D_5_del * p_D_3_del * \
p_VD_ins * p_VD_dinucl * p_DJ_ins * p_DJ_dinucl
return p_vecE.values
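    # The probability assembled above factorizes according to the VDJ Bayesian
    # network (a sketch of the factorization, following the terms used in the code):
    #
    #   P(scenario) = P(V) * P(J) * P(D | J)
    #                 * P(delV3 | V) * P(delJ5 | J)
    #                 * P(delD5 | D) * P(delD3 | delD5, D)
    #                 * P(insVD) * prod_i P(nt_i | nt_{i-1})   # VD dinucleotide chain
    #                 * P(insDJ) * prod_j P(nt_j | nt_{j-1})   # DJ dinucleotide chain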
def get_DictNicknameProbs(self):
dictNicknameProbs = dict()
{
"v_choice": self.id_v_choice, \
"j_choice": self.id_j_choice, \
"d_gene": self.id_d_gene, \
"v_3_del": self.id_v_3_del, \
"d_5_del": self.id_d_5_del, \
"d_3_del": self.id_d_3_del, \
"j_5_del": self.id_j_5_del, \
"vd_ins": self.id_vd_ins, \
"vd_dinucl": self.vd_dinucl, \
"dj_ins": self.id_dj_ins, \
"dj_dinucl": self.dj_dinucl, \
"mismatches": self.mismatches, \
"mismatcheslen": self.mismatcheslen
}
###### v_choice
strEvent = 'v_choice'
da_event = self.mdl.xdata[strEvent]
p_V = da_event[{strEvent: self.id_v_choice}]
dictNicknameProbs[strEvent] = p_V
# print("p_V = ", p_V)
# p_V = da_event.where(da_event['lbl__'+strEvent] == bs.getV_gene_name() ,drop=True)
###### j_choice
strEvent = 'j_choice'
da_event = self.mdl.xdata[strEvent]
p_J = da_event[{strEvent: self.id_j_choice}]
dictNicknameProbs[strEvent] = p_J
# print("p_J = ", p_J)
# p_J = da_event.where(da_event['lbl__'+strEvent] == bs.getJ_gene_name() ,drop=True)
###### d_gene
strEvent = 'd_gene'
da_event = self.mdl.xdata[strEvent]
p_DgJ = da_event[{'d_gene': self.id_d_gene, 'j_choice': self.id_j_choice}]
dictNicknameProbs[strEvent] = p_DgJ
# print("p_DgJ = ", p_DgJ)
# p_DgJ
###### v_3_del
strEvent = 'v_3_del'
da_event = self.mdl.xdata[strEvent]
p_V_3_del = da_event[{'v_3_del': self.id_v_3_del, 'v_choice': self.id_v_choice}]
dictNicknameProbs[strEvent] = p_V_3_del
# print("p_V_3_del = ", p_V_3_del)
# p_V_3_del
###### j_5_del
strEvent = 'j_5_del'
da_event = self.mdl.xdata[strEvent]
p_J_5_del = da_event[{'j_5_del': self.id_j_5_del, 'j_choice': self.id_j_choice}]
dictNicknameProbs[strEvent] = p_J_5_del
###### d_5_del
strEvent = 'd_5_del'
da_event = self.mdl.xdata[strEvent]
p_D_5_del = da_event[{'d_5_del': self.id_d_5_del, 'd_gene': self.id_d_gene}]
dictNicknameProbs[strEvent] = p_D_5_del
###### d_3_del
strEvent = 'd_3_del'
da_event = self.mdl.xdata[strEvent]
p_D_3_del = da_event[{'d_3_del': self.id_d_3_del, 'd_5_del': self.id_d_5_del, 'd_gene': self.id_d_gene}]
dictNicknameProbs[strEvent] = p_D_3_del
###### vd_ins
strEvent = 'vd_ins'
da_event = self.mdl.xdata[strEvent]
p_VD_ins = da_event[{'vd_ins': self.id_vd_ins}]
dictNicknameProbs[strEvent] = p_VD_ins
###### vd_dinucl
strEvent = 'vd_dinucl'
da_event = self.mdl.xdata[strEvent]
# Get the last nucleotide of V region (after deletions)
str_prev_nt = self.getV_Region()[-1]
pd_tmp = self.mdl.parms.Event_dict[strEvent]
prev_nt = pd_tmp.loc[pd_tmp['value'] == str_prev_nt].index.values[0]
# for each nucleotide on inserted list
Num_nt = 4 # 4 nucleotides A, C, G, T
p_VD_dinucl = 1
for curr_nt in self.vd_dinucl:
id_dinucl = prev_nt * Num_nt + curr_nt
prob_tmp = da_event[{'vd_dinucl': id_dinucl}]
p_VD_dinucl = p_VD_dinucl * prob_tmp
# print(prev_nt, curr_nt, id_dinucl, prob_tmp, p_VD_dinucl)
prev_nt = curr_nt
dictNicknameProbs[strEvent] = p_VD_dinucl
###### dj_ins
strEvent = 'dj_ins'
da_event = self.mdl.xdata[strEvent]
p_DJ_ins = da_event[{'dj_ins': self.id_dj_ins}]
dictNicknameProbs[strEvent] = p_DJ_ins
###### dj_dinucl
strEvent = 'dj_dinucl'
da_event = self.mdl.xdata[strEvent]
# Get the last nucleotide of V region (after deletions)
str_prev_nt = (self.getV_Region() + self.getVD_Region() + self.getD_Region())[-1]
pd_tmp = self.mdl.parms.Event_dict[strEvent]
prev_nt = pd_tmp.loc[pd_tmp['value'] == str_prev_nt].index.values[0]
# print("prev_nt : ", prev_nt)
# for each nucleotide on inserted list
Num_nt = 4 # 4 nucleotides A, C, G, T
p_DJ_dinucl = 1
for curr_nt in self.dj_dinucl:
id_dinucl = prev_nt * Num_nt + curr_nt
prob_tmp = da_event[{'dj_dinucl': id_dinucl}]
p_DJ_dinucl = p_DJ_dinucl * prob_tmp
# print(prev_nt, curr_nt, id_dinucl, prob_tmp, p_DJ_dinucl)
prev_nt = curr_nt
dictNicknameProbs[strEvent] = p_DJ_dinucl
return dictNicknameProbs
def get_ErrorProb(self):
r = float(self.mdl.parms.ErrorRate['SingleErrorRate'])
L = len(self.strSeq_index)
print("error rate: ", r, "n mismatches", self.mismatcheslen)
# return r**(self.mismatcheslen)
return (r / 3) ** (self.mismatcheslen) # * (1-r)**( L - self.mismatcheslen)
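    # Sketch of the error term used above (assuming a uniform single-nucleotide error
    # model): each of the n observed mismatches contributes a factor r/3, where r is
    # the single error rate, so P_err = (r/3)**n. The commented factor (1-r)**(L-n)
    # would additionally account for the L-n matching positions.
    #
    #   e.g. r = 0.003, n = 2  ->  P_err = (0.003/3)**2 = 1e-6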
|
PypiClean
|
/brand_alert-1.0.0-py3-none-any.whl/brandalert/models/response.py
|
import copy
import datetime
import re
from .base import BaseModel
import sys
if sys.version_info < (3, 9):
import typing
_re_date_format = re.compile(r'^\d\d\d\d-\d\d-\d\d$')
def _date_value(values: dict, key: str) -> datetime.date or None:
if key in values and values[key] is not None:
if _re_date_format.match(values[key]) is not None:
return datetime.datetime.strptime(
values[key], '%Y-%m-%d').date()
return None
def _string_value(values: dict, key: str) -> str:
if key in values:
return str(values[key])
return ''
def _int_value(values: dict, key: str) -> int:
if key in values:
return int(values[key])
return 0
def _list_value(values: dict, key: str) -> list:
if key in values and type(values[key]) is list:
return copy.deepcopy(values[key])
return []
def _list_of_objects(values: dict, key: str, classname: str) -> list:
r = []
if key in values and type(values[key]) is list:
r = [globals()[classname](x) for x in values[key]]
return r
class Domain(BaseModel):
domain_name: str
action: str
date: datetime.date or None
def __init__(self, values):
super().__init__()
self.domain_name = ''
self.action = ''
self.date = None
if values is not None:
self.domain_name = _string_value(values, 'domainName')
self.action = _string_value(values, 'action')
self.date = _date_value(values, 'date')
class Response(BaseModel):
domains_count: int
if sys.version_info < (3, 9):
domains_list: typing.List[Domain]
else:
domains_list: [Domain]
def __init__(self, values):
super().__init__()
self.domains_count = 0
self.domains_list = []
if values is not None:
self.domains_count = _int_value(values, 'domainsCount')
self.domains_list = _list_of_objects(
values, 'domainsList', 'Domain')
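# Construction sketch (illustrative only; field values are made up): Response is
# built from a plain dict as returned by the API.
#
#   r = Response({'domainsCount': 1,
#                 'domainsList': [{'domainName': 'example.com',
#                                  'action': 'added',
#                                  'date': '2023-01-01'}]})
#   assert r.domains_count == 1 and r.domains_list[0].date.year == 2023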
class ErrorMessage(BaseModel):
code: int
message: str
def __init__(self, values):
super().__init__()
        self.code = 0
self.message = ''
if values is not None:
self.code = _int_value(values, 'code')
self.message = _string_value(values, 'messages')
|
PypiClean
|
/django-material-orange-0.11.6.tar.gz/django-material-orange-0.11.6/material/templatetags/material_form.py
|
import os
import re
from collections import defaultdict
from django.forms.utils import flatatt
from django.forms.forms import BoundField
from django.template import Library
from django.template.base import (
TemplateSyntaxError, Node, Variable, token_kwargs)
from django.template.loader import get_template
from django.template.loader_tags import IncludeNode
from django.utils.safestring import mark_safe
from ..compat import context_flatten
register = Library()
ATTRS_RE = re.compile(
r'(?P<attr>[-\w]+)(\s*=\s*[\'"](?P<val>.*?)[\'"])?',
re.MULTILINE | re.DOTALL
)
def _render_parts(context, parts_list):
parts = context['form_parts']
for partnode in parts_list:
part = partnode.resolve_part(context)
if partnode.section not in parts[part]:
value = partnode.render(context)
parts[part][partnode.section] = value
@register.tag('form')
class FormNode(Node):
"""
Template based form rendering.
Example::
{% form template='material/form.html' form=form layout=view.layout %}
{% part form.email prepend %}
<span class="input-group-addon" id="basic-addon1">@</span>
{% endpart %}
{% endform %}
"""
def __init__(self, parser, token): # noqa D102
bits = token.split_contents()
remaining_bits = bits[1:]
self.kwargs = token_kwargs(remaining_bits, parser)
if remaining_bits:
raise TemplateSyntaxError("%r received an invalid token: %r" %
(bits[0], remaining_bits[0]))
for key in self.kwargs:
if key not in ('form', 'layout', 'template'):
raise TemplateSyntaxError("%r received an invalid key: %r" %
(bits[0], key))
self.kwargs[key] = self.kwargs[key]
        self.nodelist = parser.parse(('end{}'.format(bits[0]),))
parser.delete_first_token()
def render(self, context): # noqa D102
form = self.kwargs.get('form')
form = form.resolve(context) if form else context.get('form')
if form is None:
return ''
# Take one of view.layout or form.layout
layout = self.kwargs.get('layout')
if layout is not None:
layout = layout.resolve(context)
if layout is None:
if 'view' in context:
view = context['view']
if hasattr(view, 'layout'):
layout = view.layout
if layout is None:
if hasattr(form, 'layout'):
layout = form.layout
template_name = self.kwargs.get('template', 'material/form.html')
template = get_template(template_name)
# Render form and parts
parts = defaultdict(dict) # part -> section -> value
attrs = defaultdict(dict) # field -> section -> atts
with context.push(
form=form,
layout=layout,
form_template_pack=os.path.dirname(template_name),
form_parts=parts,
form_widget_attrs=attrs):
# direct children
children = (
node for node in self.nodelist
if isinstance(node, FormPartNode)
)
_render_parts(context, children)
attrs = (
node for node in self.nodelist
if isinstance(node, WidgetAttrNode)
)
for attr in attrs:
attr.render(context)
# include
children = (
node for node in self.nodelist
if isinstance(node, IncludeNode)
)
for included_list in children:
included = included_list.template.resolve(context)
children = (
node for node in included.nodelist
if isinstance(node, FormPartNode)
)
_render_parts(context, children)
attrs = (
node for node in self.nodelist
if isinstance(node, WidgetAttrNode)
)
for attr in attrs:
attr.render(context)
return template.render(context_flatten(context))
@register.tag('part')
class FormPartNode(Node):
"""Named piece of HTML layout."""
def __init__(self, parser, token): # noqa D102
bits = token.split_contents()
if len(bits) > 5:
raise TemplateSyntaxError(
"%r accepts at most 4 arguments (part_id, section,"
" asvar, varname), got: {}".format(bits[0], ','.join(bits[1:]))
)
self.part_id = Variable(bits[1])
self.section = bits[2] if len(bits) >= 3 else None
self.varname = None
if len(bits) > 3:
if bits[3] != 'asvar':
raise TemplateSyntaxError(
                    "Fourth argument should be 'asvar', got {}".format(bits[3])
)
            if len(bits) < 5:
raise TemplateSyntaxError('Variable name not provided')
else:
self.varname = Variable(bits[4])
self.nodelist = parser.parse(('end{}'.format(bits[0]),))
parser.delete_first_token()
def resolve_part(self, context):
"""Resolve field reference form context."""
part = self.part_id.resolve(context)
if isinstance(part, BoundField):
part = part.field
return part
def render(self, context): # noqa D102
part = self.resolve_part(context)
parts = context['form_parts']
if self.section in parts[part]:
# already rendered
if self.varname is not None:
varname = self.varname.resolve(context)
context[varname] = parts[part][self.section]
return ""
else:
return parts[part][self.section]
# child parts
children = (
node for node in self.nodelist
if isinstance(node, FormPartNode)
)
_render_parts(context, children)
# render own content
value = self.nodelist.render(context).strip()
if self.varname is not None:
context[self.varname.resolve(context)] = value
return ''
else:
if not value:
return ''
return value
@register.tag('attrs')
class WidgetAttrsNode(Node):
"""
Renders attrs for the html tag.
<input{% attrs boundfield 'widget' default field.widget.attrs %}
id="id_{{ bound_field.name }}"
class="{% if bound_field.errors %}invalid{% endif %}"
{% endattrs %}>
"""
def __init__(self, parser, token): # noqa D102
bits = token.split_contents()
if len(bits) < 3:
raise TemplateSyntaxError(
"%r accepts at least 2 arguments (bound_field,"
" 'groupname'), got: {}".format(bits[0], ','.join(bits[1:]))
)
if len(bits) > 5:
raise TemplateSyntaxError(
"%r accepts at mast 4 arguments (bound_field, 'groupname'"
" default attrs_dict ), got: {}".format(
bits[0], ','.join(bits[1:]))
)
if len(bits) > 3 and bits[3] != 'default':
raise TemplateSyntaxError(
"%r 3d argument should be 'default' (bound_field, 'groupname'"
" default attrs_dict ), got: {}".format(
bits[0], ','.join(bits[1:]))
)
self.field = Variable(bits[1])
self.group = Variable(bits[2])
self.widget_attrs = Variable(bits[4]) if len(bits) >= 5 else None
        self.nodelist = parser.parse(('end{}'.format(bits[0]),))
parser.delete_first_token()
def resolve_field(self, context):
"""Resolve field reference form context."""
field = self.field.resolve(context)
if isinstance(field, BoundField):
field = field.field
return field
def render(self, context): # noqa D102
field = self.resolve_field(context)
group = self.group.resolve(context)
form_widget_attrs = context['form_widget_attrs']
override = {}
if group in form_widget_attrs[field]:
override = form_widget_attrs[field][group]
build_in_attrs, tag_content = {}, self.nodelist.render(context)
for attr, _, value in ATTRS_RE.findall(tag_content):
build_in_attrs[attr] = mark_safe(value) if value != '' else True
widget_attrs = {}
if self.widget_attrs is not None:
widget_attrs = self.widget_attrs.resolve(context)
result = build_in_attrs.copy()
if 'class' in result and 'class' in widget_attrs:
result['class'] += ' ' + widget_attrs.pop('class')
result.update(widget_attrs)
for attr, (value, action) in override.items():
if action == 'override':
result[attr] = value
elif action == 'append':
if attr in result:
result[attr] += " " + value
else:
result[attr] = value
return flatatt(result)
@register.tag('attr')
class WidgetAttrNode(Node):
"""The tag allows to add or override specific attribute in the rendered HTML.
The first argumnent is the attribute group name, second is the
attribute name. The third optional flag shows to override (by
default) or `append` the value.
Usage::
{% attr form.email 'widget' 'data-validate' %}email{% endattr %}
{% attr form.email 'widget' 'class' append %}green{% endattr %}
{% attr form.email 'widget' 'required' %}True{% endattr %}
"""
def __init__(self, parser, token): # noqa D102
bits = token.split_contents()
if len(bits) < 4:
raise TemplateSyntaxError(
"{} accepts at least 3 arguments (bound_field, 'groupname'"
" 'attr_name'), got: {}".format(bits[0], ','.join(bits[1:]))
)
if len(bits) > 5:
raise TemplateSyntaxError(
"{} accepts at mast 4 arguments (bound_field, 'groupname'"
" 'attr_name' action ), got: {}".format(
bits[0], ','.join(bits[1:]))
)
if len(bits) >= 5 and bits[4] not in ['append', 'override']:
raise TemplateSyntaxError(
"{} unknown action {} should be 'append'"
" of 'override'".format(bits[0], ','.join(bits[4]))
)
self.field = Variable(bits[1])
self.group = Variable(bits[2])
self.attr = bits[3]
self.action = bits[4] if len(bits) >= 5 else 'override'
        self.nodelist = parser.parse(('end{}'.format(bits[0]),))
parser.delete_first_token()
def resolve_field(self, context):
"""Resolve field reference form context."""
field = self.field.resolve(context)
if isinstance(field, BoundField):
field = field.field
return field
def render(self, context): # noqa D102
field = self.resolve_field(context)
group = self.group.resolve(context)
form_widget_attrs = context['form_widget_attrs']
value = self.nodelist.render(context)
if group not in form_widget_attrs[field]:
form_widget_attrs[field][group] = {}
attrs = form_widget_attrs[field][group]
if self.attr not in attrs or self.action == 'override':
attrs[self.attr] = (value, self.action)
else:
old_value, old_action = attrs[self.attr]
if old_action != 'override':
attrs[self.attr] = (
'{} {}'.format(old_value, value),
self.action
)
|
PypiClean
|
/lab-partner-0.7.57.tar.gz/lab-partner-0.7.57/src/lab_partner/docker/run_builder.py
|
import os
from pathlib import Path
from typing import Optional
from .unix_user import UnixUser
class InvalidDockerOptionConfiguration(Exception):
pass
class DockerRunOptions(object):
"""
This class helps build up a list of options for the invocation of Docker.
"""
def __init__(self):
self._options = set()
self._user_info = UnixUser()
def with_remove_on_exit(self) -> 'DockerRunOptions':
        self._options.add('--rm')
return self
def with_name(self, name: str) -> 'DockerRunOptions':
self._options.add(f'--name {name}')
return self
def with_hostname(self, hostname: str) -> 'DockerRunOptions':
        self._options.add(f'--hostname {hostname}')
return self
def with_user(self) -> 'DockerRunOptions':
self._options.add(f'--user {self._user_info.uid}:{self._user_info.gid}')
return self
def with_privileged(self) -> 'DockerRunOptions':
self._options.add('--privileged')
return self
def with_tty(self) -> 'DockerRunOptions':
if self._is_daemon():
            raise InvalidDockerOptionConfiguration('Launching a tty is not compatible with daemon mode')
self._options.add('-it')
return self
def _is_tty(self) -> bool:
return '-it' in self._options
def with_daemon(self) -> 'DockerRunOptions':
if self._is_tty():
raise InvalidDockerOptionConfiguration('Launching as a daemon is not compatible with tty mode')
self._options.add('-d')
return self
def _is_daemon(self) -> bool:
return '-d' in self._options
def with_network(self, network_name: str) -> 'DockerRunOptions':
self._options.add(f'--network={network_name}')
return self
def with_workdir(self, workdir: str) -> 'DockerRunOptions':
self._options.add(f'--workdir={workdir}')
return self
def with_env(self, key: str, value: Optional[str] = None) -> 'DockerRunOptions':
"""
Adds an environment value option to the Docker command line, assuming both the
key and value are non-empty.
:param key: Environment variable name
:param value: Environment variable value
:return: self
"""
if key and value:
self._options.add(f'-e {key}={value}')
elif key:
self._options.add(f'-e {key}')
return self
def with_mount_home(self) -> 'DockerRunOptions':
self._options.add(f'-v {self._user_info.home}:{self._user_info.home}')
return self
def with_home_dir_bind_mount(self, source: str, target: str, validate_source_exists: bool = True) -> 'DockerRunOptions':
return self.with_bind_mount(self._user_info.home_subdir(source), target, validate_source_exists)
def with_bind_mount(self, source: str, target: str, validate_source_exists: bool = True) -> 'DockerRunOptions':
"""
Adds an option to bind mount a host volume
:param source: Source host path to be mounted
:param target: Target path inside container to attach the mount
:param validate_source_exists: If True (the default) the mount command will only
be included if the source volume actually exists.
:return: self
"""
if source and target and self._path_exists(source, validate_source_exists):
self._options.add(f'-v {source}:{target}')
return self
def with_named_volume(self, name: str, target: str) -> 'DockerRunOptions':
self._options.add(f'--mount type=volume,src={name},dst={target}')
return self
def with_port_mapping(self, external_port: int, internal_port: int):
self._options.add(f'-p {external_port}:{internal_port}')
return self
def build(self) -> str:
"""
Builds the accumulated options into a space-separated string.
:return: String containing all the options.
"""
return ' '.join(self._options)
@staticmethod
def _path_exists(path: str, should_validate: bool) -> bool:
"""
Method used to check for the existence of a source path
:param path: Path to be checked
:param should_validate: Whether we should really check. If False, the method
will return True regardless of whether the path exists.
        :return: True if the path exists, or if validation was skipped
"""
if not should_validate:
return True
else:
return Path(path).exists()
class DockerRunBuilder(object):
def __init__(self, image_name: str):
self._image_name = image_name
self._options = DockerRunOptions()
def options(self):
return self._options
def build(self) -> str:
return f'docker run \
{self._options.build()} \
{self._image_name}'
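# Minimal usage sketch (not part of the original module), assuming a Unix-like
# host, since DockerRunOptions resolves the current user on construction; the
# image name, container name and paths below are hypothetical.
if __name__ == '__main__':
    builder = DockerRunBuilder('python:3.11-slim')
    builder.options() \
        .with_remove_on_exit() \
        .with_name('lab-example') \
        .with_tty() \
        .with_env('APP_ENV', 'dev') \
        .with_workdir('/workspace')
    print(builder.build())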
|
PypiClean
|
/fin_model_course-0.1.17.tar.gz/fin_model_course-0.1.17/fin_model_course/lectures/lab_exercises/notes.py
|
import datetime
import os
import pandas as pd
import statsmodels.api as sm
import numpy as np
from finstmt import BalanceSheets, IncomeStatements, FinancialStatements
from build_tools.config import LAB_FOLDER_NAME, LAB_EXERCISES_PATH, SITE_URL
from lectures.lab_exercise import LabExerciseLecture
from lectures.model import LectureNotes, Lecture, LectureResource, Equation, Link
from lectures.python_basics.notes import LECTURE_4_COMMON_RESOURCES
from resources.models import RESOURCES
from schedule.main import LECTURE_2_NAME, LECTURE_3_NAME, LECTURE_4_NAME, LECTURE_5_NAME, LECTURE_6_NAME, \
LECTURE_7_NAME, LECTURE_8_NAME, LECTURE_9_NAME, LECTURE_10_NAME, LECTURE_11_NAME, LECTURE_12_NAME
COURSE_SITE = Link(href=SITE_URL, display_text='the course site')
LAB_LECTURE_4_COMMON_RESOURCES = [
LECTURE_4_COMMON_RESOURCES[1],
RESOURCES.lectures.beyond_initial_python.slides,
]
LAB_LECTURE_6_COMMON_RESOURCES = [
RESOURCES.labs.visualization.pandas_visualization_notebook,
RESOURCES.lectures.visualization.slides,
]
LECTURE_7_SLIDES = RESOURCES.lectures.sensitivity_analysis.slides
LECTURE_7_LAB_NOTEBOOK = RESOURCES.labs.python_basics.dicts_lists_comprehensions_notebook
LECTURE_7_EXAMPLE_NOTEBOOK = RESOURCES.examples.intro.python.dicts_list_comp_imports_notebook
LECTURE_8_SLIDES = RESOURCES.lectures.probability.slides
LECTURE_9_SLIDES = RESOURCES.lectures.combining_excel_python.slides
LECTURE_10_SLIDES = RESOURCES.lectures.monte_carlo.slides
LECTURE_11_SLIDES = RESOURCES.lectures.dcf_cost_capital.slides
LECTURE_12_SLIDES = RESOURCES.lectures.dcf_fcf.slides
def get_simple_retirement_lab_lecture() -> LabExerciseLecture:
title = 'Extending a Simple Retirement Model'
short_title = 'Vary Savings Rate Lab'
youtube_id = 'KVVabq4n-ow'
week_covered = 2
bullets = [
[
"Now we want to see the effect of savings rate on time until retirement, in addition to interest rate",
"In both Excel and Python, calculate the years to retirement for savings rates of 10%, 25%, and 40%, "
"and each of these cases with each of the interest rate cases, 4%, 5%, and 6%",
f"Be sure that you drag formulas in Excel and use for loops in Python to accomplish this",
"In total you should have 9 calculated years to retirement numbers, in each of the two models.",
]
]
answers = [
[
"Martha has 61.1 years to retirement if she earns a 4% return and saves 10%.",
"Martha has 41.0 years to retirement if she earns a 4% return and saves 25%.",
"Martha has 31.9 years to retirement if she earns a 4% return and saves 40%.",
"Martha has 53.3 years to retirement if she earns a 5% return and saves 10%.",
"Martha has 36.7 years to retirement if she earns a 5% return and saves 25%.",
"Martha has 29.0 years to retirement if she earns a 5% return and saves 40%.",
"Martha has 47.6 years to retirement if she earns a 6% return and saves 10%.",
"Martha has 33.4 years to retirement if she earns a 6% return and saves 25%.",
"Martha has 26.7 years to retirement if she earns a 6% return and saves 40%.",
]
]
resources = [
RESOURCES.examples.intro.excel.simple_retirement_model,
RESOURCES.examples.intro.python.simple_retirement_model,
RESOURCES.lectures.getting_started.slides,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
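# Sketch of the nested-loop structure this lab asks for (not part of the course
# content). The salary, desired cash, and annuity formula below are hypothetical
# placeholders, so the printed years will not match the answers above; the point
# is looping over every combination of interest rate and savings rate.
def _example_savings_rate_grid(salary: float = 60_000, desired_cash: float = 1_500_000):
    import math
    for interest_rate in (0.04, 0.05, 0.06):
        for savings_rate in (0.10, 0.25, 0.40):
            annual_savings = salary * savings_rate
            # Years n such that an annuity of annual_savings grows to desired_cash.
            years = math.log(1 + desired_cash * interest_rate / annual_savings) / math.log(1 + interest_rate)
            print(f'{interest_rate:.0%} return, {savings_rate:.0%} savings: {years:.1f} years')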
def get_extend_dynamic_retirement_excel_lab_lecture() -> LabExerciseLecture:
title = 'Determining Desired Cash in the Dynamic Salary Retirement Excel Model'
short_title = 'Dynamic Desired Cash in Excel'
youtube_id = 'cM3uKsHXS3M'
week_covered = 2
bullets = [
[
'We want to relax the assumption that the amount needed in retirement is given by '
'a fixed amount of desired cash',
'Add new inputs to the model, "Annual Cash Spend During Retirement" and "Years in Retirement"',
'Calculate desired cash based on interest, cash spend, and years in retirement',
'Use the calculated desired cash in the model to determine years to retirement',
]
]
answers = [
[
r'If annual spend is 40k for 25 years in retirement, \$563,757.78 should be the retirement cash and there '
r'should be 18 years to retirement.'
]
]
resources = [
RESOURCES.examples.intro.excel.dynamic_salary_retirement_model,
RESOURCES.lectures.depth_excel.slides,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_python_basics_conditionals_lab_lecture() -> LabExerciseLecture:
title = 'Python Basics - Conditionals'
short_title = 'Python Conditionals Lab'
youtube_id = 'T4LK0QgPbNA'
week_covered = 2
bullets = [
[
f"The Jupyter notebook called Python Basics Lab contains "
f"all of the labs for today's lecture",
'Please complete the exercises under "Conditionals"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_4_COMMON_RESOURCES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_python_basics_lists_lab_lecture() -> LabExerciseLecture:
title = 'Python Basics - Lists'
short_title = 'Python Lists Lab'
youtube_id = 'AViA3IBpXcc'
week_covered = 3
bullets = [
[
"Keep working off of Python Basics Lab.ipynb",
'Please complete the exercises under "Working with Lists"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_4_COMMON_RESOURCES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_python_basics_functions_lab_lecture() -> LabExerciseLecture:
title = 'Python Basics - Functions'
short_title = 'Python Functions Lab'
youtube_id = 'xOxJst-SMy8'
week_covered = 3
bullets = [
[
"Keep working off of Python Basics Lab.ipynb",
'Please complete the exercises under "Functions"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_4_COMMON_RESOURCES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_python_basics_data_types_lab_lecture() -> LabExerciseLecture:
title = 'Python Basics - Data Types'
short_title = 'Python Data Types Lab'
youtube_id = 'pyjfrIzdjgo'
week_covered = 3
bullets = [
[
"Keep working off of Python Basics Lab.ipynb",
'Please complete the exercises under "Data Types"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_4_COMMON_RESOURCES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_python_basics_classes_lab_lecture() -> LabExerciseLecture:
title = 'Python Basics - Classes'
short_title = 'Python Classes Lab'
youtube_id = 'znxtmT66UAM'
week_covered = 3
bullets = [
[
"Keep working off of Python Basics Lab.ipynb",
'Make sure you have car_example.py in the same folder',
'Please complete the exercises under "Working with Classes"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_4_COMMON_RESOURCES,
RESOURCES.examples.intro.python.car_example,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_extend_dynamic_retirement_python_lab_lecture() -> LabExerciseLecture:
title = 'Determining Desired Cash in the Dynamic Salary Retirement Python Model'
short_title = 'Dynamic Desired Cash in Python'
youtube_id = 'TotudqllyGo'
week_covered = 4
bullets = [
[
'We want to relax the assumption that the amount needed in retirement is given by '
'a fixed amount of desired cash',
'Start from the completed retirement model Jupyter notebook Dynamic Salary Retirement Model.ipynb',
'Add new inputs to the model, "Annual Cash Spend During Retirement" and "Years in Retirement"',
'Calculate desired cash based on interest, cash spend, and years in retirement',
'Use the calculated desired cash in the model to determine years to retirement',
]
]
answers = [
[
r'If annual spend is 40k for 25 years in retirement, \$563,757.78 should be the retirement cash and there '
r'should be 18 years to retirement.'
]
]
resources = [
RESOURCES.examples.intro.python.dynamic_salary_retirement_model,
RESOURCES.lectures.depth_python.slides,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_intro_to_pandas_lab_lecture() -> LabExerciseLecture:
title = 'Getting Started with Pandas'
short_title = 'Intro Pandas Lab'
youtube_id = 'XYBS-XhHyHo'
week_covered = 5
bullets = [
[
'Work off of the Jupyter notebook Pandas and Visualization Labs.ipynb',
'Complete the lab exercises in the first section entitled "Pandas"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_6_COMMON_RESOURCES,
RESOURCES.external.visualization.pandas_official_intro,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_pandas_styling_lab_lecture() -> LabExerciseLecture:
title = 'Styling Pandas DataFrames'
short_title = 'Pandas Styling Lab'
youtube_id = 'LwO9NblsC40'
week_covered = 5
bullets = [
[
'Keep working with the same lab Jupyter Notebook',
'Complete the lab exercises in the second section entitled "Pandas Styling"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_6_COMMON_RESOURCES,
RESOURCES.external.visualization.pandas_styling_guide,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_intro_python_visualization_lab_lecture() -> LabExerciseLecture:
title = 'Introduction to Graphing with Pandas'
short_title = 'Intro Visualization Lab'
youtube_id = 'C9yYyuzZPDw'
week_covered = 5
bullets = [
[
'Keep working with the same lab Jupyter Notebook',
'Complete the lab exercises in the final section entitled "Graphics"'
]
]
answers = [
[
]
]
resources = [
*LAB_LECTURE_6_COMMON_RESOURCES,
RESOURCES.external.visualization.pandas_visualization_guide,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_sensitivity_analysis_excel_lab_lecture() -> LabExerciseLecture:
title = 'Adding Sensitivity Analysis to Project 1 - Excel'
short_title = 'Sensitivity Analysis in Excel Lab'
youtube_id = 'p9n8uKZOqAY'
week_covered = 6
bullets = [
[
'Add sensitivity analysis to your Excel model from Project 1',
'See how the NPV changes when the number of machines and initial demand change',
'Do a one-way Data Table with a graph for each of the two inputs, then a two-way '
'data table with conditional formatting'
]
]
answers = [
[
]
]
resources = [
LECTURE_7_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_dictionaries_lab_lecture() -> LabExerciseLecture:
title = 'Learning How to Use Dictionaries'
short_title = 'Dictionaries Lab'
youtube_id = 'AwdowFEtoOU'
week_covered = 6
bullets = [
[
'For this Python section, lab exercises are in the Jupyter notebook '
'Dicts and List Comprehensions Lab.ipynb',
'Complete the exercises in the dictionaries section for now',
]
]
answers = [
[
]
]
resources = [
LECTURE_7_SLIDES,
LECTURE_7_EXAMPLE_NOTEBOOK,
RESOURCES.labs.python_basics.dicts_lists_comprehensions_notebook,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_list_comprehensions_lab_lecture() -> LabExerciseLecture:
title = 'Learning How to Use List Comprehensions'
short_title = 'List Comprehensions Lab'
youtube_id = 'fEPsRi1DWdg'
week_covered = 6
bullets = [
[
'Continue working on the same Jupyter notebook from the previous lab exercise',
'Complete the exercises in the List Comprehensions section for now',
]
]
answers = [
[
]
]
resources = [
LECTURE_7_SLIDES,
LECTURE_7_EXAMPLE_NOTEBOOK,
LECTURE_7_LAB_NOTEBOOK,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_sensitivity_analysis_python_lab_lecture() -> LabExerciseLecture:
title = 'Adding Sensitivity Analysis to Project 1 - Python'
short_title = 'Sensitivity Analysis in Python Lab'
youtube_id = 'r8ly1gY3jDA'
week_covered = 7
bullets = [
[
'Add sensitivity analysis to your Python model from Project 1',
'See how the NPV changes when the number of machines and initial demand change',
'Output both a hex-bin plot and a styled DataFrame'
]
]
answers = [
[
]
]
resources = [
LECTURE_7_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_scenario_analysis_excel_lab_lecture() -> LabExerciseLecture:
title = 'Adding Scenario Analysis to Project 1 - Excel'
short_title = 'Scenario Analysis Excel Lab'
youtube_id = 'wOrBz9ddCpA'
week_covered = 7
bullets = [
[
'Add external scenario analysis to your Excel model from Project 1',
'Create three cases, for a bad, normal, and good economy. Change the initial demand '
'and price per phone in each of the cases. Both demand and price should be higher '
'in better economic situations. ',
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_scenario_analysis_python_lab_lecture() -> LabExerciseLecture:
title = 'Adding Scenario Analysis to Project 1 - Python'
short_title = 'Scenario Analysis Python Lab'
youtube_id = '4MDIUB1kcY4'
week_covered = 8
bullets = [
[
'Add external scenario analysis to your Python model from Project 1',
'Create three cases, for a bad, normal, and good economy. Change the initial demand '
'and price per phone in each of the cases. Both demand and price should be higher '
'in better economic situations. ',
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_randomness_excel_lab_lecture() -> LabExerciseLecture:
title = 'Generating and Visualizing Random Numbers - Excel'
short_title = 'Randomness Excel Lab'
youtube_id = 'dxCFS4dQqVo'
week_covered = 8
bullets = [
[
            'Complete the following exercise in Excel for n=10 and n=1000',
'Generate n values between 0 and 1 with a uniform distribution',
'Generate n values from a normal distribution with a 0.5 mean and 10 standard deviation',
'Visualize each of the two outputs with a histogram',
'Calculate the mean and standard deviation of each of the two sets of generated numbers',
'Re-calculate it a few times, take note of how much the mean and standard deviation change'
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_randomness_python_lab_lecture() -> LabExerciseLecture:
title = 'Generating and Visualizing Random Numbers - Python'
short_title = 'Randomness Python Lab'
youtube_id = 'A42MrQL6Dz4'
week_covered = 8
bullets = [
[
            'Complete the following exercise in Python for n=10 and n=1000',
'Generate n values between 0 and 1 with a uniform distribution',
'Generate n values from a normal distribution with a 0.5 mean and 10 standard deviation',
'Visualize each of the two outputs with a histogram',
'Calculate the mean and standard deviation of each of the two sets of generated numbers',
'Re-calculate it a few times, take note of how much the mean and standard deviation change'
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
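# Minimal sketch of the Python side of this lab (not part of the course content):
# draw the two samples, plot a histogram of each, and report mean and standard
# deviation; matplotlib is assumed to be available alongside numpy.
def _example_random_draws(n: int = 1000):
    import matplotlib.pyplot as plt
    uniform_draws = np.random.uniform(0, 1, n)
    normal_draws = np.random.normal(0.5, 10, n)
    for name, values in (('uniform', uniform_draws), ('normal', normal_draws)):
        plt.hist(values, bins=30)
        plt.title(f'{name}: mean={values.mean():.2f}, std={values.std():.2f}')
        plt.show()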
def get_random_stock_model_lab_lecture() -> LabExerciseLecture:
title = 'Building a Simple Model of Stock Returns'
short_title = 'Internal Randomness Simple Model Lab'
youtube_id = 'mRiaOqoKuQQ'
week_covered = 8
bullets = [
[
'Create the following model in both Excel and Python',
'A stock starts out priced at 100. Each period, it can either go up or down.',
'When it goes up, it will grow by 1%. When it goes down, it will decrease by 1%.',
'The likelihood of the stock going up is 60%, and down 40%.',
'Build a model which shows how the stock price changes throughout time. Visualize it up to 100 periods and '
'show the final price.'
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
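# Minimal sketch of the Python side of this lab (not part of the course content):
# simulate the 60%/40% up-down walk for 100 periods starting from a price of 100.
def _example_stock_walk(n_periods: int = 100, start_price: float = 100.0) -> pd.Series:
    prices = [start_price]
    for _ in range(n_periods):
        step = 0.01 if np.random.random() < 0.6 else -0.01
        prices.append(prices[-1] * (1 + step))
    return pd.Series(prices)  # .plot() to visualize; .iloc[-1] is the final price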
def get_extend_model_internal_randomness_lab_lecture() -> LabExerciseLecture:
title = 'Extending the Project 1 Model with Internal Randomness'
short_title = 'Internal Randomness Model Lab'
youtube_id = 'LBXRPocOCDs'
week_covered = 9
bullets = [
[
"Add internal randomness to your Project 1 Excel and Python models",
"Now assume that the interest rate is drawn from a normal distribution",
"For baseline values of the inputs, you can use a 4% mean and 3% standard deviation",
"You should be able to run the model repeatedly and see a different NPV each time"
]
]
answers = [
[
]
]
resources = [
LECTURE_8_SLIDES
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_read_write_excel_pandas_lab_lecture() -> LabExerciseLecture:
title = 'Reading and Writing to Excel with Pandas'
short_title = 'Read Write Pandas Lab'
youtube_id = 'Y1A39qzglik'
week_covered = 9
bullets = [
[
'Download "MSFT Financials.xls" from the course site',
'Read the sheet "Income Statement" into a DataFrame',
'Write the DataFrame to a new workbook, "My Data.xlsx", with the sheet '
'name "Income Statement"'
],
[
'Use the same "MSFT Financials.xls" from the first exercise',
'Output to five separate workbooks, named "My Data1.xlsx", "My Data2.xlsx", and so on.',
['Do this without writing the to_excel command multiple times']
],
[
'Note: this exercise uses the Advanced material covered in the example '
'Jupyter notebook Read Write Excel Pandas.ipynb',
['Use the same "MSFT Financials.xls" from the first exercise'],
'Output to five separate sheets in the same workbook "My Data.xlsx". The sheets should '
'be named "Income Statement 1", "Income Statement 2", and so on.',
['Do this without writing the to_excel command multiple times']
]
]
answers = [
[], [], []
]
resources = [
LECTURE_9_SLIDES,
RESOURCES.labs.connecting_python_excel.pandas.msft_financials,
RESOURCES.examples.connecting_python_excel.pandas.read_write_excel_pandas,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
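# Minimal sketch for the first two exercises of this lab (not part of the course
# content); the file names follow the lab description, and an Excel engine such
# as openpyxl/xlrd is assumed to be installed for pandas.
def _example_read_write_excel():
    df = pd.read_excel('MSFT Financials.xls', sheet_name='Income Statement')
    df.to_excel('My Data.xlsx', sheet_name='Income Statement')
    for i in range(1, 6):  # one to_excel call in source, five workbooks written
        df.to_excel(f'My Data{i}.xlsx', sheet_name='Income Statement')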
def get_read_write_xlwings_lab_lecture() -> LabExerciseLecture:
title = 'Reading and Writing to Excel with xlwings'
short_title = 'Read Write xlwings Lab'
youtube_id = 'Jgjml7JnYwY'
week_covered = 9
bullets = [
[
['For all of the xlwings lab exercises, work with "xlwings Lab.xlsx".'],
            ['Use xlwings to read the values in column A and then write them beside',
             'the initial values in column B']
],
[
'Get the value in C9 and multiply it by 2.5 in Python',
],
[
'Read the table which starts in E4 into Python. Multiply the prices by 2.5, and then output '
'back into Excel starting in cell H5.',
'Ensure that the outputted table appears in the same format as the original (pay attention to '
'index and header)'
],
[
'In column L, write 5, 10, 15 ... 100 spaced two cells apart, so L1 would have 5, L4 would have 10, '
'and so on.'
]
]
answers = [
[], [], [], []
]
resources = [
LECTURE_9_SLIDES,
RESOURCES.labs.connecting_python_excel.xlwings.lab_xlsx,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
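# Minimal sketch for the single-cell exercise of this lab (not part of the course
# content); assumes xlwings is installed and "xlwings Lab.xlsx" is open or on disk.
def _example_xlwings_cell() -> float:
    import xlwings as xw
    book = xw.Book('xlwings Lab.xlsx')
    return book.sheets[0].range('C9').value * 2.5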
def get_intro_monte_carlo_lab_lecture() -> LabExerciseLecture:
title = 'Monte Carlo Simulation of DDM'
short_title = 'Intro Monte Carlo Lab'
youtube_id = 'ZR8AiEaOEJs'
week_covered = 10
bullets = [
[
'You are trying to determine the value of a mature company. The company has had stable dividend '
'growth for a long time so you select the dividend discount model (DDM).',
Equation(latex=r'P = \frac{d_1}{r_s - g}'),
r'The next dividend will be \$1, and your baseline estimates of the cost of capital and growth are '
r'9% and 4%, respectively',
'Write a function which is able to get the price based on values of the inputs',
'Then you are concerned about mis-estimation of the inputs and how it could affect the price. So then '
'assume that the growth rate has a mean of 4% but a standard deviation of 1%',
'Visualize and summarize the resulting probability distribution of the price'
],
[
'Continue from the first lab exercise',
'Now you are also concerned you have mis-estimated the cost of capital. So now use a mean of 9% and '
'standard deviation of 2%, in addition to varying the growth',
'Visualize and summarize the resulting probability distribution of the price',
'Be careful as in some cases, the drawn cost of capital will be lower than the drawn growth rate, '
'which breaks the DDM. You will need to modify your logic to throw out these cases.'
]
]
answers = [
[], [],
]
resources = [
LECTURE_10_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
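# Minimal sketch for the first exercise of this lab (not part of the course
# content): price the stock with the DDM and Monte Carlo the growth rate.
def _example_ddm_simulation(n_iter: int = 10_000) -> pd.Series:
    def ddm_price(d1: float, cost_of_equity: float, growth: float) -> float:
        return d1 / (cost_of_equity - growth)
    draws = [ddm_price(1, 0.09, np.random.normal(0.04, 0.01)) for _ in range(n_iter)]
    return pd.Series(draws)  # .hist() and .describe() to visualize and summarize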
def get_python_retirement_monte_carlo_lab_lecture() -> LabExerciseLecture:
title = 'Monte Carlo Simulation of Python Models'
short_title = 'Monte Carlo Python Lab'
youtube_id = 'CkfhvKfXR9k'
week_covered = 10
bullets = [
[
'Work off of your existing Project 1 Python model',
'You are concerned the NPV could be heavily affected by changes in the interest rate. '
'Instead of fixing it, draw it from a normal distribution with mean of 7% and standard deviation of 2%.',
'Run the model 10,000 times and collect the NPV results. Visualize the results. Create a '
'table of probabilities and the minimum NPV we could expect with that probability. Output '
'the chance that the NPV will be more than \\$400,000,000.'
],
[
"Continue from the first lab exercise. Now you are also concerned that your assembly line will not be "
"as efficient and so the number of phones per machine will be lower. So draw that from a normal "
"distribution with mean 100,000 and standard deviation of 20,000. ",
"As you run the model, also store what were the interest and number of phones corresponding "
"to the NPV. You want to see which has a greater impact on the NPV: "
"interest or number of phones. Visualize the relationship between interest and NPV, and "
"the relationship between number of phones and NPV. Also run a regression "
"to quantitatively determine which has a greater effect."
]
]
answers = [
[], [],
]
resources = [
LECTURE_10_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_excel_retirement_monte_carlo_lab_lecture() -> LabExerciseLecture:
title = 'Monte Carlo Simulation of Excel Models'
short_title = 'Monte Carlo Excel Lab'
youtube_id = 'xCMov82vyD4'
week_covered = 10
bullets = [
[
'You will be running Monte Carlo simulations on your existing Excel model from Project 1',
'You are concerned that your estimate for the number of phones that will be sold is incorrect. ',
'The number of phones should instead be drawn from a normal distribution with mean 100,000 and '
'standard deviation of 20,000.',
'Estimate the model 1,000 times and output the results back to Excel',
'In Excel, visualize the results. Create a '
'table of probabilities and the minimum NPV we could expect with that probability. Output '
r'the chance that the NPV will be more than \$400,000,000.'
],
[
"Continue from the first lab exercise. Now you are also concerned that there is varying quality "
"in the machines, so they may have a different lifespan. Draw that from a normal distribution with mean "
"10 years and standard deviation of 2 years.",
"As you run the model, also store what were the number of phones and machine life corresponding "
"to the NPV, all in Excel. You want to see which has a greater impact on the NPV: "
"number of phones or machine life. Visualize the relationship between number of phones and NPV, and "
"the relationship between beginning machine life and NPV. Try to determine which has a greater effect."
]
]
answers = [
[], [],
]
resources = [
LECTURE_10_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_enterprise_value_lab_lecture() -> LabExerciseLecture:
title = 'Finding Enterprise and Equity Value Given FCF and WACC'
short_title = 'Enterprise and Equity Value Lab'
youtube_id = 'iWEDRKSZx70'
week_covered = 11
bullets = [
[
'You are the CFO for a startup developing artificial intelligence technologies. There will be an '
'initial research phase before making any money. Google is watching your development and will purchase '
'the company after it is profitable.',
r'For the first two years (years 0 and 1), the company loses \$20 million. Then there is one breakeven year, after which '
r'the profit is \$10 million for year 3. Finally in year 4, Google purchases the company for \$70 million.',
'The WACC for the company is 15% and it has 1 million shares outstanding. The company has \$5 million '
'in debt and \$1 million in cash.',
'What is the enterprise value of the stock at year 4 before Google acquires the company? '
'What about the enterprise value today? '
'What is the price of the stock today?'
],
[
'A pharmaceutical company developed a new drug and has 4 years to sell it before the patent expires. '
'It forms a new company to manufacture and sell the drug. After 4 years, the company will be sold to '
'someone that wants to continue manufacturing at the lower price. The company is just about to pay a dividend.',
r'The new company pays a dividend of \$1 per share each year for years 0 to 3, before selling it for \$30 million in '
r'year 4.',
r'There are 10 million shares outstanding, \$10 million of debt and \$1 million of cash throughout the '
r'life of the company. The WACC is 10% today.',
'What is the enterprise value at year 4 and today? What is the price of the stock today?'
]
]
answers = [
[
r'The enterprise value at year 4 is \$70 million',
r'The enterprise value at year 0 is \$9.2 million',
r'The equity value at year 0 is \$5.21 million so the share price is \$5.21'
],
[
r'The enterprise value at year 4 is \$30 million',
r'The equity value at year 0 is \$48.5 million so the share price is \$4.85',
r'The enterprise value at year 0 is \$57.5 million',
]
]
resources = [
LECTURE_11_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
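# Worked check for the first exercise above (not part of the course content):
# discount the year 0-4 cash flows at the 15% WACC, then move from enterprise
# value to equity value with the debt and cash figures given in the exercise.
def _example_enterprise_value_check():
    wacc = 0.15
    cash_flows = [-20, -20, 0, 10, 70]  # $ millions, years 0-4; year 4 is the sale
    ev_today = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows))
    equity_today = ev_today - 5 + 1  # subtract debt, add cash ($ millions)
    price = equity_today / 1  # 1 million shares outstanding
    return ev_today, equity_today, price  # roughly 9.2, 5.2, and 5.21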
def get_dcf_cost_equity_lab_lecture() -> LabExerciseLecture:
title = 'Finding Cost of Equity Given Historical Prices'
short_title = 'DCF Cost of Equity Lab'
youtube_id = 'GRlIQDVznGE'
week_covered = 11
risk_free = 0.02
data_path = LAB_EXERCISES_PATH / 'DCF' / 'Cost of Equity' / 'prices.xlsx'
df = pd.read_excel(data_path)
returns = df.pct_change().dropna()
returns['MRP'] = returns['Market'] - risk_free
model = sm.OLS(returns['Asset'], sm.add_constant(returns['MRP']), hasconst=True)
results = model.fit()
beta = results.params['MRP']
market_return = returns['Market'].mean()
cost_of_equity = risk_free + beta * (market_return - risk_free)
recession_cost_of_equity = risk_free + beta * ((market_return - 0.03) - risk_free)
bullets = [
[
'Download "prices.xlsx" from the course site',
f'Assume the risk free rate is {risk_free:.0%}',
'What is the beta and the cost of equity for this company?',
'If you thought there was going to be a recession, such that the average market return would be '
'3% lower, then what would you expect the cost of equity to be?',
'Complete this exercise with the tool of your choice.'
],
]
answers = [
[
rf'The beta is {beta:.3f}',
rf'The cost of equity is {cost_of_equity:.2%}',
rf'The cost of equity in the recession is {recession_cost_of_equity:.2%}'
],
]
resources = [
LECTURE_11_SLIDES,
RESOURCES.labs.dcf.cost_of_equity.prices_xlsx,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_dcf_cost_debt_lab_lecture() -> LabExerciseLecture:
title = 'Finding Cost of Debt Given Financial and Market Info'
short_title = 'DCF Cost of Debt Lab'
youtube_id = 'ozWU9mIkXCM'
week_covered = 11
risk_free = 0.02
today = datetime.datetime.today().date()
bond_price = 1042.12
coupon_rate = 0.07
maturity_date = datetime.date(today.year + 3, today.month, today.day)
par_value = 1000
tax_rate = 0.35
    # Level 1 exercise
l1_pretax_cost_of_debt = np.irr(
[-bond_price] + [coupon_rate * par_value for _ in range(3 - 1)] + [(1 + coupon_rate) * par_value])
l1_aftertax_cost_of_debt = l1_pretax_cost_of_debt * (1 - tax_rate)
# Level 2 exercise
wmt_int_exp = 641
wmt_total_debt = 56396
wmt_ebt = 4913
wmt_tax_paid = 1233
wmt_tax_rate = wmt_tax_paid / wmt_ebt
wmt_pre_tax_cd = wmt_int_exp / wmt_total_debt
wmt_after_tax_cd = wmt_pre_tax_cd * (1 - wmt_tax_rate)
bullets = [
[
rf'A chemical manufacturer has a {coupon_rate:.1%} coupon, annual pay {par_value} par value bond outstanding, priced '
rf'at \${bond_price} on {today}.',
f'If the bond matures on {maturity_date}, what is the '
rf'cost of debt for this company? The tax rate is {tax_rate:.0%}.',
],
[
['Go to', Link(href='https://stockrow.com'),
"and search for WMT to get Walmart's financials. Calculate "
"the cost of debt for 2019-07-31 using the financial statements approach. Note that you will also "
"need to determine the effective tax rate using actual tax paid and EBT."]
],
]
answers = [
[
f'The pre-tax cost of debt for the chemical manufacturer is {l1_pretax_cost_of_debt:.2%}',
f'The after-tax cost of debt for the chemical manufacturer is {l1_aftertax_cost_of_debt:.2%}',
],
[
f'The pre-tax cost of debt for Walmart in 2019-07-31 is {wmt_pre_tax_cd:.2%}',
f'The after-tax cost of debt for Walmart in 2019-07-31 is {wmt_after_tax_cd:.2%}',
],
]
resources = [
LECTURE_11_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_fcf_calculation_lab_lecture() -> LabExerciseLecture:
title = 'Free Cash Flow Calculation'
short_title = 'Calculate FCF Lab'
youtube_id = 'zVTkT5p0SHs'
week_covered = 12
lab_1_inputs = dict(
adjustments=100,
change_ar=1000,
change_inventory=500,
change_ap=800,
change_ppe=2000,
dep_amort=200,
net_income=300
)
lab_1_nwc = lab_1_inputs['change_ar'] + lab_1_inputs['change_inventory'] - lab_1_inputs['change_ap']
lab_1_capex = lab_1_inputs['change_ppe'] + lab_1_inputs['dep_amort']
lab_1_fcf = lab_1_inputs['net_income'] + lab_1_inputs['adjustments'] - lab_1_nwc - lab_1_capex
stmt_folder = LAB_EXERCISES_PATH / 'DCF' / 'FCF'
bs_path = os.path.join(stmt_folder, 'WMT Balance Sheet.xlsx')
inc_path = os.path.join(stmt_folder, 'WMT Income Statement.xlsx')
bs_df = pd.read_excel(bs_path, index_col=0)
inc_df = pd.read_excel(inc_path, index_col=0)
bs_data = BalanceSheets.from_df(bs_df)
inc_data = IncomeStatements.from_df(inc_df)
stmts = FinancialStatements(inc_data, bs_data)
lab_2_date_1 = '2019-04-30'
lab_2_date_2 = '2019-07-31'
bullets = [
[
'Calculate free cash flow from the following information:',
f"Net income is {lab_1_inputs['net_income']}, the total of non-cash expenditures is "
f"{lab_1_inputs['adjustments']}, "
f"the changes in accounts receivable, inventory, accounts payable, and PPE are {lab_1_inputs['change_ar']}, "
f"{lab_1_inputs['change_inventory']}, {lab_1_inputs['change_ap']}, and {lab_1_inputs['change_ppe']}, "
f"and depreciation & amortization is {lab_1_inputs['dep_amort']}."
],
[
'Load in the income statement and balance sheet data associated with Project 3, "WMT Balance Sheet.xlsx" '
'and "WMT Income Statement.xlsx"',
'Calculate the free cash flows from these data. Note that some items are missing in these data such as '
'depreciation. You will just need to exclude any missing items from your calculation',
f'Get the FCFs for {lab_2_date_1} and {lab_2_date_2}.'
]
]
answers = [
[
fr'The NWC is \${lab_1_nwc:,.0f}',
fr'The CapEx is \${lab_1_capex:,.0f}',
fr'The FCF is \${lab_1_fcf:,.0f}'
],
[
fr'The FCF for {lab_2_date_1} is \${stmts.fcf[lab_2_date_1]:,.0f}',
fr'The FCF for {lab_2_date_2} is \${stmts.fcf[lab_2_date_2]:,.0f}',
]
]
resources = [
LECTURE_12_SLIDES,
RESOURCES.labs.dcf.fcf.wmt_balance_sheet,
RESOURCES.labs.dcf.fcf.wmt_income_statement,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_simple_forecast_lab_lecture() -> LabExerciseLecture:
title = 'Forecasting Simple Time-Series'
short_title = 'Simple Forecast Lab'
youtube_id = '9td30aTGAN0'
week_covered = 12
# NOTE: to get answers, ran Forecast Sales COGS simple but loading in these data instead
bullets = [
[
['Go to', COURSE_SITE, 'and download "Debt Interest.xlsx"'],
'Forecast the next value of total debt using trend regression approach',
'Forecast the next value of interest using the four approaches (average, recent, trend, CAGR)',
'Forecast the next value of interest using the % of total debt method, with the percentages forecasted '
'using the four approaches (average, recent, trend, CAGR)',
],
]
answers = [
[
r'The forecasted value of total debt should be \$6,867',
r'The directly forecasted values of interest should be \$1,600, \$1,900, \$2,300, and \$2,391, '
r'for average, recent, trend, CAGR, respectively',
r'The % of debt forecasted values of interest should be \$2,072, \$2,139, \$2,379, and \$2,312, '
r'for average, recent, trend, CAGR, respectively',
],
]
resources = [
LECTURE_12_SLIDES,
RESOURCES.labs.dcf.forecasting.simple.debt_interest,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_complex_forecast_lab_lecture() -> LabExerciseLecture:
title = 'Forecasting Complex Time-Series'
short_title = 'Complex Forecast Lab'
youtube_id = 'eX3GdQ530gE'
week_covered = 13
# NOTE: to get answers, ran Forecast Quarterly Financial Statements.ipynb but loading in these data instead
bullets = [
[
['Go to', COURSE_SITE, 'and download "CAT Balance Sheet.xlsx" and "CAT Income Statement.xlsx"'],
'Forecast the next four periods (one year) of cash using both the Quarterly Seasonal Trend Model and '
'the automated software approach.',
'Plot both forecasts to see how they worked.'
],
]
answers = [
[
r'The forecasted values of cash using the Quarterly Seasonal Trend Model should be \$8,454,920,455, '
r'\$8,833,593,182, \$8,869,693,182, and \$10,251,393,182',
r'The forecasted values of cash using the automated approach should be \$8,071,641,657, \$8,185,822,286, '
r'\$9,132,093,865, and \$9,502,395,879'
],
]
resources = [
LECTURE_12_SLIDES,
RESOURCES.labs.dcf.forecasting.complex.cat_balance_sheet,
RESOURCES.labs.dcf.forecasting.complex.cat_income_statement,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_dcf_tv_lab_lecture() -> LabExerciseLecture:
title = 'DCF Stock Price using Terminal Values'
short_title = 'Terminal Values Lab'
youtube_id = 'KuI96M7Syqs'
week_covered = 13
ev_ebitda = 18.58
ev_sales = 1.92
ev_fcf = 11.82
pe = 39.30
ebitda = 1500
sales = 7898
shrout = 561
fcf = 2.36 * shrout
earnings = 232
debt = 11631
cash = 4867
wacc = 0.1
growth = 0.03
def p_from_ev(ev):
current_ev = np.npv(wacc, [0] + [fcf] * 4 + [fcf + ev])
equity_value = current_ev - debt + cash
return equity_value / shrout
ev_from_ebitda = ev_ebitda * ebitda
p_from_ebitda = p_from_ev(ev_from_ebitda)
ev_from_sales = ev_sales * sales
p_from_sales = p_from_ev(ev_from_sales)
ev_from_fcf = ev_fcf * fcf
p_from_fcf = p_from_ev(ev_from_fcf)
eps = earnings / shrout
tv_p_from_pe = pe * eps
eq_from_pe = tv_p_from_pe * shrout
ev_from_pe = eq_from_pe + debt - cash
p_from_pe = p_from_ev(ev_from_pe)
ev_from_perp = (fcf * (1 + growth)) / (wacc - growth)
p_from_perp = p_from_ev(ev_from_perp)
bullets = [
[
'Calculate possible stock prices today for a hypothetical company. Use EV/EBITDA, EV/Sales, EV/FCF, and P/E '
'and the perpetuity growth method to determine five different possible terminal values. '
'You have already determined that the next 5 years '
fr'FCFs will be \${fcf:,.0f}M in each year. ',
fr'EV/EBITDA is {ev_ebitda:.2f}, EV/Sales is {ev_sales:.2f}, EV/FCF is {ev_fcf:.2f}, and P/E is {pe:.2f}.',
fr'Final period forecasted financial statement values are as follows: EBITDA is \${ebitda:,.0f}M, '
fr'sales is \${sales:,.0f}M, and net income is \${earnings:,.0f}M',
fr'Total debt is \${debt:,.0f}M, and '
fr'cash is \${cash:,.0f}M, both current and final period forecasted',
fr'Shares outstanding is \${shrout:,.0f}M and WACC is {wacc:.1%} for the entire time period',
f'The terminal growth rate is {growth:.1%}',
'You can assume the next free cash flow is one year away.'
],
]
answers = [
[
'The stock prices using the five methods are as follows:',
fr'EV/EBITDA: \${p_from_ebitda:.2f}',
fr'EV/Sales: \${p_from_sales:.2f}',
fr'EV/FCF: \${p_from_fcf:.2f}',
fr'P/E: \${p_from_pe:.2f}',
fr'Perpetuity Growth: \${p_from_perp:.2f}',
],
]
resources = [
LECTURE_12_SLIDES,
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
def get_lab_lecture() -> LabExerciseLecture:
title = ''
short_title = title
youtube_id = ''
week_covered = 0
bullets = [
[
]
]
answers = [
[
]
]
resources = [
]
return LabExerciseLecture.from_seq_of_seq(
title, bullet_content=bullets, answers_content=answers, short_title=short_title,
youtube_id=youtube_id, resources=resources, week_covered=week_covered,
)
|
PypiClean
|
/pulumi_azure_native-2.5.1a1693590910.tar.gz/pulumi_azure_native-2.5.1a1693590910/pulumi_azure_native/healthbot/v20230501/get_bot.py
|
import copy
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
__all__ = [
'GetBotResult',
'AwaitableGetBotResult',
'get_bot',
'get_bot_output',
]
@pulumi.output_type
class GetBotResult:
"""
Azure Health Bot resource definition
"""
def __init__(__self__, id=None, identity=None, location=None, name=None, properties=None, sku=None, system_data=None, tags=None, type=None):
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if identity and not isinstance(identity, dict):
raise TypeError("Expected argument 'identity' to be a dict")
pulumi.set(__self__, "identity", identity)
if location and not isinstance(location, str):
raise TypeError("Expected argument 'location' to be a str")
pulumi.set(__self__, "location", location)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if properties and not isinstance(properties, dict):
raise TypeError("Expected argument 'properties' to be a dict")
pulumi.set(__self__, "properties", properties)
if sku and not isinstance(sku, dict):
raise TypeError("Expected argument 'sku' to be a dict")
pulumi.set(__self__, "sku", sku)
if system_data and not isinstance(system_data, dict):
raise TypeError("Expected argument 'system_data' to be a dict")
pulumi.set(__self__, "system_data", system_data)
if tags and not isinstance(tags, dict):
raise TypeError("Expected argument 'tags' to be a dict")
pulumi.set(__self__, "tags", tags)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def id(self) -> str:
"""
Fully qualified resource Id for the resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def identity(self) -> Optional['outputs.IdentityResponse']:
"""
The identity of the Azure Health Bot.
"""
return pulumi.get(self, "identity")
@property
@pulumi.getter
def location(self) -> str:
"""
The geo-location where the resource lives
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the resource
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def properties(self) -> 'outputs.HealthBotPropertiesResponse':
"""
The set of properties specific to Azure Health Bot resource.
"""
return pulumi.get(self, "properties")
@property
@pulumi.getter
def sku(self) -> 'outputs.SkuResponse':
"""
SKU of the Azure Health Bot.
"""
return pulumi.get(self, "sku")
@property
@pulumi.getter(name="systemData")
def system_data(self) -> 'outputs.SystemDataResponse':
"""
Metadata pertaining to creation and last modification of the resource
"""
return pulumi.get(self, "system_data")
@property
@pulumi.getter
def tags(self) -> Optional[Mapping[str, str]]:
"""
Resource tags.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the resource.
"""
return pulumi.get(self, "type")
class AwaitableGetBotResult(GetBotResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetBotResult(
id=self.id,
identity=self.identity,
location=self.location,
name=self.name,
properties=self.properties,
sku=self.sku,
system_data=self.system_data,
tags=self.tags,
type=self.type)
def get_bot(bot_name: Optional[str] = None,
resource_group_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetBotResult:
"""
Get a HealthBot.
:param str bot_name: The name of the Bot resource.
:param str resource_group_name: The name of the Bot resource group in the user subscription.
"""
__args__ = dict()
__args__['botName'] = bot_name
__args__['resourceGroupName'] = resource_group_name
opts = pulumi.InvokeOptions.merge(_utilities.get_invoke_opts_defaults(), opts)
__ret__ = pulumi.runtime.invoke('azure-native:healthbot/v20230501:getBot', __args__, opts=opts, typ=GetBotResult).value
return AwaitableGetBotResult(
id=pulumi.get(__ret__, 'id'),
identity=pulumi.get(__ret__, 'identity'),
location=pulumi.get(__ret__, 'location'),
name=pulumi.get(__ret__, 'name'),
properties=pulumi.get(__ret__, 'properties'),
sku=pulumi.get(__ret__, 'sku'),
system_data=pulumi.get(__ret__, 'system_data'),
tags=pulumi.get(__ret__, 'tags'),
type=pulumi.get(__ret__, 'type'))
@_utilities.lift_output_func(get_bot)
def get_bot_output(bot_name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetBotResult]:
"""
Get a HealthBot.
:param str bot_name: The name of the Bot resource.
:param str resource_group_name: The name of the Bot resource group in the user subscription.
"""
...
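# Minimal usage sketch (not part of the generated module): inside a Pulumi
# program, look up an existing Azure Health Bot; the bot and resource group
# names below are hypothetical.
#
#     bot = get_bot(bot_name="my-healthbot", resource_group_name="my-rg")
#     pulumi.export("botLocation", bot.location)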
|
PypiClean
|
/whisparr_py-0.3.0-py3-none-any.whl/whisparr/api/collection_api.py
|
from __future__ import absolute_import
import re # noqa: F401
from pydantic import validate_arguments, ValidationError
from typing_extensions import Annotated
from pydantic import StrictInt, StrictStr
from typing import List, Optional
from whisparr.models.collection_resource import CollectionResource
from whisparr.models.collection_update_resource import CollectionUpdateResource
from whisparr.api_client import ApiClient
from whisparr.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class CollectionApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient.get_default()
self.api_client = api_client
@validate_arguments
def get_collection_by_id(self, id : StrictInt, **kwargs) -> CollectionResource: # noqa: E501
"""get_collection_by_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_collection_by_id(id, async_req=True)
>>> result = thread.get()
:param id: (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: CollectionResource
"""
kwargs['_return_http_data_only'] = True
return self.get_collection_by_id_with_http_info(id, **kwargs) # noqa: E501
@validate_arguments
def get_collection_by_id_with_http_info(self, id : StrictInt, **kwargs): # noqa: E501
"""get_collection_by_id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_collection_by_id_with_http_info(id, async_req=True)
>>> result = thread.get()
:param id: (required)
:type id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without the HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:type _content_type: string, optional: force content-type for the request
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(CollectionResource, status_code(int), headers(HTTPHeaderDict))
"""
_params = locals()
_all_params = [
'id'
]
_all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth',
'_content_type',
'_headers'
]
)
# validate the arguments
for _key, _val in _params['kwargs'].items():
if _key not in _all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_collection_by_id" % _key
)
_params[_key] = _val
del _params['kwargs']
_collection_formats = {}
# process the path parameters
_path_params = {}
if _params['id']:
_path_params['id'] = _params['id']
# process the query parameters
_query_params = []
# process the header parameters
_header_params = dict(_params.get('_headers', {}))
# process the form parameters
_form_params = []
_files = {}
# process the body parameter
_body_params = None
# set the HTTP header `Accept`
_header_params['Accept'] = self.api_client.select_header_accept(
['text/plain', 'application/json', 'text/json']) # noqa: E501
# authentication setting
_auth_settings = ['apikey', 'X-Api-Key'] # noqa: E501
_response_types_map = {
'200': "CollectionResource",
}
return self.api_client.call_api(
'/api/v3/collection/{id}', 'GET',
_path_params,
_query_params,
_header_params,
body=_body_params,
post_params=_form_params,
files=_files,
response_types_map=_response_types_map,
auth_settings=_auth_settings,
async_req=_params.get('async_req'),
_return_http_data_only=_params.get('_return_http_data_only'), # noqa: E501
_preload_content=_params.get('_preload_content', True),
_request_timeout=_params.get('_request_timeout'),
collection_formats=_collection_formats,
_request_auth=_params.get('_request_auth'))
@validate_arguments
def list_collection(self, tmdb_id : Optional[StrictInt] = None, **kwargs) -> List[CollectionResource]: # noqa: E501
"""list_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_collection(tmdb_id, async_req=True)
>>> result = thread.get()
:param tmdb_id:
:type tmdb_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: List[CollectionResource]
"""
kwargs['_return_http_data_only'] = True
return self.list_collection_with_http_info(tmdb_id, **kwargs) # noqa: E501
@validate_arguments
def list_collection_with_http_info(self, tmdb_id : Optional[StrictInt] = None, **kwargs): # noqa: E501
"""list_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_collection_with_http_info(tmdb_id, async_req=True)
>>> result = thread.get()
:param tmdb_id:
:type tmdb_id: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without the HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:type _content_type: string, optional: force content-type for the request
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(List[CollectionResource], status_code(int), headers(HTTPHeaderDict))
"""
_params = locals()
_all_params = [
'tmdb_id'
]
_all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth',
'_content_type',
'_headers'
]
)
# validate the arguments
for _key, _val in _params['kwargs'].items():
if _key not in _all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method list_collection" % _key
)
_params[_key] = _val
del _params['kwargs']
_collection_formats = {}
# process the path parameters
_path_params = {}
# process the query parameters
_query_params = []
if _params.get('tmdb_id') is not None: # noqa: E501
_query_params.append(('tmdbId', _params['tmdb_id']))
# process the header parameters
_header_params = dict(_params.get('_headers', {}))
# process the form parameters
_form_params = []
_files = {}
# process the body parameter
_body_params = None
# set the HTTP header `Accept`
_header_params['Accept'] = self.api_client.select_header_accept(
['text/plain', 'application/json', 'text/json']) # noqa: E501
# authentication setting
_auth_settings = ['apikey', 'X-Api-Key'] # noqa: E501
_response_types_map = {
'200': "List[CollectionResource]",
}
return self.api_client.call_api(
'/api/v3/collection', 'GET',
_path_params,
_query_params,
_header_params,
body=_body_params,
post_params=_form_params,
files=_files,
response_types_map=_response_types_map,
auth_settings=_auth_settings,
async_req=_params.get('async_req'),
_return_http_data_only=_params.get('_return_http_data_only'), # noqa: E501
_preload_content=_params.get('_preload_content', True),
_request_timeout=_params.get('_request_timeout'),
collection_formats=_collection_formats,
_request_auth=_params.get('_request_auth'))
@validate_arguments
def put_collection(self, collection_update_resource : Optional[CollectionUpdateResource] = None, **kwargs) -> None: # noqa: E501
"""put_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_collection(collection_update_resource, async_req=True)
>>> result = thread.get()
:param collection_update_resource:
:type collection_update_resource: CollectionUpdateResource
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
kwargs['_return_http_data_only'] = True
return self.put_collection_with_http_info(collection_update_resource, **kwargs) # noqa: E501
@validate_arguments
def put_collection_with_http_info(self, collection_update_resource : Optional[CollectionUpdateResource] = None, **kwargs): # noqa: E501
"""put_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_collection_with_http_info(collection_update_resource, async_req=True)
>>> result = thread.get()
:param collection_update_resource:
:type collection_update_resource: CollectionUpdateResource
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without the HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:type _content_type: string, optional: force content-type for the request
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: None
"""
_params = locals()
_all_params = [
'collection_update_resource'
]
_all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth',
'_content_type',
'_headers'
]
)
# validate the arguments
for _key, _val in _params['kwargs'].items():
if _key not in _all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method put_collection" % _key
)
_params[_key] = _val
del _params['kwargs']
_collection_formats = {}
# process the path parameters
_path_params = {}
# process the query parameters
_query_params = []
# process the header parameters
_header_params = dict(_params.get('_headers', {}))
# process the form parameters
_form_params = []
_files = {}
# process the body parameter
_body_params = None
if _params['collection_update_resource']:
_body_params = _params['collection_update_resource']
# set the HTTP header `Content-Type`
_content_types_list = _params.get('_content_type',
self.api_client.select_header_content_type(
['application/json', 'text/json', 'application/*+json']))
if _content_types_list:
_header_params['Content-Type'] = _content_types_list
# authentication setting
_auth_settings = ['apikey', 'X-Api-Key'] # noqa: E501
_response_types_map = {}
return self.api_client.call_api(
'/api/v3/collection', 'PUT',
_path_params,
_query_params,
_header_params,
body=_body_params,
post_params=_form_params,
files=_files,
response_types_map=_response_types_map,
auth_settings=_auth_settings,
async_req=_params.get('async_req'),
_return_http_data_only=_params.get('_return_http_data_only'), # noqa: E501
_preload_content=_params.get('_preload_content', True),
_request_timeout=_params.get('_request_timeout'),
collection_formats=_collection_formats,
_request_auth=_params.get('_request_auth'))
@validate_arguments
def update_collection(self, id : StrictStr, collection_resource : Optional[CollectionResource] = None, **kwargs) -> CollectionResource: # noqa: E501
"""update_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_collection(id, collection_resource, async_req=True)
>>> result = thread.get()
:param id: (required)
:type id: str
:param collection_resource:
:type collection_resource: CollectionResource
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: CollectionResource
"""
kwargs['_return_http_data_only'] = True
return self.update_collection_with_http_info(id, collection_resource, **kwargs) # noqa: E501
@validate_arguments
def update_collection_with_http_info(self, id : StrictStr, collection_resource : Optional[CollectionResource] = None, **kwargs): # noqa: E501
"""update_collection # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_collection_with_http_info(id, collection_resource, async_req=True)
>>> result = thread.get()
:param id: (required)
:type id: str
:param collection_resource:
:type collection_resource: CollectionResource
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
        :param _return_http_data_only: response data without the HTTP status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
        :param _request_auth: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:type _content_type: string, optional: force content-type for the request
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(CollectionResource, status_code(int), headers(HTTPHeaderDict))
"""
_params = locals()
_all_params = [
'id',
'collection_resource'
]
_all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth',
'_content_type',
'_headers'
]
)
# validate the arguments
for _key, _val in _params['kwargs'].items():
if _key not in _all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_collection" % _key
)
_params[_key] = _val
del _params['kwargs']
_collection_formats = {}
# process the path parameters
_path_params = {}
if _params['id']:
_path_params['id'] = _params['id']
# process the query parameters
_query_params = []
# process the header parameters
_header_params = dict(_params.get('_headers', {}))
# process the form parameters
_form_params = []
_files = {}
# process the body parameter
_body_params = None
if _params['collection_resource']:
_body_params = _params['collection_resource']
# set the HTTP header `Accept`
_header_params['Accept'] = self.api_client.select_header_accept(
['text/plain', 'application/json', 'text/json']) # noqa: E501
# set the HTTP header `Content-Type`
_content_types_list = _params.get('_content_type',
self.api_client.select_header_content_type(
['application/json', 'text/json', 'application/*+json']))
if _content_types_list:
_header_params['Content-Type'] = _content_types_list
# authentication setting
_auth_settings = ['apikey', 'X-Api-Key'] # noqa: E501
_response_types_map = {
'200': "CollectionResource",
}
return self.api_client.call_api(
'/api/v3/collection/{id}', 'PUT',
_path_params,
_query_params,
_header_params,
body=_body_params,
post_params=_form_params,
files=_files,
response_types_map=_response_types_map,
auth_settings=_auth_settings,
async_req=_params.get('async_req'),
_return_http_data_only=_params.get('_return_http_data_only'), # noqa: E501
_preload_content=_params.get('_preload_content', True),
_request_timeout=_params.get('_request_timeout'),
collection_formats=_collection_formats,
_request_auth=_params.get('_request_auth'))
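# A minimal usage sketch (not part of the generated client). It assumes the package exposes
# the usual OpenAPI-generator `Configuration` class, that the API key is supplied via the
# `X-Api-Key` auth setting declared above, and that CollectionResource has an `id` field;
# the host and key values are placeholders.
def _example_collection_calls(host: str, api_key: str):
    from whisparr.configuration import Configuration
    configuration = Configuration(host=host)
    configuration.api_key['X-Api-Key'] = api_key
    api = CollectionApi(ApiClient(configuration))
    collections = api.list_collection()  # GET /api/v3/collection
    if collections:
        return api.get_collection_by_id(collections[0].id)  # GET /api/v3/collection/{id}
    return None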
|
PypiClean
|
/mlflow_tmp-2.2.26-py3-none-any.whl/mlflow/store/tracking/sqlalchemy_store.py
|
import json
import logging
import random
import time
import uuid
import threading
from functools import reduce
import math
import sqlalchemy
from sqlalchemy import sql
from sqlalchemy.future import select
from mlflow.entities import RunTag, Metric
from mlflow.entities.lifecycle_stage import LifecycleStage
from mlflow.store.tracking import SEARCH_MAX_RESULTS_DEFAULT, SEARCH_MAX_RESULTS_THRESHOLD
from mlflow.store.db.db_types import MYSQL, MSSQL
import mlflow.store.db.utils
from mlflow.store.tracking.dbmodels.models import (
SqlExperiment,
SqlRun,
SqlMetric,
SqlParam,
SqlTag,
SqlExperimentTag,
SqlLatestMetric,
)
from mlflow.store.db.base_sql_model import Base
from mlflow.entities import RunStatus, SourceType, Experiment
from mlflow.store.tracking.abstract_store import AbstractStore
from mlflow.store.entities.paged_list import PagedList
from mlflow.entities import ViewType
from mlflow.exceptions import MlflowException
from mlflow.protos.databricks_pb2 import (
INVALID_PARAMETER_VALUE,
RESOURCE_ALREADY_EXISTS,
INVALID_STATE,
RESOURCE_DOES_NOT_EXIST,
INTERNAL_ERROR,
)
from mlflow.utils.name_utils import _generate_random_name
from mlflow.utils.uri import is_local_uri, extract_db_type_from_uri, resolve_uri_if_local
from mlflow.utils.file_utils import mkdir, local_file_uri_to_path
from mlflow.utils.search_utils import SearchUtils, SearchExperimentsUtils
from mlflow.utils.string_utils import is_string_type
from mlflow.utils.uri import append_to_uri_path
from mlflow.utils.validation import (
_validate_batch_log_limits,
_validate_batch_log_data,
_validate_run_id,
_validate_metric,
_validate_experiment_tag,
_validate_tag,
_validate_param_keys_unique,
_validate_param,
_validate_experiment_name,
)
from mlflow.utils.mlflow_tags import MLFLOW_LOGGED_MODELS, MLFLOW_RUN_NAME, _get_run_name_from_tags
from mlflow.utils.time_utils import get_current_time_millis
_logger = logging.getLogger(__name__)
# For each database table, fetch its columns and define an appropriate attribute for each column
# on the table's associated object representation (Mapper). This is necessary to ensure that
# columns defined via backreference are available as Mapper instance attributes (e.g.,
# ``SqlExperiment.tags`` and ``SqlRun.params``). For more information, see
# https://docs.sqlalchemy.org/en/latest/orm/mapping_api.html#sqlalchemy.orm.configure_mappers
# and https://docs.sqlalchemy.org/en/latest/orm/mapping_api.html#sqlalchemy.orm.mapper.Mapper
sqlalchemy.orm.configure_mappers()
class SqlAlchemyStore(AbstractStore):
"""
SQLAlchemy compliant backend store for tracking meta data for MLflow entities. MLflow
supports the database dialects ``mysql``, ``mssql``, ``sqlite``, and ``postgresql``.
As specified in the
`SQLAlchemy docs <https://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls>`_ ,
the database URI is expected in the format
``<dialect>+<driver>://<username>:<password>@<host>:<port>/<database>``. If you do not
specify a driver, SQLAlchemy uses a dialect's default driver.
This store interacts with SQL store using SQLAlchemy abstractions defined for MLflow entities.
:py:class:`mlflow.store.dbmodels.models.SqlExperiment`,
:py:class:`mlflow.store.dbmodels.models.SqlRun`,
:py:class:`mlflow.store.dbmodels.models.SqlTag`,
:py:class:`mlflow.store.dbmodels.models.SqlMetric`, and
:py:class:`mlflow.store.dbmodels.models.SqlParam`.
Run artifacts are stored in a separate location using artifact stores conforming to
:py:class:`mlflow.store.artifact_repo.ArtifactRepository`. Default artifact locations for
user experiments are stored in the database along with metadata. Each run artifact location
is recorded in :py:class:`mlflow.store.dbmodels.models.SqlRun` and stored in the backend DB.
"""
ARTIFACTS_FOLDER_NAME = "artifacts"
DEFAULT_EXPERIMENT_ID = "0"
_db_uri_sql_alchemy_engine_map = {}
_db_uri_sql_alchemy_engine_map_lock = threading.Lock()
def __init__(self, db_uri, default_artifact_root):
"""
Create a database backed store.
:param db_uri: The SQLAlchemy database URI string to connect to the database. See
the `SQLAlchemy docs
<https://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls>`_
for format specifications. Mlflow supports the dialects ``mysql``,
``mssql``, ``sqlite``, and ``postgresql``.
:param default_artifact_root: Path/URI to location suitable for large data (such as a blob
store object, DBFS path, or shared NFS file system).
"""
super().__init__()
self.db_uri = db_uri
self.db_type = extract_db_type_from_uri(db_uri)
self.artifact_root_uri = resolve_uri_if_local(default_artifact_root)
# Quick check to see if the respective SQLAlchemy database engine has already been created.
if db_uri not in SqlAlchemyStore._db_uri_sql_alchemy_engine_map:
with SqlAlchemyStore._db_uri_sql_alchemy_engine_map_lock:
# Repeat check to prevent race conditions where one thread checks for an existing
# engine while another is creating the respective one, resulting in multiple
# engines being created. It isn't combined with the above check to prevent
# inefficiency from multiple threads waiting for the lock to check for engine
# existence if it has already been created.
if db_uri not in SqlAlchemyStore._db_uri_sql_alchemy_engine_map:
SqlAlchemyStore._db_uri_sql_alchemy_engine_map[
db_uri
] = mlflow.store.db.utils.create_sqlalchemy_engine_with_retry(db_uri)
self.engine = SqlAlchemyStore._db_uri_sql_alchemy_engine_map[db_uri]
# On a completely fresh MLflow installation against an empty database (verify database
# emptiness by checking that 'experiments' etc aren't in the list of table names), run all
# DB migrations
if not mlflow.store.db.utils._all_tables_exist(self.engine):
mlflow.store.db.utils._initialize_tables(self.engine)
Base.metadata.bind = self.engine
SessionMaker = sqlalchemy.orm.sessionmaker(bind=self.engine)
self.ManagedSessionMaker = mlflow.store.db.utils._get_managed_session_maker(
SessionMaker, self.db_type
)
mlflow.store.db.utils._verify_schema(self.engine)
if is_local_uri(default_artifact_root):
mkdir(local_file_uri_to_path(default_artifact_root))
if len(self.search_experiments(view_type=ViewType.ALL)) == 0:
with self.ManagedSessionMaker() as session:
self._create_default_experiment(session)
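    # The engine cache above uses double-checked locking: probe the map without the lock,
    # and only when the URI is missing take the lock and check again before creating the
    # engine. A generic sketch of the same pattern (illustrative only, not part of this
    # class):
    #
    #     if key not in _cache:              # cheap, lock-free fast path
    #         with _cache_lock:              # slow path: serialize creators
    #             if key not in _cache:      # re-check: another thread may have won the race
    #                 _cache[key] = expensive_create(key)
    #     value = _cache[key]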
def _get_dialect(self):
return self.engine.dialect.name
def _dispose_engine(self):
self.engine.dispose()
def _set_zero_value_insertion_for_autoincrement_column(self, session):
if self.db_type == MYSQL:
# config letting MySQL override default
# to allow 0 value for experiment ID (auto increment column)
session.execute(sql.text("SET @@SESSION.sql_mode='NO_AUTO_VALUE_ON_ZERO';"))
if self.db_type == MSSQL:
# config letting MSSQL override default
# to allow any manual value inserted into IDENTITY column
session.execute(sql.text("SET IDENTITY_INSERT experiments ON;"))
# DB helper methods to allow zero values for columns with auto increments
def _unset_zero_value_insertion_for_autoincrement_column(self, session):
if self.db_type == MYSQL:
session.execute(sql.text("SET @@SESSION.sql_mode='';"))
if self.db_type == MSSQL:
session.execute(sql.text("SET IDENTITY_INSERT experiments OFF;"))
def _create_default_experiment(self, session):
"""
MLflow UI and client code expects a default experiment with ID 0.
        This method uses a raw SQL insert statement to create the default experiment as a hack,
        since the experiment table's 'experiment_id' column is a primary key that is also set
        to auto increment. MySQL and other implementations do not allow the value '0' in such
        cases.
ToDo: Identify a less hacky mechanism to create default experiment 0
"""
table = SqlExperiment.__tablename__
creation_time = get_current_time_millis()
default_experiment = {
SqlExperiment.experiment_id.name: int(SqlAlchemyStore.DEFAULT_EXPERIMENT_ID),
SqlExperiment.name.name: Experiment.DEFAULT_EXPERIMENT_NAME,
SqlExperiment.artifact_location.name: str(self._get_artifact_location(0)),
SqlExperiment.lifecycle_stage.name: LifecycleStage.ACTIVE,
SqlExperiment.creation_time.name: creation_time,
SqlExperiment.last_update_time.name: creation_time,
}
def decorate(s):
if is_string_type(s):
return repr(s)
else:
return str(s)
# Get a list of keys to ensure we have a deterministic ordering
columns = list(default_experiment.keys())
values = ", ".join([decorate(default_experiment.get(c)) for c in columns])
try:
self._set_zero_value_insertion_for_autoincrement_column(session)
session.execute(
sql.text(f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({values});")
)
finally:
self._unset_zero_value_insertion_for_autoincrement_column(session)
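    # For illustration, against an empty SQLite backend the statement built above would look
    # roughly like the following (column order follows the dict above; values are examples):
    #
    #     INSERT INTO experiments (experiment_id, name, artifact_location, lifecycle_stage,
    #                              creation_time, last_update_time)
    #     VALUES (0, 'Default', './mlruns/0', 'active', 1700000000000, 1700000000000);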
def _save_to_db(self, session, objs):
"""
Store in db
"""
if type(objs) is list:
session.add_all(objs)
else:
# single object
session.add(objs)
def _get_or_create(self, session, model, **kwargs):
instance = session.query(model).filter_by(**kwargs).first()
created = False
if instance:
return instance, created
else:
instance = model(**kwargs)
self._save_to_db(objs=instance, session=session)
created = True
return instance, created
def _get_artifact_location(self, experiment_id):
return append_to_uri_path(self.artifact_root_uri, str(experiment_id))
def create_experiment(self, name, artifact_location=None, tags=None):
_validate_experiment_name(name)
if artifact_location:
artifact_location = resolve_uri_if_local(artifact_location)
with self.ManagedSessionMaker() as session:
try:
creation_time = get_current_time_millis()
experiment = SqlExperiment(
name=name,
lifecycle_stage=LifecycleStage.ACTIVE,
artifact_location=artifact_location,
creation_time=creation_time,
last_update_time=creation_time,
)
experiment.tags = (
[SqlExperimentTag(key=tag.key, value=tag.value) for tag in tags] if tags else []
)
session.add(experiment)
if not artifact_location:
# this requires a double write. The first one to generate an autoincrement-ed ID
eid = session.query(SqlExperiment).filter_by(name=name).first().experiment_id
experiment.artifact_location = self._get_artifact_location(eid)
except sqlalchemy.exc.IntegrityError as e:
raise MlflowException(
"Experiment(name={}) already exists. Error: {}".format(name, str(e)),
RESOURCE_ALREADY_EXISTS,
)
session.flush()
return str(experiment.experiment_id)
def _search_experiments(
self,
view_type,
max_results,
filter_string,
order_by,
page_token,
):
def compute_next_token(current_size):
next_token = None
if max_results + 1 == current_size:
final_offset = offset + max_results
next_token = SearchExperimentsUtils.create_page_token(final_offset)
return next_token
if not isinstance(max_results, int) or max_results < 1:
raise MlflowException(
"Invalid value for max_results. It must be a positive integer,"
f" but got {max_results}",
INVALID_PARAMETER_VALUE,
)
if max_results > SEARCH_MAX_RESULTS_THRESHOLD:
raise MlflowException(
f"Invalid value for max_results. It must be at most {SEARCH_MAX_RESULTS_THRESHOLD},"
f" but got {max_results}",
INVALID_PARAMETER_VALUE,
)
with self.ManagedSessionMaker() as session:
parsed_filters = SearchExperimentsUtils.parse_search_filter(filter_string)
attribute_filters, non_attribute_filters = _get_search_experiments_filter_clauses(
parsed_filters, self._get_dialect()
)
order_by_clauses = _get_search_experiments_order_by_clauses(order_by)
offset = SearchUtils.parse_start_offset_from_page_token(page_token)
            lifecycle_stages = set(LifecycleStage.view_type_to_stages(view_type))
stmt = (
reduce(lambda s, f: s.join(f), non_attribute_filters, select(SqlExperiment))
.options(*self._get_eager_experiment_query_options())
                .filter(*attribute_filters, SqlExperiment.lifecycle_stage.in_(lifecycle_stages))
.order_by(*order_by_clauses)
.offset(offset)
.limit(max_results + 1)
)
queried_experiments = session.execute(stmt).scalars(SqlExperiment).all()
experiments = [e.to_mlflow_entity() for e in queried_experiments]
next_page_token = compute_next_token(len(experiments))
return experiments[:max_results], next_page_token
def search_experiments(
self,
view_type=ViewType.ACTIVE_ONLY,
max_results=SEARCH_MAX_RESULTS_DEFAULT,
filter_string=None,
order_by=None,
page_token=None,
):
experiments, next_page_token = self._search_experiments(
view_type, max_results, filter_string, order_by, page_token
)
return PagedList(experiments, next_page_token)
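    # Usage sketch (illustrative only; assumes a configured store instance named `store`):
    #
    #     page = store.search_experiments(
    #         view_type=ViewType.ACTIVE_ONLY,
    #         filter_string="name LIKE 'nlp-%'",
    #         order_by=["creation_time DESC"],
    #         max_results=50,
    #     )
    #     for experiment in page:
    #         print(experiment.experiment_id, experiment.name)
    #     if page.token:
    #         next_page = store.search_experiments(max_results=50, page_token=page.token)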
def _get_experiment(self, session, experiment_id, view_type, eager=False):
"""
:param eager: If ``True``, eagerly loads the experiments's tags. If ``False``, these tags
are not eagerly loaded and will be loaded if/when their corresponding
object properties are accessed from the resulting ``SqlExperiment`` object.
"""
experiment_id = experiment_id or SqlAlchemyStore.DEFAULT_EXPERIMENT_ID
stages = LifecycleStage.view_type_to_stages(view_type)
query_options = self._get_eager_experiment_query_options() if eager else []
experiment = (
session.query(SqlExperiment)
.options(*query_options)
.filter(
SqlExperiment.experiment_id == experiment_id,
SqlExperiment.lifecycle_stage.in_(stages),
)
.one_or_none()
)
if experiment is None:
raise MlflowException(
f"No Experiment with id={experiment_id} exists", RESOURCE_DOES_NOT_EXIST
)
return experiment
@staticmethod
def _get_eager_experiment_query_options():
"""
:return: A list of SQLAlchemy query options that can be used to eagerly load the following
experiment attributes when fetching an experiment: ``tags``.
"""
return [
# Use a subquery load rather than a joined load in order to minimize the memory overhead
# of the eager loading procedure. For more information about relationship loading
# techniques, see https://docs.sqlalchemy.org/en/13/orm/
# loading_relationships.html#relationship-loading-techniques
sqlalchemy.orm.subqueryload(SqlExperiment.tags),
]
def get_experiment(self, experiment_id):
with self.ManagedSessionMaker() as session:
return self._get_experiment(
session, experiment_id, ViewType.ALL, eager=True
).to_mlflow_entity()
def get_experiment_by_name(self, experiment_name):
"""
Specialized implementation for SQL backed store.
"""
with self.ManagedSessionMaker() as session:
stages = LifecycleStage.view_type_to_stages(ViewType.ALL)
experiment = (
session.query(SqlExperiment)
.options(*self._get_eager_experiment_query_options())
.filter(
SqlExperiment.name == experiment_name, SqlExperiment.lifecycle_stage.in_(stages)
)
.one_or_none()
)
return experiment.to_mlflow_entity() if experiment is not None else None
def delete_experiment(self, experiment_id):
with self.ManagedSessionMaker() as session:
experiment = self._get_experiment(session, experiment_id, ViewType.ACTIVE_ONLY)
experiment.lifecycle_stage = LifecycleStage.DELETED
experiment.last_update_time = get_current_time_millis()
runs = self._list_run_infos(session, experiment_id)
for run in runs:
self._mark_run_deleted(session, run)
self._save_to_db(objs=experiment, session=session)
def _hard_delete_experiment(self, experiment_id):
"""
        Permanently delete an experiment (metadata and metrics, tags, parameters).
This is used by the ``mlflow gc`` command line and is not intended to be used elsewhere.
"""
with self.ManagedSessionMaker() as session:
experiment = self._get_experiment(
experiment_id=experiment_id, session=session, view_type=ViewType.DELETED_ONLY
)
session.delete(experiment)
def _mark_run_deleted(self, session, run):
run.lifecycle_stage = LifecycleStage.DELETED
run.deleted_time = get_current_time_millis()
self._save_to_db(objs=run, session=session)
def _mark_run_active(self, session, run):
run.lifecycle_stage = LifecycleStage.ACTIVE
run.deleted_time = None
self._save_to_db(objs=run, session=session)
def _list_run_infos(self, session, experiment_id):
runs = session.query(SqlRun).filter(SqlRun.experiment_id == experiment_id).all()
return runs
def restore_experiment(self, experiment_id):
with self.ManagedSessionMaker() as session:
experiment = self._get_experiment(session, experiment_id, ViewType.DELETED_ONLY)
experiment.lifecycle_stage = LifecycleStage.ACTIVE
experiment.last_update_time = get_current_time_millis()
runs = self._list_run_infos(session, experiment_id)
for run in runs:
self._mark_run_active(session, run)
self._save_to_db(objs=experiment, session=session)
def rename_experiment(self, experiment_id, new_name):
with self.ManagedSessionMaker() as session:
experiment = self._get_experiment(session, experiment_id, ViewType.ALL)
if experiment.lifecycle_stage != LifecycleStage.ACTIVE:
raise MlflowException("Cannot rename a non-active experiment.", INVALID_STATE)
experiment.name = new_name
experiment.last_update_time = get_current_time_millis()
self._save_to_db(objs=experiment, session=session)
def create_run(self, experiment_id, user_id, start_time, tags, run_name):
with self.ManagedSessionMaker() as session:
experiment = self.get_experiment(experiment_id)
self._check_experiment_is_active(experiment)
# Note: we need to ensure the generated "run_id" only contains digits and lower
# case letters, because some query filters contain "IN" clause, and in MYSQL the
# "IN" clause is case-insensitive, we use a trick that filters out comparison values
# containing upper case letters when parsing "IN" clause inside query filter.
run_id = uuid.uuid4().hex
artifact_location = append_to_uri_path(
experiment.artifact_location, run_id, SqlAlchemyStore.ARTIFACTS_FOLDER_NAME
)
tags = tags or []
run_name_tag = _get_run_name_from_tags(tags)
if run_name and run_name_tag and (run_name != run_name_tag):
raise MlflowException(
"Both 'run_name' argument and 'mlflow.runName' tag are specified, but with "
f"different values (run_name='{run_name}', mlflow.runName='{run_name_tag}').",
INVALID_PARAMETER_VALUE,
)
run_name = run_name or run_name_tag or _generate_random_name()
if not run_name_tag:
tags.append(RunTag(key=MLFLOW_RUN_NAME, value=run_name))
run = SqlRun(
name=run_name,
artifact_uri=artifact_location,
run_uuid=run_id,
experiment_id=experiment_id,
source_type=SourceType.to_string(SourceType.UNKNOWN),
source_name="",
entry_point_name="",
user_id=user_id,
status=RunStatus.to_string(RunStatus.RUNNING),
start_time=start_time,
end_time=None,
deleted_time=None,
source_version="",
lifecycle_stage=LifecycleStage.ACTIVE,
)
run.tags = [SqlTag(key=tag.key, value=tag.value) for tag in tags]
self._save_to_db(objs=run, session=session)
return run.to_mlflow_entity()
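    # Usage sketch (illustrative only): either `run_name` or an `mlflow.runName` tag may be
    # supplied, but if both are present they must agree, as enforced above.
    #
    #     run = store.create_run(
    #         experiment_id="0",
    #         user_id="alice",
    #         start_time=get_current_time_millis(),
    #         tags=[RunTag(key="team", value="search")],
    #         run_name="baseline",
    #     )
    #     assert run.info.run_name == "baseline"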
def _get_run(self, session, run_uuid, eager=False):
"""
:param eager: If ``True``, eagerly loads the run's summary metrics (``latest_metrics``),
params, and tags when fetching the run. If ``False``, these attributes
are not eagerly loaded and will be loaded when their corresponding
object properties are accessed from the resulting ``SqlRun`` object.
"""
query_options = self._get_eager_run_query_options() if eager else []
runs = (
session.query(SqlRun).options(*query_options).filter(SqlRun.run_uuid == run_uuid).all()
)
if len(runs) == 0:
raise MlflowException(f"Run with id={run_uuid} not found", RESOURCE_DOES_NOT_EXIST)
if len(runs) > 1:
raise MlflowException(
"Expected only 1 run with id={}. Found {}.".format(run_uuid, len(runs)),
INVALID_STATE,
)
return runs[0]
@staticmethod
def _get_eager_run_query_options():
"""
:return: A list of SQLAlchemy query options that can be used to eagerly load the following
run attributes when fetching a run: ``latest_metrics``, ``params``, and ``tags``.
"""
return [
# Use a select in load rather than a joined load in order to minimize the memory
# overhead of the eager loading procedure. For more information about relationship
# loading techniques, see https://docs.sqlalchemy.org/en/13/orm/
# loading_relationships.html#relationship-loading-techniques
sqlalchemy.orm.selectinload(SqlRun.latest_metrics),
sqlalchemy.orm.selectinload(SqlRun.params),
sqlalchemy.orm.selectinload(SqlRun.tags),
]
def _check_run_is_active(self, run):
if run.lifecycle_stage != LifecycleStage.ACTIVE:
raise MlflowException(
"The run {} must be in the 'active' state. Current state is {}.".format(
run.run_uuid, run.lifecycle_stage
),
INVALID_PARAMETER_VALUE,
)
def _check_experiment_is_active(self, experiment):
if experiment.lifecycle_stage != LifecycleStage.ACTIVE:
raise MlflowException(
"The experiment {} must be in the 'active' state. "
"Current state is {}.".format(experiment.experiment_id, experiment.lifecycle_stage),
INVALID_PARAMETER_VALUE,
)
def update_run_info(self, run_id, run_status, end_time, run_name):
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
if run_status is not None:
run.status = RunStatus.to_string(run_status)
if end_time is not None:
run.end_time = end_time
if run_name:
run.name = run_name
run_name_tag = self._try_get_run_tag(session, run_id, MLFLOW_RUN_NAME)
if run_name_tag is None:
run.tags.append(SqlTag(key=MLFLOW_RUN_NAME, value=run_name))
else:
run_name_tag.value = run_name
self._save_to_db(objs=run, session=session)
run = run.to_mlflow_entity()
return run.info
def _try_get_run_tag(self, session, run_id, tagKey, eager=False):
query_options = self._get_eager_run_query_options() if eager else []
return (
session.query(SqlTag)
.options(*query_options)
.filter(SqlTag.run_uuid == run_id, SqlTag.key == tagKey)
.one_or_none()
)
def get_run(self, run_id):
with self.ManagedSessionMaker() as session:
# Load the run with the specified id and eagerly load its summary metrics, params, and
# tags. These attributes are referenced during the invocation of
# ``run.to_mlflow_entity()``, so eager loading helps avoid additional database queries
# that are otherwise executed at attribute access time under a lazy loading model.
run = self._get_run(run_uuid=run_id, session=session, eager=True)
return run.to_mlflow_entity()
def restore_run(self, run_id):
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
run.lifecycle_stage = LifecycleStage.ACTIVE
run.deleted_time = None
self._save_to_db(objs=run, session=session)
def delete_run(self, run_id):
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
run.lifecycle_stage = LifecycleStage.DELETED
run.deleted_time = get_current_time_millis()
self._save_to_db(objs=run, session=session)
def _hard_delete_run(self, run_id):
"""
Permanently delete a run (metadata and metrics, tags, parameters).
This is used by the ``mlflow gc`` command line and is not intended to be used elsewhere.
"""
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
session.delete(run)
def _get_deleted_runs(self, older_than=0):
"""
Get all deleted run ids.
Args:
            older_than: get runs that are older than this value, in milliseconds.
                        Defaults to 0 ms to get all deleted runs.
"""
current_time = get_current_time_millis()
with self.ManagedSessionMaker() as session:
runs = (
session.query(SqlRun)
.filter(
SqlRun.lifecycle_stage == LifecycleStage.DELETED,
SqlRun.deleted_time <= (current_time - older_than),
)
.all()
)
return [run.run_uuid for run in runs]
def _get_metric_value_details(self, metric):
_validate_metric(metric.key, metric.value, metric.timestamp, metric.step)
is_nan = math.isnan(metric.value)
if is_nan:
value = 0
elif math.isinf(metric.value):
            # NB: SQL cannot represent Infs => replace +/- Inf with the max/min 64-bit float value
value = 1.7976931348623157e308 if metric.value > 0 else -1.7976931348623157e308
else:
value = metric.value
return metric, value, is_nan
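    # Worked examples of the mapping above (illustrative only):
    #     value = 3.5           -> stored value 3.5,                      is_nan=False
    #     value = float("nan")  -> stored value 0,                        is_nan=True
    #     value = float("inf")  -> stored value 1.7976931348623157e308,   is_nan=False
    #     value = float("-inf") -> stored value -1.7976931348623157e308,  is_nan=False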
def log_metric(self, run_id, metric):
# simply call _log_metrics and let it handle the rest
self._log_metrics(run_id, [metric])
def _log_metrics(self, run_id, metrics):
if not metrics:
return
# Duplicate metric values are eliminated here to maintain
# the same behavior in log_metric
metric_instances = []
seen = set()
for metric in metrics:
metric, value, is_nan = self._get_metric_value_details(metric)
if metric not in seen:
metric_instances.append(
SqlMetric(
run_uuid=run_id,
key=metric.key,
value=value,
timestamp=metric.timestamp,
step=metric.step,
is_nan=is_nan,
)
)
seen.add(metric)
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
def _insert_metrics(metric_instances):
self._save_to_db(session=session, objs=metric_instances)
self._update_latest_metrics_if_necessary(metric_instances, session)
session.commit()
try:
_insert_metrics(metric_instances)
except sqlalchemy.exc.IntegrityError:
                # The primary key can be violated if we try to log a metric with the same
                # value, timestamp, step, and key within the same run.
# Roll back the current session to make it usable for further transactions. In
# the event of an error during "commit", a rollback is required in order to
# continue using the session. In this case, we re-use the session to query
# SqlMetric
session.rollback()
# Divide metric keys into batches of 100 to avoid loading too much metric
# history data into memory at once
metric_keys = [m.key for m in metric_instances]
metric_key_batches = [
metric_keys[i : i + 100] for i in range(0, len(metric_keys), 100)
]
for metric_key_batch in metric_key_batches:
# obtain the metric history corresponding to the given metrics
metric_history = (
session.query(SqlMetric)
.filter(
SqlMetric.run_uuid == run_id,
SqlMetric.key.in_(metric_key_batch),
)
.all()
)
                    # Convert to a set of Metric instances to take advantage of their
                    # hashability, then obtain the metrics that were not logged earlier
                    # within this run_id
metric_history = {m.to_mlflow_entity() for m in metric_history}
non_existing_metrics = [
m for m in metric_instances if m.to_mlflow_entity() not in metric_history
]
                    # If there are metrics whose insertion was rolled back even though
                    # they did not violate the PK, log them now
_insert_metrics(non_existing_metrics)
def _update_latest_metrics_if_necessary(self, logged_metrics, session):
def _compare_metrics(metric_a, metric_b):
"""
:return: True if ``metric_a`` is strictly more recent than ``metric_b``, as determined
by ``step``, ``timestamp``, and ``value``. False otherwise.
"""
return (metric_a.step, metric_a.timestamp, metric_a.value) > (
metric_b.step,
metric_b.timestamp,
metric_b.value,
)
def _overwrite_metric(new_metric, old_metric):
"""
            Writes the content of new_metric over old_metric. The content is
`value`, `step`, `timestamp`, and `is_nan`.
:return: old_metric with its content updated.
"""
old_metric.value = new_metric.value
old_metric.step = new_metric.step
old_metric.timestamp = new_metric.timestamp
old_metric.is_nan = new_metric.is_nan
return old_metric
if not logged_metrics:
return
# Fetch the latest metric value corresponding to the specified run_id and metric keys and
# lock their associated rows for the remainder of the transaction in order to ensure
# isolation
latest_metrics = {}
metric_keys = [m.key for m in logged_metrics]
# Divide metric keys into batches of 500 to avoid binding too many parameters to the SQL
# query, which may produce limit exceeded errors or poor performance on certain database
# platforms
metric_key_batches = [metric_keys[i : i + 500] for i in range(0, len(metric_keys), 500)]
for metric_key_batch in metric_key_batches:
# First, determine which metric keys are present in the database
latest_metrics_key_records_from_db = (
session.query(SqlLatestMetric.key)
.filter(
SqlLatestMetric.run_uuid == logged_metrics[0].run_uuid,
SqlLatestMetric.key.in_(metric_key_batch),
)
.all()
)
# Then, take a write lock on the rows corresponding to metric keys that are present,
# ensuring that they aren't modified by another transaction until they can be
# compared to the metric values logged by this transaction while avoiding gap locking
# and next-key locking which may otherwise occur when issuing a `SELECT FOR UPDATE`
# against nonexistent rows
if len(latest_metrics_key_records_from_db) > 0:
latest_metric_keys_from_db = [
record[0] for record in latest_metrics_key_records_from_db
]
latest_metrics_batch = (
session.query(SqlLatestMetric)
.filter(
SqlLatestMetric.run_uuid == logged_metrics[0].run_uuid,
SqlLatestMetric.key.in_(latest_metric_keys_from_db),
)
# Order by the metric run ID and key to ensure a consistent locking order
# across transactions, reducing deadlock likelihood
.order_by(SqlLatestMetric.run_uuid, SqlLatestMetric.key)
.with_for_update()
.all()
)
latest_metrics.update({m.key: m for m in latest_metrics_batch})
# iterate over all logged metrics and compare them with corresponding
# SqlLatestMetric entries
# if there's no SqlLatestMetric entry for the current metric key,
# create a new SqlLatestMetric instance and put it in
# new_latest_metric_dict so that they can be saved later.
new_latest_metric_dict = {}
for logged_metric in logged_metrics:
latest_metric = latest_metrics.get(logged_metric.key)
            # A metric key can be passed more than once within logged_metrics, with
            # different step/timestamp/value. However, SqlLatestMetric entries are only
            # inserted after this loop completes, so retrieve the instances that were
            # just created and use them for comparison.
new_latest_metric = new_latest_metric_dict.get(logged_metric.key)
            # Create a new SqlLatestMetric instance since neither a latest_metric row
            # nor a recently created instance exists
if not latest_metric and not new_latest_metric:
new_latest_metric = SqlLatestMetric(
run_uuid=logged_metric.run_uuid,
key=logged_metric.key,
value=logged_metric.value,
timestamp=logged_metric.timestamp,
step=logged_metric.step,
is_nan=logged_metric.is_nan,
)
new_latest_metric_dict[logged_metric.key] = new_latest_metric
            # There is no row, but a new instance was recently created, so update the
            # instance in new_latest_metric_dict if the metric comparison succeeds.
elif not latest_metric and new_latest_metric:
if _compare_metrics(logged_metric, new_latest_metric):
new_latest_metric = _overwrite_metric(logged_metric, new_latest_metric)
new_latest_metric_dict[logged_metric.key] = new_latest_metric
# compare with the row
elif _compare_metrics(logged_metric, latest_metric):
# editing the attributes of latest_metric, which is a
# SqlLatestMetric instance will result in UPDATE in DB side.
latest_metric = _overwrite_metric(logged_metric, latest_metric)
if new_latest_metric_dict:
self._save_to_db(session=session, objs=list(new_latest_metric_dict.values()))
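    # Worked example of _compare_metrics (illustrative only): recency is decided by comparing
    # the tuples (step, timestamp, value) lexicographically.
    #     (step=3, ts=100, value=0.9) beats (step=2, ts=999, value=0.1)  # higher step wins
    #     (step=3, ts=200, value=0.1) beats (step=3, ts=100, value=0.9)  # then later timestamp
    #     (step=3, ts=200, value=0.9) beats (step=3, ts=200, value=0.1)  # then larger value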
def get_metric_history(self, run_id, metric_key, max_results=None, page_token=None):
"""
Return all logged values for a given metric.
:param run_id: Unique identifier for run
:param metric_key: Metric name within the run
:param max_results: An indicator for paginated results. This functionality is not
implemented for SQLAlchemyStore and is unused in this store's implementation.
:param page_token: An indicator for paginated results. This functionality is not
implemented for SQLAlchemyStore and if the value is overridden with a value other than
``None``, an MlflowException will be thrown.
:return: A List of :py:class:`mlflow.entities.Metric` entities if ``metric_key`` values
have been logged to the ``run_id``, else an empty list.
"""
# NB: The SQLAlchemyStore does not currently support pagination for this API.
# Raise if `page_token` is specified, as the functionality to support paged queries
# is not implemented.
if page_token is not None:
raise MlflowException(
"The SQLAlchemyStore backend does not support pagination for the "
f"`get_metric_history` API. Supplied argument `page_token` '{page_token}' must be "
"`None`."
)
with self.ManagedSessionMaker() as session:
metrics = session.query(SqlMetric).filter_by(run_uuid=run_id, key=metric_key).all()
return PagedList([metric.to_mlflow_entity() for metric in metrics], None)
class MetricWithRunId(Metric):
def __init__(self, metric: Metric, run_id):
super().__init__(
key=metric.key,
value=metric.value,
timestamp=metric.timestamp,
step=metric.step,
)
self._run_id = run_id
@property
def run_id(self):
return self._run_id
def to_dict(self):
return {
"key": self.key,
"value": self.value,
"timestamp": self.timestamp,
"step": self.step,
"run_id": self.run_id,
}
def get_metric_history_bulk(self, run_ids, metric_key, max_results):
"""
Return all logged values for a given metric.
:param run_ids: Unique identifiers of the runs from which to fetch the metric histories for
the specified key.
:param metric_key: Metric name within the runs.
:param max_results: The maximum number of results to return.
:return: A List of :py:class:`SqlAlchemyStore.MetricWithRunId` objects if ``metric_key``
values have been logged to one or more of the specified ``run_ids``, else an empty
list. Results are sorted by run ID in lexicographically ascending order, followed by
timestamp, step, and value in numerically ascending order.
"""
        # NB: The SQLAlchemyStore does not currently support pagination for this API.
with self.ManagedSessionMaker() as session:
metrics = (
session.query(SqlMetric)
.filter(
SqlMetric.key == metric_key,
SqlMetric.run_uuid.in_(run_ids),
)
.order_by(
SqlMetric.run_uuid,
SqlMetric.timestamp,
SqlMetric.step,
SqlMetric.value,
)
.limit(max_results)
.all()
)
return [
SqlAlchemyStore.MetricWithRunId(
run_id=metric.run_uuid,
metric=metric.to_mlflow_entity(),
)
for metric in metrics
]
def log_param(self, run_id, param):
_validate_param(param.key, param.value)
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
            # If we try to update the value of an existing param, this will fail
            # because it will try to create it with the same run_uuid and param key
try:
# This will check for various integrity checks for params table.
# ToDo: Consider prior checks for null, type, param name validations, ... etc.
self._get_or_create(
model=SqlParam,
session=session,
run_uuid=run_id,
key=param.key,
value=param.value,
)
# Explicitly commit the session in order to catch potential integrity errors
# while maintaining the current managed session scope ("commit" checks that
# a transaction satisfies uniqueness constraints and throws integrity errors
# when they are violated; "get_or_create()" does not perform these checks). It is
# important that we maintain the same session scope because, in the case of
# an integrity error, we want to examine the uniqueness of parameter values using
# the same database state that the session uses during "commit". Creating a new
# session synchronizes the state with the database. As a result, if the conflicting
# parameter value were to be removed prior to the creation of a new session,
# we would be unable to determine the cause of failure for the first session's
# "commit" operation.
session.commit()
except sqlalchemy.exc.IntegrityError:
# Roll back the current session to make it usable for further transactions. In the
# event of an error during "commit", a rollback is required in order to continue
# using the session. In this case, we re-use the session because the SqlRun, `run`,
# is lazily evaluated during the invocation of `run.params`.
session.rollback()
existing_params = [p.value for p in run.params if p.key == param.key]
if len(existing_params) > 0:
old_value = existing_params[0]
if old_value != param.value:
raise MlflowException(
"Changing param values is not allowed. Param with key='{}' was already"
" logged with value='{}' for run ID='{}'. Attempted logging new value"
" '{}'.".format(param.key, old_value, run_id, param.value),
INVALID_PARAMETER_VALUE,
)
else:
raise
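    # Behaviour sketch (illustrative only): params are write-once per (run_id, key).
    #
    #     store.log_param(run_id, Param(key="lr", value="0.01"))  # Param from mlflow.entities
    #     store.log_param(run_id, Param(key="lr", value="0.01"))  # same value: silently accepted
    #     store.log_param(run_id, Param(key="lr", value="0.02"))  # raises MlflowException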
def _log_params(self, run_id, params):
if not params:
return
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
existing_params = {p.key: p.value for p in run.params}
new_params = []
non_matching_params = []
for param in params:
if param.key in existing_params:
if param.value != existing_params[param.key]:
non_matching_params.append(
{
"key": param.key,
"old_value": existing_params[param.key],
"new_value": param.value,
}
)
continue
new_params.append(SqlParam(run_uuid=run_id, key=param.key, value=param.value))
if non_matching_params:
raise MlflowException(
"Changing param values is not allowed. Params were already"
f" logged='{non_matching_params}' for run ID='{run_id}'.",
INVALID_PARAMETER_VALUE,
)
if not new_params:
return
self._save_to_db(session=session, objs=new_params)
def set_experiment_tag(self, experiment_id, tag):
"""
Set a tag for the specified experiment
:param experiment_id: String ID of the experiment
:param tag: ExperimentRunTag instance to log
"""
_validate_experiment_tag(tag.key, tag.value)
with self.ManagedSessionMaker() as session:
experiment = self._get_experiment(
session, experiment_id, ViewType.ALL
).to_mlflow_entity()
self._check_experiment_is_active(experiment)
session.merge(
SqlExperimentTag(experiment_id=experiment_id, key=tag.key, value=tag.value)
)
def set_tag(self, run_id, tag):
"""
Set a tag on a run.
:param run_id: String ID of the run
:param tag: RunTag instance to log
"""
with self.ManagedSessionMaker() as session:
_validate_tag(tag.key, tag.value)
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
if tag.key == MLFLOW_RUN_NAME:
run_status = RunStatus.from_string(run.status)
self.update_run_info(run_id, run_status, run.end_time, tag.value)
else:
# NB: Updating the run_info will set the tag. No need to do it twice.
session.merge(SqlTag(run_uuid=run_id, key=tag.key, value=tag.value))
def _set_tags(self, run_id, tags):
"""
Set multiple tags on a run
:param run_id: String ID of the run
:param tags: List of RunTag instances to log
"""
if not tags:
return
for tag in tags:
_validate_tag(tag.key, tag.value)
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
def _try_insert_tags(attempt_number, max_retries):
try:
current_tags = (
session.query(SqlTag)
.filter(SqlTag.run_uuid == run_id, SqlTag.key.in_([t.key for t in tags]))
.all()
)
current_tags = {t.key: t for t in current_tags}
new_tag_dict = {}
for tag in tags:
# NB: If the run name tag is explicitly set, update the run info attribute
# and do not resubmit the tag for overwrite as the tag will be set within
# `set_tag()` with a call to `update_run_info()`
if tag.key == MLFLOW_RUN_NAME:
self.set_tag(run_id, tag)
else:
current_tag = current_tags.get(tag.key)
new_tag = new_tag_dict.get(tag.key)
# update the SqlTag if it is already present in DB
if current_tag:
current_tag.value = tag.value
continue
# if a SqlTag instance is already present in `new_tag_dict`,
# this means that multiple tags with the same key were passed to
# `set_tags`.
# In this case, we resolve potential conflicts by updating the value
# of the existing instance to the value of `tag`
if new_tag:
new_tag.value = tag.value
# otherwise, put it into the dict
else:
new_tag = SqlTag(run_uuid=run_id, key=tag.key, value=tag.value)
new_tag_dict[tag.key] = new_tag
# finally, save new entries to DB.
self._save_to_db(session=session, objs=list(new_tag_dict.values()))
session.commit()
except sqlalchemy.exc.IntegrityError:
session.rollback()
                # Two concurrent operations may attempt to insert the same tags;
                # apply a retry with backoff here.
if attempt_number > max_retries:
raise MlflowException(
"Failed to set tags with given within {} retries. Keys: {}".format(
max_retries, [t.key for t in tags]
)
)
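# exponential backoff with jitter: wait (2**attempt_number - 1) seconds plus a random fraction before retrying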
sleep_duration = (2**attempt_number) - 1
sleep_duration += random.uniform(0, 1)
time.sleep(sleep_duration)
_try_insert_tags(attempt_number + 1, max_retries=max_retries)
_try_insert_tags(attempt_number=0, max_retries=3)
def delete_tag(self, run_id, key):
"""
Delete a tag from a run. This is irreversible.
:param run_id: String ID of the run
:param key: Name of the tag
"""
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
filtered_tags = session.query(SqlTag).filter_by(run_uuid=run_id, key=key).all()
if len(filtered_tags) == 0:
raise MlflowException(
f"No tag with name: {key} in run with id {run_id}",
error_code=RESOURCE_DOES_NOT_EXIST,
)
elif len(filtered_tags) > 1:
raise MlflowException(
"Bad data in database - tags for a specific run must have "
"a single unique value. "
"See https://mlflow.org/docs/latest/tracking.html#adding-tags-to-runs",
error_code=INVALID_STATE,
)
session.delete(filtered_tags[0])
def _search_runs(
self, experiment_ids, filter_string, run_view_type, max_results, order_by, page_token
):
def compute_next_token(current_size):
next_token = None
if max_results == current_size:
final_offset = offset + max_results
next_token = SearchUtils.create_page_token(final_offset)
return next_token
if max_results > SEARCH_MAX_RESULTS_THRESHOLD:
raise MlflowException(
"Invalid value for request parameter max_results. It must be at "
f"most {SEARCH_MAX_RESULTS_THRESHOLD}, but got value {max_results}",
INVALID_PARAMETER_VALUE,
)
stages = set(LifecycleStage.view_type_to_stages(run_view_type))
with self.ManagedSessionMaker() as session:
# Fetch the appropriate runs and eagerly load their summary metrics, params, and
# tags. These run attributes are referenced during the invocation of
# ``run.to_mlflow_entity()``, so eager loading helps avoid additional database queries
# that are otherwise executed at attribute access time under a lazy loading model.
parsed_filters = SearchUtils.parse_search_filter(filter_string)
cases_orderby, parsed_orderby, sorting_joins = _get_orderby_clauses(order_by, session)
stmt = select(SqlRun, *cases_orderby)
attribute_filters, non_attribute_filters = _get_sqlalchemy_filter_clauses(
parsed_filters, session, self._get_dialect()
)
for non_attr_filter in non_attribute_filters:
stmt = stmt.join(non_attr_filter)
# using an outer join is necessary here because we want to be able to sort
# on a column (tag, metric or param) without removing the lines that
# do not have a value for this column (which is what inner join would do)
for j in sorting_joins:
stmt = stmt.outerjoin(j)
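# offset-based pagination: the page token encodes the row offset; a full page of results
# signals that a next token should be issued (see compute_next_token above)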
offset = SearchUtils.parse_start_offset_from_page_token(page_token)
stmt = (
stmt.distinct()
.options(*self._get_eager_run_query_options())
.filter(
SqlRun.experiment_id.in_(experiment_ids),
SqlRun.lifecycle_stage.in_(stages),
*attribute_filters,
)
.order_by(*parsed_orderby)
.offset(offset)
.limit(max_results)
)
queried_runs = session.execute(stmt).scalars(SqlRun).all()
runs = [run.to_mlflow_entity() for run in queried_runs]
next_page_token = compute_next_token(len(runs))
return runs, next_page_token
def log_batch(self, run_id, metrics, params, tags):
_validate_run_id(run_id)
_validate_batch_log_data(metrics, params, tags)
_validate_batch_log_limits(metrics, params, tags)
_validate_param_keys_unique(params)
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
try:
self._log_params(run_id, params)
self._log_metrics(run_id, metrics)
self._set_tags(run_id, tags)
except MlflowException as e:
raise e
except Exception as e:
raise MlflowException(e, INTERNAL_ERROR)
def record_logged_model(self, run_id, mlflow_model):
from mlflow.models import Model
if not isinstance(mlflow_model, Model):
raise TypeError(
"Argument 'mlflow_model' should be mlflow.models.Model, got '{}'".format(
type(mlflow_model)
)
)
model_dict = mlflow_model.to_dict()
with self.ManagedSessionMaker() as session:
run = self._get_run(run_uuid=run_id, session=session)
self._check_run_is_active(run)
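# the MLFLOW_LOGGED_MODELS tag stores a JSON list of model dicts; append the new model to any existing entries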
previous_tag = [t for t in run.tags if t.key == MLFLOW_LOGGED_MODELS]
if previous_tag:
value = json.dumps(json.loads(previous_tag[0].value) + [model_dict])
else:
value = json.dumps([model_dict])
_validate_tag(MLFLOW_LOGGED_MODELS, value)
session.merge(SqlTag(key=MLFLOW_LOGGED_MODELS, value=value, run_uuid=run_id))
def _get_attributes_filtering_clauses(parsed, dialect):
clauses = []
for sql_statement in parsed:
key_type = sql_statement.get("type")
key_name = sql_statement.get("key")
value = sql_statement.get("value")
comparator = sql_statement.get("comparator").upper()
if SearchUtils.is_string_attribute(
key_type, key_name, comparator
) or SearchUtils.is_numeric_attribute(key_type, key_name, comparator):
# key_name is guaranteed to be a valid searchable attribute of entities.RunInfo
# by the call to parse_search_filter
attribute = getattr(SqlRun, SqlRun.get_attribute_name(key_name))
clauses.append(
SearchUtils.get_sql_comparison_func(comparator, dialect)(attribute, value)
)
return clauses
def _get_sqlalchemy_filter_clauses(parsed, session, dialect):
"""
Creates run attribute filters and subqueries that will be inner-joined to SqlRun to act as
multi-clause filters and return them as a tuple.
"""
attribute_filters = []
non_attribute_filters = []
for sql_statement in parsed:
key_type = sql_statement.get("type")
key_name = sql_statement.get("key")
value = sql_statement.get("value")
comparator = sql_statement.get("comparator").upper()
key_name = SearchUtils.translate_key_alias(key_name)
if SearchUtils.is_string_attribute(
key_type, key_name, comparator
) or SearchUtils.is_numeric_attribute(key_type, key_name, comparator):
if key_name == "run_name":
# Treat "attributes.run_name == <value>" as "tags.`mlflow.runName` == <value>".
# The name column in the runs table is empty for runs logged in MLflow <= 1.29.0.
key_filter = SearchUtils.get_sql_comparison_func("=", dialect)(
SqlTag.key, MLFLOW_RUN_NAME
)
val_filter = SearchUtils.get_sql_comparison_func(comparator, dialect)(
SqlTag.value, value
)
non_attribute_filters.append(
session.query(SqlTag).filter(key_filter, val_filter).subquery()
)
else:
attribute = getattr(SqlRun, SqlRun.get_attribute_name(key_name))
attr_filter = SearchUtils.get_sql_comparison_func(comparator, dialect)(
attribute, value
)
attribute_filters.append(attr_filter)
else:
if SearchUtils.is_metric(key_type, comparator):
entity = SqlLatestMetric
value = float(value)
elif SearchUtils.is_param(key_type, comparator):
entity = SqlParam
elif SearchUtils.is_tag(key_type, comparator):
entity = SqlTag
else:
raise MlflowException(
"Invalid search expression type '%s'" % key_type,
error_code=INVALID_PARAMETER_VALUE,
)
key_filter = SearchUtils.get_sql_comparison_func("=", dialect)(entity.key, key_name)
val_filter = SearchUtils.get_sql_comparison_func(comparator, dialect)(
entity.value, value
)
non_attribute_filters.append(
session.query(entity).filter(key_filter, val_filter).subquery()
)
return attribute_filters, non_attribute_filters
def _get_orderby_clauses(order_by_list, session):
"""Sorts a set of runs based on their natural ordering and an overriding set of order_bys.
Runs are naturally ordered first by start time descending, then by run id for tie-breaking.
"""
clauses = []
ordering_joins = []
clause_id = 0
observed_order_by_clauses = set()
select_clauses = []
# contrary to filters, it is not easily feasible to separately handle sorting
# on attributes and on joined tables as we must keep all clauses in the same order
if order_by_list:
for order_by_clause in order_by_list:
clause_id += 1
(key_type, key, ascending) = SearchUtils.parse_order_by_for_search_runs(order_by_clause)
key = SearchUtils.translate_key_alias(key)
if SearchUtils.is_string_attribute(
key_type, key, "="
) or SearchUtils.is_numeric_attribute(key_type, key, "="):
order_value = getattr(SqlRun, SqlRun.get_attribute_name(key))
else:
if SearchUtils.is_metric(key_type, "="): # any valid comparator
entity = SqlLatestMetric
elif SearchUtils.is_tag(key_type, "="):
entity = SqlTag
elif SearchUtils.is_param(key_type, "="):
entity = SqlParam
else:
raise MlflowException(
"Invalid identifier type '%s'" % key_type,
error_code=INVALID_PARAMETER_VALUE,
)
# build a subquery first because we will join it in the main request so that the
# metric we want to sort on is available when we apply the sorting clause
subquery = session.query(entity).filter(entity.key == key).subquery()
ordering_joins.append(subquery)
order_value = subquery.c.value
# sqlite does not support NULLS LAST expression, so we sort first by
# presence of the field (and is_nan for metrics), then by actual value
# As the subqueries are created independently and used later in the
# same main query, the CASE WHEN columns need to have unique names to
# avoid ambiguity
if SearchUtils.is_metric(key_type, "="):
case = sql.case(
# Ideally the use of "IS" is preferred here but owing to sqlalchemy
# translation in MSSQL we are forced to use "=" instead.
# These 2 options are functionally identical / unchanged because
# the column (is_nan) is not nullable. However it could become an issue
# if this precondition changes in the future.
(subquery.c.is_nan == sqlalchemy.true(), 1),
(order_value.is_(None), 2),
else_=0,
).label("clause_%s" % clause_id)
else: # other entities do not have an 'is_nan' field
case = sql.case((order_value.is_(None), 1), else_=0).label("clause_%s" % clause_id)
clauses.append(case.name)
select_clauses.append(case)
select_clauses.append(order_value)
if (key_type, key) in observed_order_by_clauses:
raise MlflowException(f"`order_by` contains duplicate fields: {order_by_list}")
observed_order_by_clauses.add((key_type, key))
if ascending:
clauses.append(order_value)
else:
clauses.append(order_value.desc())
if (SearchUtils._ATTRIBUTE_IDENTIFIER, SqlRun.start_time.key) not in observed_order_by_clauses:
clauses.append(SqlRun.start_time.desc())
clauses.append(SqlRun.run_uuid)
return select_clauses, clauses, ordering_joins
def _get_search_experiments_filter_clauses(parsed_filters, dialect):
attribute_filters = []
non_attribute_filters = []
for f in parsed_filters:
type_ = f["type"]
key = f["key"]
comparator = f["comparator"]
value = f["value"]
if type_ == "attribute":
if SearchExperimentsUtils.is_string_attribute(
type_, key, comparator
) and comparator not in ("=", "!=", "LIKE", "ILIKE"):
raise MlflowException.invalid_parameter_value(
f"Invalid comparator for string attribute: {comparator}"
)
if SearchExperimentsUtils.is_numeric_attribute(
type_, key, comparator
) and comparator not in ("=", "!=", "<", "<=", ">", ">="):
raise MlflowException.invalid_parameter_value(
f"Invalid comparator for numeric attribute: {comparator}"
)
attr = getattr(SqlExperiment, key)
attr_filter = SearchUtils.get_sql_comparison_func(comparator, dialect)(attr, value)
attribute_filters.append(attr_filter)
elif type_ == "tag":
if comparator not in ("=", "!=", "LIKE", "ILIKE"):
raise MlflowException.invalid_parameter_value(
f"Invalid comparator for tag: {comparator}"
)
val_filter = SearchUtils.get_sql_comparison_func(comparator, dialect)(
SqlExperimentTag.value, value
)
key_filter = SearchUtils.get_sql_comparison_func("=", dialect)(
SqlExperimentTag.key, key
)
non_attribute_filters.append(
select(SqlExperimentTag).filter(key_filter, val_filter).subquery()
)
else:
raise MlflowException.invalid_parameter_value(f"Invalid token type: {type_}")
return attribute_filters, non_attribute_filters
def _get_search_experiments_order_by_clauses(order_by):
order_by_clauses = []
for type_, key, ascending in map(
SearchExperimentsUtils.parse_order_by_for_search_experiments,
order_by or ["creation_time DESC", "experiment_id ASC"],
):
if type_ == "attribute":
order_by_clauses.append((getattr(SqlExperiment, key), ascending))
else:
raise MlflowException.invalid_parameter_value(f"Invalid order_by entity: {type_}")
# Add a tie-breaker
if not any(col == SqlExperiment.experiment_id for col, _ in order_by_clauses):
order_by_clauses.append((SqlExperiment.experiment_id, False))
return [col.asc() if ascending else col.desc() for col, ascending in order_by_clauses]
|
PypiClean
|
/pastawrap-0.1.0.tar.gz/pastawrap-0.1.0/README.md
|
pastaWRAP
=========
**pastaWRAP** is a Python wrapper for **R-based Awesome Toolkit for PASTA**,
more commonly known as [**ratPASTA**](https://github.com/ikodvanj/ratPASTA) - an
R package used for processing and visualising data from startle experiments and
grip strength measurements in rodents. Currently, pastaWRAP only supports the ratPASTA
functionality for startle experiments; wrapping of the griPASTA (grip strength test)
functionality is planned for a later date. The input data
for this package is created with a **PASTA** solution (**Platform for Acoustic STArtle**),
described in detail here:
*Virag, D., Homolak, J., Kodvanj, I., Babic Perhoc, A., Knezovic, A.,
Osmanovic Barilar, J., & Salkovic-Petrisic, M. (2020). Repurposing a
digital kitchen scale for neuroscience research: a complete hardware and
software cookbook for PASTA. BioRxiv, 2020.04.10.035766.
https://doi.org/10.1101/2020.04.10.035766*
Using the same platform for measuring grip strength in rodents is
described here:
*Homolak, J., Virag, D., Kodvanj, I., Matak, I., Babic Perhoc, A.,
Knezovic, A., Osmanovic Barilar, J., Salkovic-Petrisic, M. (2020).
griPASTA: A hacked kitchen scale for quantification of grip strength in
rodents. BioRxiv, 2020.07.23.217737.
https://doi.org/10.1101/2020.07.23.217737*
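A minimal sketch of how the underlying R package can be driven from Python via `rpy2`. The
function name used below is a placeholder, not pastaWRAP's or ratPASTA's documented API:

```python
# Hypothetical sketch: bridging to the R package ratPASTA through rpy2.
# "import_startle_data" is a placeholder name, not a documented ratPASTA function.
from rpy2.robjects.packages import importr

ratpasta = importr("ratPASTA")            # load the R package into the Python session
result = ratpasta.import_startle_data()   # placeholder call; consult the ratPASTA docs for real functions
```

pastaWRAP itself wraps such calls behind a Python interface; see the package documentation for the supported functions.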
|
PypiClean
|
/networking-mlnx-21.0.0.tar.gz/networking-mlnx-21.0.0/HACKING.rst
|
Neutron Style Commandments
==============================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Neutron Specific Commandments
------------------------------
- [N319] Validate that debug level logs are not translated
- [N320] Validate that LOG messages, except debug ones, have translations
- [N321] Validate that jsonutils module is used instead of json
- [N322] Detect common errors with assert_called_once_with
- [N323] Enforce namespace-less imports for oslo libraries
Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
All unittest classes must ultimately inherit from testtools.TestCase. In the
Neutron test suite, this should be done by inheriting from
neutron.tests.base.BaseTestCase.
All setUp and tearDown methods must upcall using the super() method.
tearDown methods should be avoided and addCleanup calls should be preferred.
Never manually create tempfiles. Always use the tempfile fixtures from
the fixture library to ensure that they are cleaned up.
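A minimal test skeleton following these commandments might look like this (a sketch only;
the feature under test is hypothetical)::

    import fixtures

    from neutron.tests import base


    class TestExampleFeature(base.BaseTestCase):
        """Sketch of a unit test honouring the guidelines above."""

        def setUp(self):
            # always upcall using super()
            super(TestExampleFeature, self).setUp()
            # prefer addCleanup() over tearDown()
            self.addCleanup(self._reset_state)
            # never create tempfiles manually; use the fixtures library instead
            self.temp_dir = self.useFixture(fixtures.TempDir()).path

        def _reset_state(self):
            pass

        def test_feature_creates_temp_dir(self):
            self.assertTrue(self.temp_dir)

Using ``addCleanup`` keeps the teardown logic next to the setup that requires it, and registered
cleanups run even if ``setUp`` fails partway through.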
|
PypiClean
|
/dsin100daysv32-6.0.1.tar.gz/dsin100daysv32-6.0.1/notebook/static/notebook/js/menubar.js
|
define([
'jquery',
'base/js/namespace',
'base/js/dialog',
'base/js/utils',
'base/js/i18n',
'./celltoolbar',
'./tour',
'moment',
], function($, IPython, dialog, utils, i18n, celltoolbar, tour, moment) {
"use strict";
var MenuBar = function (selector, options) {
/**
* Constructor
*
* A MenuBar Class to generate the menubar of Jupyter notebook
*
* Parameters:
* selector: string
* options: dictionary
* Dictionary of keyword arguments.
* notebook: Notebook instance
* contents: ContentManager instance
* events: $(Events) instance
* save_widget: SaveWidget instance
* quick_help: QuickHelp instance
* base_url : string
* notebook_path : string
* notebook_name : string
* config: ConfigSection instance
*/
options = options || {};
this.base_url = options.base_url || utils.get_body_data("baseUrl");
this.selector = selector;
this.notebook = options.notebook;
this.actions = this.notebook.keyboard_manager.actions;
this.contents = options.contents;
this.events = options.events;
this.save_widget = options.save_widget;
this.quick_help = options.quick_help;
this.actions = options.actions;
this.config = options.config;
try {
this.tour = new tour.Tour(this.notebook, this.events);
} catch (e) {
this.tour = undefined;
console.log("Failed to instantiate Notebook Tour", e);
}
if (this.selector !== undefined) {
this.element = $(selector);
this.style();
this.add_bundler_items();
this.bind_events();
}
};
// TODO: This has definitively nothing to do with style ...
MenuBar.prototype.style = function () {
var that = this;
this.element.find("li").click(function (event, ui) {
// The selected cell loses focus when the menu is entered, so we
// re-select it upon selection.
var i = that.notebook.get_selected_index();
that.notebook.select(i, false);
}
);
};
MenuBar.prototype.add_bundler_items = function() {
var that = this;
this.config.loaded.then(function() {
var bundlers = that.config.data.bundlerextensions;
if(bundlers) {
// Stable sort the keys to ensure menu items don't hop around
var ids = Object.keys(bundlers).sort()
ids.forEach(function(bundler_id) {
var bundler = bundlers[bundler_id];
var group = that.element.find('#'+bundler.group+'_menu')
// Validate menu item metadata
if(!group.length) {
console.warn('unknown group', bundler.group, 'for bundler ID', bundler_id, '; skipping');
return;
} else if(!bundler.label) {
console.warn('no label for bundler ID', bundler_id, '; skipping');
return;
}
// Insert menu item into correct group, click handler
group.parent().removeClass('hidden');
var $li = $('<li>')
.appendTo(group);
$('<a>')
.attr('href', '#')
.text(bundler.label)
.appendTo($li)
.on('click', that._bundle.bind(that, bundler_id))
.appendTo($li);
});
}
});
};
MenuBar.prototype._new_window = function(url) {
var w = window.open('', IPython._target);
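// save a dirty, writable notebook first so the opened URL reflects the latest content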
if (this.notebook.dirty && this.notebook.writable) {
this.notebook.save_notebook().then(function() {
w.location = url;
});
} else {
w.location = url;
}
};
MenuBar.prototype._bundle = function(bundler_id) {
// Read notebook path and base url here in case they change
var notebook_path = utils.encode_uri_components(this.notebook.notebook_path);
var url = utils.url_path_join(
this.base_url,
'bundle',
notebook_path
) + '?bundler=' + utils.encode_uri_components(bundler_id);
this._new_window(url);
};
MenuBar.prototype._nbconvert = function (format, download) {
download = download || false;
var notebook_path = utils.encode_uri_components(this.notebook.notebook_path);
var url = utils.url_path_join(
this.base_url,
'nbconvert',
format,
notebook_path
) + "?download=" + download.toString();
this._new_window(url);
};
MenuBar.prototype._size_header = function() {
/**
* Update header spacer size.
*/
console.warn('`_size_header` is deprecated and will be removed in future versions.'+
' Please trigger the `resize-header.Page` manually if you rely on it.');
this.events.trigger('resize-header.Page');
};
MenuBar.prototype.bind_events = function () {
/**
* File
*/
var that = this;
this.element.find('#open_notebook').click(function () {
var parent = utils.url_path_split(that.notebook.notebook_path)[0];
window.open(
utils.url_path_join(
that.base_url, 'tree',
utils.encode_uri_components(parent)
), IPython._target);
});
this.element.find('#copy_notebook').click(function () {
that.notebook.copy_notebook();
return false;
});
this.element.find('#save_notebook_as').click(function() {
that.notebook.save_notebook_as();
return false;
});
this.element.find('#print_preview').click(function () {
that._nbconvert('html', false);
});
this.element.find('#download_menu li').click(function (ev) {
that._nbconvert(ev.target.parentElement.getAttribute('id').substring(9), true);
});
this.events.on('trust_changed.Notebook', function (event, trusted) {
if (trusted) {
that.element.find('#trust_notebook')
.addClass("disabled").off('click')
.find("a").text(i18n.msg._("Trusted Notebook"));
} else {
that.element.find('#trust_notebook')
.removeClass("disabled").on('click', function () {
that.notebook.trust_notebook();
})
.find("a").text(i18n.msg._("Trust Notebook"));
}
});
// View
this._add_celltoolbar_list();
// Edit
this.element.find('#edit_nb_metadata').click(function () {
that.notebook.edit_metadata({
notebook: that.notebook,
keyboard_manager: that.notebook.keyboard_manager});
});
var id_actions_dict = {
'#trust_notebook' : 'trust-notebook',
'#rename_notebook' : 'rename-notebook',
'#find_and_replace' : 'find-and-replace',
'#save_checkpoint': 'save-notebook',
'#shutdown_kernel': 'confirm-shutdown-kernel',
'#restart_kernel': 'confirm-restart-kernel',
'#restart_clear_output': 'confirm-restart-kernel-and-clear-output',
'#restart_run_all': 'confirm-restart-kernel-and-run-all-cells',
'#close_and_halt': 'close-and-halt',
'#int_kernel': 'interrupt-kernel',
'#cut_cell': 'cut-cell',
'#copy_cell': 'copy-cell',
'#delete_cell': 'delete-cell',
'#undelete_cell': 'undo-cell-deletion',
'#split_cell': 'split-cell-at-cursor',
'#merge_cell_above': 'merge-cell-with-previous-cell',
'#merge_cell_below': 'merge-cell-with-next-cell',
'#move_cell_up': 'move-cell-up',
'#move_cell_down': 'move-cell-down',
'#toggle_header': 'toggle-header',
'#toggle_toolbar': 'toggle-toolbar',
'#toggle_line_numbers': 'toggle-all-line-numbers',
'#insert_cell_above': 'insert-cell-above',
'#insert_cell_below': 'insert-cell-below',
'#run_cell': 'run-cell',
'#run_cell_select_below': 'run-cell-and-select-next',
'#run_cell_insert_below': 'run-cell-and-insert-below',
'#run_all_cells': 'run-all-cells',
'#run_all_cells_above': 'run-all-cells-above',
'#run_all_cells_below': 'run-all-cells-below',
'#to_code': 'change-cell-to-code',
'#to_markdown': 'change-cell-to-markdown',
'#to_raw': 'change-cell-to-raw',
'#toggle_current_output': 'toggle-cell-output-collapsed',
'#toggle_current_output_scroll': 'toggle-cell-output-scrolled',
'#clear_current_output': 'clear-cell-output',
'#toggle_all_output': 'toggle-all-cells-output-collapsed',
'#toggle_all_output_scroll': 'toggle-all-cells-output-scrolled',
'#clear_all_output': 'clear-all-cells-output',
'#cut_cell_attachments': 'cut-cell-attachments',
'#copy_cell_attachments': 'copy-cell-attachments',
'#paste_cell_attachments': 'paste-cell-attachments',
'#insert_image': 'insert-image',
'#edit_keyboard_shortcuts' : 'edit-command-mode-keyboard-shortcuts',
};
for(var idx in id_actions_dict){
if (!id_actions_dict.hasOwnProperty(idx)){
continue;
}
var id_act = 'jupyter-notebook:'+id_actions_dict[idx];
if(!that.actions.exists(id_act)){
console.warn('actions', id_act, 'does not exist, still binding it in case it will be defined later...');
}
// Immediately-Invoked Function Expression because JS closures capture loop variables by reference.
(function(that, id_act, idx){
that.element.find(idx).click(function(event){
that.actions.call(id_act, event);
});
})(that, id_act, idx);
}
// Kernel
this.element.find('#reconnect_kernel').click(function () {
that.notebook.kernel.reconnect();
});
// Help
if (this.tour) {
this.element.find('#notebook_tour').click(function () {
that.tour.start();
});
} else {
this.element.find('#notebook_tour').addClass("disabled");
}
this.element.find('#keyboard_shortcuts').click(function () {
that.quick_help.show_keyboard_shortcuts();
});
this.update_restore_checkpoint(null);
this.events.on('checkpoints_listed.Notebook', function (event, data) {
that.update_restore_checkpoint(that.notebook.checkpoints);
});
this.events.on('checkpoint_created.Notebook', function (event, data) {
that.update_restore_checkpoint(that.notebook.checkpoints);
});
this.events.on('notebook_loaded.Notebook', function() {
var langinfo = that.notebook.metadata.language_info || {};
that.update_nbconvert_script(langinfo);
});
this.events.on('kernel_ready.Kernel', function(event, data) {
var langinfo = data.kernel.info_reply.language_info || {};
that.update_nbconvert_script(langinfo);
that.add_kernel_help_links(data.kernel.info_reply.help_links || []);
});
};
MenuBar.prototype._add_celltoolbar_list = function () {
var that = this;
var submenu = $("#menu-cell-toolbar-submenu");
function preset_added(event, data) {
var name = data.name;
submenu.append(
$("<li/>")
.attr('data-name', encodeURIComponent(name))
.append(
$("<a/>")
.attr('href', '#')
.text(name)
.click(function () {
if (name ==='None') {
celltoolbar.CellToolbar.global_hide();
delete that.notebook.metadata.celltoolbar;
} else {
celltoolbar.CellToolbar.global_show();
celltoolbar.CellToolbar.activate_preset(name, that.events);
that.notebook.metadata.celltoolbar = name;
}
that.notebook.focus_cell();
})
)
);
}
// Setup the existing presets
var presets = celltoolbar.CellToolbar.list_presets();
preset_added(null, {name: i18n.msg._("None")});
presets.map(function (name) {
preset_added(null, {name: name});
});
// Setup future preset registrations
this.events.on('preset_added.CellToolbar', preset_added);
// Handle unregistered presets
this.events.on('unregistered_preset.CellToolbar', function (event, data) {
submenu.find("li[data-name='" + encodeURIComponent(data.name) + "']").remove();
});
};
MenuBar.prototype.update_restore_checkpoint = function(checkpoints) {
var ul = this.element.find("#restore_checkpoint").find("ul");
ul.empty();
if (!checkpoints || checkpoints.length === 0) {
ul.append(
$("<li/>")
.addClass("disabled")
.append(
$("<a/>")
.text(i18n.msg._("No checkpoints"))
)
);
return;
}
var that = this;
checkpoints.map(function (checkpoint) {
var d = new Date(checkpoint.last_modified);
ul.append(
$("<li/>").append(
$("<a/>")
.attr("href", "#")
.text(moment(d).format("LLLL"))
.click(function () {
that.notebook.restore_checkpoint_dialog(checkpoint);
})
)
);
});
};
MenuBar.prototype.update_nbconvert_script = function(langinfo) {
/**
* Set the 'Download as foo' menu option for the relevant language.
*/
var el = this.element.find('#download_script');
// Set menu entry text to e.g. "Python (.py)"
var langname = (langinfo.name || 'Script');
langname = langname.charAt(0).toUpperCase()+langname.substr(1); // Capitalise
el.find('a').text(langname + ' ('+(langinfo.file_extension || 'txt')+')');
};
MenuBar.prototype.add_kernel_help_links = function(help_links) {
/** add links from kernel_info to the help menu */
var divider = $("#kernel-help-links");
if (divider.length === 0) {
// insert kernel help section above about link
var about = $("#notebook_about").parent();
divider = $("<li>")
.attr('id', "kernel-help-links")
.addClass('divider');
about.prev().before(divider);
}
// remove previous entries
while (!divider.next().hasClass('divider')) {
divider.next().remove();
}
if (help_links.length === 0) {
// no help links, remove the divider
divider.remove();
return;
}
var cursor = divider;
help_links.map(function (link) {
cursor.after($("<li>")
.append($("<a>")
.attr('target', '_blank')
.attr('title', i18n.msg._('Opens in a new window'))
.attr('href', requirejs.toUrl(link.url))
.append($("<i>")
.addClass("fa fa-external-link menu-icon pull-right")
)
.append($("<span>")
.text(link.text)
)
)
);
cursor = cursor.next();
});
};
return {'MenuBar': MenuBar};
});
|
PypiClean
|
/m_n_kappa-0.0.1-py3-none-any.whl/m_n_kappa/geometry.py
|
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass
from .general import (
print_sections,
str_start_end,
StrainPosition,
EffectiveWidths,
interpolation,
)
from .material import Material
from .section import Section
from .crosssection import Crosssection
from .log import log_init, logging, log_return
from functools import partial
logger = logging.getLogger(__name__)
logs_init = partial(log_init, logger=logger)
logs_return = partial(log_return, logger=logger)
"""
Geometries
##########
Basic geometries providing parameters like area, centroid, etc.
Currently available basic geometries
------------------------------------
- Rectangle
- Circle
- Trapezoid
Currently available composed geometries
---------------------------------------
- IProfile
- RebarLayer
- UPEProfile
"""
class ComposedGeometry:
"""
Geometry consisting of basic geometries
.. versionadded:: 0.1.0
Supported basic geometries must inherit :py:class:`Geometry`:
- :py:class:`~m_n_kappa.Rectangle`
- :py:class:`~m_n_kappa.Circle`
- :py:class:`~m_n_kappa.Trapezoid`
See Also
--------
IProfile : composed geometry consisting of several :py:class:`Rectangle` forming an ``I``
UPEProfile : composed geometry consisting of several :py:class:`Rectangle` forming a ``U``
RebarLayer : composed geometry consisting of several :py:class:`Circle`
Examples
--------
Building a :py:class:`~m_n_kappa.geometry.ComposedGeometry` is as easy as adding two basic geometries together:
>>> from m_n_kappa import Rectangle
>>> rectangle_1 = Rectangle(top_edge=0.0, bottom_edge = 10.0, width=10.0)
>>> rectangle_2 = Rectangle(top_edge=10.0, bottom_edge = 20.0, width=10.0)
>>> composed_geometry = rectangle_1 + rectangle_2
>>> composed_geometry
ComposedGeometry(geometries=[Rectangle(top_edge=0.00, bottom_edge=10.00, width=10.00, left_edge=-5.00, right_edge=5.00), Rectangle(top_edge=10.00, bottom_edge=20.00, width=10.00, left_edge=-5.00, right_edge=5.00)])
Adding another basic geometry is also easily done.
This applies also for adding one composed geometry to another.
>>> rectangle_3 = Rectangle(top_edge=20.0, bottom_edge = 30.0, width=10.0)
>>> composed_geometry += rectangle_3
>>> composed_geometry
ComposedGeometry(geometries=[Rectangle(top_edge=0.00, bottom_edge=10.00, width=10.00, left_edge=-5.00, right_edge=5.00), Rectangle(top_edge=10.00, bottom_edge=20.00, width=10.00, left_edge=-5.00, right_edge=5.00), Rectangle(top_edge=20.00, bottom_edge=30.00, width=10.00, left_edge=-5.00, right_edge=5.00)])
The composed geometry is also easily combined by adding a :py:class:`~m_n_kappa.Material`
like :py:class:`~m_n_kappa.Steel` merging to a :py:class:`~m_n_kappa.Crosssection`
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y = 300.0, f_u = 350.0, failure_strain=0.25)
>>> cross_section = composed_geometry + steel
>>> cross_section
Crosssection(sections=sections)
"""
def __init__(self):
self._geometries = []
def __add__(self, other):
return self._build(other)
def __radd__(self, other):
return self._build(other)
def __mul__(self, other):
return self._build(other)
def __repr__(self):
return f"ComposedGeometry(geometries={self.geometries})"
def _build(self, other):
if isinstance(other, Material):
sections = [
Section(geometry=geometry, material=other)
for geometry in self.geometries
]
logger.info("Create Crosssection by adding Material")
return Crosssection(sections)
elif isinstance(other, Geometry):
new_geometry = ComposedGeometry()
new_geometry._geometries = self.geometries
new_geometry._geometries.append(other)
logger.info("Add Geometry-instance")
return new_geometry
elif isinstance(other, ComposedGeometry):
new_geometry = ComposedGeometry()
new_geometry._geometries = self.geometries + other.geometries
logger.info("Add other ComposedGeometry")
return new_geometry
else:
raise TypeError(
f'unsupported operand type(s) for +: "{type(self)}" and "{type(other)}"'
)
@property
def geometries(self) -> list:
"""number of :py:class:`Geometry` instances"""
return self._geometries
class Geometry(ABC):
"""
basic geometry class
.. versionadded:: 0.1.0
the basic geometries must inherit from this class
"""
def __add__(self, other):
return self._build(other)
def __radd__(self, other):
return self._build(other)
def __mul__(self, other):
return self._build(other)
def _build(self, other):
"""builds depending on the input-type a :py:class:`ComposedGeometry` or a :py:class:`Section`"""
if isinstance(other, Geometry):
new_geometry = ComposedGeometry()
new_geometry._geometries = [self, other]
logger.info("Build ComposedGeometry by adding Geometry-Instance")
return new_geometry
elif isinstance(other, ComposedGeometry):
new_geometry = other
new_geometry._geometries.append(self)
logger.info("Build ComposedGeometry by adding ComposedGeometry-Instance")
return new_geometry
elif isinstance(other, Material):
logger.info("Build section by adding material")
return Section(geometry=self, material=other)
else:
raise TypeError(
f'unsupported operand type(s) for +: "{type(self)}" and "{type(other)}"'
)
@abstractmethod
def area(self) -> float:
...
@abstractmethod
def centroid(self) -> float:
...
# @abstractmethod
# def width(self):
# ...
@abstractmethod
def split(
self, at_points: list[StrainPosition], max_widths: EffectiveWidths = None
):
...
@property
@abstractmethod
def edges(self) -> list[float]:
"""vertical edges"""
...
@property
@abstractmethod
def sides(self) -> list[float]:
"""horizontal edges"""
...
def check_width(
width: float = None, left_edge: float = None, right_edge: float = None
) -> tuple:
"""
make sure all properties corresponding with the width of a :py:class:`~m_n_kappa.Rectangle` or
:py:class:`~m_n_kappa.Trapezoid` are fulfilled.
The corresponding properties are: width, left_edge, right_edge
.. versionadded: 0.1.0
Parameters
----------
width: float
width of the :py:class:`~m_n_kappa.Rectangle` or :py:class:`~m_n_kappa.Trapezoid` (Default: None)
left_edge: float
horizontal position (Y-Axis) of left edge of the :py:class:`~m_n_kappa.Rectangle` or
:py:class:`~m_n_kappa.Trapezoid` (Default: None)
right_edge: float = None
horizontal position (Y-Axis) of the right edge of the :py:class:`~m_n_kappa.Rectangle` or
:py:class:`~m_n_kappa.Trapezoid` (Default: None)
Returns
-------
width : float
width of the :py:class:`~m_n_kappa.Rectangle` or :py:class:`~m_n_kappa.Trapezoid` obtained from the given
information
left_edge : float
horizontal position (Y-Axis) of left edge of the :py:class:`~m_n_kappa.Rectangle` or
:py:class:`~m_n_kappa.Trapezoid` obtained from the given information
right_edge : float
horizontal position (Y-Axis) of right edge of the :py:class:`~m_n_kappa.Rectangle` or
:py:class:`~m_n_kappa.Trapezoid` obtained from the given information
Raises
------
ValueError
if all input-values are ``None``
ValueError
if all input-values are given, but do not meet each other: ``right_edge - left_edge != width``
"""
if left_edge is not None and right_edge is not None:
if left_edge > right_edge:
left_edge, right_edge = right_edge, left_edge
if width is not None and left_edge is None and right_edge is None:
left_edge = -0.5 * width
right_edge = 0.5 * width
elif width is None and left_edge is not None and right_edge is not None:
width = abs(left_edge - right_edge)
elif width is not None and left_edge is None and right_edge is not None:
left_edge = right_edge - width
elif width is not None and left_edge is not None and right_edge is None:
right_edge = left_edge + width
elif width is not None and left_edge is not None and right_edge is not None:
if abs(left_edge - right_edge) != width:
raise ValueError(
f"abs(left_edge - right_edge) = abs({left_edge} - {right_edge}) != width = {width}. "
f"Please check/adapt input-values."
)
else:
raise ValueError(
"At least two of arguments 'width', 'right_edge' and 'left_edge' must be given."
)
return width, left_edge, right_edge
class Rectangle(Geometry):
"""
Represents a rectangle
.. versionadded:: 0.1.0
"""
@logs_init
def __init__(
self,
top_edge: float,
bottom_edge: float,
width: float = None,
left_edge: float = None,
right_edge: float = None,
):
"""
At least two of the following arguments ``width``, ``right_edge`` and ``left_edge`` must be given.
If only argument ``width`` is given, ``left_edge = -0.5*width`` and ``right_edge = 0.5*width``.
Parameters
----------
top_edge : float
vertical position of top-edge of the rectangle :math:`z_\\mathrm{top}`
bottom_edge : float
vertical position of bottom-edge of the rectangle :math:`z_\\mathrm{bottom}`
width : float
width of the rectangle :math:`b` (Default: None)
left_edge : float
horizontal position of left-edge of the rectangle :math:`y_\\mathrm{left}` (Default: None).
right_edge : float
horizontal position of right-edge of the rectangle :math:`y_\\mathrm{right}` (Default: None)
.. figure:: ../images/geometry_rectangle-light.svg
:class: only-light
.. figure:: ../images/geometry_rectangle-dark.svg
:class: only-dark
Rectangle - dimensions
See Also
--------
Circle : creates a circular geometry object
Trapezoid : creates a trapezoidal geometry object
IProfile : creates a geometry object comprised of various :py:class:`~m_n_kappa.Rectangle`-objects forming an ``I``
UPEProfile : creates a geometry object comprised of various :py:class:`~m_n_kappa.Rectangle`-objects forming a ``U``
Examples
--------
A rectangle object is easily instantiated as follows.
>>> from m_n_kappa import Rectangle
>>> rectangle = Rectangle(top_edge=10, bottom_edge=20, width=10)
In case only ``width`` is passed as argument, the centerline of the rectangle is assumed to be at :math:`y = 0`.
In consequence ``left_edge = -0.5 * width`` and ``right_edge = 0.5 * width``.
>>> rectangle.left_edge, rectangle.right_edge
(-5.0, 5.0)
For building a :py:class:`~m_n_kappa.Section`, the ``rectangle`` only needs to be added to a material.
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y=355)
>>> section = rectangle + steel
>>> type(section)
<class 'm_n_kappa.section.Section'>
"""
self._top_edge = top_edge
self._bottom_edge = bottom_edge
self._width = width
self._left_edge = left_edge
self._right_edge = right_edge
self._check_input_values()
self._width, self._left_edge, self._right_edge = check_width(
self.width, self.left_edge, self.right_edge
)
def _check_input_values(self) -> None:
"""rearrange input-values to match the needed arrangement"""
if self.bottom_edge < self.top_edge:
self._top_edge, self._bottom_edge = self.bottom_edge, self.top_edge
logger.info(f"{self.__repr__()} switched values: top_edge and bottom_edge")
if (
self.left_edge is not None
and self.right_edge is not None
and self.right_edge < self.left_edge
):
self._left_edge, self._right_edge = self.right_edge, self.left_edge
logger.info(f"{self.__repr__()} switched values: left_edge and right_edge")
def __eq__(self, other):
return (
self.top_edge == other.top_edge
and self.bottom_edge == other.bottom_edge
and self.width == other.width
and self.left_edge == other.left_edge
and self.right_edge == other.right_edge
)
def __repr__(self) -> str:
return (
f"Rectangle("
f"top_edge={self.top_edge:.2f}, "
f"bottom_edge={self.bottom_edge:.2f}, "
f"width={self.width:.2f}, "
f"left_edge={self.left_edge:.2f}, "
f"right_edge={self.right_edge:.2f})"
)
@str_start_end
def __str__(self) -> str:
text = [
"Rectangle",
"=========",
"",
"Initialization",
"--------------",
self.__repr__(),
"",
"Properties",
"----------",
f"Area: {self.area:.2f}",
f"Centroid: {self.centroid:.2f}",
]
return print_sections(text)
@property
def top_edge(self):
"""vertical position (Z-Axis) of the top-edge of the rectangle :math:`z_\\mathrm{top}`"""
return self._top_edge
@property
def bottom_edge(self):
"""vertical position (Z-Axis) of the bottom-edge of the rectangle :math:`z_\\mathrm{bottom}`"""
return self._bottom_edge
@property
def right_edge(self) -> float:
"""horizontal position (Y-Axis) of the right-edge of the rectangle :math:`y_\\mathrm{right}`"""
return self._right_edge
@property
def left_edge(self) -> float:
"""horizontal position (Y-Axis) of the left-edge of the rectangle :math:`y_\\mathrm{left}`"""
return self._left_edge
@property
def edges(self) -> list[float]:
"""vertical positions (Z-Axis) top- and bottom-edge"""
return [self.top_edge, self.bottom_edge]
@property
def sides(self) -> list[float]:
"""horizontal positions (Y-Axis) of left- and right-edge"""
return [self.left_edge, self.right_edge]
@property
def height(self) -> float:
"""height of the rectangle"""
return abs(self.top_edge - self.bottom_edge)
@property
def width(self) -> float:
"""width of the rectangle"""
return self._width
@property
def area(self) -> float:
"""cross-sectional area of rectangle"""
return self.width * self.height
@property
def centroid(self) -> float:
"""centroid of the rectangle in vertical direction (Z-Axis)"""
return self.top_edge + 0.5 * self.height
@property
def width_slope(self) -> float:
"""slope of the width depending of vertical position :math:`z`"""
return 0.0
@property
def width_interception(self) -> float:
"""interception of the width"""
return self.width
def split(
self, at_points: list[StrainPosition], max_widths: EffectiveWidths = None
) -> list[Geometry]:
"""
splitting the rectangle horizontally in smaller rectangles
Parameters
----------
at_points : list[:py:class:`~m_n_kappa.StrainPosition`]
points where the rectangle is split into smaller rectangles
max_widths: :py:class:`~m_n_kappa.EffectiveWidths`
widths under consideration of bending or membrane loading
Returns
-------
list[Rectangle]
rectangles assembling to the original rectangle
"""
rectangles = []
at_points.sort(key=lambda x: x.position)
top_edge = StrainPosition(
at_points[0].strain, self.top_edge, at_points[0].material
)
for bottom_edge in at_points:
if self.top_edge < bottom_edge.position < self.bottom_edge:
if bottom_edge.strain == 0.0:
edge = top_edge
else:
edge = bottom_edge
left_edge, right_edge = self.get_horizontal_edges(edge, max_widths)
rectangles.append(
Rectangle(
top_edge.position,
bottom_edge.position,
left_edge=left_edge,
right_edge=right_edge,
)
)
top_edge = bottom_edge
if top_edge.strain == 0.0:
edge = StrainPosition(
at_points[-1].strain, self.bottom_edge, at_points[-1].material
)
else:
edge = top_edge
left_edge, right_edge = self.get_horizontal_edges(edge, max_widths)
rectangles.append(
Rectangle(
top_edge.position,
self.bottom_edge,
left_edge=left_edge,
right_edge=right_edge,
)
)
logger.debug(f"Split {self.__repr__()} into following rectangles: {rectangles}")
return rectangles
def get_horizontal_edges(
self, point: StrainPosition, max_widths: EffectiveWidths
) -> tuple:
"""
Get the horizontal edges of the rectangles considering the effective widths
as well as real dimensions of the rectangle
Parameters
----------
point : StrainPosition
position and strain at this position as well as the corresponding material.
Needed to differentiate between rectangle under tension and under compression.
max_widths : EffectiveWidths
effective widths to consider
Returns
-------
tuple[float, float]
left and right edge considering the effective widths as well as real dimensions of the rectangle
"""
if max_widths is not None:
effective_width = max_widths.width(point.material, point.strain)
right_edge = min(effective_width, self.right_edge)
left_edge = max(-effective_width, self.left_edge)
else:
right_edge, left_edge = self.right_edge, self.left_edge
return left_edge, right_edge
class Circle(Geometry):
@logs_init
def __init__(self, diameter: float, centroid_y: float, centroid_z: float):
"""
Circle
.. versionadded:: 0.1.0
applies only to circles that are small compared to the other dimensions of the cross-section
Parameters
----------
diameter: float
diameter of the circle :math:`d`
centroid_y: float
position of centroid of the circle in horizontal direction :math:`y_\\mathrm{centroid}`
centroid_z: float
position of centroid of the circle in vertical direction :math:`z_\\mathrm{centroid}`
.. figure:: ../images/geometry_circle-dark.svg
:class: only-dark
:alt: circle dimensions
.. figure:: ../images/geometry_circle-light.svg
:class: only-light
:alt: circle dimensions
Circle - dimensions
See Also
--------
Rectangle : creates a rectangular geometry object
Trapezoid : creates a trapezoidal geometry object
RebarLayer : creates a number of circular objects representing a layer of reinforcement-bars
Examples
--------
A circle object is easily instantiated as follows.
>>> from m_n_kappa import Circle
>>> circle = Circle(diameter=10, centroid_y=10, centroid_z=-10)
For building a :py:class:`~m_n_kappa.Section`, the ``circle`` only needs to be added to a material.
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y=355)
>>> section = circle + steel
>>> type(section)
<class 'm_n_kappa.section.Section'>
"""
self._diameter = diameter
self._centroid_y = centroid_y
self._centroid_z = centroid_z
def __eq__(self, other) -> bool:
return (
self.diameter == other.diameter
and self.centroid_y == other.centroid_y
and self.centroid_z == other.centroid_z
)
def __repr__(self) -> str:
return (
f"Circle("
f"diameter={self.diameter}, "
f"centroid_y={self._centroid_y}, "
f"centroid_z={self._centroid_z})"
)
@str_start_end
def __str__(self) -> str:
text = [
"Circle",
"======",
"",
"Initialization",
"--------------",
self.__repr__(),
"",
"Properties",
"----------",
f"Area: {self.area:.2f}",
f"Centroid: ({self.centroid_y:.2f}, {self.centroid_y:.2f})",
]
return "\n".join(text)
@property
def diameter(self) -> float:
"""diameter of the circle :math:`d`"""
return self._diameter
@property
def centroid(self) -> float:
return self._centroid_z
@property
def centroid_y(self):
"""position of centroid of the circle in horizontal direction :math:`y_\\mathrm{centroid}`"""
return self._centroid_y
@property
def centroid_z(self):
"""position of centroid of the circle in vertical direction :math:`z_\\mathrm{centroid}`"""
return self._centroid_z
@property
def area(self):
"""area of the circle"""
return math.pi * (0.5 * self.diameter) ** 2.0
@property
def edges(self) -> list[float]:
"""edges in vertical direction"""
return [self.centroid_z]
@property
def sides(self) -> list[float]:
"""edges in horizontal direction"""
return [self.centroid_y]
@property
def top_edge(self):
"""vertical position (Z-Axis) of the top-edge of the circle"""
return self.centroid_z - 0.5 * self.diameter
@property
def bottom_edge(self):
"""vertical position (Z-Axis) of the bottom-edge of the circle"""
return self.centroid_z + 0.5 * self.diameter
@property
def height(self) -> float:
return 0.0
def split(
self, at_points: list[StrainPosition], max_widths: EffectiveWidths = None
) -> list:
"""check if circle is within effective width
In case ``max_widths=None``, this circle is returned.
Otherwise, this circle is only returned if its position lies within the effective width given by ``max_widths``.
See Also
--------
Rectangle.split : method that splits :py:class:`Rectangle` considering strain-points and effective widths in
a number of smaller rectangle
Trapezoid.split : method that splits :py:class:`Trapezoid` considering strain-points and effective widths in
a number of smaller trapezoids
Parameters
----------
at_points : :py:class:`~m_n_kappa.general.StrainPosition`
has no effect on splitting process
max_widths : :py:class:`~m_n_kappa.general.EffectiveWidths`
criteria to return the circle for further computations (Default: None)
Returns
-------
list[Circle]
the circle itself, wrapped in a list
"""
if max_widths is None:
return [self]
elif self._is_in_effective_width(at_points, max_widths):
return [self] # [Circle(self.diameter, self.centroid)]
else:
return []
def _is_in_effective_width(
self, points: list[StrainPosition], max_widths: EffectiveWidths
) -> bool:
"""checks if centroid of circle is within the effective width"""
for point_index in range(len(points) - 1):  # iterate over consecutive pairs of points
two_points = [
points[point_index].position,
points[point_index + 1].position,
]
if min(two_points) <= self.centroid_y <= max(two_points):
width = max_widths.width(
points[0].material,
sum([points[point_index].strain, points[point_index + 1].strain]),
)
logger.info(f"{self.__repr__()} is within effective width")
return -width <= self.centroid_z <= width
else:
logger.info(f"{self.__repr__()} is NOT within effective width")
return False
class Trapezoid(Geometry):
"""
Represents a trapezoidal
.. versionadded:: 0.1.0
The trapezoid has vertical edges parallel to each other and
two horizontal edges that are *not parallel* to each other.
"""
@logs_init
def __init__(
self,
top_edge: float,
bottom_edge: float,
top_width: float,
top_left_edge: float = None,
top_right_edge: float = None,
bottom_width: float = None,
bottom_left_edge: float = None,
bottom_right_edge: float = None,
):
"""
Parameters
----------
top_edge : float
top-edge of the trapezoid :math:`z_\\mathrm{top}`
bottom_edge : float
bottom-edge of the trapezoid :math:`z_\\mathrm{bottom}`
top_width : float
width of the trapezoid at the top-edge :math:`b_\\mathrm{top}` (Default: None).
top_left_edge : float
left-edge position of the trapezoid at the top-edge :math:`y_\\mathrm{top-left}` (Default: None).
top_right_edge : float
right-edge position of the trapezoid at the top-edge :math:`y_\\mathrm{top-right}` (Default: None).
bottom_width : float
width of the trapezoid at the bottom-edge :math:`b_\\mathrm{bottom}` (Default: None).
bottom_left_edge : float
left-edge position of the trapezoid at the bottom-edge :math:`y_\\mathrm{bottom-left}` (Default: None).
bottom_right_edge : float
right-edge position of the trapezoid at the bottom-edge :math:`y_\\mathrm{bottom-right}` (Default: None).
.. figure:: ../images/geometry_trapezoid-light.svg
:class: only-light
.. figure:: ../images/geometry_trapezoid-dark.svg
:class: only-dark
Trapezoid - dimensions
See Also
--------
Rectangle : creates a rectangular geometry object
Circle : creates a circular geometry object
Examples
--------
A trapezoid object is easily instantiated as follows.
>>> from m_n_kappa import Trapezoid
>>> trapezoid = Trapezoid(top_edge=0, bottom_edge=10, top_width=10, bottom_width=20)
In case only ``top_width`` or ``bottom_width`` is passed as argument, the centerline of the specific
width of the trapezoid is assumed to be at :math:`y = 0`.
In consequence ``top_left_edge = -0.5 * top_width`` and ``top_right_edge = 0.5 * top_width``.
The same applies to the bottom-edge.
For building a :py:class:`~m_n_kappa.Section`, the ``trapezoid`` only needs to be added to a material.
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y=355)
>>> section = trapezoid + steel
>>> type(section)
<class 'm_n_kappa.section.Section'>
"""
self._top_edge = top_edge
self._bottom_edge = bottom_edge
self._top_width = top_width
self._top_left_edge = top_left_edge
self._top_right_edge = top_right_edge
self._bottom_width = bottom_width
self._bottom_left_edge = bottom_left_edge
self._bottom_right_edge = bottom_right_edge
self._check_input_values()
self._top_width, self._top_left_edge, self._top_right_edge = check_width(
self.top_width, self.top_left_edge, self.top_right_edge
)
(
self._bottom_width,
self._bottom_left_edge,
self._bottom_right_edge,
) = check_width(
self.bottom_width, self.bottom_left_edge, self.bottom_right_edge
)
def _check_input_values(self) -> None:
"""check input-value to match the needed arrangement"""
if self.bottom_edge < self.top_edge:
self._top_edge, self._bottom_edge = self.bottom_edge, self.top_edge
logger.info(f"{self.__repr__()} switched: top-edge and bottom-edge")
if (
self.top_left_edge is not None
and self.top_right_edge is not None
and self.top_right_edge < self.top_left_edge
):
self._top_left_edge, self._top_right_edge = (
self.top_right_edge,
self.top_left_edge,
)
logger.info(f"{self.__repr__()} switched: top-left-edge and top-right-edge")
if (
self.bottom_left_edge is not None
and self.bottom_right_edge is not None
and self.bottom_right_edge < self.bottom_left_edge
):
self._bottom_left_edge, self._bottom_right_edge = (
self.bottom_right_edge,
self.bottom_left_edge,
)
logger.info(
f"{self.__repr__()} switched: bottom-left-edge and bottom-right-edge"
)
@property
def top_left_edge(self) -> float:
"""left-edge position of the trapezoid at the top-edge :math:`y_\\mathrm{top-left}`"""
return self._top_left_edge
@property
def bottom_left_edge(self) -> float:
"""left-edge position of the trapezoid at the bottom-edge :math:`y_\\mathrm{bottom-left}`"""
return self._bottom_left_edge
@property
def top_right_edge(self) -> float:
"""right-edge position of the trapezoid at the top-edge :math:`y_\\mathrm{top-right}`"""
return self._top_right_edge
@property
def bottom_right_edge(self) -> float:
"""right-edge position of the trapezoid at the bottom-edge :math:`y_\\mathrm{bottom-right}`"""
return self._bottom_right_edge
def __repr__(self) -> str:
return (
f"Trapezoid(top_edge={self.top_edge}, "
f"bottom_edge={self.bottom_edge}, "
f"top_width={self.top_width}, "
f"bottom_width={self.bottom_width})"
)
@str_start_end
def __str__(self) -> str:
text = [
"Trapezoid",
"=========",
"",
"Initialization",
"--------------",
self.__repr__(),
"",
"Properties",
"----------",
"Area: {:.2f}".format(self.area),
"Centroid: {:.2f}".format(self.centroid),
]
return print_sections(text)
def __eq__(self, other) -> bool:
return (
self.top_edge == other.top_edge
and self.bottom_edge == other.bottom_edge
and self.top_width == other.top_width
and self.bottom_width == other.bottom_width
)
@property
def top_edge(self) -> float:
"""vertical position of top-edge of the trapezoid :math:`z_\\mathrm{top}`"""
return self._top_edge
@property
def bottom_edge(self) -> float:
"""vertical position of bottom-edge of the trapezoid :math:`z_\\mathrm{bottom}`"""
return self._bottom_edge
@property
def edges(self) -> list:
"""edges of trapezoid in vertical direction"""
return [self.top_edge, self.bottom_edge]
@property
def sides(self) -> list[float]:
return [
self.top_left_edge,
self.top_right_edge,
self.bottom_left_edge,
self.bottom_right_edge,
]
@property
def top_width(self) -> float:
"""width of trapezoid on top-edge :math:`b_\\mathrm{top}`"""
return self._top_width
@property
def bottom_width(self) -> float:
"""width of trapezoid on bottom-edge :math:`b_\\mathrm{top}`"""
return self._bottom_width
@property
def height(self) -> float:
"""height of the trapezoid :math:`h`"""
return abs(self.top_edge - self.bottom_edge)
@property
def area(self) -> float:
"""cross-sectional area of the trapezoid"""
return 0.5 * self.height * (self.top_width + self.bottom_width)
@property
def centroid(self) -> float:
"""vertical position of the centroid of the trapezoid"""
return (
self.top_edge
+ self.height
- (
1.0
/ 3.0
* self.height
* (
(self.bottom_width + 2.0 * self.top_width)
/ (self.bottom_width + self.top_width)
)
)
)
def width(self, vertical_position: float) -> float:
"""width of trapezoid at given vertical position
in case ``vertical_position`` is outside of the trapezoid, zero is returned
Parameters
----------
vertical_position : float
vertical position the width of the trapezoid shall be given
Returns
-------
float
width of trapezoid at given vertical position
"""
if self.top_edge <= vertical_position <= self.bottom_edge:
return interpolation(
position_value=vertical_position,
first_pair=[self.top_edge, self.top_width],
second_pair=[self.bottom_edge, self.bottom_width],
)
else:
return 0.0
def left_edge(self, vertical_position: float) -> float:
"""left edge at the given vertical position
in case ``vertical_position`` is outside of the trapezoid, then zero is returned
Parameters
----------
vertical_position : float
vertical position the width of the trapezoid shall be given
Returns
-------
float
horizontal position of the left-edge of trapezoid at given vertical position
"""
if self.top_edge <= vertical_position <= self.bottom_edge:
return interpolation(
position_value=vertical_position,
first_pair=[self.top_edge, self.top_left_edge],
second_pair=[self.bottom_edge, self.bottom_left_edge],
)
else:
return 0.0
def right_edge(self, vertical_position: float) -> float:
"""right edge at the given vertical position
in case ``vertical_position`` is outside of the trapezoid, zero is returned
Parameters
----------
vertical_position : float
vertical position the width of the trapezoid shall be given
Returns
-------
float
horizontal position of the right-edge of trapezoid at given vertical position
"""
if self.top_edge <= vertical_position <= self.bottom_edge:
return interpolation(
position_value=vertical_position,
first_pair=[self.top_edge, self.top_right_edge],
second_pair=[self.bottom_edge, self.bottom_right_edge],
)
else:
return 0.0
@logs_return
def split(
self, at_points: list[StrainPosition], max_widths: EffectiveWidths = None
) -> list[Geometry]:
"""
split the trapezoid at the given points
Parameters
----------
at_points : list[:py:class:`~m_n_kappa.StrainPosition`]
    points where the trapezoid is split into smaller trapezoids
max_widths : :py:class:`~m_n_kappa.EffectiveWidths`
    effective widths; currently not considered when splitting the trapezoid (Default: None)
Returns
-------
list[Trapezoid]
trapezoid split at the material-points into sub-trapezoids
"""
top_edge = self.top_edge
trapezoids = []
at_points.sort(key=lambda x: x.position)
for point in at_points:
if self.top_edge < point.position < self.bottom_edge:
trapezoids.append(
Trapezoid(
top_edge=top_edge,
bottom_edge=point.position,
top_width=self.width(top_edge),
top_left_edge=self.left_edge(top_edge),
bottom_width=self.width(point.position),
bottom_left_edge=self.left_edge(point.position),
)
)
top_edge = point.position
trapezoids.append(
Trapezoid(
top_edge=top_edge,
bottom_edge=self.bottom_edge,
top_width=self.width(top_edge),
top_left_edge=self.left_edge(top_edge),
bottom_width=self.bottom_width,
bottom_left_edge=self.bottom_left_edge,
)
)
return trapezoids
@property
def width_slope(self) -> float:
"""change of the width of the trapezoid depending on vertical position"""
return (self.bottom_width - self.top_width) / self.height
@property
def width_interception(self) -> float:
"""theoretical width of the trapezoid at coordinate-origin"""
return self.top_width - self.top_edge * self.width_slope
@dataclass
class IProfile(ComposedGeometry):
"""
I-Profile composed of :py:class:`~m_n_kappa.Rectangle` instances
.. versionadded:: 0.1.0
Inherits from :py:class:`~m_n_kappa.geometry.ComposedGeometry` and makes a variety of geometries
possible as the following figure shows.
In case the desired profile has no bottom-flange choose ``has_bottom_flange=False``.
    Similarly, if no top-flange is needed, use ``has_top_flange=False``.
    In case top- and bottom-flange are equal, only passing values to ``t_fo`` and ``b_fo`` is needed.
Parameters
----------
top_edge: float
top_edge of the I-profile
t_w: float
web-thickness of the I-profile
h_w: float
web-height of the I-profile
t_fo: float = None
thickness of the top-flange
b_fo: float = None
width of the top-flange
t_fu: float = None
thickness of the bottom-flange
b_fu: float = None
width of the bottom-flange
has_top_flange: bool = True
decide if I-profile has a top-flange (Default: True).
If False: no top-flange is created
has_bottom_flange: bool = True
decide if I-profile has a bottom-flange (Default: True)
        If False: no bottom-flange is created
centroid_y: float
horizontal position of the centroid of the I-profile (Default: 0)
.. figure:: ../images/geometry_i-profile-light.svg
:class: only-light
.. figure:: ../images/geometry_i-profile-dark.svg
:class: only-dark
I-Profile - dimensions - a) asymmetric I-profile, b) without bottom-flange, c) without top-flange,
d) without top- and bottom-flange
See Also
--------
Rectangle : basic geometry object
UPEProfile : composed geometry consisting of several :py:class:`Rectangle` forming an ``U``
RebarLayer : composed geometry consisting of several :py:class:`Circle`
Example
-------
An HEB 200-profile may be composed as follows
>>> from m_n_kappa import IProfile
>>> heb200_geometry = IProfile(top_edge=0., t_fo=15.5, b_fo=200.0, t_w=9.5, h_w=169.0)
>>> heb200_geometry
IProfile(top_edge=0.0, t_w=9.5, h_w=169.0, t_fo=15.5, b_fo=200.0, t_fu=15.5, b_fu=200.0, has_top_flange=True, \
has_bottom_flange=True, centroid_y=0.0, geometries=[\
Rectangle(top_edge=0.00, bottom_edge=15.50, width=200.00, left_edge=-100.00, right_edge=100.00), \
Rectangle(top_edge=15.50, bottom_edge=184.50, width=9.50, left_edge=-4.75, right_edge=4.75), \
Rectangle(top_edge=184.50, bottom_edge=200.00, width=200.00, left_edge=-100.00, right_edge=100.00)])
As :py:class:`~m_n_kappa.geometry.IProfile` inherits from :py:class:`~m_n_kappa.geometry.ComposedGeometry`
    it also inherits its functionality of transforming into a :py:class:`m_n_kappa.Crosssection`
by adding :py:class:`m_n_kappa.Material`.
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y = 300.0, f_u = 350.0, failure_strain=0.25)
>>> cross_section = heb200_geometry + steel
>>> cross_section
Crosssection(sections=sections)
"""
top_edge: float
t_w: float
h_w: float
t_fo: float = None
b_fo: float = None
t_fu: float = None
b_fu: float = None
has_top_flange: bool = True
has_bottom_flange: bool = True
centroid_y: float = 0.0
geometries: list = None
def __post_init__(self):
self.geometries = []
if self.has_bottom_flange and self.t_fu is None and self.t_fo is not None:
self.t_fu = self.t_fo
if self.has_bottom_flange and self.b_fu is None and self.b_fo is not None:
self.b_fu = self.b_fo
self._add_top_flange()
self._add_web()
self._add_bottom_flange()
logger.info(f"Created {self.__repr__()}")
def _add_top_flange(self):
"""add top-flange to geometry if wanted and geometric values are given"""
if self.has_top_flange and self.t_fo is not None and self.b_fo is not None:
self.geometries.append(
Rectangle(
top_edge=self.top_edge,
bottom_edge=self.top_edge + self.t_fo,
width=self.b_fo,
left_edge=self.centroid_y - 0.5 * self.b_fo,
)
)
def _add_web(self) -> None:
"""add web to the geometry of the profile"""
self.geometries.append(
Rectangle(
top_edge=self.top_edge + self.t_fo,
bottom_edge=self.top_edge + self.t_fo + self.h_w,
width=self.t_w,
left_edge=self.centroid_y - 0.5 * self.t_w,
)
)
def _add_bottom_flange(self) -> None:
"""add bottom-flange to geometry if wanted and geometric values are given"""
if self.has_bottom_flange and self.t_fu is not None and self.b_fu is not None:
self.geometries.append(
Rectangle(
top_edge=self.top_edge + self.t_fo + self.h_w,
bottom_edge=self.top_edge + self.t_fo + self.h_w + self.t_fu,
width=self.b_fu,
left_edge=self.centroid_y - 0.5 * self.b_fu,
)
)
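# If ``t_fu`` / ``b_fu`` are omitted, ``__post_init__`` copies them from the
# top-flange values, so a doubly-symmetric profile only needs ``t_fo`` and
# ``b_fo``. A small sketch of that behaviour, reusing the HEB 200 dimensions
# from the docstring example above:
def _iprofile_flange_default_sketch() -> None:
    profile = IProfile(top_edge=0.0, t_fo=15.5, b_fo=200.0, t_w=9.5, h_w=169.0)
    assert profile.t_fu == profile.t_fo
    assert profile.b_fu == profile.b_fo
    # top-flange, web and bottom-flange rectangles
    assert len(profile.geometries) == 3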
@dataclass
class RebarLayer(ComposedGeometry):
"""
rebar-layer composed of several reinforcement-bars of :py:class:`Circle`
.. versionadded:: 0.1.0
Parameters
----------
rebar_diameter: float
diameter of rebars in the layer
centroid_z: float
position of the centroid in vertical direction
rebar_number: int = None
number of rebars within the layer (Alternative to argument ``width``)
width: float = None
        width of the rebar-layer :math:`b` (together with ``rebar_horizontal_distance`` an alternative to argument ``rebar_number``).
        In case ``rebar_number`` is defined, missing values of ``width`` and ``rebar_horizontal_distance`` are derived from the given arguments.
rebar_horizontal_distance : float
distance between the rebars in horizontal direction :math:`s_\\mathrm{y}` (Default: None).
See description in argument ``width``.
left_edge : float
horizontal position of the centroid of the left-most circle :math:`y_\\mathrm{left}` (Default: None).
right_edge : float
horizontal position of the centroid of the right-most circle :math:`y_\\mathrm{right}` (Default: None)
.. figure:: ../images/geometry_rebar-layer-light.svg
:class: only-light
.. figure:: ../images/geometry_rebar-layer-dark.svg
:class: only-dark
Rebar-layer - dimensions
See Also
--------
Circle : basic geometric class
IProfile : composed geometry consisting of several :py:class:`Rectangle` forming an ``I``
UPEProfile : composed geometry consisting of several :py:class:`Rectangle` forming an ``U``
Example
-------
The following example creates 10 circles with diameter 12 and a vertical position of 10
>>> from m_n_kappa import RebarLayer
>>> rebar_layer = RebarLayer(rebar_diameter=12.0, centroid_z=10.0, rebar_number=10, rebar_horizontal_distance=100)
Adding a material to ``rebar_layer`` creates a cross-section.
>>> from m_n_kappa import Reinforcement
>>> rebar_steel = Reinforcement(f_s=500, f_su=550, failure_strain=0.25)
>>> rebars = rebar_layer + rebar_steel
>>> rebars
Crosssection(sections=sections)
"""
rebar_diameter: float
centroid_z: float
rebar_number: int = None
width: float = None
left_edge: float = None
right_edge: float = None
rebar_horizontal_distance: float = None
geometries: list = None
def __post_init__(self):
if self.rebar_number is None and (
self.width is None or self.rebar_horizontal_distance is None
):
raise ValueError(
"Neither argument 'rebar_number' or 'width' and "
"'rebar_horizontal_distance' must be defined"
)
if self.rebar_number is None:
self.rebar_number = int(self.width / self.rebar_horizontal_distance)
if self.width is None:
self.width = float(self.rebar_number - 1) * self.rebar_horizontal_distance
if self.rebar_horizontal_distance is None:
self.rebar_horizontal_distance = float(self.width / self.rebar_number)
self.width, self.left_edge, self.right_edge = check_width(
self.width, self.left_edge, self.right_edge
)
self.geometries = []
for index in range(self.rebar_number):
centroid_y = index * self.rebar_horizontal_distance + self.left_edge
self.geometries.append(
Circle(
diameter=self.rebar_diameter,
centroid_y=centroid_y,
centroid_z=self.centroid_z,
)
)
logger.info(f"Created {self.__repr__()}")
@dataclass
class UPEProfile(ComposedGeometry):
"""
    UPE-Profile composed of :py:class:`~m_n_kappa.Rectangle` instances forming a reversed ``U``
.. versionadded:: 0.1.0
Parameters
----------
top_edge : float
top-edge of the rectangle :math:`z_\\mathrm{top}`
t_f: float
flange-thickness :math:`t_\\mathrm{f}`
b_f: float
flange-width :math:`b_\\mathrm{f}`
t_w: float
web-thickness :math:`t_\\mathrm{w}`
h_w: float = None
web-height :math:`h_\\mathrm{w}` (Default: None).
        Alternative argument ``h`` must be given, otherwise an exception will be raised.
h: float = None
overall height of the steel-profile :math:`h` (Default: None).
Alternative arguments ``h_w`` and ``t_f`` must be given.
    centroid_y: float
horizontal position of the centroid of the UPE-profile :math:`y_\\mathrm{centroid}` (Default: 0.0)
.. figure:: ../images/geometry_upe-light.svg
:class: only-light
.. figure:: ../images/geometry_upe-dark.svg
:class: only-dark
UPE-Profile - dimensions
See Also
--------
IProfile : composed geometry consisting of several :py:class:`Rectangle` forming an ``I``
RebarLayer : composed geometry consisting of several :py:class:`Circle`
Example
-------
The following example creates :py:class:`~m_n_kappa.geometry.Rectangle` instances forming an UPE 200 profile
>>> from m_n_kappa import UPEProfile
>>> upe200_geometry = UPEProfile(top_edge=10, t_f=5.2, b_f=76, t_w=9.0, h=200)
>>> upe200_geometry
UPEProfile(top_edge=10, t_f=5.2, b_f=76, t_w=9.0, h_w=189.6, h=200, centroid_y=0.0, \
geometries=[\
Rectangle(top_edge=10.00, bottom_edge=86.00, width=5.20, left_edge=-100.00, right_edge=-94.80), \
Rectangle(top_edge=10.00, bottom_edge=19.00, width=189.60, left_edge=-94.80, right_edge=94.80), \
Rectangle(top_edge=10.00, bottom_edge=86.00, width=5.20, left_edge=94.80, right_edge=100.00)])
As :py:class:`~m_n_kappa.geometry.UPEProfile` inherits from :py:class:`~m_n_kappa.geometry.ComposedGeometry`
    it also inherits its functionality of transforming into a :py:class:`m_n_kappa.Crosssection`
by adding :py:class:`m_n_kappa.Material`.
>>> from m_n_kappa import Steel
>>> steel = Steel(f_y = 300.0, f_u = 350.0, failure_strain=0.25)
>>> cross_section = upe200_geometry + steel
>>> cross_section
Crosssection(sections=sections)
"""
top_edge: float
t_f: float
b_f: float
t_w: float
h_w: float = None
h: float = None
centroid_y: float = 0.0
geometries: list = None
def __post_init__(self):
if self.h_w is None and self.h is None:
raise ValueError(
'neither argument "h_w" (web-height) or "h" (profile-height) must be defined'
)
if self.h_w is None:
self.h_w = self.h - 2.0 * self.t_f
self.geometries = [
self._left_flange(),
self._web(),
self._right_flange(),
]
logger.info(f"Created {self.__repr__()}")
def _left_flange(self) -> Rectangle:
return Rectangle(
top_edge=self.top_edge,
bottom_edge=self.top_edge + self.b_f,
width=self.t_f,
left_edge=self.centroid_y - 0.5 * self.h_w - self.t_f,
)
def _web(self) -> Rectangle:
return Rectangle(
top_edge=self.top_edge,
bottom_edge=self.top_edge + self.t_w,
width=self.h_w,
left_edge=self.centroid_y - 0.5 * self.h_w,
)
def _right_flange(self) -> Rectangle:
return Rectangle(
top_edge=self.top_edge,
bottom_edge=self.top_edge + self.b_f,
width=self.t_f,
left_edge=self.centroid_y + 0.5 * self.h_w,
)
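# ``h_w`` (web-height) and ``h`` (overall height) are alternative arguments; if
# only ``h`` is given, ``__post_init__`` derives ``h_w = h - 2 * t_f``. A small
# sketch using the UPE 200 dimensions from the docstring example above:
def _upe_profile_height_sketch() -> None:
    upe = UPEProfile(top_edge=10.0, t_f=5.2, b_f=76.0, t_w=9.0, h=200.0)
    assert abs(upe.h_w - (200.0 - 2.0 * 5.2)) < 1e-9
    # the two flanges and the web
    assert len(upe.geometries) == 3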
|
PypiClean
|
/tensorflow_datasets-4.9.2-py3-none-any.whl/tensorflow_datasets/public_api.py
|
from tensorflow_datasets import core
from tensorflow_datasets import typing
from tensorflow_datasets.core import beam_utils as beam
from tensorflow_datasets.core import dataset_builders
from tensorflow_datasets.core import decode
from tensorflow_datasets.core import deprecated
from tensorflow_datasets.core import download
from tensorflow_datasets.core import features
from tensorflow_datasets.core import folder_dataset
from tensorflow_datasets.core import transform # pylint: disable=unused-import
from tensorflow_datasets.core import visualization
from tensorflow_datasets.core.as_dataframe import as_dataframe
from tensorflow_datasets.core.dataset_utils import as_numpy
from tensorflow_datasets.core.download import GenerateMode
from tensorflow_datasets.core.folder_dataset import ImageFolder
from tensorflow_datasets.core.folder_dataset import TranslateFolder
from tensorflow_datasets.core.load import builder
from tensorflow_datasets.core.load import builder_cls
from tensorflow_datasets.core.load import data_source
from tensorflow_datasets.core.load import dataset_collection
from tensorflow_datasets.core.load import list_builders
from tensorflow_datasets.core.load import list_dataset_collections
from tensorflow_datasets.core.load import load
from tensorflow_datasets.core.read_only_builder import builder_from_directories
from tensorflow_datasets.core.read_only_builder import builder_from_directory
from tensorflow_datasets.core.splits import Split
from tensorflow_datasets.core.subsplits_utils import even_splits
from tensorflow_datasets.core.subsplits_utils import split_for_jax_process
from tensorflow_datasets.core.utils.benchmark import benchmark
from tensorflow_datasets.core.utils.gcs_utils import is_dataset_on_gcs
from tensorflow_datasets.core.utils.lazy_imports_utils import lazy_imports
from tensorflow_datasets.core.utils.read_config import ReadConfig
from tensorflow_datasets.core.utils.tqdm_utils import disable_progress_bar
from tensorflow_datasets.core.utils.tqdm_utils import display_progress_bar
from tensorflow_datasets.core.utils.tqdm_utils import enable_progress_bar
from tensorflow_datasets.core.visualization import show_examples
from tensorflow_datasets.core.visualization import show_statistics
from tensorflow_datasets.version import __version__
deprecated = core.utils.docs.deprecated(deprecated)
with lazy_imports():
from tensorflow_datasets import testing # pylint: disable=g-import-not-at-top
del lazy_imports
__all__ = [
"as_dataframe",
"as_numpy",
"beam",
"benchmark",
"builder",
"builder_cls",
"builder_from_directory",
"builder_from_directories",
"core",
"data_source",
"dataset_builders",
"dataset_collection",
"decode",
"deprecated",
"disable_progress_bar",
"display_progress_bar",
"download",
"enable_progress_bar",
"even_splits",
"features",
"folder_dataset",
"GenerateMode",
"ImageFolder",
"is_dataset_on_gcs",
"list_builders",
"list_dataset_collections",
"load",
"ReadConfig",
"Split",
"split_for_jax_process",
"show_examples",
"show_statistics",
"testing",
"TranslateFolder",
"typing",
"visualization",
"__version__",
]
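# The names listed above form the stable ``tfds`` surface. A typical usage
# pattern looks like the sketch below (the dataset name and split are only
# examples, and ``load`` downloads the data on first use):
#
#   import tensorflow_datasets as tfds
#
#   ds = tfds.load("mnist", split="train", as_supervised=True)
#   for image, label in tfds.as_numpy(ds.take(1)):
#       print(image.shape, label)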
|
PypiClean
|
/reedwolf.rules-0.0.1-py3-none-any.whl/reedwolf/rules/expressions.py
|
from __future__ import annotations
import operator
from enum import Enum
from functools import partial
from dataclasses import dataclass
from typing import (
List,
Optional,
Union,
Any,
)
from .exceptions import (
RuleSetupValueError,
RuleSetupError,
RuleSetupNameError,
RuleError,
RuleInternalError,
RuleSetupNameNotFoundError,
)
from .utils import (
composite_functions,
UNDEFINED,
)
from .namespaces import RubberObjectBase, GlobalNS, Namespace, ThisNS, UtilsNS
# ------------------------------------------------------------
class VExpStatusEnum(str, Enum):
INITIALIZED = "INIT"
OK = "OK"
ERR_NOT_FOUND = "ERR_NOT_FOUND"
ERR_TO_IMPLEMENT = "ERR_TO_IMPLEMENT"
# ------------------------------------------------------------
class Operation:
def __init__(self, op: str, first: Any, second: Optional[Any] = None):
self.op, self.first, self.second = op, first, second
self.op_function = self.OPCODE_TO_FUNCTION.get(self.op, None)
if self.op_function is None:
raise RuleSetupValueError(owner=self, msg="Invalid operation code, {self.op} not one of: {', '.join(self.OP_TO_CODE.keys())}")
self._status : VExpStatusEnum = VExpStatusEnum.INITIALIZED
self._all_ok : Optional[bool] = None
# no operator, needs custom logic
def apply_and(self, first, second): return bool(first) and bool(second)
def apply_or (self, first, second): return bool(first) or bool(second)
# https://florian-dahlitz.de/articles/introduction-to-pythons-operator-module
# https://docs.python.org/3/library/operator.html#mapping-operators-to-functions
OPCODE_TO_FUNCTION = {
"==" : operator.eq
, "!=" : operator.ne
, ">" : operator.gt
, ">=" : operator.ge
, "<" : operator.lt
, "<=" : operator.le
, "+" : operator.add
, "-" : operator.sub
, "*" : operator.mul
, "/" : operator.truediv
, "//" : operator.floordiv
, "in" : operator.contains
, "not" : operator.not_ # orig: ~
# no operator, needs custom logic
, "and" : apply_and # orig: &
, "or" : apply_or # orig: |
}
def Setup(self, heap: "VariableHeap", owner: Any):
# parent:"Variable"
assert self._status==VExpStatusEnum.INITIALIZED, self
if isinstance(self.first, (ValueExpression, Operation)):
self.first.Setup(heap, owner=owner, parent=None)
if self.second!=None and isinstance(self.second, (ValueExpression, Operation)):
self.second.Setup(heap, owner=owner, parent=None)
self._status=VExpStatusEnum.OK
    def apply(self, heap):
        first = self.first.Read(heap)
        if self.second is not None:
            # binary operator
            second = self.second.Read(heap)
            try:
                res = self.op_function(first, second)
            except Exception as ex:
                raise RuleSetupError(owner=heap, item=self, msg=f"Apply {self.first} {self.op} {self.second} => {first} {self.op} {second} raised error: {ex}")
        else:
            # unary operator
            try:
                res = self.op_function(first)
            except Exception as ex:
                raise RuleSetupError(owner=heap, item=self, msg=f"Apply {self.op} {self.first} => {self.op} {first} raised error: {ex}")
        return res
def __str__(self):
if self.second:
return f"({self.first} {self.op} {self.second})"
else:
return f"({self.op} {self.first})"
def __repr__(self):
return f"Op{self}"
class ValueExpression(RubberObjectBase):
# NOTE: each item in this list should be implemented as attribute or method in this class
# "GetVariable",
RESERVED_ATTR_NAMES = {"Path", "Read", "Setup", "GetNamespace",
"_var_name", "_node", "_namespace", "_name", "_func_args", "_is_top", "_read_functions", "_status"}
RESERVED_FUNCTION_NAMES = ("Value",)
# "First", "Second",
def __init__(
self,
node: Union[str, Operation],
namespace: Namespace,
Path: Optional[List[ValueExpression]] = None,
):
self._status : VExpStatusEnum = VExpStatusEnum.INITIALIZED
self._namespace = namespace
if not isinstance(self._namespace, Namespace):
raise RuleSetupValueError(owner=self, msg=f"Namespace parameter '{self._namespace}' needs to be instance of Namespace inherited class.")
self._node = node
if isinstance(self._node, str):
if self._node in self.RESERVED_ATTR_NAMES:
raise RuleSetupValueError(owner=self, msg=f"Value expression's attribute '{self._node}' is a reserved name, choose another.")
else:
if not isinstance(self._node, Operation):
raise RuleSetupValueError(owner=self, msg=f"Value expression's attribute '{self._node}' needs to be string or Operation, got: {type(self._node)}")
self._node = node
self._is_top = Path is None
self._name = str(self._node)
self.Path = [] if self._is_top else Path[:]
self.Path.append(self)
self._func_args = None
self._read_functions = UNDEFINED
self._var_name = UNDEFINED
self._reserved_function = self._name in self.RESERVED_FUNCTION_NAMES
def GetNamespace(self) -> Namespace:
return self._namespace
# NOTE: replaced with VariableHeap.get_var_by_vexp(vexp ...)
# def GetVariable(self, heap:'VariableHeap', strict=True) -> 'Variable':
# if self._var_name==UNDEFINED:
# if strict:
# raise RuleSetupNameError(owner=self, msg=f"Variable not processed/Setup yet")
# return UNDEFINED
# return heap.get_var(self._namespace, self._var_name)
def Setup(self, heap:"VariableHeap", owner:"Component", parent:"Variable") -> Optional['Variable']:
"""
owner used just for reference count.
"""
# , copy_to_heap:Optional[CopyToHeap]=None
# TODO: create single function - which is composed of functions
# see: https://florian-dahlitz.de/articles/introduction-to-pythons-operator-module
# callable = operator.attrgetter("last_name") -> .last_name
# callable = operator.itemgetter(1) -> [1]
# callable = operator.methodcaller("run", "foo", bar=1) -> .run("foot", bar=1)
# and this:
# functools.partial(function, x=1, y=2)
# https://www.geeksforgeeks.org/function-composition-in-python/
from .variables import Variable
if self._status!=VExpStatusEnum.INITIALIZED:
raise RuleInternalError(owner=self, msg=f"Setup() already called (status={self._status}).")
if self._read_functions!=UNDEFINED:
raise RuleSetupError(owner=self, msg=f"Setup() already called (found _read_functions).")
_read_functions = []
current_variable = None
last_parent = parent
var_name = None
all_ok = True
bit_length = len(self.Path)
# if "status" in str(self.Path): import pdb;pdb.set_trace()
for bnr, bit in enumerate(self.Path, 1):
is_last = (bnr==bit_length)
assert bit._namespace==self._namespace
# TODO: if self._func_args:
# operator.attrgetter("last_name")
if isinstance(bit._node, Operation):
operation = bit._node
# one level deeper
operation.Setup(heap=heap, owner=owner)
_read_functions.append(operation.apply)
else:
# ----------------------------------------
# Check if Path goes to correct variable
# ----------------------------------------
var_name = bit._node
try:
last_parent = (current_variable
if current_variable is not None
else parent)
# when copy_to_heap defined:
# read from copy_to_heap.heap_bind_from and store in both heaps in the
# same namespace (usually ModelsNS)
# heap_read_from = copy_to_heap.heap_bind_from if copy_to_heap else heap
heap_read_from = heap
current_variable = heap_read_from.getset_attribute_var(
namespace=self._namespace,
var_name=var_name,
# owner=owner,
parent_var=last_parent)
# if is_last and copy_to_heap:
# current_variable.add_bound_var(BoundVar(heap.name, copy_to_heap.var_name))
# heap.add(current_variable, alt_var_name=copy_to_heap.var_name)
except NotImplementedError as ex:
self._status = VExpStatusEnum.ERR_TO_IMPLEMENT
all_ok = False
break
# except (RuleError) as ex:
except (RuleSetupNameNotFoundError) as ex:
self._status = VExpStatusEnum.ERR_NOT_FOUND
all_ok = False
# current_variable = heap.get(namespace=bit._namespace, var_name=var_name, owner=owner, parent=current_variable)
print(f"== TODO: RuleSetupError - {self} -> Heap error {bit}: {ex}")
# raise RuleSetupError(owner=self, msg=f"Heap {heap!r} attribute {var_name} not found")
break
if not isinstance(current_variable, Variable):
raise RuleInternalError(owner=self, msg=f"Type of found object is not Variable, got: {type(current_variable)}.")
# can be Component/DataVar or can be managed Model dataclass Field - when .denied is not appliable
if hasattr(current_variable, "denied") and current_variable.denied:
raise RuleSetupValueError(owner=self, msg=f"Variable '{var_name}' (owner={owner.name}) references '{current_variable.name}' is not allowed in ValueExpression due: {current_variable.deny_reason}.")
# print(f"OK: {self} -> {bit}")
if bit._func_args is not None:
args, kwargs = bit._func_args
# getter = operator.attrgetter(var_name)
# def func_call(obj):
# return getter(obj)(*args, **kwargs)
# -> .<var_name>(*args, **kwargs)
func_call = operator.methodcaller(var_name, *args, **kwargs)
_read_functions.append(func_call)
# raise NotImplementedError(f"Call to functions {bit} in {self} not implemented yet!")
else:
getter = operator.attrgetter(var_name)
_read_functions.append(getter)
variable = None
# if "select_id_of_default_device" in repr(self):
# import pdb;pdb.set_trace()
if all_ok:
self._status = VExpStatusEnum.OK
self._all_ok = True
self._read_functions = _read_functions
variable = current_variable
if not variable:
if self._namespace not in (GlobalNS, ThisNS, UtilsNS):
raise RuleSetupValueError(owner=self, msg=f"Variable not found.")
# self._all_ok = False?
self._var_name = None
else:
# self._all_ok = False?
variable.add_reference(owner.name)
self._var_name = variable.name
else:
self._all_ok = False
self._var_name = None
self._read_functions = None
return variable
    def Read(self, heap:'VariablesHeap', model_name:Optional[str]):
        if "_read_functions" not in dir(self):
            raise RuleInternalError(owner=self, msg="Setup not done.")
        val = UNDEFINED
        # TODO: if self._var_name
        for func in self._read_functions:
            if val is UNDEFINED:
                val = func(heap)
            else:
                val = func(val)
        return val
# def __getitem__(self, ind):
# # list [0] or dict ["test"]
# return ValueExpression(
# Path=self.Path
# + "."
# + str(ind)
# )
def __getattr__(self, aname):
# if aname.startswith("_"):
# raise RuleSetupNameError(owner=self, msg=f"VariableExpression name {aname} starts with _ what is reserved, choose another name.")
if aname in self.RESERVED_ATTR_NAMES: # , "%r -> %s" % (self._node, aname):
raise RuleSetupNameError(owner=self, msg=f"ValueExpression's attribute '{aname}' is reserved name, choose another.")
if aname.startswith("__") and aname.endswith("__"):
raise AttributeError(f"Attribute '{type(self)}' object has no attribute '{aname}'")
return ValueExpression(node=aname, namespace=self._namespace, Path=self.Path)
def __call__(self, *args, **kwargs):
assert self._func_args is None
self._func_args = [args, kwargs]
return self
def as_str(self):
out = ""
if self._is_top:
out += f"{self._namespace}."
out += f"{self._node}"
if self._func_args:
out += "("
args, kwargs = self._func_args
if args:
out += ", ".join([f"{a}" for a in args])
if kwargs:
out += ", ".join([f"{k}={v}" for k, v in kwargs.items()])
out += ")"
return out
def __str__(self):
return ".".join([ve.as_str() for ve in self.Path])
def __repr__(self):
return f"VExpr({self})"
# --------------------------------
# ------- Reserved methods -------
# --------------------------------
# NOTE: each method should be listed in RESERVED_ATTR_NAMES
# --------------------------------
# ------- Terminate methods ------
# return plain python objects
# ----------------------------------
# ------- Internal methods ---------
# https://realpython.com/python-bitwise-operators/#custom-data-types
def __eq__(self, other): return ValueExpression(Operation("==", self, other), namespace=GlobalNS)
def __ne__(self, other): return ValueExpression(Operation("!=", self, other), namespace=GlobalNS)
def __gt__(self, other): return ValueExpression(Operation(">", self, other), namespace=GlobalNS)
def __ge__(self, other): return ValueExpression(Operation(">=", self, other), namespace=GlobalNS)
def __lt__(self, other): return ValueExpression(Operation("<", self, other), namespace=GlobalNS)
def __le__(self, other): return ValueExpression(Operation("<=", self, other), namespace=GlobalNS)
def __add__(self, other): return ValueExpression(Operation("+", self, other), namespace=GlobalNS)
def __sub__(self, other): return ValueExpression(Operation("-", self, other), namespace=GlobalNS)
def __mul__(self, other): return ValueExpression(Operation("*", self, other), namespace=GlobalNS)
def __truediv__(self, other): return ValueExpression(Operation("/", self, other), namespace=GlobalNS)
def __floordiv__(self, other): return ValueExpression(Operation("//", self, other), namespace=GlobalNS)
def __contains__(self, other): return ValueExpression(Operation("in", self, other), namespace=GlobalNS)
def __invert__(self): return ValueExpression(Operation("not", self), namespace=GlobalNS) # ~
def __and__(self, other): return ValueExpression(Operation("and", self, other), namespace=GlobalNS) # &
def __or__(self, other): return ValueExpression(Operation("or", self, other), namespace=GlobalNS) # |
# __abs__ - abs()
# __xor__ ==> ^
# <<, >>
# ** __pow__(self, object) Exponentiation
# Matrix Multiplication a @ b matmul(a, b)
# Positive + a pos(a)
# Slice Assignment seq[i:j] = values setitem(seq, slice(i, j), values)
# Slice Deletion del seq[i:j] delitem(seq, slice(i, j))
# Slicing seq[i:j] getitem(seq, slice(i, j))
# String Formatting s % obj mod(s, obj)
# % __mod__(self, object) Modulus
# Truth Test obj truth(obj)
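# The comparison and logical dunder methods above do not evaluate anything:
# they build nested ``Operation`` trees wrapped in new ``ValueExpression``
# instances, which are only resolved later via ``Setup()`` / ``Read()``.
# A small sketch (the namespace instance is only illustrative):
def _value_expression_operator_sketch(namespace: Namespace) -> None:
    age = ValueExpression("age", namespace)
    adult = (age >= 18) & (age < 65)
    assert isinstance(adult, ValueExpression)
    assert isinstance(adult._node, Operation)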
|
PypiClean
|
/smart_home_tng-2023.1.3.tar.gz/smart_home_tng-2023.1.3/smart_home_tng/frontend/frontend_es5/7650cba1.js
|
(self.webpackChunkhome_assistant_frontend=self.webpackChunkhome_assistant_frontend||[]).push([[67794],{67794:function(t,e,r){var n,o,i,a;function u(t){return u="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t},u(t)}t=r.nmd(t),window,a=function(){return function(t){var e={};function r(n){if(e[n])return e[n].exports;var o=e[n]={i:n,l:!1,exports:{}};return t[n].call(o.exports,o,o.exports,r),o.l=!0,o.exports}return r.m=t,r.c=e,r.d=function(t,e,n){r.o(t,e)||Object.defineProperty(t,e,{enumerable:!0,get:n})},r.r=function(t){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(t,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(t,"__esModule",{value:!0})},r.t=function(t,e){if(1&e&&(t=r(t)),8&e)return t;if(4&e&&"object"===u(t)&&t&&t.__esModule)return t;var n=Object.create(null);if(r.r(n),Object.defineProperty(n,"default",{enumerable:!0,value:t}),2&e&&"string"!=typeof t)for(var o in t)r.d(n,o,function(e){return t[e]}.bind(null,o));return n},r.n=function(t){var e=t&&t.__esModule?function(){return t.default}:function(){return t};return r.d(e,"a",e),e},r.o=function(t,e){return Object.prototype.hasOwnProperty.call(t,e)},r.p="",r(r.s=10)}([function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.assignDeep=e.mapValues=void 0,e.mapValues=function(t,e){var r={};for(var n in t)if(t.hasOwnProperty(n)){var o=t[n];r[n]=e(o)}return r},e.assignDeep=function t(e){for(var r=[],n=1;n<arguments.length;n++)r[n-1]=arguments[n];return r.forEach((function(r){if(r)for(var n in r)if(r.hasOwnProperty(n)){var o=r[n];Array.isArray(o)?e[n]=o.slice(0):"object"===u(o)?(e[n]||(e[n]={}),t(e[n],o)):e[n]=o}})),e}},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=r(7),i=n(r(8)),a=r(0),u=function(){function t(e,r){this._src=e,this.opts=a.assignDeep({},t.DefaultOpts,r)}return t.use=function(t){this._pipeline=t},t.from=function(t){return new i.default(t)},Object.defineProperty(t.prototype,"result",{get:function(){return this._result},enumerable:!1,configurable:!0}),t.prototype._process=function(e,r){this.opts.quantizer,e.scaleDown(this.opts);var n=o.buildProcessOptions(this.opts,r);return t._pipeline.process(e.getImageData(),n)},t.prototype.palette=function(){return this.swatches()},t.prototype.swatches=function(){throw new Error("Method deprecated. 
Use `Vibrant.result.palettes[name]` instead")},t.prototype.getPalette=function(){var t=this,e=arguments[0],r=arguments[1],n="string"==typeof e?e:"default",o="string"==typeof e?r:e,i=new this.opts.ImageClass;return i.load(this._src).then((function(e){return t._process(e,{generators:[n]})})).then((function(e){return t._result=e,e.palettes[n]})).then((function(t){return i.remove(),o&&o(void 0,t),t})).catch((function(t){return i.remove(),o&&o(t),Promise.reject(t)}))},t.prototype.getPalettes=function(){var t=this,e=arguments[0],r=arguments[1],n=Array.isArray(e)?e:["*"],o=Array.isArray(e)?r:e,i=new this.opts.ImageClass;return i.load(this._src).then((function(e){return t._process(e,{generators:n})})).then((function(e){return t._result=e,e.palettes})).then((function(t){return i.remove(),o&&o(void 0,t),t})).catch((function(t){return i.remove(),o&&o(t),Promise.reject(t)}))},t.DefaultOpts={colorCount:64,quality:5,filters:[]},t}();e.default=u},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.applyFilters=e.ImageBase=void 0;var n=function(){function t(){}return t.prototype.scaleDown=function(t){var e=this.getWidth(),r=this.getHeight(),n=1;if(t.maxDimension>0){var o=Math.max(e,r);o>t.maxDimension&&(n=t.maxDimension/o)}else n=1/t.quality;n<1&&this.resize(e*n,r*n,n)},t}();e.ImageBase=n,e.applyFilters=function(t,e){if(e.length>0)for(var r=t.data,n=r.length/4,o=void 0,i=void 0,a=void 0,u=void 0,s=void 0,f=0;f<n;f++){i=r[0+(o=4*f)],a=r[o+1],u=r[o+2],s=r[o+3];for(var l=0;l<e.length;l++)if(!e[l](i,a,u,s)){r[o+3]=0;break}}return t}},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.Swatch=void 0;var n=r(4),o=function(){function t(t,e){this._rgb=t,this._population=e}return t.applyFilters=function(t,e){return e.length>0?t.filter((function(t){for(var r=t.r,n=t.g,o=t.b,i=0;i<e.length;i++)if(!e[i](r,n,o,255))return!1;return!0})):t},t.clone=function(e){return new t(e._rgb,e._population)},Object.defineProperty(t.prototype,"r",{get:function(){return this._rgb[0]},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"g",{get:function(){return this._rgb[1]},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"b",{get:function(){return this._rgb[2]},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"rgb",{get:function(){return this._rgb},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"hsl",{get:function(){if(!this._hsl){var t=this._rgb,e=t[0],r=t[1],o=t[2];this._hsl=n.rgbToHsl(e,r,o)}return this._hsl},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"hex",{get:function(){if(!this._hex){var t=this._rgb,e=t[0],r=t[1],o=t[2];this._hex=n.rgbToHex(e,r,o)}return this._hex},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"population",{get:function(){return this._population},enumerable:!1,configurable:!0}),t.prototype.toJSON=function(){return{rgb:this.rgb,population:this.population}},t.prototype.getRgb=function(){return this._rgb},t.prototype.getHsl=function(){return this.hsl},t.prototype.getPopulation=function(){return this._population},t.prototype.getHex=function(){return this.hex},t.prototype.getYiq=function(){if(!this._yiq){var t=this._rgb;this._yiq=(299*t[0]+587*t[1]+114*t[2])/1e3}return this._yiq},Object.defineProperty(t.prototype,"titleTextColor",{get:function(){return 
this._titleTextColor&&(this._titleTextColor=this.getYiq()<200?"#fff":"#000"),this._titleTextColor},enumerable:!1,configurable:!0}),Object.defineProperty(t.prototype,"bodyTextColor",{get:function(){return this._bodyTextColor&&(this._bodyTextColor=this.getYiq()<150?"#fff":"#000"),this._bodyTextColor},enumerable:!1,configurable:!0}),t.prototype.getTitleTextColor=function(){return this.titleTextColor},t.prototype.getBodyTextColor=function(){return this.bodyTextColor},t}();e.Swatch=o},function(t,e,r){"use strict";function n(t){var e=/^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(t);if(!e)throw new RangeError("'"+t+"' is not a valid hex color");return[e[1],e[2],e[3]].map((function(t){return parseInt(t,16)}))}function o(t,e,r){return e/=255,r/=255,t=(t/=255)>.04045?Math.pow((t+.005)/1.055,2.4):t/12.92,e=e>.04045?Math.pow((e+.005)/1.055,2.4):e/12.92,r=r>.04045?Math.pow((r+.005)/1.055,2.4):r/12.92,[.4124*(t*=100)+.3576*(e*=100)+.1805*(r*=100),.2126*t+.7152*e+.0722*r,.0193*t+.1192*e+.9505*r]}function i(t,e,r){return e/=100,r/=108.883,t=(t/=95.047)>.008856?Math.pow(t,1/3):7.787*t+16/116,[116*(e=e>.008856?Math.pow(e,1/3):7.787*e+16/116)-16,500*(t-e),200*(e-(r=r>.008856?Math.pow(r,1/3):7.787*r+16/116))]}function a(t,e,r){var n=o(t,e,r);return i(n[0],n[1],n[2])}function u(t,e){var r=t[0],n=t[1],o=t[2],i=e[0],a=e[1],u=e[2],s=r-i,f=n-a,l=o-u,c=Math.sqrt(n*n+o*o),h=i-r,p=Math.sqrt(a*a+u*u)-c,g=Math.sqrt(s*s+f*f+l*l),d=Math.sqrt(g)>Math.sqrt(Math.abs(h))+Math.sqrt(Math.abs(p))?Math.sqrt(g*g-h*h-p*p):0;return h/=1,p/=1*(1+.045*c),d/=1*(1+.015*c),Math.sqrt(h*h+p*p+d*d)}function s(t,e){return u(a.apply(void 0,t),a.apply(void 0,e))}Object.defineProperty(e,"__esModule",{value:!0}),e.getColorDiffStatus=e.hexDiff=e.rgbDiff=e.deltaE94=e.rgbToCIELab=e.xyzToCIELab=e.rgbToXyz=e.hslToRgb=e.rgbToHsl=e.rgbToHex=e.hexToRgb=e.DELTAE94_DIFF_STATUS=void 0,e.DELTAE94_DIFF_STATUS={NA:0,PERFECT:1,CLOSE:2,GOOD:10,SIMILAR:50},e.hexToRgb=n,e.rgbToHex=function(t,e,r){return"#"+((1<<24)+(t<<16)+(e<<8)+r).toString(16).slice(1,7)},e.rgbToHsl=function(t,e,r){t/=255,e/=255,r/=255;var n=Math.max(t,e,r),o=Math.min(t,e,r),i=0,a=0,u=(n+o)/2;if(n!==o){var s=n-o;switch(a=u>.5?s/(2-n-o):s/(n+o),n){case t:i=(e-r)/s+(e<r?6:0);break;case e:i=(r-t)/s+2;break;case r:i=(t-e)/s+4}i/=6}return[i,a,u]},e.hslToRgb=function(t,e,r){var n,o,i;function a(t,e,r){return r<0&&(r+=1),r>1&&(r-=1),r<1/6?t+6*(e-t)*r:r<.5?e:r<2/3?t+(e-t)*(2/3-r)*6:t}if(0===e)n=o=i=r;else{var u=r<.5?r*(1+e):r+e-r*e,s=2*r-u;n=a(s,u,t+1/3),o=a(s,u,t),i=a(s,u,t-1/3)}return[255*n,255*o,255*i]},e.rgbToXyz=o,e.xyzToCIELab=i,e.rgbToCIELab=a,e.deltaE94=u,e.rgbDiff=s,e.hexDiff=function(t,e){return s(n(t),n(e))},e.getColorDiffStatus=function(t){return t<e.DELTAE94_DIFF_STATUS.NA?"N/A":t<=e.DELTAE94_DIFF_STATUS.PERFECT?"Perfect":t<=e.DELTAE94_DIFF_STATUS.CLOSE?"Close":t<=e.DELTAE94_DIFF_STATUS.GOOD?"Good":t<e.DELTAE94_DIFF_STATUS.SIMILAR?"Similar":"Wrong"}},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}},o=n(r(6)),i=n(r(9));o.default.DefaultOpts.ImageClass=i.default,t.exports=o.default},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=n(r(1));o.default.DefaultOpts.quantizer="mmcq",o.default.DefaultOpts.generators=["default"],o.default.DefaultOpts.filters=["default"],e.default=o.default},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.buildProcessOptions=void 0;var 
n=r(0);e.buildProcessOptions=function(t,e){var r=t.colorCount,o=t.quantizer,i=t.generators,a=t.filters,u={colorCount:r},s="string"==typeof o?{name:o,options:{}}:o;return s.options=n.assignDeep({},u,s.options),n.assignDeep({},{quantizer:s,generators:i,filters:a},e)}},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=n(r(1)),i=r(0),a=function(){function t(t,e){void 0===e&&(e={}),this._src=t,this._opts=i.assignDeep({},o.default.DefaultOpts,e)}return t.prototype.maxColorCount=function(t){return this._opts.colorCount=t,this},t.prototype.maxDimension=function(t){return this._opts.maxDimension=t,this},t.prototype.addFilter=function(t){return this._opts.filters?this._opts.filters.push(t):this._opts.filters=[t],this},t.prototype.removeFilter=function(t){if(this._opts.filters){var e=this._opts.filters.indexOf(t);e>0&&this._opts.filters.splice(e)}return this},t.prototype.clearFilters=function(){return this._opts.filters=[],this},t.prototype.quality=function(t){return this._opts.quality=t,this},t.prototype.useImageClass=function(t){return this._opts.ImageClass=t,this},t.prototype.useGenerator=function(t,e){return this._opts.generators||(this._opts.generators=[]),this._opts.generators.push(e?{name:t,options:e}:t),this},t.prototype.useQuantizer=function(t,e){return this._opts.quantizer=e?{name:t,options:e}:t,this},t.prototype.build=function(){return new o.default(this._src,this._opts)},t.prototype.getPalette=function(t){return this.build().getPalette(t)},t.prototype.getSwatches=function(t){return this.build().getPalette(t)},t}();e.default=a},function(t,e,r){"use strict";var n,o=this&&this.__extends||(n=function(t,e){return n=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var r in e)e.hasOwnProperty(r)&&(t[r]=e[r])},n(t,e)},function(t,e){function r(){this.constructor=t}n(t,e),t.prototype=null===e?Object.create(e):(r.prototype=e.prototype,new r)});Object.defineProperty(e,"__esModule",{value:!0});var i=function(t){function e(){return null!==t&&t.apply(this,arguments)||this}return o(e,t),e.prototype._initCanvas=function(){var t=this.image,e=this._canvas=document.createElement("canvas"),r=e.getContext("2d");if(!r)throw new ReferenceError("Failed to create canvas context");this._context=r,e.className="@vibrant/canvas",e.style.display="none",this._width=e.width=t.width,this._height=e.height=t.height,r.drawImage(t,0,0),document.body.appendChild(e)},e.prototype.load=function(t){var e,r,n,o,i,a,u,s=this;if("string"==typeof t)e=document.createElement("img"),r=t,(u=new URL(r,location.href)).protocol===location.protocol&&u.host===location.host&&u.port===location.port||(n=window.location.href,o=r,i=new URL(n),a=new URL(o),i.protocol===a.protocol&&i.hostname===a.hostname&&i.port===a.port)||(e.crossOrigin="anonymous"),e.src=r;else{if(!(t instanceof HTMLImageElement))return Promise.reject(new Error("Cannot load buffer as an image in browser"));e=t,r=t.src}return this.image=e,new Promise((function(t,n){var o=function(){s._initCanvas(),t(s)};e.complete?o():(e.onload=o,e.onerror=function(t){return n(new Error("Fail to load image: "+r))})}))},e.prototype.clear=function(){this._context.clearRect(0,0,this._width,this._height)},e.prototype.update=function(t){this._context.putImageData(t,0,0)},e.prototype.getWidth=function(){return this._width},e.prototype.getHeight=function(){return this._height},e.prototype.resize=function(t,e,r){var 
n=this,o=n._canvas,i=n._context,a=n.image;this._width=o.width=t,this._height=o.height=e,i.scale(r,r),i.drawImage(a,0,0)},e.prototype.getPixelCount=function(){return this._width*this._height},e.prototype.getImageData=function(){return this._context.getImageData(0,0,this._width,this._height)},e.prototype.remove=function(){this._canvas&&this._canvas.parentNode&&this._canvas.parentNode.removeChild(this._canvas)},e}(r(2).ImageBase);e.default=i},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}},o=r(5),i=n(r(11));o.use(i.default),t.exports=o},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=n(r(12)),i=n(r(16)),a=(new(r(17).BasicPipeline)).filter.register("default",(function(t,e,r,n){return n>=125&&!(t>250&&e>250&&r>250)})).quantizer.register("mmcq",o.default).generator.register("default",i.default);e.default=a},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=r(3),i=n(r(13)),a=n(r(15));function u(t,e){for(var r=t.size();t.size()<e;){var n=t.pop();if(!(n&&n.count()>0))break;var o=n.split(),i=o[0],a=o[1];if(t.push(i),a&&a.count()>0&&t.push(a),t.size()===r)break;r=t.size()}}e.default=function(t,e){if(0===t.length||e.colorCount<2||e.colorCount>256)throw new Error("Wrong MMCQ parameters");var r=i.default.build(t),n=(r.histogram.colorCount,new a.default((function(t,e){return t.count()-e.count()})));n.push(r),u(n,.75*e.colorCount);var s=new a.default((function(t,e){return t.count()*t.volume()-e.count()*e.volume()}));return s.contents=n.contents,u(s,e.colorCount-s.size()),function(t){for(var e=[];t.size();){var r=t.pop(),n=r.avg();n[0],n[1],n[2],e.push(new o.Swatch(n,r.count()))}return e}(s)}},function(t,e,r){"use strict";var n=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0});var o=n(r(14)),i=function(){function t(t,e,r,n,o,i,a){this.histogram=a,this._volume=-1,this._count=-1,this.dimension={r1:t,r2:e,g1:r,g2:n,b1:o,b2:i}}return t.build=function(e){var r=new o.default(e,{sigBits:5});return new t(r.rmin,r.rmax,r.gmin,r.gmax,r.bmin,r.bmax,r)},t.prototype.invalidate=function(){this._volume=this._count=-1,this._avg=null},t.prototype.volume=function(){if(this._volume<0){var t=this.dimension,e=t.r1,r=t.r2,n=t.g1,o=t.g2,i=t.b1,a=t.b2;this._volume=(r-e+1)*(o-n+1)*(a-i+1)}return this._volume},t.prototype.count=function(){if(this._count<0){for(var t=this.histogram,e=t.hist,r=t.getColorIndex,n=this.dimension,o=n.r1,i=n.r2,a=n.g1,u=n.g2,s=n.b1,f=n.b2,l=0,c=o;c<=i;c++)for(var h=a;h<=u;h++)for(var p=s;p<=f;p++)l+=e[r(c,h,p)];this._count=l}return this._count},t.prototype.clone=function(){var e=this.histogram,r=this.dimension;return new t(r.r1,r.r2,r.g1,r.g2,r.b1,r.b2,e)},t.prototype.avg=function(){if(!this._avg){var t=this.histogram,e=t.hist,r=t.getColorIndex,n=this.dimension,o=n.r1,i=n.r2,a=n.g1,u=n.g2,s=n.b1,f=n.b2,l=0,c=void 0,h=void 0,p=void 0;c=h=p=0;for(var g=o;g<=i;g++)for(var d=a;d<=u;d++)for(var m=s;m<=f;m++){var b=e[r(g,d,m)];l+=b,c+=b*(g+.5)*8,h+=b*(d+.5)*8,p+=b*(m+.5)*8}this._avg=l?[~~(c/l),~~(h/l),~~(p/l)]:[~~(8*(o+i+1)/2),~~(8*(a+u+1)/2),~~(8*(s+f+1)/2)]}return this._avg},t.prototype.contains=function(t){var e=t[0],r=t[1],n=t[2],o=this.dimension,i=o.r1,a=o.r2,u=o.g1,s=o.g2,f=o.b1,l=o.b2;return 
r>>=3,n>>=3,(e>>=3)>=i&&e<=a&&r>=u&&r<=s&&n>=f&&n<=l},t.prototype.split=function(){var t=this.histogram,e=t.hist,r=t.getColorIndex,n=this.dimension,o=n.r1,i=n.r2,a=n.g1,u=n.g2,s=n.b1,f=n.b2,l=this.count();if(!l)return[];if(1===l)return[this.clone()];var c,h,p=i-o+1,g=u-a+1,d=f-s+1,m=Math.max(p,g,d),b=null;c=h=0;var _=null;if(m===p){_="r",b=new Uint32Array(i+1);for(var v=o;v<=i;v++){c=0;for(var y=a;y<=u;y++)for(var w=s;w<=f;w++)c+=e[r(v,y,w)];h+=c,b[v]=h}}else if(m===g)for(_="g",b=new Uint32Array(u+1),y=a;y<=u;y++){for(c=0,v=o;v<=i;v++)for(w=s;w<=f;w++)c+=e[r(v,y,w)];h+=c,b[y]=h}else for(_="b",b=new Uint32Array(f+1),w=s;w<=f;w++){for(c=0,v=o;v<=i;v++)for(y=a;y<=u;y++)c+=e[r(v,y,w)];h+=c,b[w]=h}for(var M=-1,x=new Uint32Array(b.length),D=0;D<b.length;D++){var L=b[D];M<0&&L>h/2&&(M=D),x[D]=h-L}var S=this;return function(t){var e=t+"1",r=t+"2",n=S.dimension[e],o=S.dimension[r],i=S.clone(),a=S.clone(),u=M-n,s=o-M;for(u<=s?(o=Math.min(o-1,~~(M+s/2)),o=Math.max(0,o)):(o=Math.max(n,~~(M-1-u/2)),o=Math.min(S.dimension[r],o));!b[o];)o++;for(var f=x[o];!f&&b[o-1];)f=x[--o];return i.dimension[r]=o,a.dimension[e]=o+1,[i,a]}(_)},t}();e.default=i},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0});var n=function(){function t(t,e){this.pixels=t,this.opts=e;var r=e.sigBits,n=function(t,e,n){return(t<<2*r)+(e<<r)+n};this.getColorIndex=n;var o,i,a,u,s,f,l,c,h,p=8-r,g=new Uint32Array(1<<3*r);o=a=s=0,i=u=f=Number.MAX_VALUE;for(var d=t.length/4,m=0;m<d;){var b=4*m;m++,l=t[b+0],c=t[b+1],h=t[b+2],0!==t[b+3]&&(g[n(l>>=p,c>>=p,h>>=p)]+=1,l>o&&(o=l),l<i&&(i=l),c>a&&(a=c),c<u&&(u=c),h>s&&(s=h),h<f&&(f=h))}this._colorCount=g.reduce((function(t,e){return e>0?t+1:t}),0),this.hist=g,this.rmax=o,this.rmin=i,this.gmax=a,this.gmin=u,this.bmax=s,this.bmin=f}return Object.defineProperty(t.prototype,"colorCount",{get:function(){return this._colorCount},enumerable:!1,configurable:!0}),t}();e.default=n},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0});var n=function(){function t(t){this._comparator=t,this.contents=[],this._sorted=!1}return t.prototype._sort=function(){this._sorted||(this.contents.sort(this._comparator),this._sorted=!0)},t.prototype.push=function(t){this.contents.push(t),this._sorted=!1},t.prototype.peek=function(t){return this._sort(),t="number"==typeof t?t:this.contents.length-1,this.contents[t]},t.prototype.pop=function(){return this._sort(),this.contents.pop()},t.prototype.size=function(){return this.contents.length},t.prototype.map=function(t){return this._sort(),this.contents.map(t)},t}();e.default=n},function(t,e,r){"use strict";Object.defineProperty(e,"__esModule",{value:!0});var n=r(3),o=r(4),i={targetDarkLuma:.26,maxDarkLuma:.45,minLightLuma:.55,targetLightLuma:.74,minNormalLuma:.3,targetNormalLuma:.5,maxNormalLuma:.7,targetMutesSaturation:.3,maxMutesSaturation:.4,targetVibrantSaturation:1,minVibrantSaturation:.35,weightSaturation:3,weightLuma:6.5,weightPopulation:.5};function a(t,e,r,n,o,i,a,u,s,f){var l=null,c=0;return e.forEach((function(e){var h=e.hsl,p=h[1],g=h[2];if(p>=u&&p<=s&&g>=o&&g<=i&&!function(t,e){return t.Vibrant===e||t.DarkVibrant===e||t.LightVibrant===e||t.Muted===e||t.DarkMuted===e||t.LightMuted===e}(t,e)){var d=function(t,e,r,n,o,i,a){function u(t,e){return 1-Math.abs(t-e)}return function(){for(var t=[],e=0;e<arguments.length;e++)t[e]=arguments[e];for(var r=0,n=0,o=0;o<t.length;o+=2){var i=t[o],a=t[o+1];r+=i*a,n+=a}return 
r/n}(u(t,e),a.weightSaturation,u(r,n),a.weightLuma,o/i,a.weightPopulation)}(p,a,g,n,e.population,r,f);(null===l||d>c)&&(l=e,c=d)}})),l}e.default=function(t,e){e=Object.assign({},i,e);var r=function(t){var e=0;return t.forEach((function(t){e=Math.max(e,t.population)})),e}(t),u=function(t,e,r){var n={Vibrant:null,DarkVibrant:null,LightVibrant:null,Muted:null,DarkMuted:null,LightMuted:null};return n.Vibrant=a(n,t,e,r.targetNormalLuma,r.minNormalLuma,r.maxNormalLuma,r.targetVibrantSaturation,r.minVibrantSaturation,1,r),n.LightVibrant=a(n,t,e,r.targetLightLuma,r.minLightLuma,1,r.targetVibrantSaturation,r.minVibrantSaturation,1,r),n.DarkVibrant=a(n,t,e,r.targetDarkLuma,0,r.maxDarkLuma,r.targetVibrantSaturation,r.minVibrantSaturation,1,r),n.Muted=a(n,t,e,r.targetNormalLuma,r.minNormalLuma,r.maxNormalLuma,r.targetMutesSaturation,0,r.maxMutesSaturation,r),n.LightMuted=a(n,t,e,r.targetLightLuma,r.minLightLuma,1,r.targetMutesSaturation,0,r.maxMutesSaturation,r),n.DarkMuted=a(n,t,e,r.targetDarkLuma,0,r.maxDarkLuma,r.targetMutesSaturation,0,r.maxMutesSaturation,r),n}(t,r,e);return function(t,e,r){if(!t.Vibrant&&!t.DarkVibrant&&!t.LightVibrant){if(!t.DarkVibrant&&t.DarkMuted){var i=t.DarkMuted.hsl,a=i[0],u=i[1],s=i[2];s=r.targetDarkLuma,t.DarkVibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.LightVibrant&&t.LightMuted){var f=t.LightMuted.hsl;a=f[0],u=f[1],s=f[2],s=r.targetDarkLuma,t.DarkVibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}}if(!t.Vibrant&&t.DarkVibrant){var l=t.DarkVibrant.hsl;a=l[0],u=l[1],s=l[2],s=r.targetNormalLuma,t.Vibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}else if(!t.Vibrant&&t.LightVibrant){var c=t.LightVibrant.hsl;a=c[0],u=c[1],s=c[2],s=r.targetNormalLuma,t.Vibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.DarkVibrant&&t.Vibrant){var h=t.Vibrant.hsl;a=h[0],u=h[1],s=h[2],s=r.targetDarkLuma,t.DarkVibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.LightVibrant&&t.Vibrant){var p=t.Vibrant.hsl;a=p[0],u=p[1],s=p[2],s=r.targetLightLuma,t.LightVibrant=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.Muted&&t.Vibrant){var g=t.Vibrant.hsl;a=g[0],u=g[1],s=g[2],s=r.targetMutesSaturation,t.Muted=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.DarkMuted&&t.DarkVibrant){var d=t.DarkVibrant.hsl;a=d[0],u=d[1],s=d[2],s=r.targetMutesSaturation,t.DarkMuted=new n.Swatch(o.hslToRgb(a,u,s),0)}if(!t.LightMuted&&t.LightVibrant){var m=t.LightVibrant.hsl;a=m[0],u=m[1],s=m[2],s=r.targetMutesSaturation,t.LightMuted=new n.Swatch(o.hslToRgb(a,u,s),0)}}(u,0,e),u}},function(t,e,r){"use strict";var n=this&&this.__awaiter||function(t,e,r,n){return new(r||(r=Promise))((function(o,i){function a(t){try{s(n.next(t))}catch(e){i(e)}}function u(t){try{s(n.throw(t))}catch(e){i(e)}}function s(t){var e;t.done?o(t.value):(e=t.value,e instanceof r?e:new r((function(t){t(e)}))).then(a,u)}s((n=n.apply(t,e||[])).next())}))},o=this&&this.__generator||function(t,e){var r,n,o,i,a={label:0,sent:function(){if(1&o[0])throw o[1];return o[1]},trys:[],ops:[]};return i={next:u(0),throw:u(1),return:u(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function u(i){return function(u){return function(i){if(r)throw new TypeError("Generator is already executing.");for(;a;)try{if(r=1,n&&(o=2&i[0]?n.return:i[0]?n.throw||((o=n.return)&&o.call(n),0):n.next)&&!(o=o.call(n,i[1])).done)return o;switch(n=0,o&&(i=[2&i[0],o.value]),i[0]){case 0:case 1:o=i;break;case 4:return a.label++,{value:i[1],done:!1};case 5:a.label++,n=i[1],i=[0];continue;case 
7:i=a.ops.pop(),a.trys.pop();continue;default:if(!((o=(o=a.trys).length>0&&o[o.length-1])||6!==i[0]&&2!==i[0])){a=0;continue}if(3===i[0]&&(!o||i[1]>o[0]&&i[1]<o[3])){a.label=i[1];break}if(6===i[0]&&a.label<o[1]){a.label=o[1],o=i;break}if(o&&a.label<o[2]){a.label=o[2],a.ops.push(i);break}o[2]&&a.ops.pop(),a.trys.pop();continue}i=e.call(t,a)}catch(u){i=[6,u],n=0}finally{r=o=0}if(5&i[0])throw i[1];return{value:i[0]?i[1]:void 0,done:!0}}([i,u])}}};Object.defineProperty(e,"__esModule",{value:!0}),e.BasicPipeline=e.Stage=void 0;var i=r(2),a=function(){function t(t){this.pipeline=t,this._map={}}return t.prototype.names=function(){return Object.keys(this._map)},t.prototype.has=function(t){return!!this._map[t]},t.prototype.get=function(t){return this._map[t]},t.prototype.register=function(t,e){return this._map[t]=e,this.pipeline},t}();e.Stage=a;var u=function(){function t(){this.filter=new a(this),this.quantizer=new a(this),this.generator=new a(this)}return t.prototype._buildProcessTasks=function(t){var e=this,r=t.filters,n=t.quantizer,o=t.generators;return 1===o.length&&"*"===o[0]&&(o=this.generator.names()),{filters:r.map((function(t){return i(e.filter,t)})),quantizer:i(this.quantizer,n),generators:o.map((function(t){return i(e.generator,t)}))};function i(t,e){var r,n;return"string"==typeof e?r=e:(r=e.name,n=e.options),{name:r,fn:t.get(r),options:n}}},t.prototype.process=function(t,e){return n(this,void 0,void 0,(function(){var r,n,i,a,u,s,f;return o(this,(function(o){switch(o.label){case 0:return r=this._buildProcessTasks(e),n=r.filters,i=r.quantizer,a=r.generators,[4,this._filterColors(n,t)];case 1:return u=o.sent(),[4,this._generateColors(i,u)];case 2:return s=o.sent(),[4,this._generatePalettes(a,s)];case 3:return f=o.sent(),[2,{colors:s,palettes:f}]}}))}))},t.prototype._filterColors=function(t,e){return Promise.resolve(i.applyFilters(e,t.map((function(t){return t.fn}))))},t.prototype._generateColors=function(t,e){return Promise.resolve(t.fn(e.data,t.options))},t.prototype._generatePalettes=function(t,e){return n(this,void 0,void 0,(function(){var r;return o(this,(function(n){switch(n.label){case 0:return[4,Promise.all(t.map((function(t){var r=t.fn,n=t.options;return Promise.resolve(r(e,n))})))];case 1:return r=n.sent(),[2,Promise.resolve(r.reduce((function(e,r,n){return e[t[n].name]=r,e}),{}))]}}))}))},t}();e.BasicPipeline=u}])},"object"===u(e)&&"object"===u(t)?t.exports=a():(o=[],void 0===(i="function"==typeof(n=a)?n.apply(e,o):n)||(t.exports=i))}}]);
|
PypiClean
|
/TheRiddler-0.1.3.tar.gz/TheRiddler-0.1.3/README.rst
|
The Riddler
===========
The Riddler is a project for learning purposes.
It is a sample project made for
the upcoming lecture *Einfuehrung in Python* of the university *HTWG
Konstanz* at Constance, Germany.
It consists of riddles whose solutions are best found by learning Python
basics.
The riddles are intended to be solved programmatically, and most of them can
be solved by straightforward and short scripts.
The solution sometimes will require the use of extra modules, which can
all be downloaded from the internet.
The Riddler is intended to accompany the lecture and let the students
apply and consolidate their theoretical knowledge playfully and without
much guidance.
After the first few riddles are mastered, the application leads to the
project itself and its distribution here on PyPI; the students are then
encouraged to revise, modify and finally republish it for the students
that follow.
With this approach, the project is expected to grow into a sophisticated
learning resource, geared to the needs of the learners and well tried and tested.
Getting Started
---------------
At the beginning, the project is handed out to the students in the form
of an installer package (".msi" file), for "mysterious" intentions.
Once they have solved the first riddles and end up here,
The Riddler should be downloaded from its PyPI homepage and the downloaded
file extracted. Subsequently it can be run by moving to the
extracted folder and typing ``python theriddler`` in the shell.
Installing with Pip
^^^^^^^^^^^^^^^^^^^
It is also possible to install this project with pip from the shell by
typing ``pip install theRiddler``, but for testing and development purposes
it is not meant to be installed this way.
It can then be started by changing into the directory pip installed it
to and directly running the "__main__.py" script from there.
Prerequisites
~~~~~~~~~~~~~
- The "setup.py" script is yet built with `Setuptools <http://pypi.python.org/pypi/setuptools/>`_.
- The `Pillow <http://pypi.python.org/pypi/Pillow/5.0.0/>`_ or "PIL" (Python Imaging Library)
module is used to display pictures, as the "PhotoImage" class from the built-in
"Tkinter" module provides comparably lean functionalities.
- To run tests, `Nose <http://pypi.python.org/pypi/nose/1.3.7/>`_ is required.
- The installer package handed out at the beginning is made with
`cx_Freeze <http://pypi.python.org/pypi/cx_Freeze/6.0b1>`_.
Therefore, another "setup.py" script is used.
More information is provided in the attachment folder "misc" within the package.
- Some riddles require a valid connection to the internet.
The connection is accomplished with `certifi <http://pypi.python.org/pypi/certifi/2018.1.18>`_,
`beautifulsoup4 <http://pypi.python.org/pypi/beautifulsoup4/4.6.0>`_ and
`urllib3 <http://pypi.python.org/pypi/urllib3/1.22>`_.
Contributing
~~~~~~~~~~~~
Please read the **CONTRIBUTING** file from the "misc" folder for
details.
Versioning
~~~~~~~~~~
The versioning is intended to be made after "Semver".
Check https://semver.org/.
The initial release was "theRiddler 0.1.0", 21st February 2018.
License
~~~~~~~
The entire content of this project is "self-made", pictures included.
The icon was created with a freeware version of the application "IcoFX" (v1.64).
The author waives any copyright for the content.
This project is licensed under the MIT License - see the **LICENSE**
file for details.
Author
~~~~~~
Etienne Martin,
student of the *HTWG Konstanz*, at the department of electrical
engineering.
This project is part of the bachelor thesis created to achieve the
bachelor of science degree.
Acknowledgments
~~~~~~~~~~~~~~~
- The conception of this project was inspired by
  http://www.pythonchallenge.com/. Some riddles are adaptations of the ones found
  in the Python Challenge.
  Thanks to *Nadav Samet*; you can visit his `blog <http://www.thesamet.com/>`_.
"Because everyone needs a blog".
- The "Gothon Trash Game" is an adaption and inspired by "Gothons from
Planet Percal#25" from Zed Shaw's book "Learn Python the Hard Way",
exercise 43.
|
PypiClean
|
/onnxruntime_cann-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl/onnxruntime/transformers/onnx_model_bert.py
|
from logging import getLogger
from typing import List, Optional
from convert_to_packing_mode import PackingMode
from fusion_attention import AttentionMask, FusionAttention
from fusion_bart_attention import FusionBartAttention
from fusion_biasgelu import FusionBiasGelu
from fusion_embedlayer import FusionEmbedLayerNormalization
from fusion_fastgelu import FusionFastGelu
from fusion_gelu import FusionGelu
from fusion_gelu_approximation import FusionGeluApproximation
from fusion_gemmfastgelu import FusionGemmFastGelu
from fusion_layernorm import FusionLayerNormalization, FusionLayerNormalizationTF
from fusion_options import AttentionMaskFormat, FusionOptions
from fusion_qordered_attention import FusionQOrderedAttention
from fusion_qordered_gelu import FusionQOrderedGelu
from fusion_qordered_layernorm import FusionQOrderedLayerNormalization
from fusion_qordered_matmul import FusionQOrderedMatMul
from fusion_reshape import FusionReshape
from fusion_shape import FusionShape
from fusion_skiplayernorm import FusionBiasSkipLayerNormalization, FusionSkipLayerNormalization
from fusion_utils import FusionUtils
from onnx import GraphProto, ModelProto, TensorProto, ValueInfoProto, helper
from onnx_model import OnnxModel
logger = getLogger(__name__)
class BertOptimizationOptions(FusionOptions):
"""This class is deprecated"""
def __init__(self, model_type):
        logger.warning("BertOptimizationOptions is deprecated. Please use FusionOptions instead.")
super().__init__(model_type)
class BertOnnxModel(OnnxModel):
def __init__(self, model: ModelProto, num_heads: int = 0, hidden_size: int = 0):
"""Initialize BERT ONNX Model.
Args:
model (ModelProto): the ONNX model
num_heads (int, optional): number of attention heads. Defaults to 0 (detect the parameter automatically).
hidden_size (int, optional): hidden dimension. Defaults to 0 (detect the parameter automatically).
"""
assert (num_heads == 0 and hidden_size == 0) or (num_heads > 0 and hidden_size % num_heads == 0)
super().__init__(model)
self.num_heads = num_heads
self.hidden_size = hidden_size
self.attention_mask = AttentionMask(self)
self.attention_fusion = FusionAttention(self, self.hidden_size, self.num_heads, self.attention_mask)
self.qordered_attention_fusion = FusionQOrderedAttention(
self, self.hidden_size, self.num_heads, self.attention_mask
)
self.utils = FusionUtils(self)
def fuse_attention(self):
self.attention_fusion.apply()
# Only relevant in models with Q-DQ nodes
self.qordered_attention_fusion.apply()
def fuse_gelu(self):
fusion = FusionGelu(self)
fusion.apply()
fusion = FusionFastGelu(self)
fusion.apply()
# Only relevant in models with Q-DQ nodes
fusion = FusionQOrderedGelu(self)
fusion.apply()
def fuse_bias_gelu(self, is_fastgelu):
fusion = FusionBiasGelu(self, is_fastgelu)
fusion.apply()
def gelu_approximation(self):
fusion = FusionGeluApproximation(self)
fusion.apply()
def fuse_gemm_fast_gelu(self):
fusion = FusionGemmFastGelu(self)
fusion.apply()
def fuse_add_bias_skip_layer_norm(self):
fusion = FusionBiasSkipLayerNormalization(self)
fusion.apply()
def fuse_reshape(self):
fusion = FusionReshape(self)
fusion.apply()
def fuse_shape(self):
fusion = FusionShape(self)
fusion.apply()
def fuse_embed_layer(self, use_mask_index):
fusion = FusionEmbedLayerNormalization(self, use_mask_index)
fusion.apply()
def fuse_layer_norm(self):
fusion = FusionLayerNormalization(self)
fusion.apply()
fusion = FusionLayerNormalizationTF(self)
fusion.apply()
# Only relevant in models with Q-DQ nodes
fusion = FusionQOrderedLayerNormalization(self)
fusion.apply()
def fuse_skip_layer_norm(self):
fusion = FusionSkipLayerNormalization(self)
fusion.apply()
# Only relevant in models with Q-DQ nodes
def fuse_qordered_mamtul(self):
fusion = FusionQOrderedMatMul(self)
fusion.apply()
def get_graph_inputs_from_node_type(self, op_type: str, input_indices: List[int], casted: bool):
"""
        Get graph inputs that feed a given node type (like EmbedLayerNormalization or Attention).
        Returns a list of graph input names, filtered by whether each input is casted or not.
"""
graph_inputs = []
output_name_to_node = self.output_name_to_node()
nodes = self.get_nodes_by_op_type(op_type)
for node in nodes:
bert_inputs = [node.input[i] for i in input_indices if i < len(node.input)]
for bert_input in bert_inputs:
if self.find_graph_input(bert_input):
if not casted:
graph_inputs.append(bert_input)
elif bert_input in output_name_to_node:
parent = output_name_to_node[bert_input]
if parent.op_type == "Cast" and self.find_graph_input(parent.input[0]) is not None:
if casted:
graph_inputs.append(parent.input[0])
return graph_inputs
def get_graph_inputs_from_fused_nodes(self, casted: bool):
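        # Inputs 0, 1 and 7 of EmbedLayerNormalization are input_ids, segment_ids and the attention mask;
        # input 3 of Attention is the mask index.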
inputs = self.get_graph_inputs_from_node_type("EmbedLayerNormalization", [0, 1, 7], casted)
inputs += self.get_graph_inputs_from_node_type("Attention", [3], casted)
return inputs
def change_graph_input_type(
self,
graph: GraphProto,
graph_input: ValueInfoProto,
new_type: int = TensorProto.INT32,
):
"""Change graph input type, and add Cast node if needed.
Args:
graph (GraphProto): graph
            graph_input (ValueInfoProto): graph input to change
new_type (int, optional): new data type. Defaults to TensorProto.INT32.
Returns:
            NodeProto: the new Cast node that was added, or None if no Cast node was added.
            List[NodeProto]: the Cast nodes that have been removed.
"""
assert isinstance(graph, GraphProto)
assert isinstance(graph_input, ValueInfoProto)
assert self.find_graph_input(graph_input.name)
if graph_input.type.tensor_type.elem_type == int(new_type):
return None, []
new_cast_node = None
nodes_to_remove = []
input_name_to_nodes = self.input_name_to_nodes()
if graph_input.name in input_name_to_nodes:
nodes = input_name_to_nodes[graph_input.name]
            # For child nodes that are not Cast nodes, insert a Cast node to convert int32 back to the original data type.
nodes_not_cast = [node for node in nodes if node.op_type != "Cast"]
if nodes_not_cast:
node_name = self.create_node_name("Cast")
output_name = node_name + "_" + graph_input.name
new_value_info = graph.value_info.add()
new_value_info.CopyFrom(graph_input)
new_value_info.name = output_name
new_cast_node = helper.make_node(
"Cast",
[graph_input.name],
[output_name],
to=int(graph_input.type.tensor_type.elem_type),
name=node_name,
)
graph.node.extend([new_cast_node])
for node in nodes_not_cast:
OnnxModel.replace_node_input(node, graph_input.name, output_name)
            # For child nodes that are Cast nodes, there is no need to insert another Cast.
            # When a child casts to int32, that Cast node can be removed since the input type is now int32.
nodes_cast = [node for node in nodes if node.op_type == "Cast"]
for node in nodes_cast:
if OnnxModel.get_node_attribute(node, "to") == int(new_type):
self.replace_input_of_all_nodes(node.output[0], graph_input.name)
if not self.find_graph_output(node.output[0]):
nodes_to_remove.append(node)
if nodes_to_remove:
self.remove_nodes(nodes_to_remove)
graph_input.type.tensor_type.elem_type = int(new_type)
return new_cast_node, nodes_to_remove
def change_graph_inputs_to_int32(self):
"""Change data type of all graph inputs to int32 type, and add Cast node if needed."""
graph = self.graph()
add_cast_count = 0
remove_cast_count = 0
for graph_input in graph.input:
new_node, removed_nodes = self.change_graph_input_type(graph, graph_input, TensorProto.INT32)
if new_node:
add_cast_count += 1
remove_cast_count += len(removed_nodes)
logger.info(
f"Graph inputs are changed to int32. Added {add_cast_count} Cast nodes, and removed {remove_cast_count} Cast nodes."
)
def use_dynamic_axes(self, dynamic_batch_dim="batch_size", dynamic_seq_len="max_seq_len"):
"""
Update input and output shape to use dynamic axes.
"""
bert_graph_inputs = self.get_graph_inputs_from_fused_nodes(
casted=True
) + self.get_graph_inputs_from_fused_nodes(casted=False)
for input in self.model.graph.input:
if input.name in bert_graph_inputs:
dim_proto = input.type.tensor_type.shape.dim[0]
dim_proto.dim_param = dynamic_batch_dim
if dynamic_seq_len is not None:
dim_proto = input.type.tensor_type.shape.dim[1]
dim_proto.dim_param = dynamic_seq_len
for output in self.model.graph.output:
dim_proto = output.type.tensor_type.shape.dim[0]
dim_proto.dim_param = dynamic_batch_dim
def preprocess(self):
self.adjust_reshape_and_expand()
return
def adjust_reshape_and_expand(self):
nodes_to_remove = []
for node in self.nodes():
if node.op_type == "Reshape":
# Clean up unnecessary reshape nodes.
                # Find Reshape nodes whose "shape" input has no actual data and remove them.
reshape_shape = self.get_constant_value(node.input[1])
if reshape_shape is not None and reshape_shape.size == 0:
nodes_to_remove.extend([node])
self.replace_input_of_all_nodes(node.output[0], node.input[0])
continue
# Find path "Slice" -> "Reshape" -> "Expand" -> "Expand" -> current "Reshape", simplify the graph by
# changing current reshape's input to output of slice.
reshape_path = self.match_parent_path(
node,
["Expand", "Expand", "Reshape", "Slice"],
[0, 0, 0, 0],
self.output_name_to_node(),
)
if reshape_path is not None:
expand_node = reshape_path[-3]
expand_shape_value = self.get_constant_value(expand_node.input[1])
reshape_before_expand = reshape_path[-2]
shape_value = self.get_constant_value(reshape_before_expand.input[1])
slice_node = reshape_path[-1]
if (
expand_shape_value is not None
and shape_value is not None
and len(expand_shape_value) == 2
and len(shape_value) == 1
and expand_shape_value[1] == shape_value[0]
):
node.input[0] = slice_node.output[0]
if nodes_to_remove:
self.remove_nodes(nodes_to_remove)
logger.info(f"Removed Reshape and Expand count: {len(nodes_to_remove)}")
def clean_graph(self):
output_name_to_node = self.output_name_to_node()
nodes_to_remove = []
for node in self.nodes():
# Before:
# input_ids --> Shape --> Gather(indices=0) --> Unsqueeze ------+
# | |
# | v
            # +----> Shape --> Gather(indices=1) --> Unsqueeze---> Concat --> ConstantOfShape -->Cast --> EmbedLayerNormalization/ReduceSum
            # After:
            # input_ids --> Shape --> ConstantOfShape -->Cast --> EmbedLayerNormalization/ReduceSum
            # TODO: merge ConstantOfShape -->Cast to ConstantOfShape (need to update the data type of the value)
op_input_id = {"EmbedLayerNormalization": 1, "ReduceSum": 0, "Attention": 3}
if node.op_type in op_input_id:
i = op_input_id[node.op_type]
parent_nodes = self.match_parent_path(
node,
[
"Cast",
"ConstantOfShape",
"Concat",
"Unsqueeze",
"Gather",
"Shape",
],
[i, 0, 0, 0, 0, 0],
output_name_to_node,
)
if parent_nodes is not None:
(
cast,
constantOfShape, # noqa: N806
concat,
unsqueeze,
gather,
shape,
) = parent_nodes
if shape.input[0] == self.graph().input[0].name:
constantOfShape.input[0] = shape.output[0]
output_name_to_node = self.output_name_to_node()
if node.op_type == "Attention":
# Before:
# input_ids --> Shape -->ConstantOfShape -->Cast --> ReduceSum --> Attention
# After:
# remove this path, and remove the optional mask_index input of Attention node.
parent_nodes = self.match_parent_path(
node,
["ReduceSum", "Cast", "ConstantOfShape", "Shape"],
[3, 0, 0, 0],
output_name_to_node,
)
if parent_nodes is not None:
if parent_nodes[-1].input[0] == self.graph().input[0].name:
attention_node = helper.make_node(
"Attention",
inputs=node.input[0 : len(node.input) - 1],
outputs=node.output,
name=node.name + "_remove_mask",
)
attention_node.domain = "com.microsoft"
attention_node.attribute.extend([helper.make_attribute("num_heads", self.num_heads)])
self.add_node(attention_node, self.get_graph_by_node(attention_node).name)
nodes_to_remove.append(node)
self.remove_nodes(nodes_to_remove)
def postprocess(self):
self.clean_graph()
self.prune_graph()
def optimize(self, options: Optional[FusionOptions] = None, add_dynamic_axes: bool = False):
if (options is not None) and not options.enable_shape_inference:
self.disable_shape_inference()
self.utils.remove_identity_nodes()
        # Remove Cast nodes whose input and output have the same data type, based on symbolic shape inference.
self.utils.remove_useless_cast_nodes()
if (options is None) or options.enable_layer_norm:
self.fuse_layer_norm()
if (options is None) or options.enable_gelu:
self.fuse_gelu()
self.preprocess()
self.fuse_reshape()
if (options is None) or options.enable_skip_layer_norm:
self.fuse_skip_layer_norm()
if options is not None:
self.attention_mask.set_mask_format(options.attention_mask_format)
if options.use_multi_head_attention and not isinstance(self.attention_fusion, FusionBartAttention):
self.attention_fusion = FusionAttention(
self, self.hidden_size, self.num_heads, self.attention_mask, options.use_multi_head_attention
)
if (options is None) or options.enable_attention:
self.fuse_attention()
# Perform the MatMul fusion after the Attention fusion as we do not
# want to fuse the MatMuls inside the Attention subgraphs
if (options is None) or options.enable_qordered_matmul:
self.fuse_qordered_mamtul()
self.fuse_shape()
if (options is None) or options.enable_embed_layer_norm:
use_mask_index = options.attention_mask_format == AttentionMaskFormat.MaskIndexEnd
self.fuse_embed_layer(use_mask_index)
        # Remove Reshape nodes whose input and output have the same shape, based on symbolic shape inference.
self.utils.remove_useless_reshape_nodes()
self.postprocess()
# Bias fusion is done after postprocess to avoid extra Reshape between bias and Gelu/FastGelu/SkipLayerNormalization
if (options is None) or options.enable_bias_gelu:
# Fuse Gelu and Add Bias before it.
self.fuse_bias_gelu(is_fastgelu=True)
self.fuse_bias_gelu(is_fastgelu=False)
if (options is None) or options.enable_bias_skip_layer_norm:
# Fuse SkipLayerNormalization and Add Bias before it.
self.fuse_add_bias_skip_layer_norm()
if options is not None and options.enable_gelu_approximation:
self.gelu_approximation()
if options is not None and options.enable_gemm_fast_gelu:
self.fuse_gemm_fast_gelu()
self.remove_unused_constant()
# Use symbolic batch dimension in input and output.
if add_dynamic_axes:
self.use_dynamic_axes()
logger.info(f"opset version: {self.get_opset_version()}")
def get_fused_operator_statistics(self):
"""
Returns node count of fused operators.
"""
op_count = {}
ops = [
"EmbedLayerNormalization",
"Attention",
"MultiHeadAttention",
"Gelu",
"FastGelu",
"BiasGelu",
"GemmFastGelu",
"LayerNormalization",
"SkipLayerNormalization",
]
q_ops = ["QOrderedAttention", "QOrderedGelu", "QOrderedLayerNormalization", "QOrderedMatMul"]
for op in ops + q_ops:
nodes = self.get_nodes_by_op_type(op)
op_count[op] = len(nodes)
logger.info(f"Optimized operators:{op_count}")
return op_count
def is_fully_optimized(self):
"""
Returns True when the model is fully optimized.
"""
op_count = self.get_fused_operator_statistics()
embed = op_count["EmbedLayerNormalization"]
attention = op_count["Attention"] + op_count["MultiHeadAttention"] + op_count["QOrderedAttention"]
gelu = op_count["Gelu"] + op_count["BiasGelu"] + op_count["FastGelu"]
layer_norm = op_count["LayerNormalization"] + op_count["SkipLayerNormalization"]
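        # Heuristic: a fully optimized model has a fused embedding layer, one fused Gelu per fused
        # attention node, and at least two (Skip)LayerNormalization nodes per attention node.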
is_perfect = (embed > 0) and (attention > 0) and (attention == gelu) and (layer_norm >= 2 * attention)
if layer_norm == 0:
logger.debug("Layer Normalization not fused")
if gelu == 0:
logger.debug("Gelu/FastGelu not fused")
if embed == 0:
logger.debug("Embed Layer not fused")
if attention == 0:
logger.warning("Attention not fused")
return is_perfect
def convert_to_packing_mode(self, use_symbolic_shape_infer: bool = False):
packing_mode = PackingMode(self)
packing_mode.convert(use_symbolic_shape_infer)
|
PypiClean
|
/rlpy3-2.0.0a0-cp36-cp36m-win_amd64.whl/rlpy/Domains/PacmanPackage/layout.py
|
from .util import manhattanDistance
from .game import Grid
import os
import random
from functools import reduce
VISIBILITY_MATRIX_CACHE = {}
class Layout(object):
"""
A Layout manages the static information about the game board.
"""
def __init__(self, layoutText):
self.width = len(layoutText[0])
self.height = len(layoutText)
self.walls = Grid(self.width, self.height, False)
self.food = Grid(self.width, self.height, False)
self.capsules = []
self.agentPositions = []
self.numGhosts = 0
self.processLayoutText(layoutText)
self.layoutText = layoutText
# self.initializeVisibilityMatrix()
def getNumGhosts(self):
return self.numGhosts
def initializeVisibilityMatrix(self):
global VISIBILITY_MATRIX_CACHE
if reduce(str.__add__, self.layoutText) not in VISIBILITY_MATRIX_CACHE:
from .game import Directions
vecs = [(-0.5, 0), (0.5, 0), (0, -0.5), (0, 0.5)]
dirs = [
Directions.NORTH,
Directions.SOUTH,
Directions.WEST,
Directions.EAST,
]
vis = Grid(
self.width,
self.height,
{
Directions.NORTH: set(),
Directions.SOUTH: set(),
Directions.EAST: set(),
Directions.WEST: set(),
Directions.STOP: set(),
},
)
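            # For every open cell, cast a half-step ray in each direction and record the
            # positions that remain visible until a wall is reached.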
for x in range(self.width):
for y in range(self.height):
                    if not self.walls[x][y]:
for vec, direction in zip(vecs, dirs):
dx, dy = vec
nextx, nexty = x + dx, y + dy
while (nextx + nexty) != int(nextx) + int(
nexty
) or not self.walls[int(nextx)][int(nexty)]:
vis[x][y][direction].add((nextx, nexty))
                                nextx, nexty = nextx + dx, nexty + dy
self.visibility = vis
VISIBILITY_MATRIX_CACHE[reduce(str.__add__, self.layoutText)] = vis
else:
self.visibility = VISIBILITY_MATRIX_CACHE[
reduce(str.__add__, self.layoutText)
]
def isWall(self, pos):
x, col = pos
return self.walls[x][col]
def getRandomLegalPosition(self):
x = random.choice(list(range(self.width)))
y = random.choice(list(range(self.height)))
while self.isWall((x, y)):
x = random.choice(list(range(self.width)))
y = random.choice(list(range(self.height)))
return (x, y)
def getRandomCorner(self):
poses = [
(1, 1),
(1, self.height - 2),
(self.width - 2, 1),
(self.width - 2, self.height - 2),
]
return random.choice(poses)
def getFurthestCorner(self, pacPos):
poses = [
(1, 1),
(1, self.height - 2),
(self.width - 2, 1),
(self.width - 2, self.height - 2),
]
dist, pos = max([(manhattanDistance(p, pacPos), p) for p in poses])
return pos
def isVisibleFrom(self, ghostPos, pacPos, pacDirection):
row, col = [int(x) for x in pacPos]
return ghostPos in self.visibility[row][col][pacDirection]
def __str__(self):
return "\n".join(self.layoutText)
def deepCopy(self):
return Layout(self.layoutText[:])
def processLayoutText(self, layoutText):
"""
Coordinates are flipped from the input format to the (x,y) convention here
The shape of the maze. Each character
represents a different type of object.
% - Wall
. - Food
o - Capsule
G - Ghost
P - Pacman
Other characters are ignored.
"""
maxY = self.height - 1
for y in range(self.height):
for x in range(self.width):
layoutChar = layoutText[maxY - y][x]
self.processLayoutChar(x, y, layoutChar)
self.agentPositions.sort()
self.agentPositions = [(i == 0, pos) for i, pos in self.agentPositions]
def processLayoutChar(self, x, y, layoutChar):
if layoutChar == "%":
self.walls[x][y] = True
elif layoutChar == ".":
self.food[x][y] = True
elif layoutChar == "o":
self.capsules.append((x, y))
elif layoutChar == "P":
self.agentPositions.append((0, (x, y)))
elif layoutChar in ["G"]:
self.agentPositions.append((1, (x, y)))
self.numGhosts += 1
elif layoutChar in ["1", "2", "3", "4"]:
self.agentPositions.append((int(layoutChar), (x, y)))
self.numGhosts += 1
def getLayout(name, back=2):
if name.endswith(".lay"):
layout = tryToLoad("layouts/" + name)
if layout is None:
layout = tryToLoad(name)
else:
layout = tryToLoad("layouts/" + name + ".lay")
if layout is None:
layout = tryToLoad(name + ".lay")
if layout is None and back >= 0:
curdir = os.path.abspath(".")
os.chdir("..")
layout = getLayout(name, back - 1)
os.chdir(curdir)
return layout
def tryToLoad(fullname):
if not os.path.exists(fullname):
return None
f = open(fullname)
try:
return Layout([line.strip() for line in f])
finally:
f.close()
|
PypiClean
|
/arize_phoenix-0.0.33rc4-py3-none-any.whl/phoenix/server/main.py
|
import atexit
import errno
import logging
import os
from argparse import ArgumentParser
from pathlib import Path
from typing import Optional
import uvicorn
import phoenix.config as config
from phoenix.core.model_schema_adapter import create_model_from_datasets
from phoenix.core.traces import Traces
from phoenix.datasets.dataset import EMPTY_DATASET, Dataset
from phoenix.datasets.fixtures import FIXTURES, get_datasets
from phoenix.server.app import create_app
from phoenix.trace.fixtures import TRACES_FIXTURES, load_example_traces
logger = logging.getLogger(__name__)
def _write_pid_file() -> None:
with open(_get_pid_file(), "w"):
pass
def _remove_pid_file() -> None:
try:
os.unlink(_get_pid_file())
except OSError as e:
if e.errno == errno.ENOENT:
            # If the pid file doesn't exist, ignore it and continue, since we are
            # already in the desired end state; this should not happen.
pass
else:
raise
def _get_pid_file() -> str:
return os.path.join(config.get_pids_path(), "%d" % os.getpid())
if __name__ == "__main__":
primary_dataset_name: str
reference_dataset_name: Optional[str]
trace_dataset_name: Optional[str] = None
primary_dataset: Dataset = EMPTY_DATASET
reference_dataset: Optional[Dataset] = None
corpus_dataset: Optional[Dataset] = None
# automatically remove the pid file when the process is being gracefully terminated
atexit.register(_remove_pid_file)
_write_pid_file()
parser = ArgumentParser()
parser.add_argument("--export_path")
parser.add_argument("--port", type=int, default=config.PORT)
parser.add_argument("--no-internet", action="store_true")
parser.add_argument("--debug", action="store_false") # TODO: Disable before public launch
subparsers = parser.add_subparsers(dest="command", required=True)
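    # Three launch modes: explicit datasets, a bundled example fixture, or a bundled trace fixture.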
datasets_parser = subparsers.add_parser("datasets")
datasets_parser.add_argument("--primary", type=str, required=True)
datasets_parser.add_argument("--reference", type=str, required=False)
datasets_parser.add_argument("--corpus", type=str, required=False)
fixture_parser = subparsers.add_parser("fixture")
fixture_parser.add_argument("fixture", type=str, choices=[fixture.name for fixture in FIXTURES])
fixture_parser.add_argument("--primary-only", type=bool)
trace_fixture_parser = subparsers.add_parser("trace-fixture")
trace_fixture_parser.add_argument(
"fixture", type=str, choices=[fixture.name for fixture in TRACES_FIXTURES]
)
args = parser.parse_args()
export_path = Path(args.export_path) if args.export_path else config.EXPORT_DIR
if args.command == "datasets":
primary_dataset_name = args.primary
reference_dataset_name = args.reference
corpus_dataset_name = args.corpus
primary_dataset = Dataset.from_name(primary_dataset_name)
reference_dataset = (
Dataset.from_name(reference_dataset_name)
if reference_dataset_name is not None
else None
)
corpus_dataset = (
None if corpus_dataset_name is None else Dataset.from_name(corpus_dataset_name)
)
elif args.command == "fixture":
fixture_name = args.fixture
primary_only = args.primary_only
primary_dataset, reference_dataset, corpus_dataset = get_datasets(
fixture_name,
args.no_internet,
)
if primary_only:
reference_dataset_name = None
reference_dataset = None
elif args.command == "trace-fixture":
trace_dataset_name = args.fixture
model = create_model_from_datasets(
primary_dataset,
reference_dataset,
)
traces: Optional[Traces] = None
if trace_dataset_name is not None:
traces_ds = load_example_traces(trace_dataset_name)
traces = Traces(traces_ds.dataframe)
app = create_app(
export_path=export_path,
model=model,
traces=traces,
corpus=None if corpus_dataset is None else create_model_from_datasets(corpus_dataset),
debug=args.debug,
)
uvicorn.run(app, port=args.port)
|
PypiClean
|
/equinor_libres-11.0.1-cp36-cp36m-macosx_10_9_x86_64.whl/res/enkf/runpath_list.py
|
from collections import namedtuple
from cwrap import BaseCClass
from res import ResPrototype
RunpathNode = namedtuple(
"RunpathNode", ["realization", "iteration", "runpath", "basename"]
)
class RunpathList(BaseCClass):
TYPE_NAME = "runpath_list"
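    # Each ResPrototype below binds a C function from the libres runpath_list implementation.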
_alloc = ResPrototype("void* runpath_list_alloc(char*)", bind=False)
_free = ResPrototype("void runpath_list_free(runpath_list)")
_add = ResPrototype("void runpath_list_add(runpath_list, int, int, char*, char*)")
_clear = ResPrototype("void runpath_list_clear(runpath_list)")
_size = ResPrototype("int runpath_list_size(runpath_list)")
_iens = ResPrototype("int runpath_list_iget_iens(runpath_list, int)")
_iteration = ResPrototype("int runpath_list_iget_iter(runpath_list, int)")
_runpath = ResPrototype("char* runpath_list_iget_runpath(runpath_list, int)")
_basename = ResPrototype("char* runpath_list_iget_basename(runpath_list, int)")
_export = ResPrototype("void runpath_list_fprintf(runpath_list)")
_load = ResPrototype("bool runpath_list_load(runpath_list)")
_get_export_file = ResPrototype("char* runpath_list_get_export_file(runpath_list)")
_set_export_file = ResPrototype(
"void runpath_list_set_export_file(runpath_list, char*)"
)
def __init__(self, export_file):
c_ptr = self._alloc(export_file)
if c_ptr:
super(RunpathList, self).__init__(c_ptr)
else:
raise IOError(
'Could not construct RunpathList with export_file "%s".' % export_file
)
def __len__(self):
return self._size()
def __getitem__(self, index):
"""@rtype: RunpathNode"""
ls = len(self)
if isinstance(index, int):
idx = index
if idx < 0:
idx += ls
if not 0 <= idx < ls:
raise IndexError("Index not in range: 0 <= %d < %d" % (index, ls))
realization = self._iens(idx)
iteration = self._iteration(idx)
runpath = self._runpath(idx)
basename = self._basename(idx)
return RunpathNode(realization, iteration, runpath, basename)
elif isinstance(index, slice):
return [self[i] for i in range(*index.indices(ls))]
raise TypeError("List indices must be integers, not %s." % str(type(index)))
def __iter__(self):
index = 0
while index < len(self):
yield self[index]
index += 1
def getExportFile(self):
return self._get_export_file()
def setExportFile(self, export_file):
self._set_export_file(export_file)
def add(self, realization_number, iteration_number, runpath, basename):
"""
@type realization_number: int
@type iteration_number: int
        @type runpath: str
        @type basename: str
"""
self._add(realization_number, iteration_number, runpath, basename)
def clear(self):
self._clear()
def free(self):
self._free()
def __repr__(self):
return "RunpathList(size = %d) %s" % (len(self), self._ad_str())
def export(self):
self._export()
def load(self):
if not self._load():
raise IOError("Could not load from:%s" % self._get_export_file())
|
PypiClean
|
/ansible-kkvesper-2.3.2.0.tar.gz/ansible-kkvesper-2.3.2.0/lib/ansible/modules/cloud/ovirt/ovirt_nics_facts.py
|
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_nics_facts
short_description: Retrieve facts about one or more oVirt virtual machine network interfaces
author: "Ondra Machacek (@machacekondra)"
version_added: "2.3"
description:
- "Retrieve facts about one or more oVirt virtual machine network interfaces."
notes:
- "This module creates a new top-level C(ovirt_nics) fact, which
contains a list of NICs."
options:
vm:
description:
- "Name of the VM where NIC is attached."
required: true
name:
description:
- "Name of the NIC, can be used as glob expression."
extends_documentation_fragment: ovirt_facts
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Gather facts about all NICs which names start with C(eth) for VM named C(centos7):
- ovirt_nics_facts:
vm: centos7
name: eth*
- debug:
var: ovirt_nics
'''
RETURN = '''
ovirt_nics:
description: "List of dictionaries describing the network interfaces. NIC attribues are mapped to dictionary keys,
all NICs attributes can be found at following url: https://ovirt.example.com/ovirt-engine/api/model#types/nic."
returned: On success.
type: list
'''
import fnmatch
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
check_sdk,
create_connection,
get_dict_of_struct,
ovirt_facts_full_argument_spec,
search_by_name,
)
def main():
argument_spec = ovirt_facts_full_argument_spec(
vm=dict(required=True),
name=dict(default=None),
)
module = AnsibleModule(argument_spec)
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
vms_service = connection.system_service().vms_service()
vm_name = module.params['vm']
vm = search_by_name(vms_service, vm_name)
if vm is None:
raise Exception("VM '%s' was not found." % vm_name)
nics_service = vms_service.service(vm.id).nics_service()
if module.params['name']:
nics = [
e for e in nics_service.list()
if fnmatch.fnmatch(e.name, module.params['name'])
]
else:
nics = nics_service.list()
module.exit_json(
changed=False,
ansible_facts=dict(
ovirt_nics=[
get_dict_of_struct(
struct=c,
connection=connection,
fetch_nested=module.params.get('fetch_nested'),
attributes=module.params.get('nested_attributes'),
) for c in nics
],
),
)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == '__main__':
main()
|
PypiClean
|
/distro_gauss_binomial-0.1.tar.gz/distro_gauss_binomial-0.1/distro_gauss_binomial/Gaussiandistribution.py
|
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
|
PypiClean
|
/libensemble-0.10.2.tar.gz/libensemble-0.10.2/CONTRIBUTING.rst
|
Contributing to libEnsemble
===========================
Contributions may be made via a GitHub pull request to
https://github.com/Libensemble/libensemble
libEnsemble uses the Gitflow model. Contributors should branch from, and
make pull requests to, the develop branch. The main branch is used only
for releases. Pull requests may be made from a fork, for those without
repository write access.
Code should pass flake8 tests, allowing for the exceptions
given in the flake8_ file in the project directory.
Python code should be formatted using the latest version of black_ by running
the following in the base libensemble directory::
black --config=.black .
Issues can be raised at
https://github.com/Libensemble/libensemble/issues
Issues may include reporting bugs or suggested features. Administrators
will add issues, as appropriate, to the project board at
https://github.com/Libensemble/libensemble/projects
By convention, user branch names should have a <type>/<name> format, where
example types are feature, bugfix, testing, docs, and experimental.
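For example, a branch that adds a new feature might be named::
    feature/my_new_feature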
Administrators may take a hotfix branch from main, which will be
merged into main (as a patch) and develop. Administrators may also take a
release branch off develop and then merge this branch into main and develop
for a release. Most branches should relate to an issue or feature.
When a branch closes a related issue, the pull request message should include
the phrase "Closes #N," where N is the issue number. This will automatically
close out the issues when they are pulled into the default branch (currently
main).
libEnsemble is distributed under a 3-clause BSD license (see LICENSE). The
act of submitting a pull request (with or without an explicit
Signed-off-by tag) will be understood as an affirmation of the
following:
::
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
.. _black: https://pypi.org/project/black/
.. _flake8: https://github.com/Libensemble/libensemble/blob/develop/.flake8
|
PypiClean
|
/Transcrypt-3.7.16.tar.gz/Transcrypt-3.7.16/transcrypt/demos/parcel_demo/node_modules/node-forge/lib/jsbn.js
|
// Basic JavaScript BN library - subset useful for RSA encryption.
/*
Licensing (LICENSE)
-------------------
This software is covered under the following copyright:
*/
/*
* Copyright (c) 2003-2005 Tom Wu
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND,
* EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
*
* IN NO EVENT SHALL TOM WU BE LIABLE FOR ANY SPECIAL, INCIDENTAL,
* INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER
* RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF
* THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
* OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*
* In addition, the following condition applies:
*
* All redistributions must retain an intact copy of this copyright notice
* and disclaimer.
*/
/*
Address all questions regarding this license to:
Tom Wu
[email protected]
*/
var forge = require('./forge');
module.exports = forge.jsbn = forge.jsbn || {};
// Bits per digit
var dbits;
// JavaScript engine analysis
var canary = 0xdeadbeefcafe;
var j_lm = ((canary&0xffffff)==0xefcafe);
// (public) Constructor
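// Accepts (string, radix), (byte array, 256), (bitLength, RNG) for a random value,
// or (bitLength, certainty, RNG) for a probable prime (see bnpFromNumber below).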
function BigInteger(a,b,c) {
this.data = [];
if(a != null)
if("number" == typeof a) this.fromNumber(a,b,c);
else if(b == null && "string" != typeof a) this.fromString(a,256);
else this.fromString(a,b);
}
forge.jsbn.BigInteger = BigInteger;
// return new, unset BigInteger
function nbi() { return new BigInteger(null); }
// am: Compute w_j += (x*this_i), propagate carries,
// c is initial carry, returns final carry.
// c < 3*dvalue, x < 2*dvalue, this_i < dvalue
// We need to select the fastest one that works in this environment.
// am1: use a single mult and divide to get the high bits,
// max digit bits should be 26 because
// max internal value = 2*dvalue^2-2*dvalue (< 2^53)
function am1(i,x,w,j,c,n) {
while(--n >= 0) {
var v = x*this.data[i++]+w.data[j]+c;
c = Math.floor(v/0x4000000);
w.data[j++] = v&0x3ffffff;
}
return c;
}
// am2 avoids a big mult-and-extract completely.
// Max digit bits should be <= 30 because we do bitwise ops
// on values up to 2*hdvalue^2-hdvalue-1 (< 2^31)
function am2(i,x,w,j,c,n) {
var xl = x&0x7fff, xh = x>>15;
while(--n >= 0) {
var l = this.data[i]&0x7fff;
var h = this.data[i++]>>15;
var m = xh*l+h*xl;
l = xl*l+((m&0x7fff)<<15)+w.data[j]+(c&0x3fffffff);
c = (l>>>30)+(m>>>15)+xh*h+(c>>>30);
w.data[j++] = l&0x3fffffff;
}
return c;
}
// Alternately, set max digit bits to 28 since some
// browsers slow down when dealing with 32-bit numbers.
function am3(i,x,w,j,c,n) {
var xl = x&0x3fff, xh = x>>14;
while(--n >= 0) {
var l = this.data[i]&0x3fff;
var h = this.data[i++]>>14;
var m = xh*l+h*xl;
l = xl*l+((m&0x3fff)<<14)+w.data[j]+c;
c = (l>>28)+(m>>14)+xh*h;
w.data[j++] = l&0xfffffff;
}
return c;
}
// node.js (no browser)
if(typeof(navigator) === 'undefined')
{
BigInteger.prototype.am = am3;
dbits = 28;
} else if(j_lm && (navigator.appName == "Microsoft Internet Explorer")) {
BigInteger.prototype.am = am2;
dbits = 30;
} else if(j_lm && (navigator.appName != "Netscape")) {
BigInteger.prototype.am = am1;
dbits = 26;
} else { // Mozilla/Netscape seems to prefer am3
BigInteger.prototype.am = am3;
dbits = 28;
}
BigInteger.prototype.DB = dbits;
BigInteger.prototype.DM = ((1<<dbits)-1);
BigInteger.prototype.DV = (1<<dbits);
var BI_FP = 52;
BigInteger.prototype.FV = Math.pow(2,BI_FP);
BigInteger.prototype.F1 = BI_FP-dbits;
BigInteger.prototype.F2 = 2*dbits-BI_FP;
// Digit conversions
var BI_RM = "0123456789abcdefghijklmnopqrstuvwxyz";
var BI_RC = new Array();
var rr,vv;
rr = "0".charCodeAt(0);
for(vv = 0; vv <= 9; ++vv) BI_RC[rr++] = vv;
rr = "a".charCodeAt(0);
for(vv = 10; vv < 36; ++vv) BI_RC[rr++] = vv;
rr = "A".charCodeAt(0);
for(vv = 10; vv < 36; ++vv) BI_RC[rr++] = vv;
function int2char(n) { return BI_RM.charAt(n); }
function intAt(s,i) {
var c = BI_RC[s.charCodeAt(i)];
return (c==null)?-1:c;
}
// (protected) copy this to r
function bnpCopyTo(r) {
for(var i = this.t-1; i >= 0; --i) r.data[i] = this.data[i];
r.t = this.t;
r.s = this.s;
}
// (protected) set from integer value x, -DV <= x < DV
function bnpFromInt(x) {
this.t = 1;
this.s = (x<0)?-1:0;
if(x > 0) this.data[0] = x;
else if(x < -1) this.data[0] = x+this.DV;
else this.t = 0;
}
// return bigint initialized to value
function nbv(i) { var r = nbi(); r.fromInt(i); return r; }
// (protected) set from string and radix
function bnpFromString(s,b) {
var k;
if(b == 16) k = 4;
else if(b == 8) k = 3;
else if(b == 256) k = 8; // byte array
else if(b == 2) k = 1;
else if(b == 32) k = 5;
else if(b == 4) k = 2;
else { this.fromRadix(s,b); return; }
this.t = 0;
this.s = 0;
var i = s.length, mi = false, sh = 0;
while(--i >= 0) {
var x = (k==8)?s[i]&0xff:intAt(s,i);
if(x < 0) {
if(s.charAt(i) == "-") mi = true;
continue;
}
mi = false;
if(sh == 0)
this.data[this.t++] = x;
else if(sh+k > this.DB) {
this.data[this.t-1] |= (x&((1<<(this.DB-sh))-1))<<sh;
this.data[this.t++] = (x>>(this.DB-sh));
} else
this.data[this.t-1] |= x<<sh;
sh += k;
if(sh >= this.DB) sh -= this.DB;
}
if(k == 8 && (s[0]&0x80) != 0) {
this.s = -1;
if(sh > 0) this.data[this.t-1] |= ((1<<(this.DB-sh))-1)<<sh;
}
this.clamp();
if(mi) BigInteger.ZERO.subTo(this,this);
}
// (protected) clamp off excess high words
function bnpClamp() {
var c = this.s&this.DM;
while(this.t > 0 && this.data[this.t-1] == c) --this.t;
}
// (public) return string representation in given radix
function bnToString(b) {
if(this.s < 0) return "-"+this.negate().toString(b);
var k;
if(b == 16) k = 4;
else if(b == 8) k = 3;
else if(b == 2) k = 1;
else if(b == 32) k = 5;
else if(b == 4) k = 2;
else return this.toRadix(b);
var km = (1<<k)-1, d, m = false, r = "", i = this.t;
var p = this.DB-(i*this.DB)%k;
if(i-- > 0) {
if(p < this.DB && (d = this.data[i]>>p) > 0) { m = true; r = int2char(d); }
while(i >= 0) {
if(p < k) {
d = (this.data[i]&((1<<p)-1))<<(k-p);
d |= this.data[--i]>>(p+=this.DB-k);
} else {
d = (this.data[i]>>(p-=k))&km;
if(p <= 0) { p += this.DB; --i; }
}
if(d > 0) m = true;
if(m) r += int2char(d);
}
}
return m?r:"0";
}
// (public) -this
function bnNegate() { var r = nbi(); BigInteger.ZERO.subTo(this,r); return r; }
// (public) |this|
function bnAbs() { return (this.s<0)?this.negate():this; }
// (public) return + if this > a, - if this < a, 0 if equal
function bnCompareTo(a) {
var r = this.s-a.s;
if(r != 0) return r;
var i = this.t;
r = i-a.t;
if(r != 0) return (this.s<0)?-r:r;
while(--i >= 0) if((r=this.data[i]-a.data[i]) != 0) return r;
return 0;
}
// returns bit length of the integer x
function nbits(x) {
var r = 1, t;
if((t=x>>>16) != 0) { x = t; r += 16; }
if((t=x>>8) != 0) { x = t; r += 8; }
if((t=x>>4) != 0) { x = t; r += 4; }
if((t=x>>2) != 0) { x = t; r += 2; }
if((t=x>>1) != 0) { x = t; r += 1; }
return r;
}
// (public) return the number of bits in "this"
function bnBitLength() {
if(this.t <= 0) return 0;
return this.DB*(this.t-1)+nbits(this.data[this.t-1]^(this.s&this.DM));
}
// (protected) r = this << n*DB
function bnpDLShiftTo(n,r) {
var i;
for(i = this.t-1; i >= 0; --i) r.data[i+n] = this.data[i];
for(i = n-1; i >= 0; --i) r.data[i] = 0;
r.t = this.t+n;
r.s = this.s;
}
// (protected) r = this >> n*DB
function bnpDRShiftTo(n,r) {
for(var i = n; i < this.t; ++i) r.data[i-n] = this.data[i];
r.t = Math.max(this.t-n,0);
r.s = this.s;
}
// (protected) r = this << n
function bnpLShiftTo(n,r) {
var bs = n%this.DB;
var cbs = this.DB-bs;
var bm = (1<<cbs)-1;
var ds = Math.floor(n/this.DB), c = (this.s<<bs)&this.DM, i;
for(i = this.t-1; i >= 0; --i) {
r.data[i+ds+1] = (this.data[i]>>cbs)|c;
c = (this.data[i]&bm)<<bs;
}
for(i = ds-1; i >= 0; --i) r.data[i] = 0;
r.data[ds] = c;
r.t = this.t+ds+1;
r.s = this.s;
r.clamp();
}
// (protected) r = this >> n
function bnpRShiftTo(n,r) {
r.s = this.s;
var ds = Math.floor(n/this.DB);
if(ds >= this.t) { r.t = 0; return; }
var bs = n%this.DB;
var cbs = this.DB-bs;
var bm = (1<<bs)-1;
r.data[0] = this.data[ds]>>bs;
for(var i = ds+1; i < this.t; ++i) {
r.data[i-ds-1] |= (this.data[i]&bm)<<cbs;
r.data[i-ds] = this.data[i]>>bs;
}
if(bs > 0) r.data[this.t-ds-1] |= (this.s&bm)<<cbs;
r.t = this.t-ds;
r.clamp();
}
// (protected) r = this - a
function bnpSubTo(a,r) {
var i = 0, c = 0, m = Math.min(a.t,this.t);
while(i < m) {
c += this.data[i]-a.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
if(a.t < this.t) {
c -= a.s;
while(i < this.t) {
c += this.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
c += this.s;
} else {
c += this.s;
while(i < a.t) {
c -= a.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
c -= a.s;
}
r.s = (c<0)?-1:0;
if(c < -1) r.data[i++] = this.DV+c;
else if(c > 0) r.data[i++] = c;
r.t = i;
r.clamp();
}
// (protected) r = this * a, r != this,a (HAC 14.12)
// "this" should be the larger one if appropriate.
function bnpMultiplyTo(a,r) {
var x = this.abs(), y = a.abs();
var i = x.t;
r.t = i+y.t;
while(--i >= 0) r.data[i] = 0;
for(i = 0; i < y.t; ++i) r.data[i+x.t] = x.am(0,y.data[i],r,i,0,x.t);
r.s = 0;
r.clamp();
if(this.s != a.s) BigInteger.ZERO.subTo(r,r);
}
// (protected) r = this^2, r != this (HAC 14.16)
function bnpSquareTo(r) {
var x = this.abs();
var i = r.t = 2*x.t;
while(--i >= 0) r.data[i] = 0;
for(i = 0; i < x.t-1; ++i) {
var c = x.am(i,x.data[i],r,2*i,0,1);
if((r.data[i+x.t]+=x.am(i+1,2*x.data[i],r,2*i+1,c,x.t-i-1)) >= x.DV) {
r.data[i+x.t] -= x.DV;
r.data[i+x.t+1] = 1;
}
}
if(r.t > 0) r.data[r.t-1] += x.am(i,x.data[i],r,2*i,0,1);
r.s = 0;
r.clamp();
}
// (protected) divide this by m, quotient and remainder to q, r (HAC 14.20)
// r != q, this != m. q or r may be null.
function bnpDivRemTo(m,q,r) {
var pm = m.abs();
if(pm.t <= 0) return;
var pt = this.abs();
if(pt.t < pm.t) {
if(q != null) q.fromInt(0);
if(r != null) this.copyTo(r);
return;
}
if(r == null) r = nbi();
var y = nbi(), ts = this.s, ms = m.s;
var nsh = this.DB-nbits(pm.data[pm.t-1]); // normalize modulus
if(nsh > 0) { pm.lShiftTo(nsh,y); pt.lShiftTo(nsh,r); } else { pm.copyTo(y); pt.copyTo(r); }
var ys = y.t;
var y0 = y.data[ys-1];
if(y0 == 0) return;
var yt = y0*(1<<this.F1)+((ys>1)?y.data[ys-2]>>this.F2:0);
var d1 = this.FV/yt, d2 = (1<<this.F1)/yt, e = 1<<this.F2;
var i = r.t, j = i-ys, t = (q==null)?nbi():q;
y.dlShiftTo(j,t);
if(r.compareTo(t) >= 0) {
r.data[r.t++] = 1;
r.subTo(t,r);
}
BigInteger.ONE.dlShiftTo(ys,t);
t.subTo(y,y); // "negative" y so we can replace sub with am later
while(y.t < ys) y.data[y.t++] = 0;
while(--j >= 0) {
// Estimate quotient digit
var qd = (r.data[--i]==y0)?this.DM:Math.floor(r.data[i]*d1+(r.data[i-1]+e)*d2);
if((r.data[i]+=y.am(0,qd,r,j,0,ys)) < qd) { // Try it out
y.dlShiftTo(j,t);
r.subTo(t,r);
while(r.data[i] < --qd) r.subTo(t,r);
}
}
if(q != null) {
r.drShiftTo(ys,q);
if(ts != ms) BigInteger.ZERO.subTo(q,q);
}
r.t = ys;
r.clamp();
if(nsh > 0) r.rShiftTo(nsh,r); // Denormalize remainder
if(ts < 0) BigInteger.ZERO.subTo(r,r);
}
// (public) this mod a
function bnMod(a) {
var r = nbi();
this.abs().divRemTo(a,null,r);
if(this.s < 0 && r.compareTo(BigInteger.ZERO) > 0) a.subTo(r,r);
return r;
}
// Modular reduction using "classic" algorithm
function Classic(m) { this.m = m; }
function cConvert(x) {
if(x.s < 0 || x.compareTo(this.m) >= 0) return x.mod(this.m);
else return x;
}
function cRevert(x) { return x; }
function cReduce(x) { x.divRemTo(this.m,null,x); }
function cMulTo(x,y,r) { x.multiplyTo(y,r); this.reduce(r); }
function cSqrTo(x,r) { x.squareTo(r); this.reduce(r); }
Classic.prototype.convert = cConvert;
Classic.prototype.revert = cRevert;
Classic.prototype.reduce = cReduce;
Classic.prototype.mulTo = cMulTo;
Classic.prototype.sqrTo = cSqrTo;
// (protected) return "-1/this % 2^DB"; useful for Mont. reduction
// justification:
// xy == 1 (mod m)
// xy = 1+km
// xy(2-xy) = (1+km)(1-km)
// x[y(2-xy)] = 1-k^2m^2
// x[y(2-xy)] == 1 (mod m^2)
// if y is 1/x mod m, then y(2-xy) is 1/x mod m^2
// should reduce x and y(2-xy) by m^2 at each step to keep size bounded.
// JS multiply "overflows" differently from C/C++, so care is needed here.
function bnpInvDigit() {
if(this.t < 1) return 0;
var x = this.data[0];
if((x&1) == 0) return 0;
var y = x&3; // y == 1/x mod 2^2
y = (y*(2-(x&0xf)*y))&0xf; // y == 1/x mod 2^4
y = (y*(2-(x&0xff)*y))&0xff; // y == 1/x mod 2^8
y = (y*(2-(((x&0xffff)*y)&0xffff)))&0xffff; // y == 1/x mod 2^16
// last step - calculate inverse mod DV directly;
// assumes 16 < DB <= 32 and assumes ability to handle 48-bit ints
y = (y*(2-x*y%this.DV))%this.DV; // y == 1/x mod 2^dbits
// we really want the negative inverse, and -DV < y < DV
return (y>0)?this.DV-y:-y;
}
// Montgomery reduction
function Montgomery(m) {
this.m = m;
this.mp = m.invDigit();
this.mpl = this.mp&0x7fff;
this.mph = this.mp>>15;
this.um = (1<<(m.DB-15))-1;
this.mt2 = 2*m.t;
}
// xR mod m
function montConvert(x) {
var r = nbi();
x.abs().dlShiftTo(this.m.t,r);
r.divRemTo(this.m,null,r);
if(x.s < 0 && r.compareTo(BigInteger.ZERO) > 0) this.m.subTo(r,r);
return r;
}
// x/R mod m
function montRevert(x) {
var r = nbi();
x.copyTo(r);
this.reduce(r);
return r;
}
// x = x/R mod m (HAC 14.32)
function montReduce(x) {
while(x.t <= this.mt2) // pad x so am has enough room later
x.data[x.t++] = 0;
for(var i = 0; i < this.m.t; ++i) {
// faster way of calculating u0 = x.data[i]*mp mod DV
var j = x.data[i]&0x7fff;
var u0 = (j*this.mpl+(((j*this.mph+(x.data[i]>>15)*this.mpl)&this.um)<<15))&x.DM;
// use am to combine the multiply-shift-add into one call
j = i+this.m.t;
x.data[j] += this.m.am(0,u0,x,i,0,this.m.t);
// propagate carry
while(x.data[j] >= x.DV) { x.data[j] -= x.DV; x.data[++j]++; }
}
x.clamp();
x.drShiftTo(this.m.t,x);
if(x.compareTo(this.m) >= 0) x.subTo(this.m,x);
}
// r = "x^2/R mod m"; x != r
function montSqrTo(x,r) { x.squareTo(r); this.reduce(r); }
// r = "xy/R mod m"; x,y != r
function montMulTo(x,y,r) { x.multiplyTo(y,r); this.reduce(r); }
Montgomery.prototype.convert = montConvert;
Montgomery.prototype.revert = montRevert;
Montgomery.prototype.reduce = montReduce;
Montgomery.prototype.mulTo = montMulTo;
Montgomery.prototype.sqrTo = montSqrTo;
// (protected) true iff this is even
function bnpIsEven() { return ((this.t>0)?(this.data[0]&1):this.s) == 0; }
// (protected) this^e, e < 2^32, doing sqr and mul with "r" (HAC 14.79)
function bnpExp(e,z) {
if(e > 0xffffffff || e < 1) return BigInteger.ONE;
var r = nbi(), r2 = nbi(), g = z.convert(this), i = nbits(e)-1;
g.copyTo(r);
while(--i >= 0) {
z.sqrTo(r,r2);
if((e&(1<<i)) > 0) z.mulTo(r2,g,r);
else { var t = r; r = r2; r2 = t; }
}
return z.revert(r);
}
// (public) this^e % m, 0 <= e < 2^32
function bnModPowInt(e,m) {
var z;
if(e < 256 || m.isEven()) z = new Classic(m); else z = new Montgomery(m);
return this.exp(e,z);
}
// protected
BigInteger.prototype.copyTo = bnpCopyTo;
BigInteger.prototype.fromInt = bnpFromInt;
BigInteger.prototype.fromString = bnpFromString;
BigInteger.prototype.clamp = bnpClamp;
BigInteger.prototype.dlShiftTo = bnpDLShiftTo;
BigInteger.prototype.drShiftTo = bnpDRShiftTo;
BigInteger.prototype.lShiftTo = bnpLShiftTo;
BigInteger.prototype.rShiftTo = bnpRShiftTo;
BigInteger.prototype.subTo = bnpSubTo;
BigInteger.prototype.multiplyTo = bnpMultiplyTo;
BigInteger.prototype.squareTo = bnpSquareTo;
BigInteger.prototype.divRemTo = bnpDivRemTo;
BigInteger.prototype.invDigit = bnpInvDigit;
BigInteger.prototype.isEven = bnpIsEven;
BigInteger.prototype.exp = bnpExp;
// public
BigInteger.prototype.toString = bnToString;
BigInteger.prototype.negate = bnNegate;
BigInteger.prototype.abs = bnAbs;
BigInteger.prototype.compareTo = bnCompareTo;
BigInteger.prototype.bitLength = bnBitLength;
BigInteger.prototype.mod = bnMod;
BigInteger.prototype.modPowInt = bnModPowInt;
// "constants"
BigInteger.ZERO = nbv(0);
BigInteger.ONE = nbv(1);
// jsbn2 lib
//Copyright (c) 2005-2009 Tom Wu
//All Rights Reserved.
//See "LICENSE" for details (See jsbn.js for LICENSE).
//Extended JavaScript BN functions, required for RSA private ops.
//Version 1.1: new BigInteger("0", 10) returns "proper" zero
//(public)
function bnClone() { var r = nbi(); this.copyTo(r); return r; }
//(public) return value as integer
function bnIntValue() {
if(this.s < 0) {
if(this.t == 1) return this.data[0]-this.DV;
else if(this.t == 0) return -1;
} else if(this.t == 1) return this.data[0];
else if(this.t == 0) return 0;
// assumes 16 < DB < 32
return ((this.data[1]&((1<<(32-this.DB))-1))<<this.DB)|this.data[0];
}
//(public) return value as byte
function bnByteValue() { return (this.t==0)?this.s:(this.data[0]<<24)>>24; }
//(public) return value as short (assumes DB>=16)
function bnShortValue() { return (this.t==0)?this.s:(this.data[0]<<16)>>16; }
//(protected) return x s.t. r^x < DV
function bnpChunkSize(r) { return Math.floor(Math.LN2*this.DB/Math.log(r)); }
//(public) 0 if this == 0, 1 if this > 0
function bnSigNum() {
if(this.s < 0) return -1;
else if(this.t <= 0 || (this.t == 1 && this.data[0] <= 0)) return 0;
else return 1;
}
//(protected) convert to radix string
function bnpToRadix(b) {
if(b == null) b = 10;
if(this.signum() == 0 || b < 2 || b > 36) return "0";
var cs = this.chunkSize(b);
var a = Math.pow(b,cs);
var d = nbv(a), y = nbi(), z = nbi(), r = "";
this.divRemTo(d,y,z);
while(y.signum() > 0) {
r = (a+z.intValue()).toString(b).substr(1) + r;
y.divRemTo(d,y,z);
}
return z.intValue().toString(b) + r;
}
//(protected) convert from radix string
function bnpFromRadix(s,b) {
this.fromInt(0);
if(b == null) b = 10;
var cs = this.chunkSize(b);
var d = Math.pow(b,cs), mi = false, j = 0, w = 0;
for(var i = 0; i < s.length; ++i) {
var x = intAt(s,i);
if(x < 0) {
if(s.charAt(i) == "-" && this.signum() == 0) mi = true;
continue;
}
w = b*w+x;
if(++j >= cs) {
this.dMultiply(d);
this.dAddOffset(w,0);
j = 0;
w = 0;
}
}
if(j > 0) {
this.dMultiply(Math.pow(b,j));
this.dAddOffset(w,0);
}
if(mi) BigInteger.ZERO.subTo(this,this);
}
//(protected) alternate constructor
function bnpFromNumber(a,b,c) {
if("number" == typeof b) {
// new BigInteger(int,int,RNG)
if(a < 2) this.fromInt(1);
else {
this.fromNumber(a,c);
if(!this.testBit(a-1)) // force MSB set
this.bitwiseTo(BigInteger.ONE.shiftLeft(a-1),op_or,this);
if(this.isEven()) this.dAddOffset(1,0); // force odd
while(!this.isProbablePrime(b)) {
this.dAddOffset(2,0);
if(this.bitLength() > a) this.subTo(BigInteger.ONE.shiftLeft(a-1),this);
}
}
} else {
// new BigInteger(int,RNG)
var x = new Array(), t = a&7;
x.length = (a>>3)+1;
b.nextBytes(x);
if(t > 0) x[0] &= ((1<<t)-1); else x[0] = 0;
this.fromString(x,256);
}
}
//(public) convert to bigendian byte array
function bnToByteArray() {
var i = this.t, r = new Array();
r[0] = this.s;
var p = this.DB-(i*this.DB)%8, d, k = 0;
if(i-- > 0) {
if(p < this.DB && (d = this.data[i]>>p) != (this.s&this.DM)>>p)
r[k++] = d|(this.s<<(this.DB-p));
while(i >= 0) {
if(p < 8) {
d = (this.data[i]&((1<<p)-1))<<(8-p);
d |= this.data[--i]>>(p+=this.DB-8);
} else {
d = (this.data[i]>>(p-=8))&0xff;
if(p <= 0) { p += this.DB; --i; }
}
if((d&0x80) != 0) d |= -256;
if(k == 0 && (this.s&0x80) != (d&0x80)) ++k;
if(k > 0 || d != this.s) r[k++] = d;
}
}
return r;
}
function bnEquals(a) { return(this.compareTo(a)==0); }
function bnMin(a) { return(this.compareTo(a)<0)?this:a; }
function bnMax(a) { return(this.compareTo(a)>0)?this:a; }
//(protected) r = this op a (bitwise)
function bnpBitwiseTo(a,op,r) {
var i, f, m = Math.min(a.t,this.t);
for(i = 0; i < m; ++i) r.data[i] = op(this.data[i],a.data[i]);
if(a.t < this.t) {
f = a.s&this.DM;
for(i = m; i < this.t; ++i) r.data[i] = op(this.data[i],f);
r.t = this.t;
} else {
f = this.s&this.DM;
for(i = m; i < a.t; ++i) r.data[i] = op(f,a.data[i]);
r.t = a.t;
}
r.s = op(this.s,a.s);
r.clamp();
}
//(public) this & a
function op_and(x,y) { return x&y; }
function bnAnd(a) { var r = nbi(); this.bitwiseTo(a,op_and,r); return r; }
//(public) this | a
function op_or(x,y) { return x|y; }
function bnOr(a) { var r = nbi(); this.bitwiseTo(a,op_or,r); return r; }
//(public) this ^ a
function op_xor(x,y) { return x^y; }
function bnXor(a) { var r = nbi(); this.bitwiseTo(a,op_xor,r); return r; }
//(public) this & ~a
function op_andnot(x,y) { return x&~y; }
function bnAndNot(a) { var r = nbi(); this.bitwiseTo(a,op_andnot,r); return r; }
//(public) ~this
function bnNot() {
var r = nbi();
for(var i = 0; i < this.t; ++i) r.data[i] = this.DM&~this.data[i];
r.t = this.t;
r.s = ~this.s;
return r;
}
//(public) this << n
function bnShiftLeft(n) {
var r = nbi();
if(n < 0) this.rShiftTo(-n,r); else this.lShiftTo(n,r);
return r;
}
//(public) this >> n
function bnShiftRight(n) {
var r = nbi();
if(n < 0) this.lShiftTo(-n,r); else this.rShiftTo(n,r);
return r;
}
//return index of lowest 1-bit in x, x < 2^31
function lbit(x) {
if(x == 0) return -1;
var r = 0;
if((x&0xffff) == 0) { x >>= 16; r += 16; }
if((x&0xff) == 0) { x >>= 8; r += 8; }
if((x&0xf) == 0) { x >>= 4; r += 4; }
if((x&3) == 0) { x >>= 2; r += 2; }
if((x&1) == 0) ++r;
return r;
}
//(public) returns index of lowest 1-bit (or -1 if none)
function bnGetLowestSetBit() {
for(var i = 0; i < this.t; ++i)
if(this.data[i] != 0) return i*this.DB+lbit(this.data[i]);
if(this.s < 0) return this.t*this.DB;
return -1;
}
//return number of 1 bits in x
function cbit(x) {
var r = 0;
while(x != 0) { x &= x-1; ++r; }
return r;
}
//(public) return number of set bits
function bnBitCount() {
var r = 0, x = this.s&this.DM;
for(var i = 0; i < this.t; ++i) r += cbit(this.data[i]^x);
return r;
}
//(public) true iff nth bit is set
function bnTestBit(n) {
var j = Math.floor(n/this.DB);
if(j >= this.t) return(this.s!=0);
return((this.data[j]&(1<<(n%this.DB)))!=0);
}
//(protected) this op (1<<n)
function bnpChangeBit(n,op) {
var r = BigInteger.ONE.shiftLeft(n);
this.bitwiseTo(r,op,r);
return r;
}
//(public) this | (1<<n)
function bnSetBit(n) { return this.changeBit(n,op_or); }
//(public) this & ~(1<<n)
function bnClearBit(n) { return this.changeBit(n,op_andnot); }
//(public) this ^ (1<<n)
function bnFlipBit(n) { return this.changeBit(n,op_xor); }
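// The bit operations view the value as an infinite two's-complement bit
// string, so testBit(n) past the stored words returns true for negative
// values; setBit/clearBit/flipBit return new objects and leave 'this'
// unchanged. Sketch:
//   var x = nbv(5);            // binary 101
//   x.testBit(2);              // true
//   x.setBit(1).intValue();    // 7
//   x.flipBit(0).intValue();   // 4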
//(protected) r = this + a
function bnpAddTo(a,r) {
var i = 0, c = 0, m = Math.min(a.t,this.t);
while(i < m) {
c += this.data[i]+a.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
if(a.t < this.t) {
c += a.s;
while(i < this.t) {
c += this.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
c += this.s;
} else {
c += this.s;
while(i < a.t) {
c += a.data[i];
r.data[i++] = c&this.DM;
c >>= this.DB;
}
c += a.s;
}
r.s = (c<0)?-1:0;
if(c > 0) r.data[i++] = c;
else if(c < -1) r.data[i++] = this.DV+c;
r.t = i;
r.clamp();
}
//(public) this + a
function bnAdd(a) { var r = nbi(); this.addTo(a,r); return r; }
//(public) this - a
function bnSubtract(a) { var r = nbi(); this.subTo(a,r); return r; }
//(public) this * a
function bnMultiply(a) { var r = nbi(); this.multiplyTo(a,r); return r; }
//(public) this / a
function bnDivide(a) { var r = nbi(); this.divRemTo(a,r,null); return r; }
//(public) this % a
function bnRemainder(a) { var r = nbi(); this.divRemTo(a,null,r); return r; }
//(public) [this/a,this%a]
function bnDivideAndRemainder(a) {
var q = nbi(), r = nbi();
this.divRemTo(a,q,r);
return new Array(q,r);
}
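// These convenience wrappers allocate a fresh result with nbi() and leave
// both operands untouched. Sketch:
//   var a = nbv(17), b = nbv(5);
//   a.add(b).intValue();        // 22
//   a.multiply(b).intValue();   // 85
//   a.divideAndRemainder(b);    // [quotient 3, remainder 2]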
//(protected) this *= n, this >= 0, 1 < n < DV
function bnpDMultiply(n) {
this.data[this.t] = this.am(0,n-1,this,0,0,this.t);
++this.t;
this.clamp();
}
//(protected) this += n << w words, this >= 0
function bnpDAddOffset(n,w) {
if(n == 0) return;
while(this.t <= w) this.data[this.t++] = 0;
this.data[w] += n;
while(this.data[w] >= this.DV) {
this.data[w] -= this.DV;
if(++w >= this.t) this.data[this.t++] = 0;
++this.data[w];
}
}
//A "null" reducer
function NullExp() {}
function nNop(x) { return x; }
function nMulTo(x,y,r) { x.multiplyTo(y,r); }
function nSqrTo(x,r) { x.squareTo(r); }
NullExp.prototype.convert = nNop;
NullExp.prototype.revert = nNop;
NullExp.prototype.mulTo = nMulTo;
NullExp.prototype.sqrTo = nSqrTo;
//(public) this^e
function bnPow(e) { return this.exp(e,new NullExp()); }
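// pow(e) feeds the no-op reducer above into the generic exp() routine
// (defined earlier in this file), giving plain non-modular exponentiation;
// e is a small JS integer and the result grows as roughly e * bitLength(this)
// bits, so it is only practical for small exponents. Sketch:
//   nbv(3).pow(10).intValue(); // 59049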
//(protected) r = lower n words of "this * a", a.t <= n
//"this" should be the larger one if appropriate.
function bnpMultiplyLowerTo(a,n,r) {
var i = Math.min(this.t+a.t,n);
r.s = 0; // assumes a,this >= 0
r.t = i;
while(i > 0) r.data[--i] = 0;
var j;
for(j = r.t-this.t; i < j; ++i) r.data[i+this.t] = this.am(0,a.data[i],r,i,0,this.t);
for(j = Math.min(a.t,n); i < j; ++i) this.am(0,a.data[i],r,i,0,n-i);
r.clamp();
}
//(protected) r = "this * a" without lower n words, n > 0
//"this" should be the larger one if appropriate.
function bnpMultiplyUpperTo(a,n,r) {
--n;
var i = r.t = this.t+a.t-n;
r.s = 0; // assumes a,this >= 0
while(--i >= 0) r.data[i] = 0;
for(i = Math.max(n-this.t,0); i < a.t; ++i)
r.data[this.t+i-n] = this.am(n-i,a.data[i],r,0,0,this.t+i-n);
r.clamp();
r.drShiftTo(1,r);
}
//Barrett modular reduction
function Barrett(m) {
// setup Barrett
this.r2 = nbi();
this.q3 = nbi();
BigInteger.ONE.dlShiftTo(2*m.t,this.r2);
this.mu = this.r2.divide(m);
this.m = m;
}
function barrettConvert(x) {
if(x.s < 0 || x.t > 2*this.m.t) return x.mod(this.m);
else if(x.compareTo(this.m) < 0) return x;
else { var r = nbi(); x.copyTo(r); this.reduce(r); return r; }
}
function barrettRevert(x) { return x; }
//x = x mod m (HAC 14.42)
function barrettReduce(x) {
x.drShiftTo(this.m.t-1,this.r2);
if(x.t > this.m.t+1) { x.t = this.m.t+1; x.clamp(); }
this.mu.multiplyUpperTo(this.r2,this.m.t+1,this.q3);
this.m.multiplyLowerTo(this.q3,this.m.t+1,this.r2);
while(x.compareTo(this.r2) < 0) x.dAddOffset(1,this.m.t+1);
x.subTo(this.r2,x);
while(x.compareTo(this.m) >= 0) x.subTo(this.m,x);
}
//r = x^2 mod m; x != r
function barrettSqrTo(x,r) { x.squareTo(r); this.reduce(r); }
//r = x*y mod m; x,y != r
function barrettMulTo(x,y,r) { x.multiplyTo(y,r); this.reduce(r); }
Barrett.prototype.convert = barrettConvert;
Barrett.prototype.revert = barrettRevert;
Barrett.prototype.reduce = barrettReduce;
Barrett.prototype.mulTo = barrettMulTo;
Barrett.prototype.sqrTo = barrettSqrTo;
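// Barrett reduction (HAC 14.42): the constructor precomputes
// mu = floor(b^(2k) / m) once, with k = m.t words and b = 2^DB, so each
// reduce() replaces a long division by m with two truncated multiplications
// against mu. modPow() below selects it for even moduli, where Montgomery
// reduction does not apply.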
//(public) this^e % m (HAC 14.85)
function bnModPow(e,m) {
var i = e.bitLength(), k, r = nbv(1), z;
if(i <= 0) return r;
else if(i < 18) k = 1;
else if(i < 48) k = 3;
else if(i < 144) k = 4;
else if(i < 768) k = 5;
else k = 6;
if(i < 8)
z = new Classic(m);
else if(m.isEven())
z = new Barrett(m);
else
z = new Montgomery(m);
// precomputation
var g = new Array(), n = 3, k1 = k-1, km = (1<<k)-1;
g[1] = z.convert(this);
if(k > 1) {
var g2 = nbi();
z.sqrTo(g[1],g2);
while(n <= km) {
g[n] = nbi();
z.mulTo(g2,g[n-2],g[n]);
n += 2;
}
}
var j = e.t-1, w, is1 = true, r2 = nbi(), t;
i = nbits(e.data[j])-1;
while(j >= 0) {
if(i >= k1) w = (e.data[j]>>(i-k1))&km;
else {
w = (e.data[j]&((1<<(i+1))-1))<<(k1-i);
if(j > 0) w |= e.data[j-1]>>(this.DB+i-k1);
}
n = k;
while((w&1) == 0) { w >>= 1; --n; }
if((i -= n) < 0) { i += this.DB; --j; }
if(is1) { // ret == 1, don't bother squaring or multiplying it
g[w].copyTo(r);
is1 = false;
} else {
while(n > 1) { z.sqrTo(r,r2); z.sqrTo(r2,r); n -= 2; }
if(n > 0) z.sqrTo(r,r2); else { t = r; r = r2; r2 = t; }
z.mulTo(r2,g[w],r);
}
while(j >= 0 && (e.data[j]&(1<<i)) == 0) {
z.sqrTo(r,r2); t = r; r = r2; r2 = t;
if(--i < 0) { i = this.DB-1; --j; }
}
}
return z.revert(r);
}
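// modPow is sliding-window exponentiation: the window width k (up to 6) is
// picked from e.bitLength(), the odd powers g[1], g[3], ..., g[(1<<k)-1] are
// precomputed in the reducer's domain, and the reducer z is Classic for tiny
// exponents, Barrett for even m, Montgomery otherwise. Sketch:
//   nbv(4).modPow(nbv(13), nbv(497)).intValue(); // 445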
//(public) gcd(this,a) (HAC 14.54)
function bnGCD(a) {
var x = (this.s<0)?this.negate():this.clone();
var y = (a.s<0)?a.negate():a.clone();
if(x.compareTo(y) < 0) { var t = x; x = y; y = t; }
var i = x.getLowestSetBit(), g = y.getLowestSetBit();
if(g < 0) return x;
if(i < g) g = i;
if(g > 0) {
x.rShiftTo(g,x);
y.rShiftTo(g,y);
}
while(x.signum() > 0) {
if((i = x.getLowestSetBit()) > 0) x.rShiftTo(i,x);
if((i = y.getLowestSetBit()) > 0) y.rShiftTo(i,y);
if(x.compareTo(y) >= 0) {
x.subTo(y,x);
x.rShiftTo(1,x);
} else {
y.subTo(x,y);
y.rShiftTo(1,y);
}
}
if(g > 0) y.lShiftTo(g,y);
return y;
}
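// gcd is the binary GCD algorithm: pull out shared factors of two, then
// repeatedly subtract the smaller value and halve, so no division is needed;
// the result is always non-negative. Sketch:
//   nbv(462).gcd(nbv(1071)).intValue(); // 21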
//(protected) this % n, n < 2^26
function bnpModInt(n) {
if(n <= 0) return 0;
var d = this.DV%n, r = (this.s<0)?n-1:0;
if(this.t > 0)
if(d == 0) r = this.data[0]%n;
else for(var i = this.t-1; i >= 0; --i) r = (d*r+this.data[i])%n;
return r;
}
//(public) 1/this % m (HAC 14.61)
function bnModInverse(m) {
var ac = m.isEven();
if((this.isEven() && ac) || m.signum() == 0) return BigInteger.ZERO;
var u = m.clone(), v = this.clone();
var a = nbv(1), b = nbv(0), c = nbv(0), d = nbv(1);
while(u.signum() != 0) {
while(u.isEven()) {
u.rShiftTo(1,u);
if(ac) {
if(!a.isEven() || !b.isEven()) { a.addTo(this,a); b.subTo(m,b); }
a.rShiftTo(1,a);
} else if(!b.isEven()) b.subTo(m,b);
b.rShiftTo(1,b);
}
while(v.isEven()) {
v.rShiftTo(1,v);
if(ac) {
if(!c.isEven() || !d.isEven()) { c.addTo(this,c); d.subTo(m,d); }
c.rShiftTo(1,c);
} else if(!d.isEven()) d.subTo(m,d);
d.rShiftTo(1,d);
}
if(u.compareTo(v) >= 0) {
u.subTo(v,u);
if(ac) a.subTo(c,a);
b.subTo(d,b);
} else {
v.subTo(u,v);
if(ac) c.subTo(a,c);
d.subTo(b,d);
}
}
if(v.compareTo(BigInteger.ONE) != 0) return BigInteger.ZERO;
if(d.compareTo(m) >= 0) return d.subtract(m);
if(d.signum() < 0) d.addTo(m,d); else return d;
if(d.signum() < 0) return d.add(m); else return d;
}
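// modInverse is the binary extended Euclidean algorithm (HAC 14.61); it
// returns BigInteger.ZERO when no inverse exists (non-trivial gcd, or both
// values even). Sketch:
//   nbv(3).modInverse(nbv(11)).intValue(); // 4, since 3*4 = 12 = 1 (mod 11)
//   nbv(4).modInverse(nbv(8));             // ZERO - no inverse exists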
var lowprimes = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,331,337,347,349,353,359,367,373,379,383,389,397,401,409,419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,509];
var lplim = (1<<26)/lowprimes[lowprimes.length-1];
//(public) test primality with certainty >= 1-.5^t
function bnIsProbablePrime(t) {
var i, x = this.abs();
if(x.t == 1 && x.data[0] <= lowprimes[lowprimes.length-1]) {
for(i = 0; i < lowprimes.length; ++i)
if(x.data[0] == lowprimes[i]) return true;
return false;
}
if(x.isEven()) return false;
i = 1;
while(i < lowprimes.length) {
var m = lowprimes[i], j = i+1;
while(j < lowprimes.length && m < lplim) m *= lowprimes[j++];
m = x.modInt(m);
while(i < j) if(m%lowprimes[i++] == 0) return false;
}
return x.millerRabin(t);
}
//(protected) true if probably prime (HAC 4.24, Miller-Rabin)
function bnpMillerRabin(t) {
var n1 = this.subtract(BigInteger.ONE);
var k = n1.getLowestSetBit();
if(k <= 0) return false;
var r = n1.shiftRight(k);
var prng = bnGetPrng();
var a;
for(var i = 0; i < t; ++i) {
// select a random witness 'a' with 1 < a < n1
do {
a = new BigInteger(this.bitLength(), prng);
}
while(a.compareTo(BigInteger.ONE) <= 0 || a.compareTo(n1) >= 0);
var y = a.modPow(r,this);
if(y.compareTo(BigInteger.ONE) != 0 && y.compareTo(n1) != 0) {
var j = 1;
while(j++ < k && y.compareTo(n1) != 0) {
y = y.modPowInt(2,this);
if(y.compareTo(BigInteger.ONE) == 0) return false;
}
if(y.compareTo(n1) != 0) return false;
}
}
return true;
}
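// isProbablePrime(t) first trial-divides by the lowprimes table above (small
// candidates are answered exactly), then runs t rounds of Miller-Rabin with
// random witnesses, matching the certainty bound stated above. Sketch:
//   nbv(97).isProbablePrime(10); // true
//   nbv(91).isProbablePrime(10); // false (91 = 7 * 13)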
// get pseudo random number generator
function bnGetPrng() {
// create a PRNG with an API that matches the secure RNG BigInteger expects (nextBytes)
return {
// x is an array to fill with bytes
nextBytes: function(x) {
for(var i = 0; i < x.length; ++i) {
x[i] = Math.floor(Math.random() * 0x0100);
}
}
};
}
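// Caution: Math.random() is not a cryptographically secure randomness source;
// this fallback backs the Miller-Rabin witness selection above and should not
// be relied on where unpredictable output matters (e.g. key generation).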
//protected
BigInteger.prototype.chunkSize = bnpChunkSize;
BigInteger.prototype.toRadix = bnpToRadix;
BigInteger.prototype.fromRadix = bnpFromRadix;
BigInteger.prototype.fromNumber = bnpFromNumber;
BigInteger.prototype.bitwiseTo = bnpBitwiseTo;
BigInteger.prototype.changeBit = bnpChangeBit;
BigInteger.prototype.addTo = bnpAddTo;
BigInteger.prototype.dMultiply = bnpDMultiply;
BigInteger.prototype.dAddOffset = bnpDAddOffset;
BigInteger.prototype.multiplyLowerTo = bnpMultiplyLowerTo;
BigInteger.prototype.multiplyUpperTo = bnpMultiplyUpperTo;
BigInteger.prototype.modInt = bnpModInt;
BigInteger.prototype.millerRabin = bnpMillerRabin;
//public
BigInteger.prototype.clone = bnClone;
BigInteger.prototype.intValue = bnIntValue;
BigInteger.prototype.byteValue = bnByteValue;
BigInteger.prototype.shortValue = bnShortValue;
BigInteger.prototype.signum = bnSigNum;
BigInteger.prototype.toByteArray = bnToByteArray;
BigInteger.prototype.equals = bnEquals;
BigInteger.prototype.min = bnMin;
BigInteger.prototype.max = bnMax;
BigInteger.prototype.and = bnAnd;
BigInteger.prototype.or = bnOr;
BigInteger.prototype.xor = bnXor;
BigInteger.prototype.andNot = bnAndNot;
BigInteger.prototype.not = bnNot;
BigInteger.prototype.shiftLeft = bnShiftLeft;
BigInteger.prototype.shiftRight = bnShiftRight;
BigInteger.prototype.getLowestSetBit = bnGetLowestSetBit;
BigInteger.prototype.bitCount = bnBitCount;
BigInteger.prototype.testBit = bnTestBit;
BigInteger.prototype.setBit = bnSetBit;
BigInteger.prototype.clearBit = bnClearBit;
BigInteger.prototype.flipBit = bnFlipBit;
BigInteger.prototype.add = bnAdd;
BigInteger.prototype.subtract = bnSubtract;
BigInteger.prototype.multiply = bnMultiply;
BigInteger.prototype.divide = bnDivide;
BigInteger.prototype.remainder = bnRemainder;
BigInteger.prototype.divideAndRemainder = bnDivideAndRemainder;
BigInteger.prototype.modPow = bnModPow;
BigInteger.prototype.modInverse = bnModInverse;
BigInteger.prototype.pow = bnPow;
BigInteger.prototype.gcd = bnGCD;
BigInteger.prototype.isProbablePrime = bnIsProbablePrime;
//BigInteger interfaces not implemented in jsbn:
//BigInteger(int signum, byte[] magnitude)
//double doubleValue()
//float floatValue()
//int hashCode()
//long longValue()
//static BigInteger valueOf(long val)
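// End-to-end sketch of the public API (hedged: assumes the string/radix
// constructor and toString(radix) defined earlier in this file):
//   var p = new BigInteger("61", 10), q = new BigInteger("53", 10);
//   var n = p.multiply(q);                                   // 3233
//   var phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
//   var e = new BigInteger("17", 10), d = e.modInverse(phi); // d = 2753
//   var m = new BigInteger("65", 10);
//   m.modPow(e, n).modPow(d, n).toString(10);                // "65"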