index | package | name | docstring | code | signature
---|---|---|---|---|---|
24,605 | schedula.utils.base | plot |
Plots the Dispatcher as a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the system
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param raw_body:
List of commands to add to the graph body.
:type raw_body: list, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to keep the backend
server alive.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:param render:
Render all pages statically?
:type render: bool, optional
:param viz:
Use viz.js as back-end?
:type viz: bool, optional
:param short_name:
Maximum length of the filename; if set, the name is hashed and shortened.
:type short_name: int, optional
:param executor:
Pool executor used to render the object.
:type executor: str, optional
:param run:
Run the backend server?
:type run: bool, optional
:return:
A SiteMap or, when `sites` is None and the site is served (`view` or `run`), a Site.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())])
| def plot(self, workflow=None, view=True, depth=-1, name=NONE, comment=NONE,
format=NONE, engine=NONE, encoding=NONE, graph_attr=NONE,
node_attr=NONE, edge_attr=NONE, body=NONE, raw_body=NONE,
node_styles=NONE, node_data=NONE, node_function=NONE,
edge_data=NONE, max_lines=NONE, max_width=NONE, directory=None,
sites=None, index=True, viz=False, short_name=None,
executor='async', render=False, run=False):
"""
Plots the Dispatcher as a graph in the DOT language with Graphviz.
:param workflow:
If True the latest solution will be plotted, otherwise the dmap.
:type workflow: bool, optional
:param view:
Open the rendered directed graph in the DOT language with the system
default opener.
:type view: bool, optional
:param edge_data:
Edge attributes to view.
:type edge_data: tuple[str], optional
:param node_data:
Data node attributes to view.
:type node_data: tuple[str], optional
:param node_function:
Function node attributes to view.
:type node_function: tuple[str], optional
:param node_styles:
Default node styles according to graphviz node attributes.
:type node_styles: dict[str|Token, dict[str, str]]
:param depth:
Depth of sub-dispatch plots. If negative all levels are plotted.
:type depth: int, optional
:param name:
Graph name used in the source code.
:type name: str
:param comment:
Comment added to the first line of the source.
:type comment: str
:param directory:
(Sub)directory for source saving and rendering.
:type directory: str, optional
:param format:
Rendering output format ('pdf', 'png', ...).
:type format: str, optional
:param engine:
Layout command used ('dot', 'neato', ...).
:type engine: str, optional
:param encoding:
Encoding for saving the source.
:type encoding: str, optional
:param graph_attr:
Dict of (attribute, value) pairs for the graph.
:type graph_attr: dict, optional
:param node_attr:
Dict of (attribute, value) pairs set for all nodes.
:type node_attr: dict, optional
:param edge_attr:
Dict of (attribute, value) pairs set for all edges.
:type edge_attr: dict, optional
:param body:
Dict of (attribute, value) pairs to add to the graph body.
:type body: dict, optional
:param raw_body:
List of commands to add to the graph body.
:type raw_body: list, optional
:param directory:
Where is the generated Flask app root located?
:type directory: str, optional
:param sites:
A set of :class:`~schedula.utils.drw.Site` to keep the backend
server alive.
:type sites: set[~schedula.utils.drw.Site], optional
:param index:
Add the site index as first page?
:type index: bool, optional
:param max_lines:
Maximum number of lines for rendering node attributes.
:type max_lines: int, optional
:param max_width:
Maximum number of characters in a line to render node attributes.
:type max_width: int, optional
:param view:
Open the main page of the site?
:type view: bool, optional
:param render:
Render all pages statically?
:type render: bool, optional
:param viz:
Use viz.js as back-end?
:type viz: bool, optional
:param short_name:
Maximum length of the filename; if set, the name is hashed and shortened.
:type short_name: int, optional
:param executor:
Pool executor used to render the object.
:type executor: str, optional
:param run:
Run the backend server?
:type run: bool, optional
:return:
A SiteMap or, when `sites` is None and the site is served (`view` or `run`), a Site.
:rtype: schedula.utils.drw.SiteMap
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
:code:
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> def fun(a):
... return a + 1, a - 1
>>> dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dsp.plot(view=False, graph_attr={'ratio': '1'})
SiteMap([(Dispatcher, SiteMap())])
"""
d = {
'name': name, 'comment': comment, 'format': format, 'body': body,
'engine': engine, 'encoding': encoding, 'graph_attr': graph_attr,
'node_attr': node_attr, 'edge_attr': edge_attr, 'raw_body': raw_body
}
options = {
'digraph': {k: v for k, v in d.items() if v is not NONE} or NONE,
'node_styles': node_styles,
'node_data': node_data,
'node_function': node_function,
'edge_data': edge_data,
'max_lines': max_lines, # 5
'max_width': max_width, # 200
}
options = {k: v for k, v in options.items() if v is not NONE}
from .drw import SiteMap
from .sol import Solution
if workflow is None and isinstance(self, Solution):
workflow = True
else:
workflow = workflow or False
sitemap = SiteMap()
sitemap.short_name = short_name
sitemap.directory = directory
sitemap.add_items(self, workflow=workflow, depth=depth, **options)
if render:
sitemap.render(
directory=directory, view=view, index=index, viz_js=viz,
executor=executor
)
elif view or run or sites is not None:
site = sitemap.site(
directory, view=view, run=run, index=index, viz_js=viz,
executor=executor
)
if sites is None:
return site
sites.add(site)
return sitemap
| (self, workflow=None, view=True, depth=-1, name=none, comment=none, format=none, engine=none, encoding=none, graph_attr=none, node_attr=none, edge_attr=none, body=none, raw_body=none, node_styles=none, node_data=none, node_function=none, edge_data=none, max_lines=none, max_width=none, directory=None, sites=None, index=True, viz=False, short_name=None, executor='async', render=False, run=False) |
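The body of `plot` builds its digraph options by dropping every argument left at the `NONE` sentinel. A minimal, self-contained sketch of that pattern (the `NONE` object below is a stand-in for schedula's token, not the real one):

```python
# Stand-in sentinel meaning "argument not passed" (schedula defines its own
# NONE token; this object is only for illustration).
NONE = object()

def collect_options(**kwargs):
    # Keep only the options the caller actually set.
    return {k: v for k, v in kwargs.items() if v is not NONE}

opts = collect_options(format='png', engine=NONE, graph_attr={'ratio': '1'})
# opts == {'format': 'png', 'graph_attr': {'ratio': '1'}}
```

The `or NONE` fallback in the source then marks the whole digraph option set as unset when nothing survives the filter.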
24,606 | schedula.dispatcher | set_default_value |
Set the default value of a data node in the dispatcher.
:param data_id:
Data node id.
:type data_id: str
:param value:
Data node default value.
.. note:: If `EMPTY` the previous default value is removed.
:type value: T, optional
:param initial_dist:
Initial distance in the ArciDispatch algorithm when the data node
default value is used.
:type initial_dist: float, int, optional
**--------------------------------------------------------------------**
**Example**:
A dispatcher with a data node named `a`::
>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='Dispatcher')
...
>>> dsp.add_data(data_id='a')
'a'
Add a default value to `a` node::
>>> dsp.set_default_value('a', value='value of the data')
>>> list(sorted(dsp.default_values['a'].items()))
[('initial_dist', 0.0), ('value', 'value of the data')]
Remove the default value of `a` node::
>>> dsp.set_default_value('a', value=sh.EMPTY)
>>> dsp.default_values
{}
| def set_default_value(self, data_id, value=EMPTY, initial_dist=0.0):
"""
Set the default value of a data node in the dispatcher.
:param data_id:
Data node id.
:type data_id: str
:param value:
Data node default value.
.. note:: If `EMPTY` the previous default value is removed.
:type value: T, optional
:param initial_dist:
Initial distance in the ArciDispatch algorithm when the data node
default value is used.
:type initial_dist: float, int, optional
**--------------------------------------------------------------------**
**Example**:
A dispatcher with a data node named `a`::
>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='Dispatcher')
...
>>> dsp.add_data(data_id='a')
'a'
Add a default value to `a` node::
>>> dsp.set_default_value('a', value='value of the data')
>>> list(sorted(dsp.default_values['a'].items()))
[('initial_dist', 0.0), ('value', 'value of the data')]
Remove the default value of `a` node::
>>> dsp.set_default_value('a', value=sh.EMPTY)
>>> dsp.default_values
{}
"""
try:
if self.dmap.nodes[data_id]['type'] == 'data': # Is data node?
if value is EMPTY:
self.default_values.pop(data_id, None) # Remove default.
else: # Add default.
self.default_values[data_id] = {
'value': value,
'initial_dist': initial_dist
}
return
except KeyError:
pass
raise ValueError('Input error: %s is not a data node' % data_id)
| (self, data_id, value=empty, initial_dist=0.0) |
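The `try`/`except KeyError` flow in `set_default_value` is easy to misread; this schedula-free sketch reproduces the same bookkeeping on a plain dict (the node table, `EMPTY` sentinel, and `Registry` class are all hypothetical):

```python
EMPTY = object()  # stand-in for schedula's EMPTY token

class Registry:
    def __init__(self):
        self.nodes = {'a': {'type': 'data'}, 'fun1': {'type': 'function'}}
        self.default_values = {}

    def set_default_value(self, data_id, value=EMPTY, initial_dist=0.0):
        if self.nodes.get(data_id, {}).get('type') == 'data':
            if value is EMPTY:
                # Passing EMPTY removes any previous default.
                self.default_values.pop(data_id, None)
            else:
                self.default_values[data_id] = {
                    'value': value, 'initial_dist': initial_dist
                }
            return
        raise ValueError('%s is not a data node' % data_id)

r = Registry()
r.set_default_value('a', value=42)      # add a default
r.set_default_value('a', value=EMPTY)   # remove it again
```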
24,607 | schedula.dispatcher | shrink_dsp |
Returns a reduced dispatcher.
:param inputs:
Input data nodes.
:type inputs: list[str], iterable, optional
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:return:
A sub-dispatcher.
:rtype: Dispatcher
.. seealso:: :func:`dispatch`
**--------------------------------------------------------------------**
**Example**:
A dispatcher like this:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> dsp = Dispatcher(name='Dispatcher')
>>> functions = [
... {
... 'function_id': 'fun1',
... 'inputs': ['a', 'b'],
... 'outputs': ['c']
... },
... {
... 'function_id': 'fun2',
... 'inputs': ['b', 'd'],
... 'outputs': ['e']
... },
... {
... 'function_id': 'fun3',
... 'function': min,
... 'inputs': ['d', 'f'],
... 'outputs': ['g']
... },
... {
... 'function_id': 'fun4',
... 'function': max,
... 'inputs': ['a', 'b'],
... 'outputs': ['g']
... },
... {
... 'function_id': 'fun5',
... 'function': max,
... 'inputs': ['d', 'e'],
... 'outputs': ['c', 'f']
... },
... ]
>>> dsp.add_from_lists(fun_list=functions)
([], [...])
Get the sub-dispatcher induced by dispatching with no calls from inputs
`a`, `b`, and `d` to outputs `c` and `f`::
>>> shrink_dsp = dsp.shrink_dsp(inputs=['a', 'b', 'd'],
... outputs=['c', 'f'])
.. dispatcher:: shrink_dsp
:opt: graph_attr={'ratio': '1'}
>>> shrink_dsp.name = 'Sub-Dispatcher'
| def shrink_dsp(self, inputs=None, outputs=None, inputs_dist=None,
wildcard=True):
"""
Returns a reduced dispatcher.
:param inputs:
Input data nodes.
:type inputs: list[str], iterable, optional
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:return:
A sub-dispatcher.
:rtype: Dispatcher
.. seealso:: :func:`dispatch`
**--------------------------------------------------------------------**
**Example**:
A dispatcher like this:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> dsp = Dispatcher(name='Dispatcher')
>>> functions = [
... {
... 'function_id': 'fun1',
... 'inputs': ['a', 'b'],
... 'outputs': ['c']
... },
... {
... 'function_id': 'fun2',
... 'inputs': ['b', 'd'],
... 'outputs': ['e']
... },
... {
... 'function_id': 'fun3',
... 'function': min,
... 'inputs': ['d', 'f'],
... 'outputs': ['g']
... },
... {
... 'function_id': 'fun4',
... 'function': max,
... 'inputs': ['a', 'b'],
... 'outputs': ['g']
... },
... {
... 'function_id': 'fun5',
... 'function': max,
... 'inputs': ['d', 'e'],
... 'outputs': ['c', 'f']
... },
... ]
>>> dsp.add_from_lists(fun_list=functions)
([], [...])
Get the sub-dispatcher induced by dispatching with no calls from inputs
`a`, `b`, and `d` to outputs `c` and `f`::
>>> shrink_dsp = dsp.shrink_dsp(inputs=['a', 'b', 'd'],
... outputs=['c', 'f'])
.. dispatcher:: shrink_dsp
:opt: graph_attr={'ratio': '1'}
>>> shrink_dsp.name = 'Sub-Dispatcher'
"""
bfs = None
if inputs:
# Get all data nodes no wait inputs.
wait_in = self._get_wait_in(flag=False)
# Evaluate the workflow graph without invoking functions.
o = self.dispatch(
inputs, outputs, inputs_dist, wildcard, True, False,
True, _wait_in=wait_in
)
data_nodes = self.data_nodes # Get data nodes.
from .utils.alg import _union_workflow, _convert_bfs
bfs = _union_workflow(o)  # bfs edges.
# Set minimum initial distances.
if inputs_dist:
inputs_dist = combine_dicts(o.dist, inputs_dist)
else:
inputs_dist = o.dist
# Set data nodes to wait inputs.
wait_in = self._get_wait_in(flag=True)
while True: # Start shrinking loop.
# Evaluate the workflow graph without invoking functions.
o = self.dispatch(
inputs, outputs, inputs_dist, wildcard, True, False,
False, _wait_in=wait_in
)
_union_workflow(o, bfs=bfs) # Update bfs.
n_d, status = o._remove_wait_in() # Remove wait input flags.
if not status:
break # Stop iteration.
# Update inputs.
inputs = n_d.intersection(data_nodes).union(inputs)
# Update outputs and convert bfs in DiGraphs.
outputs, bfs = outputs or o, _convert_bfs(bfs)
elif not outputs:
return self.copy_structure() # Empty Dispatcher.
# Get sub dispatcher breadth-first-search graph.
dsp = self._get_dsp_from_bfs(outputs, bfs_graphs=bfs)
return dsp # Return the shrink sub dispatcher.
| (self, inputs=None, outputs=None, inputs_dist=None, wildcard=True) |
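The `while True` shrinking loop above is a fixed-point iteration: re-dispatch, drop wait-in flags, grow the input set, and stop once nothing changes. A schematic, schedula-free sketch of that control flow (the toy `edges` graph and `step` function are illustrative only):

```python
def fixed_point(step, state):
    # Apply step until it reports no further change, as shrink_dsp's loop does.
    while True:
        state, changed = step(state)
        if not changed:
            return state

# Toy step: grow a set of reachable nodes along edges until stable, loosely
# mirroring how the shrinking loop keeps extending its input set.
edges = {'a': {'c'}, 'b': {'c'}, 'c': {'f'}}

def step(reached):
    grown = reached.union(*(edges.get(n, set()) for n in reached))
    return grown, grown != reached

reachable = fixed_point(step, {'a', 'b'})
# reachable == {'a', 'b', 'c', 'f'}
```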
24,609 | schedula.utils.exc | DispatcherAbort | null | class DispatcherAbort(BaseException):
pass
| null |
24,610 | schedula.utils.exc | DispatcherError | null | class DispatcherError(Exception):
def __reduce__(self):
fn, args, state = super(DispatcherError, self).__reduce__()
state = {k: v for k, v in state.items() if k not in ('sol', 'plot')}
return fn, args, state
def __init__(self, *args, sol=None, ex=None, **kwargs):
# noinspection PyArgumentList
super(DispatcherError, self).__init__(*args, **kwargs)
self.plot = None
self.sol = None
self.ex = ex
self.update(sol)
def update(self, sol):
self.sol = sol
self.plot = self.sol.plot if sol is not None else None
| (*args, sol=None, ex=None, **kwargs) |
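`__reduce__` here drops the unpicklable `sol` and `plot` attributes so the error can cross process boundaries (e.g. back from a pool executor). A self-contained sketch of that pickling pattern, using a hypothetical exception class:

```python
import pickle

class ResourceError(Exception):
    # Hypothetical exception carrying a live object that cannot be pickled.
    def __init__(self, *args, handle=None):
        super().__init__(*args)
        self.handle = handle  # e.g. a solution or an open connection

    def __reduce__(self):
        fn, args, state = super().__reduce__()
        # Drop the unpicklable attribute; it comes back unset after loading.
        state = {k: v for k, v in state.items() if k != 'handle'}
        return fn, args, state

err = ResourceError('boom', handle=lambda: None)  # lambda is unpicklable
clone = pickle.loads(pickle.dumps(err))           # works anyway
# clone.args == ('boom',) and clone.handle is None
```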
24,611 | schedula.utils.exc | __init__ | null | def __init__(self, *args, sol=None, ex=None, **kwargs):
# noinspection PyArgumentList
super(DispatcherError, self).__init__(*args, **kwargs)
self.plot = None
self.sol = None
self.ex = ex
self.update(sol)
| (self, *args, sol=None, ex=None, **kwargs) |
24,612 | schedula.utils.exc | __reduce__ | null | def __reduce__(self):
fn, args, state = super(DispatcherError, self).__reduce__()
state = {k: v for k, v in state.items() if k not in ('sol', 'plot')}
return fn, args, state
| (self) |
24,613 | schedula.utils.exc | update | null | def update(self, sol):
self.sol = sol
self.plot = self.sol.plot if sol is not None else None
| (self, sol) |
24,614 | schedula.utils.exc | ExecutorShutdown | null | class ExecutorShutdown(BaseException):
pass
| null |
24,615 | schedula.utils.dsp | MapDispatch |
It dynamically builds a :class:`~schedula.dispatcher.Dispatcher` that is
used to recursively invoke a *dispatching function*, defined by a
constructor function that takes a `dsp` base model as input.
The created function takes a list of input dictionaries, invokes the
mapping function on each, and returns a list of outputs.
:return:
A function that executes the dispatch of the given
:class:`~schedula.dispatcher.Dispatcher`.
:rtype: callable
.. seealso:: :func:`~schedula.utils.dsp.SubDispatch`
Example:
A simple example on how to use the :func:`~schedula.utils.dsp.MapDispatch`:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> from schedula import Dispatcher, MapDispatch
>>> dsp = Dispatcher(name='model')
...
>>> def fun(a, b):
... return a + b, a - b
...
>>> dsp.add_func(fun, ['c', 'd'], inputs_kwargs=True)
'fun'
>>> map_func = MapDispatch(dsp, constructor_kwargs={
... 'outputs': ['c', 'd'], 'output_type': 'list'
... })
>>> map_func([{'a': 1, 'b': 2}, {'a': 2, 'b': 2}, {'a': 3, 'b': 2}])
[[3, -1], [4, 0], [5, 1]]
The execution model is created dynamically according to the length of the
provided inputs. Moreover, :func:`~schedula.utils.dsp.MapDispatch` can
define default values that are recursively merged with the input provided
to the *dispatching function*, as follows:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> map_func([{'a': 1}, {'a': 3, 'b': 3}], defaults={'b': 2})
[[3, -1], [6, 0]]
The :func:`~schedula.utils.dsp.MapDispatch` can also be used as a partial
reducing function, i.e., part of the outputs of the previous step is used
as input for the next execution of the *dispatching function*. For
example:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> map_func = MapDispatch(dsp, recursive_inputs={'c': 'b'})
>>> map_func([{'a': 1, 'b': 1}, {'a': 2}, {'a': 3}])
[Solution([('a', 1), ('b', 1), ('c', 2), ('d', 0)]),
Solution([('a', 2), ('b', 2), ('c', 4), ('d', 0)]),
Solution([('a', 3), ('b', 4), ('c', 7), ('d', -1)])]
| class MapDispatch(SubDispatch):
"""
It dynamically builds a :class:`~schedula.dispatcher.Dispatcher` that is
used to recursively invoke a *dispatching function*, defined by a
constructor function that takes a `dsp` base model as input.
The created function takes a list of input dictionaries, invokes the
mapping function on each, and returns a list of outputs.
:return:
A function that executes the dispatch of the given
:class:`~schedula.dispatcher.Dispatcher`.
:rtype: callable
.. seealso:: :func:`~schedula.utils.dsp.SubDispatch`
Example:
A simple example on how to use the :func:`~schedula.utils.dsp.MapDispatch`:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> from schedula import Dispatcher, MapDispatch
>>> dsp = Dispatcher(name='model')
...
>>> def fun(a, b):
... return a + b, a - b
...
>>> dsp.add_func(fun, ['c', 'd'], inputs_kwargs=True)
'fun'
>>> map_func = MapDispatch(dsp, constructor_kwargs={
... 'outputs': ['c', 'd'], 'output_type': 'list'
... })
>>> map_func([{'a': 1, 'b': 2}, {'a': 2, 'b': 2}, {'a': 3, 'b': 2}])
[[3, -1], [4, 0], [5, 1]]
The execution model is created dynamically according to the length of the
provided inputs. Moreover, :func:`~schedula.utils.dsp.MapDispatch` can
define default values that are recursively merged with the input provided
to the *dispatching function*, as follows:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> map_func([{'a': 1}, {'a': 3, 'b': 3}], defaults={'b': 2})
[[3, -1], [6, 0]]
The :func:`~schedula.utils.dsp.MapDispatch` can also be used as a partial
reducing function, i.e., part of the outputs of the previous step is used
as input for the next execution of the *dispatching function*. For
example:
.. dispatcher:: map_func
:opt: graph_attr={'ratio': '1'}, depth=-1, workflow=True
:code:
>>> map_func = MapDispatch(dsp, recursive_inputs={'c': 'b'})
>>> map_func([{'a': 1, 'b': 1}, {'a': 2}, {'a': 3}])
[Solution([('a', 1), ('b', 1), ('c', 2), ('d', 0)]),
Solution([('a', 2), ('b', 2), ('c', 4), ('d', 0)]),
Solution([('a', 3), ('b', 4), ('c', 7), ('d', -1)])]
"""
def __init__(self, dsp, defaults=None, recursive_inputs=None,
constructor=SubDispatch, constructor_kwargs=None,
function_id=None, func_kw=lambda *args, **data: {},
input_label='inputs<{}>', output_label='outputs<{}>',
data_label='data<{}>', cluster_label='task<{}>', **kwargs):
"""
Initializes the MapDispatch function.
:param dsp:
A dispatcher that identifies the base model.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param defaults:
Default values that are recursively merged with the input provided
to the *dispatching function*.
:type defaults: dict
:param recursive_inputs:
List of data node ids that are extracted from the outputs of the
*dispatching function* and then merged with the inputs of its next
evaluation. If a dictionary is given, it is used to rename the
extracted data node ids.
:type recursive_inputs: list | dict
:param constructor:
It initializes the *dispatching function*.
:type constructor: function | class
:param constructor_kwargs:
Extra keywords passed to the constructor function.
:type constructor_kwargs: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
:param func_kw:
Extra keywords used when adding the *dispatching function* to the
execution model.
:type func_kw: function, optional
:param input_label:
Custom label formatter for recursive inputs.
:type input_label: str, optional
:param output_label:
Custom label formatter for recursive outputs.
:type output_label: str, optional
:param data_label:
Custom label formatter for recursive internal data.
:type data_label: str, optional
:param kwargs:
Keywords to initialize the execution model.
:type kwargs: object
"""
super(MapDispatch, self).__init__(
dsp, function_id=function_id, output_type='list'
)
self.func = constructor(dsp, **(constructor_kwargs or {}))
self.kwargs = kwargs or {}
self.defaults = defaults
self.recursive_inputs = recursive_inputs
self.input_label = input_label
self.output_label = output_label
self.data_label = data_label
self.cluster_label = cluster_label
self.func_kw = func_kw
@staticmethod
def prepare_inputs(inputs, defaults):
inputs = [combine_dicts(defaults, d) for d in inputs]
return inputs if len(inputs) > 1 else inputs[0]
@staticmethod
def recursive_data(recursive_inputs, input_data, outputs):
data = selector(recursive_inputs, outputs or {}, allow_miss=True)
if isinstance(recursive_inputs, dict):
data = map_dict(recursive_inputs, data)
data.update(input_data)
return data
@staticmethod
def format_labels(it, label):
f = label.format
return [f(k, **v) for k, v in it]
@staticmethod
def format_clusters(it, label):
f = label.format
return [{'body': {
'label': f'"{f(k, **v)}"', 'labelloc': 'b'
}} for k, v in it]
def _init_dsp(self, defaults, inputs, recursive_inputs=None):
from ..dispatcher import Dispatcher
defaults = combine_dicts(self.defaults or {}, defaults or {})
self.dsp = dsp = Dispatcher(**self.kwargs)
add_data, add_func = dsp.add_data, dsp.add_func
n = len(str(len(inputs) + 1))
it = [(str(k).zfill(n), v) for k, v in enumerate(inputs, 1)]
inp = self.format_labels(it, self.input_label)
clt = self.format_clusters(it, self.cluster_label)
rl = self.format_labels(it, 'run<{}>')
self.outputs = out = self.format_labels(it, self.output_label)
add_func(self.prepare_inputs, inp, inputs=['inputs', 'defaults'])
recursive = recursive_inputs or self.recursive_inputs
if recursive:
func = functools.partial(self.recursive_data, recursive)
dat = self.format_labels(it, self.data_label)
fl = self.format_labels(it, 'recursive_data<{}>')
it = iter(zip(inp, dat, clt, fl))
i, d, c, fid = next(it)
add_data(i, clusters=c)
add_func(bypass, [d], inputs=[i], clusters=c)
for (i, d, c, fid), o, in zip(it, out[:-1]):
add_data(i, clusters=c)
add_func(func, [d], inputs=[i, o], clusters=c, function_id=fid)
inp = dat
for i, o, c, fid, (k, v) in zip(inp, out, clt, rl, enumerate(inputs)):
add_data(i, clusters=c)
kw = {'clusters': c, 'function_id': fid}
kw.update(self.func_kw(k, **v))
add_func(self.func, [o], inputs=[i], **kw)
add_data(o, clusters=c)
return {'inputs': inputs, 'defaults': defaults}
# noinspection PyMethodOverriding
def __call__(self, inputs, defaults=None, recursive_inputs=None,
_stopper=None, _executor=False, _sol_name=(), _verbose=False):
inputs = self._init_dsp(defaults, inputs, recursive_inputs)
return super(MapDispatch, self).__call__(
inputs, _stopper=_stopper, _executor=_executor, _verbose=_verbose,
_sol_name=_sol_name
)
| (dsp, defaults=None, recursive_inputs=None, constructor=<class 'schedula.utils.dsp.SubDispatch'>, constructor_kwargs=None, function_id=None, func_kw=<function MapDispatch.<lambda> at 0x7f49a8328430>, input_label='inputs<{}>', output_label='outputs<{}>', data_label='data<{}>', cluster_label='task<{}>', **kwargs) |
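The recursive-inputs behaviour (part of each step's output feeds the next step's input, with optional renaming) can be sketched without schedula as a plain fold; `fun` and the merge order mirror the docstring example above, but everything below is hypothetical plain Python:

```python
def fun(a, b):
    return {'c': a + b, 'd': a - b}

def map_dispatch(inputs, recursive_inputs):
    # Run fun over each input dict, feeding selected outputs forward.
    results, carried = [], {}
    for data in inputs:
        # Rename carried outputs (e.g. 'c' -> 'b') and merge them in;
        # explicit inputs win over carried values, as in MapDispatch.
        recursive = {new: carried[old]
                     for old, new in recursive_inputs.items() if old in carried}
        merged = {**recursive, **data}
        out = fun(**merged)
        carried = out
        results.append({**merged, **out})
    return results

res = map_dispatch([{'a': 1, 'b': 1}, {'a': 2}, {'a': 3}], {'c': 'b'})
# res[-1] == {'a': 3, 'b': 4, 'c': 7, 'd': -1}, matching the example above.
```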
24,616 | schedula.utils.dsp | __call__ | null | def __call__(self, inputs, defaults=None, recursive_inputs=None,
_stopper=None, _executor=False, _sol_name=(), _verbose=False):
inputs = self._init_dsp(defaults, inputs, recursive_inputs)
return super(MapDispatch, self).__call__(
inputs, _stopper=_stopper, _executor=_executor, _verbose=_verbose,
_sol_name=_sol_name
)
| (self, inputs, defaults=None, recursive_inputs=None, _stopper=None, _executor=False, _sol_name=(), _verbose=False) |
24,618 | schedula.utils.dsp | __getstate__ | null | def __getstate__(self):
state = self.__dict__.copy()
state['solution'] = state['solution'].__class__(state['dsp'])
del state['__name__']
return state
| (self) |
24,619 | schedula.utils.dsp | __init__ |
Initializes the MapDispatch function.
:param dsp:
A dispatcher that identifies the base model.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param defaults:
Default values that are recursively merged with the input provided
to the *dispatching function*.
:type defaults: dict
:param recursive_inputs:
List of data node ids that are extracted from the outputs of the
*dispatching function* and then merged with the inputs of its next
evaluation. If a dictionary is given, it is used to rename the
extracted data node ids.
:type recursive_inputs: list | dict
:param constructor:
It initializes the *dispatching function*.
:type constructor: function | class
:param constructor_kwargs:
Extra keywords passed to the constructor function.
:type constructor_kwargs: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
:param func_kw:
Extra keywords used when adding the *dispatching function* to the
execution model.
:type func_kw: function, optional
:param input_label:
Custom label formatter for recursive inputs.
:type input_label: str, optional
:param output_label:
Custom label formatter for recursive outputs.
:type output_label: str, optional
:param data_label:
Custom label formatter for recursive internal data.
:type data_label: str, optional
:param kwargs:
Keywords to initialize the execution model.
:type kwargs: object
| def __init__(self, dsp, defaults=None, recursive_inputs=None,
constructor=SubDispatch, constructor_kwargs=None,
function_id=None, func_kw=lambda *args, **data: {},
input_label='inputs<{}>', output_label='outputs<{}>',
data_label='data<{}>', cluster_label='task<{}>', **kwargs):
"""
Initializes the MapDispatch function.
:param dsp:
A dispatcher that identifies the base model.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param defaults:
Default values that are recursively merged with the input provided
to the *dispatching function*.
:type defaults: dict
:param recursive_inputs:
List of data node ids that are extracted from the outputs of the
*dispatching function* and then merged with the inputs of its next
evaluation. If a dictionary is given, it is used to rename the
extracted data node ids.
:type recursive_inputs: list | dict
:param constructor:
It initializes the *dispatching function*.
:type constructor: function | class
:param constructor_kwargs:
Extra keywords passed to the constructor function.
:type constructor_kwargs: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
:param func_kw:
Extra keywords used when adding the *dispatching function* to the
execution model.
:type func_kw: function, optional
:param input_label:
Custom label formatter for recursive inputs.
:type input_label: str, optional
:param output_label:
Custom label formatter for recursive outputs.
:type output_label: str, optional
:param data_label:
Custom label formatter for recursive internal data.
:type data_label: str, optional
:param kwargs:
Keywords to initialize the execution model.
:type kwargs: object
"""
super(MapDispatch, self).__init__(
dsp, function_id=function_id, output_type='list'
)
self.func = constructor(dsp, **(constructor_kwargs or {}))
self.kwargs = kwargs or {}
self.defaults = defaults
self.recursive_inputs = recursive_inputs
self.input_label = input_label
self.output_label = output_label
self.data_label = data_label
self.cluster_label = cluster_label
self.func_kw = func_kw
| (self, dsp, defaults=None, recursive_inputs=None, constructor=<class 'schedula.utils.dsp.SubDispatch'>, constructor_kwargs=None, function_id=None, func_kw=<function MapDispatch.<lambda> at 0x7f49a8328430>, input_label='inputs<{}>', output_label='outputs<{}>', data_label='data<{}>', cluster_label='task<{}>', **kwargs) |
24,621 | schedula.utils.dsp | __setstate__ | null | def __setstate__(self, d):
self.__dict__ = d
self.__name__ = self.name
| (self, d) |
24,622 | schedula.utils.dsp | _init_dsp | null | def _init_dsp(self, defaults, inputs, recursive_inputs=None):
from ..dispatcher import Dispatcher
defaults = combine_dicts(self.defaults or {}, defaults or {})
self.dsp = dsp = Dispatcher(**self.kwargs)
add_data, add_func = dsp.add_data, dsp.add_func
n = len(str(len(inputs) + 1))
it = [(str(k).zfill(n), v) for k, v in enumerate(inputs, 1)]
inp = self.format_labels(it, self.input_label)
clt = self.format_clusters(it, self.cluster_label)
rl = self.format_labels(it, 'run<{}>')
self.outputs = out = self.format_labels(it, self.output_label)
add_func(self.prepare_inputs, inp, inputs=['inputs', 'defaults'])
recursive = recursive_inputs or self.recursive_inputs
if recursive:
func = functools.partial(self.recursive_data, recursive)
dat = self.format_labels(it, self.data_label)
fl = self.format_labels(it, 'recursive_data<{}>')
it = iter(zip(inp, dat, clt, fl))
i, d, c, fid = next(it)
add_data(i, clusters=c)
add_func(bypass, [d], inputs=[i], clusters=c)
for (i, d, c, fid), o, in zip(it, out[:-1]):
add_data(i, clusters=c)
add_func(func, [d], inputs=[i, o], clusters=c, function_id=fid)
inp = dat
for i, o, c, fid, (k, v) in zip(inp, out, clt, rl, enumerate(inputs)):
add_data(i, clusters=c)
kw = {'clusters': c, 'function_id': fid}
kw.update(self.func_kw(k, **v))
add_func(self.func, [o], inputs=[i], **kw)
add_data(o, clusters=c)
return {'inputs': inputs, 'defaults': defaults}
| (self, defaults, inputs, recursive_inputs=None) |
24,623 | schedula.utils.dsp | _return | null | def _return(self, solution):
outs = self.outputs
solution.result()
solution.parent = self
# Set output.
if self.output_type != 'all':
try:
# Save outputs.
return selector(
outs, solution, output_type=self.output_type,
**self.output_type_kw
)
except KeyError:
# Outputs not reached.
missed = {k for k in outs if k not in solution}
# Raise error
msg = '\n Unreachable output-targets: {}\n Available ' \
'outputs: {}'.format(missed, list(solution.keys()))
raise DispatcherError(msg, sol=solution)
return solution # Return outputs.
| (self, solution) |
24,627 | schedula.utils.dsp | format_clusters | null | @staticmethod
def format_clusters(it, label):
f = label.format
return [{'body': {
'label': f'"{f(k, **v)}"', 'labelloc': 'b'
}} for k, v in it]
| (it, label) |
24,628 | schedula.utils.dsp | format_labels | null | @staticmethod
def format_labels(it, label):
f = label.format
return [f(k, **v) for k, v in it]
| (it, label) |
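`format_labels` just applies the label template to each enumerated item. A standalone sketch (the example `it` values are illustrative, mimicking the zero-padded keys produced by `_init_dsp`):

```python
def format_labels(it, label):
    # Apply the label template to each (key, input-dict) pair.
    f = label.format
    return [f(k, **v) for k, v in it]

# Zero-padded keys as _init_dsp would enumerate two input dicts.
it = [('01', {}), ('02', {})]
print(format_labels(it, 'inputs<{}>'))  # ['inputs<01>', 'inputs<02>']
```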
24,631 | schedula.utils.dsp | prepare_inputs | null | @staticmethod
def prepare_inputs(inputs, defaults):
inputs = [combine_dicts(defaults, d) for d in inputs]
return inputs if len(inputs) > 1 else inputs[0]
| (inputs, defaults) |
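`prepare_inputs` layers each run-specific input dict on top of the shared defaults. A minimal sketch, where `combine_dicts` is a simplified stand-in for schedula's helper (later dicts win):

```python
def combine_dicts(*dicts):
    # Simplified stand-in for schedula's combine_dicts: later dicts win.
    out = {}
    for d in dicts:
        out.update(d)
    return out

def prepare_inputs(inputs, defaults):
    # Merge the shared defaults under each run-specific input dict.
    inputs = [combine_dicts(defaults, d) for d in inputs]
    return inputs if len(inputs) > 1 else inputs[0]

print(prepare_inputs([{'a': 1}, {'b': 3}], {'a': 0, 'b': 2}))
# [{'a': 1, 'b': 2}, {'a': 0, 'b': 3}]
```

Note the single-input case unwraps the list, returning the dict itself.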
24,632 | schedula.utils.dsp | recursive_data | null | @staticmethod
def recursive_data(recursive_inputs, input_data, outputs):
data = selector(recursive_inputs, outputs or {}, allow_miss=True)
if isinstance(recursive_inputs, dict):
data = map_dict(recursive_inputs, data)
data.update(input_data)
return data
| (recursive_inputs, input_data, outputs) |
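`recursive_data` feeds selected outputs of the previous task into the next task's inputs, optionally renaming them, with the explicit inputs taking precedence. A sketch with simplified stand-ins for `selector` and `map_dict`:

```python
def selector(keys, data, allow_miss=False):
    # Simplified stand-in for schedula's selector.
    return {k: data[k] for k in keys if not allow_miss or k in data}

def map_dict(mapping, data):
    # Simplified stand-in for schedula's map_dict: rename keys.
    return {mapping.get(k, k): v for k, v in data.items()}

def recursive_data(recursive_inputs, input_data, outputs):
    data = selector(recursive_inputs, outputs or {}, allow_miss=True)
    if isinstance(recursive_inputs, dict):
        data = map_dict(recursive_inputs, data)
    data.update(input_data)  # explicit inputs win over recursive data
    return data

prev = {'state': 10, 'extra': 99}
# Carry `state` forward under the name `x0`, alongside the new input `u`.
print(recursive_data({'state': 'x0'}, {'u': 1}, prev))  # {'x0': 10, 'u': 1}
```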
24,634 | schedula.utils.asy.executors | PoolExecutor | General PoolExecutor to dispatch asynchronously and in parallel. | class PoolExecutor:
"""General PoolExecutor to dispatch asynchronously and in parallel."""
def __init__(self, thread_executor, process_executor=None, parallel=None):
"""
:param thread_executor:
Thread pool executor to dispatch asynchronously.
:type thread_executor: ThreadExecutor
:param process_executor:
            Process pool executor to execute the function calls in parallel.
:type process_executor: ProcessExecutor | ProcessPoolExecutor
:param parallel:
Run `_process_funcs` in parallel.
:type parallel: bool
"""
self._thread = thread_executor
self._process = process_executor
self._parallel = parallel
self._running = bool(thread_executor)
self.futures = {}
finalize(self, self.shutdown, False)
def __reduce__(self):
return self.__class__, (self._thread, self._process, self._parallel)
def add_future(self, sol_id, fut):
get_nested_dicts(self.futures, fut, default=set).add(sol_id)
fut.add_done_callback(self.futures.pop)
return fut
def get_futures(self, sol_id=EMPTY):
if sol_id is EMPTY:
return self.futures
else:
return {k for k, v in self.futures.items() if sol_id in v}
def thread(self, sol_id, *args, **kwargs):
if self._running:
return self.add_future(sol_id, self._thread.submit(*args, **kwargs))
fut = Future()
fut.set_exception(ExecutorShutdown)
return fut
def process_funcs(self, exe_id, funcs, *args, **kw):
not_sub = self._process and not any(map(
lambda x: isinstance(x, SubDispatch) and not isinstance(x, NoSub),
map(parent_func, funcs)
))
if self._parallel is not False and not_sub or self._parallel:
sid = exe_id[-1]
exe_id = False, sid
return self.process(sid, _process_funcs, exe_id, funcs, *args, **kw)
return _process_funcs(exe_id, funcs, *args, **kw)
def process(self, sol_id, fn, *args, **kwargs):
if self._running:
if self._process:
fut = self._process.submit(fn, *args, **kwargs)
return self.add_future(sol_id, fut).result()
return fn(*args, **kwargs)
raise ExecutorShutdown
def wait(self, timeout=None):
from concurrent.futures import wait as _wait_fut
_wait_fut(self.futures, timeout)
def shutdown(self, wait=True):
if self._running:
wait and self.wait()
self._running = False
tasks = {
'executor': self,
'tasks': {
'process': self._process and self._process.shutdown(
0
) or {},
'thread': self._thread.shutdown(0)
}
}
self.futures = {}
self._process = self._thread = None
return tasks
| (thread_executor, process_executor=None, parallel=None) |
24,635 | schedula.utils.asy.executors | __init__ |
:param thread_executor:
Thread pool executor to dispatch asynchronously.
:type thread_executor: ThreadExecutor
:param process_executor:
    Process pool executor to execute the function calls in parallel.
:type process_executor: ProcessExecutor | ProcessPoolExecutor
:param parallel:
Run `_process_funcs` in parallel.
:type parallel: bool
| def __init__(self, thread_executor, process_executor=None, parallel=None):
"""
:param thread_executor:
Thread pool executor to dispatch asynchronously.
:type thread_executor: ThreadExecutor
:param process_executor:
            Process pool executor to execute the function calls in parallel.
:type process_executor: ProcessExecutor | ProcessPoolExecutor
:param parallel:
Run `_process_funcs` in parallel.
:type parallel: bool
"""
self._thread = thread_executor
self._process = process_executor
self._parallel = parallel
self._running = bool(thread_executor)
self.futures = {}
finalize(self, self.shutdown, False)
| (self, thread_executor, process_executor=None, parallel=None) |
24,636 | schedula.utils.asy.executors | __reduce__ | null | def __reduce__(self):
return self.__class__, (self._thread, self._process, self._parallel)
| (self) |
24,637 | schedula.utils.asy.executors | add_future | null | def add_future(self, sol_id, fut):
get_nested_dicts(self.futures, fut, default=set).add(sol_id)
fut.add_done_callback(self.futures.pop)
return fut
| (self, sol_id, fut) |
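`add_future` keeps a registry mapping each future to the set of solution ids waiting on it, and uses the done callback to evict the entry automatically. A standalone sketch of the same pattern (module-level `futures` dict instead of the executor attribute):

```python
from concurrent.futures import Future

futures = {}

def add_future(sol_id, fut):
    # Register the future under its solution id; auto-evict on completion.
    futures.setdefault(fut, set()).add(sol_id)
    fut.add_done_callback(futures.pop)  # the callback receives the future itself
    return fut

fut = add_future('dsp/0', Future())
assert futures[fut] == {'dsp/0'}
fut.set_result(42)         # completion triggers futures.pop(fut)
assert fut not in futures
```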
24,638 | schedula.utils.asy.executors | get_futures | null | def get_futures(self, sol_id=EMPTY):
if sol_id is EMPTY:
return self.futures
else:
return {k for k, v in self.futures.items() if sol_id in v}
| (self, sol_id=empty) |
24,639 | schedula.utils.asy.executors | process | null | def process(self, sol_id, fn, *args, **kwargs):
if self._running:
if self._process:
fut = self._process.submit(fn, *args, **kwargs)
return self.add_future(sol_id, fut).result()
return fn(*args, **kwargs)
raise ExecutorShutdown
| (self, sol_id, fn, *args, **kwargs) |
24,640 | schedula.utils.asy.executors | process_funcs | null | def process_funcs(self, exe_id, funcs, *args, **kw):
not_sub = self._process and not any(map(
lambda x: isinstance(x, SubDispatch) and not isinstance(x, NoSub),
map(parent_func, funcs)
))
if self._parallel is not False and not_sub or self._parallel:
sid = exe_id[-1]
exe_id = False, sid
return self.process(sid, _process_funcs, exe_id, funcs, *args, **kw)
return _process_funcs(exe_id, funcs, *args, **kw)
| (self, exe_id, funcs, *args, **kw) |
24,641 | schedula.utils.asy.executors | shutdown | null | def shutdown(self, wait=True):
if self._running:
wait and self.wait()
self._running = False
tasks = {
'executor': self,
'tasks': {
'process': self._process and self._process.shutdown(
0
) or {},
'thread': self._thread.shutdown(0)
}
}
self.futures = {}
self._process = self._thread = None
return tasks
| (self, wait=True) |
24,642 | schedula.utils.asy.executors | thread | null | def thread(self, sol_id, *args, **kwargs):
if self._running:
return self.add_future(sol_id, self._thread.submit(*args, **kwargs))
fut = Future()
fut.set_exception(ExecutorShutdown)
return fut
| (self, sol_id, *args, **kwargs) |
24,643 | schedula.utils.asy.executors | wait | null | def wait(self, timeout=None):
from concurrent.futures import wait as _wait_fut
_wait_fut(self.futures, timeout)
| (self, timeout=None) |
24,644 | schedula.utils.asy.executors | ProcessExecutor | Process Executor | class ProcessExecutor(Executor):
"""Process Executor"""
_init = None
_init_args = ()
_init_kwargs = {}
_shutdown = None
def _submit(self, func, args, kwargs):
# noinspection PyUnresolvedReferences
from multiprocess import get_context
ctx = get_context()
fut, (c0, c1) = Future(), ctx.Pipe(duplex=False)
self.tasks[fut] = task = ctx.Process(
target=self._target, args=(c1.send, func, args, kwargs)
)
task.start()
return self._set_future(fut, c0.recv())
def __reduce__(self):
return self.__class__, (), {
'_init': self._init,
'_submit': self._submit,
'_shutdown': self._shutdown,
'_init_args': self._init_args,
'_init_kwargs': self._init_kwargs
}
def __init__(self, *args, **state):
super(ProcessExecutor, self).__init__()
import threading
self.lock = threading.Lock()
for k, v in state.items():
setattr(self, k, v)
def init(self):
if self._init:
with self.lock:
self._init()
def submit(self, func, *args, **kwargs):
self.init()
return self._submit(func, args, kwargs)
def shutdown(self, wait=True):
tasks = super(ProcessExecutor, self).shutdown(wait)
if self._shutdown:
with self.lock:
self._shutdown()
return tasks
| (*args, **state) |
24,645 | schedula.utils.asy.executors | __init__ | null | def __init__(self, *args, **state):
super(ProcessExecutor, self).__init__()
import threading
self.lock = threading.Lock()
for k, v in state.items():
setattr(self, k, v)
| (self, *args, **state) |
24,646 | schedula.utils.asy.executors | __reduce__ | null | def __reduce__(self):
return self.__class__, (), {
'_init': self._init,
'_submit': self._submit,
'_shutdown': self._shutdown,
'_init_args': self._init_args,
'_init_kwargs': self._init_kwargs
}
| (self) |
24,647 | schedula.utils.asy.executors | _set_future | null | def _set_future(self, fut, res):
self.tasks.pop(fut)
if 'err' in res:
_safe_set_exception(fut, res['err'])
else:
_safe_set_result(fut, res['res'])
return fut
| (self, fut, res) |
24,648 | schedula.utils.asy.executors | _submit | null | def _submit(self, func, args, kwargs):
# noinspection PyUnresolvedReferences
from multiprocess import get_context
ctx = get_context()
fut, (c0, c1) = Future(), ctx.Pipe(duplex=False)
self.tasks[fut] = task = ctx.Process(
target=self._target, args=(c1.send, func, args, kwargs)
)
task.start()
return self._set_future(fut, c0.recv())
| (self, func, args, kwargs) |
24,649 | schedula.utils.asy.executors | _target | null | @staticmethod
def _target(send, func, args, kwargs):
try:
obj = {'res': func(*args, **kwargs)}
except BaseException as ex:
obj = {'err': ex}
if send:
send(obj)
else:
return obj
| (send, func, args, kwargs) |
24,650 | schedula.utils.asy.executors | init | null | def init(self):
if self._init:
with self.lock:
self._init()
| (self) |
24,651 | schedula.utils.asy.executors | shutdown | null | def shutdown(self, wait=True):
tasks = super(ProcessExecutor, self).shutdown(wait)
if self._shutdown:
with self.lock:
self._shutdown()
return tasks
| (self, wait=True) |
24,652 | schedula.utils.asy.executors | submit | null | def submit(self, func, *args, **kwargs):
self.init()
return self._submit(func, args, kwargs)
| (self, func, *args, **kwargs) |
24,653 | schedula.utils.asy.executors | ProcessPoolExecutor | Process Pool Executor | class ProcessPoolExecutor(ProcessExecutor):
"""Process Pool Executor"""
def _init(self):
if getattr(self, 'pool', None) is None:
# noinspection PyUnresolvedReferences
from multiprocess import get_context
ctx = get_context()
self.pool = ctx.Pool(*self._init_args, **self._init_kwargs)
def _submit(self, func, args, kwargs):
fut = Future()
callback = functools.partial(_safe_set_result, fut)
error_callback = functools.partial(_safe_set_exception, fut)
self.tasks[fut] = self.pool.apply_async(
func, args, kwargs, callback, error_callback
)
fut.add_done_callback(self.tasks.pop)
return fut
def _shutdown(self):
if getattr(self, 'pool', None):
self.pool.terminate()
self.pool.join()
| (*args, **state) |
24,656 | schedula.utils.asy.executors | _init | null | def _init(self):
if getattr(self, 'pool', None) is None:
# noinspection PyUnresolvedReferences
from multiprocess import get_context
ctx = get_context()
self.pool = ctx.Pool(*self._init_args, **self._init_kwargs)
| (self) |
24,658 | schedula.utils.asy.executors | _shutdown | null | def _shutdown(self):
if getattr(self, 'pool', None):
self.pool.terminate()
self.pool.join()
| (self) |
24,659 | schedula.utils.asy.executors | _submit | null | def _submit(self, func, args, kwargs):
fut = Future()
callback = functools.partial(_safe_set_result, fut)
error_callback = functools.partial(_safe_set_exception, fut)
self.tasks[fut] = self.pool.apply_async(
func, args, kwargs, callback, error_callback
)
fut.add_done_callback(self.tasks.pop)
return fut
| (self, func, args, kwargs) |
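`ProcessPoolExecutor._submit` bridges the pool's `apply_async` callbacks onto a `concurrent.futures.Future`. A sketch of that bridge, using `multiprocessing.pool.ThreadPool` as a stand-in for `ctx.Pool` (same `apply_async` API, but thread-based, so the sketch avoids pickling concerns):

```python
from concurrent.futures import Future
from multiprocessing.pool import ThreadPool  # stand-in for ctx.Pool

pool = ThreadPool(2)
fut = Future()
# callback -> set_result, error_callback -> set_exception, as in _submit
# (the real code wraps them in _safe_set_result/_safe_set_exception).
pool.apply_async(pow, (3, 4), {}, fut.set_result, fut.set_exception)
assert fut.result(timeout=5) == 81
pool.terminate()
pool.join()
```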
24,664 | schedula.utils.exc | SkipNode | null | class SkipNode(BaseException):
def __init__(self, *args, ex=None, **kwargs):
# noinspection PyArgumentList
super(SkipNode, self).__init__(*args, **kwargs)
self.ex = ex
| (*args, ex=None, **kwargs) |
24,665 | schedula.utils.exc | __init__ | null | def __init__(self, *args, ex=None, **kwargs):
# noinspection PyArgumentList
super(SkipNode, self).__init__(*args, **kwargs)
self.ex = ex
| (self, *args, ex=None, **kwargs) |
24,666 | schedula.utils.dsp | SubDispatch |
It dispatches a given :class:`~schedula.dispatcher.Dispatcher` like a
function.
This function takes a sequence of dictionaries as input that will be
    combined before dispatching.
:return:
A function that executes the dispatch of the given
:class:`~schedula.dispatcher.Dispatcher`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`combine_dicts`
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> from schedula import Dispatcher
>>> sub_dsp = Dispatcher(name='Sub-dispatcher')
...
>>> def fun(a):
... return a + 1, a - 1
...
>>> sub_dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dispatch = SubDispatch(sub_dsp, ['a', 'b', 'c'], output_type='dict')
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('Sub-dispatch', dispatch, ['d'], ['e'])
'Sub-dispatch'
The Dispatcher output is:
.. dispatcher:: o
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> o = dsp.dispatch(inputs={'d': {'a': 3}})
    while the Sub-dispatch is:
.. dispatcher:: sol
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> sol = o.workflow.nodes['Sub-dispatch']['solution']
>>> sol
Solution([('a', 3), ('b', 4), ('c', 2)])
>>> sol == o['e']
True
| class SubDispatch(Base):
"""
It dispatches a given :class:`~schedula.dispatcher.Dispatcher` like a
function.
This function takes a sequence of dictionaries as input that will be
    combined before dispatching.
:return:
A function that executes the dispatch of the given
:class:`~schedula.dispatcher.Dispatcher`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`combine_dicts`
Example:
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> from schedula import Dispatcher
>>> sub_dsp = Dispatcher(name='Sub-dispatcher')
...
>>> def fun(a):
... return a + 1, a - 1
...
>>> sub_dsp.add_function('fun', fun, ['a'], ['b', 'c'])
'fun'
>>> dispatch = SubDispatch(sub_dsp, ['a', 'b', 'c'], output_type='dict')
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('Sub-dispatch', dispatch, ['d'], ['e'])
'Sub-dispatch'
The Dispatcher output is:
.. dispatcher:: o
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> o = dsp.dispatch(inputs={'d': {'a': 3}})
    while the Sub-dispatch is:
.. dispatcher:: sol
:opt: graph_attr={'ratio': '1'}, depth=-1
:code:
>>> sol = o.workflow.nodes['Sub-dispatch']['solution']
>>> sol
Solution([('a', 3), ('b', 4), ('c', 2)])
>>> sol == o['e']
True
"""
def __new__(cls, dsp=None, *args, **kwargs):
from .blue import Blueprint
if isinstance(dsp, Blueprint):
return Blueprint(dsp, *args, **kwargs)._set_cls(cls)
return super(SubDispatch, cls).__new__(cls)
def __getstate__(self):
state = self.__dict__.copy()
state['solution'] = state['solution'].__class__(state['dsp'])
del state['__name__']
return state
def __setstate__(self, d):
self.__dict__ = d
self.__name__ = self.name
def __init__(self, dsp, outputs=None, inputs_dist=None, wildcard=False,
no_call=False, shrink=False, rm_unused_nds=False,
output_type='all', function_id=None, output_type_kw=None):
"""
Initializes the Sub-dispatch.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param no_call:
            If True, the data node estimation function is not used.
:type no_call: bool, optional
:param shrink:
            If True, the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param rm_unused_nds:
            If True, unused function and sub-dispatcher nodes are removed from
            the workflow.
:type rm_unused_nds: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
"""
self.dsp = dsp
self.outputs = outputs
self.wildcard = wildcard
self.no_call = no_call
self.shrink = shrink
self.output_type = output_type
self.output_type_kw = output_type_kw or {}
self.inputs_dist = inputs_dist
self.rm_unused_nds = rm_unused_nds
self.name = self.__name__ = function_id or dsp.name
self.__doc__ = dsp.__doc__
self.solution = dsp.solution.__class__(dsp)
def blue(self, memo=None, depth=-1):
"""
Constructs a Blueprint out of the current object.
:param memo:
A dictionary to cache Blueprints.
:type memo: dict[T,schedula.utils.blue.Blueprint]
:param depth:
            Depth of sub-dispatch blueprints. If negative, all levels are blueprinted.
:type depth: int, optional
:return:
A Blueprint of the current object.
:rtype: schedula.utils.blue.Blueprint
"""
if depth == 0:
return self
depth -= 1
memo = {} if memo is None else memo
if self not in memo:
import inspect
from .blue import Blueprint, _parent_blue
keys = tuple(inspect.signature(self.__init__).parameters)
memo[self] = Blueprint(**{
k: _parent_blue(v, memo, depth)
for k, v in self.__dict__.items() if k in keys
})._set_cls(self.__class__)
return memo[self]
def __call__(self, *input_dicts, copy_input_dicts=False, _stopper=None,
_executor=False, _sol_name=(), _verbose=False):
# Combine input dictionaries.
i = combine_dicts(*input_dicts, copy=copy_input_dicts)
# Dispatch the function calls.
self.solution = self.dsp.dispatch(
i, self.outputs, self.inputs_dist, self.wildcard, self.no_call,
self.shrink, self.rm_unused_nds, stopper=_stopper,
executor=_executor, sol_name=_sol_name, verbose=_verbose
)
return self._return(self.solution)
def _return(self, solution):
outs = self.outputs
solution.result()
solution.parent = self
# Set output.
if self.output_type != 'all':
try:
# Save outputs.
return selector(
outs, solution, output_type=self.output_type,
**self.output_type_kw
)
except KeyError:
# Outputs not reached.
missed = {k for k in outs if k not in solution}
# Raise error
msg = '\n Unreachable output-targets: {}\n Available ' \
'outputs: {}'.format(missed, list(solution.keys()))
raise DispatcherError(msg, sol=solution)
return solution # Return outputs.
def copy(self):
return _copy.deepcopy(self)
| (dsp=None, *args, **kwargs) |
24,667 | schedula.utils.dsp | __call__ | null | def __call__(self, *input_dicts, copy_input_dicts=False, _stopper=None,
_executor=False, _sol_name=(), _verbose=False):
# Combine input dictionaries.
i = combine_dicts(*input_dicts, copy=copy_input_dicts)
# Dispatch the function calls.
self.solution = self.dsp.dispatch(
i, self.outputs, self.inputs_dist, self.wildcard, self.no_call,
self.shrink, self.rm_unused_nds, stopper=_stopper,
executor=_executor, sol_name=_sol_name, verbose=_verbose
)
return self._return(self.solution)
| (self, *input_dicts, copy_input_dicts=False, _stopper=None, _executor=False, _sol_name=(), _verbose=False) |
24,670 | schedula.utils.dsp | __init__ |
Initializes the Sub-dispatch.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param no_call:
    If True, the data node estimation function is not used.
:type no_call: bool, optional
:param shrink:
    If True, the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param rm_unused_nds:
    If True, unused function and sub-dispatcher nodes are removed from
    the workflow.
:type rm_unused_nds: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
| def __init__(self, dsp, outputs=None, inputs_dist=None, wildcard=False,
no_call=False, shrink=False, rm_unused_nds=False,
output_type='all', function_id=None, output_type_kw=None):
"""
Initializes the Sub-dispatch.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param no_call:
            If True, the data node estimation function is not used.
:type no_call: bool, optional
:param shrink:
            If True, the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param rm_unused_nds:
            If True, unused function and sub-dispatcher nodes are removed from
            the workflow.
:type rm_unused_nds: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param function_id:
Function name.
:type function_id: str, optional
"""
self.dsp = dsp
self.outputs = outputs
self.wildcard = wildcard
self.no_call = no_call
self.shrink = shrink
self.output_type = output_type
self.output_type_kw = output_type_kw or {}
self.inputs_dist = inputs_dist
self.rm_unused_nds = rm_unused_nds
self.name = self.__name__ = function_id or dsp.name
self.__doc__ = dsp.__doc__
self.solution = dsp.solution.__class__(dsp)
| (self, dsp, outputs=None, inputs_dist=None, wildcard=False, no_call=False, shrink=False, rm_unused_nds=False, output_type='all', function_id=None, output_type_kw=None) |
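The `output_type` options map to how `_return` filters the solution through `selector`. A minimal stand-in (the real `selector` lives in `schedula.utils.dsp` and has more options; the `'values'` branch here reflects the single-output shortcut used by `SubDispatchFunction` and is an assumption for illustration):

```python
def selector(keys, data, output_type='dict'):
    # Minimal stand-in for schedula's selector; missing keys raise
    # KeyError, which _return turns into a DispatcherError.
    if output_type == 'list':
        return [data[k] for k in keys]
    if output_type == 'values':
        return data[keys[0]]
    return {k: data[k] for k in keys}

sol = {'a': 3, 'b': 4, 'c': 2}
assert selector(['b', 'c'], sol, 'list') == [4, 2]
assert selector(['b', 'c'], sol) == {'b': 4, 'c': 2}
assert selector(['a'], sol, 'values') == 3
```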
24,680 | schedula.utils.dsp | SubDispatchFunction |
It converts a :class:`~schedula.dispatcher.Dispatcher` into a function.
This function takes a sequence of arguments or a key values as input of the
dispatch.
:return:
A function that executes the dispatch of the given `dsp`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`~schedula.dispatcher.Dispatcher.shrink_dsp`
**Example**:
    A dispatcher with two functions `max` and `log(x - 1)` and an unresolved cycle
    (i.e., `a` --> `max` --> `c` --> `log(x - 1)` --> `a`):
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('max', max, inputs=['a', 'b'], outputs=['c'])
'max'
>>> from math import log
>>> def my_log(x):
... return log(x - 1)
>>> dsp.add_function('log(x - 1)', my_log, inputs=['c'],
... outputs=['a'], input_domain=lambda c: c > 1)
'log(x - 1)'
Extract a static function node, i.e. the inputs `a` and `b` and the
output `a` are fixed::
>>> fun = SubDispatchFunction(dsp, 'myF', ['a', 'b'], ['a'])
>>> fun.__name__
'myF'
>>> fun(b=1, a=2)
0.0
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
>>> fun.dsp.name = 'Created function internal'
    The created function raises a DispatcherError if invalid inputs are
    provided:
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
:code:
>>> fun(1, 0) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
DispatcherError:
Unreachable output-targets: ...
Available outputs: ...
| class SubDispatchFunction(SubDispatch):
"""
It converts a :class:`~schedula.dispatcher.Dispatcher` into a function.
This function takes a sequence of arguments or a key values as input of the
dispatch.
:return:
A function that executes the dispatch of the given `dsp`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`~schedula.dispatcher.Dispatcher.shrink_dsp`
**Example**:
    A dispatcher with two functions `max` and `log(x - 1)` and an unresolved cycle
    (i.e., `a` --> `max` --> `c` --> `log(x - 1)` --> `a`):
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('max', max, inputs=['a', 'b'], outputs=['c'])
'max'
>>> from math import log
>>> def my_log(x):
... return log(x - 1)
>>> dsp.add_function('log(x - 1)', my_log, inputs=['c'],
... outputs=['a'], input_domain=lambda c: c > 1)
'log(x - 1)'
Extract a static function node, i.e. the inputs `a` and `b` and the
output `a` are fixed::
>>> fun = SubDispatchFunction(dsp, 'myF', ['a', 'b'], ['a'])
>>> fun.__name__
'myF'
>>> fun(b=1, a=2)
0.0
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
>>> fun.dsp.name = 'Created function internal'
    The created function raises a DispatcherError if invalid inputs are
    provided:
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
:code:
>>> fun(1, 0) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
DispatcherError:
Unreachable output-targets: ...
Available outputs: ...
"""
var_keyword = 'kw'
def __init__(self, dsp, function_id=None, inputs=None, outputs=None,
inputs_dist=None, shrink=True, wildcard=True, output_type=None,
output_type_kw=None, first_arg_as_kw=False):
"""
Initializes the Sub-dispatch Function.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param function_id:
Function name.
:type function_id: str, optional
:param inputs:
Input data nodes.
:type inputs: list[str], iterable, optional
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param shrink:
            If True, the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param first_arg_as_kw:
Uses the first argument of the __call__ method as `kwargs`.
        :type first_arg_as_kw: bool
"""
if shrink:
dsp = dsp.shrink_dsp(
inputs, outputs, inputs_dist=inputs_dist, wildcard=wildcard
)
if outputs:
# Outputs not reached.
missed = {k for k in outputs if k not in dsp.nodes}
if missed: # If outputs are missing raise error.
available = list(dsp.data_nodes.keys()) # Available data nodes.
# Raise error
msg = '\n Unreachable output-targets: {}\n Available ' \
'outputs: {}'.format(missed, available)
raise ValueError(msg)
        # Set internal properties
self.inputs = inputs
# Set dsp name equal to function id.
self.function_id = dsp.name = function_id or dsp.name or 'fun'
no_call = False
self._sol = dsp.solution.__class__(
dsp, dict.fromkeys(inputs or (), None), outputs, wildcard, None,
inputs_dist, no_call, False
)
# Initialize as sub dispatch.
super(SubDispatchFunction, self).__init__(
dsp, outputs, inputs_dist, wildcard, no_call, True, True, 'list',
output_type_kw=output_type_kw
)
# Define the function to return outputs sorted.
if output_type is not None:
self.output_type = output_type
elif outputs is None:
self.output_type = 'all'
elif len(outputs) == 1:
self.output_type = 'values'
self.first_arg_as_kw = first_arg_as_kw
@property
def __signature__(self):
import inspect
dfl, p = self.dsp.default_values, []
for name in self.inputs or ():
par = inspect.Parameter('par', inspect._POSITIONAL_OR_KEYWORD)
par._name = name
if name in dfl:
par._default = dfl[name]['value']
p.append(par)
if self.var_keyword:
p.append(inspect.Parameter(self.var_keyword, inspect._VAR_KEYWORD))
return inspect.Signature(p, __validate_parameters__=False)
def _parse_inputs(self, *args, **kw):
if self.first_arg_as_kw:
for k in sorted(args[0]):
if k in kw:
msg = 'multiple values for argument %r'
raise TypeError(msg % k) from None
kw.update(args[0])
args = args[1:]
defaults, inputs = self.dsp.default_values, {}
for i, k in enumerate(self.inputs or ()):
try:
inputs[k] = args[i]
if k in kw:
msg = 'multiple values for argument %r'
raise TypeError(msg % k) from None
except IndexError:
if k in kw:
inputs[k] = kw.pop(k)
elif k in defaults:
inputs[k] = defaults[k]['value']
else:
msg = 'missing a required argument: %r'
raise TypeError(msg % k) from None
if len(inputs) < len(args):
raise TypeError('too many positional arguments') from None
if self.var_keyword:
inputs.update(kw)
elif not all(k in inputs for k in kw):
k = next(k for k in sorted(kw) if k not in inputs)
msg = 'got an unexpected keyword argument %r'
raise TypeError(msg % k) from None
return inputs
def __call__(self, *args, _stopper=None, _executor=False, _sol_name=(),
_verbose=False, **kw):
# Namespace shortcuts.
self.solution = sol = self._sol._copy_structure()
sol.verbose = _verbose
self.solution.full_name, dfl = _sol_name, self.dsp.default_values
# Parse inputs.
inp = self._parse_inputs(*args, **kw)
i = tuple(k for k in inp if k not in self.dsp.data_nodes)
if i:
msg = "%s() got an unexpected keyword argument '%s'"
raise TypeError(msg % (self.function_id, min(i)))
inputs_dist = combine_dicts(
sol.inputs_dist, dict.fromkeys(inp, 0), self.inputs_dist or {}
)
inp.update({k: v['value'] for k, v in dfl.items() if k not in inp})
# Initialize.
sol._init_workflow(inp, inputs_dist=inputs_dist, clean=False)
# Dispatch outputs.
sol._run(stopper=_stopper, executor=_executor)
# Return outputs sorted.
return self._return(sol)
| null |
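The argument binding in `_parse_inputs` mirrors Python's own call semantics: positional arguments fill the declared `inputs` in order, keywords fill the rest, dispatcher defaults cover the remainder, and duplicates or missing arguments raise `TypeError`. A standalone sketch (the `parse_inputs` helper and its signature are illustrative, not part of the API; the var-keyword branch is assumed enabled):

```python
def parse_inputs(input_names, defaults, *args, **kw):
    # Bind positionals, keywords and defaults to the declared input names.
    inputs = {}
    for i, k in enumerate(input_names):
        try:
            inputs[k] = args[i]
            if k in kw:
                raise TypeError('multiple values for argument %r' % k)
        except IndexError:
            if k in kw:
                inputs[k] = kw.pop(k)
            elif k in defaults:
                inputs[k] = defaults[k]
            else:
                raise TypeError('missing a required argument: %r' % k)
    inputs.update(kw)  # var-keyword behaviour (self.var_keyword = 'kw')
    return inputs

assert parse_inputs(['a', 'b'], {'b': 0}, 2) == {'a': 2, 'b': 0}
assert parse_inputs(['a', 'b'], {}, 2, b=1, c=5) == {'a': 2, 'b': 1, 'c': 5}
```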
24,681 | schedula.utils.dsp | __call__ | null | def __call__(self, *args, _stopper=None, _executor=False, _sol_name=(),
_verbose=False, **kw):
# Namespace shortcuts.
self.solution = sol = self._sol._copy_structure()
sol.verbose = _verbose
self.solution.full_name, dfl = _sol_name, self.dsp.default_values
# Parse inputs.
inp = self._parse_inputs(*args, **kw)
i = tuple(k for k in inp if k not in self.dsp.data_nodes)
if i:
msg = "%s() got an unexpected keyword argument '%s'"
raise TypeError(msg % (self.function_id, min(i)))
inputs_dist = combine_dicts(
sol.inputs_dist, dict.fromkeys(inp, 0), self.inputs_dist or {}
)
inp.update({k: v['value'] for k, v in dfl.items() if k not in inp})
# Initialize.
sol._init_workflow(inp, inputs_dist=inputs_dist, clean=False)
# Dispatch outputs.
sol._run(stopper=_stopper, executor=_executor)
# Return outputs sorted.
return self._return(sol)
| (self, *args, _stopper=None, _executor=False, _sol_name=(), _verbose=False, **kw) |
24,684 | schedula.utils.dsp | __init__ |
Initializes the Sub-dispatch Function.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param function_id:
Function name.
:type function_id: str, optional
:param inputs:
Input data nodes.
:type inputs: list[str], iterable, optional
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param shrink:
If True the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param first_arg_as_kw:
Uses the first argument of the __call__ method as `kwargs`.
:type first_arg_as_kw: bool
| def __init__(self, dsp, function_id=None, inputs=None, outputs=None,
inputs_dist=None, shrink=True, wildcard=True, output_type=None,
output_type_kw=None, first_arg_as_kw=False):
"""
Initializes the Sub-dispatch Function.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param function_id:
Function name.
:type function_id: str, optional
:param inputs:
Input data nodes.
:type inputs: list[str], iterable, optional
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param shrink:
If True the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param first_arg_as_kw:
Uses the first argument of the __call__ method as `kwargs`.
:type first_arg_as_kw: bool
"""
if shrink:
dsp = dsp.shrink_dsp(
inputs, outputs, inputs_dist=inputs_dist, wildcard=wildcard
)
if outputs:
# Outputs not reached.
missed = {k for k in outputs if k not in dsp.nodes}
if missed: # If outputs are missing raise error.
available = list(dsp.data_nodes.keys()) # Available data nodes.
# Raise error
msg = '\n Unreachable output-targets: {}\n Available ' \
'outputs: {}'.format(missed, available)
raise ValueError(msg)
# Set internal properties
self.inputs = inputs
# Set dsp name equal to function id.
self.function_id = dsp.name = function_id or dsp.name or 'fun'
no_call = False
self._sol = dsp.solution.__class__(
dsp, dict.fromkeys(inputs or (), None), outputs, wildcard, None,
inputs_dist, no_call, False
)
# Initialize as sub dispatch.
super(SubDispatchFunction, self).__init__(
dsp, outputs, inputs_dist, wildcard, no_call, True, True, 'list',
output_type_kw=output_type_kw
)
# Define the function to return outputs sorted.
if output_type is not None:
self.output_type = output_type
elif outputs is None:
self.output_type = 'all'
elif len(outputs) == 1:
self.output_type = 'values'
self.first_arg_as_kw = first_arg_as_kw
| (self, dsp, function_id=None, inputs=None, outputs=None, inputs_dist=None, shrink=True, wildcard=True, output_type=None, output_type_kw=None, first_arg_as_kw=False) |
24,695 | schedula.utils.dsp | SubDispatchPipe |
It converts a :class:`~schedula.dispatcher.Dispatcher` into a function.
This function takes a sequence of arguments as input of the dispatch.
:return:
A function that executes the pipe of the given `dsp`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`~schedula.dispatcher.Dispatcher.shrink_dsp`
**Example**:
A dispatcher with two functions `max` and `x - 1` and an unresolved cycle
(i.e., `a` --> `max` --> `c` --> `x - 1` --> `a`):
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('max', max, inputs=['a', 'b'], outputs=['c'])
'max'
>>> def func(x):
... return x - 1
>>> dsp.add_function('x - 1', func, inputs=['c'], outputs=['a'])
'x - 1'
Extract a static function node, i.e. the inputs `a` and `b` and the
output `a` are fixed::
>>> fun = SubDispatchPipe(dsp, 'myF', ['a', 'b'], ['a'])
>>> fun.__name__
'myF'
>>> fun(2, 1)
1
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
>>> fun.dsp.name = 'Created function internal'
The created function raises a ValueError if invalid inputs are
provided:
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
:code:
>>> fun(1, 0)
0
| class SubDispatchPipe(SubDispatchFunction):
"""
It converts a :class:`~schedula.dispatcher.Dispatcher` into a function.
This function takes a sequence of arguments as input of the dispatch.
:return:
A function that executes the pipe of the given `dsp`.
:rtype: callable
.. seealso:: :func:`~schedula.dispatcher.Dispatcher.dispatch`,
:func:`~schedula.dispatcher.Dispatcher.shrink_dsp`
**Example**:
A dispatcher with two functions `max` and `x - 1` and an unresolved cycle
(i.e., `a` --> `max` --> `c` --> `x - 1` --> `a`):
.. dispatcher:: dsp
:opt: graph_attr={'ratio': '1'}
>>> from schedula import Dispatcher
>>> dsp = Dispatcher(name='Dispatcher')
>>> dsp.add_function('max', max, inputs=['a', 'b'], outputs=['c'])
'max'
>>> def func(x):
... return x - 1
>>> dsp.add_function('x - 1', func, inputs=['c'], outputs=['a'])
'x - 1'
Extract a static function node, i.e. the inputs `a` and `b` and the
output `a` are fixed::
>>> fun = SubDispatchPipe(dsp, 'myF', ['a', 'b'], ['a'])
>>> fun.__name__
'myF'
>>> fun(2, 1)
1
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
>>> fun.dsp.name = 'Created function internal'
The created function raises a ValueError if invalid inputs are
provided:
.. dispatcher:: fun
:opt: workflow=True, graph_attr={'ratio': '1'}
:code:
>>> fun(1, 0)
0
"""
var_keyword = None
def __init__(self, dsp, function_id=None, inputs=None, outputs=None,
inputs_dist=None, no_domain=True, wildcard=True, shrink=True,
output_type=None, output_type_kw=None, first_arg_as_kw=False):
"""
Initializes the Sub-dispatch Function.
:param dsp:
A dispatcher that identifies the model adopted.
:type dsp: schedula.Dispatcher | schedula.utils.blue.BlueDispatcher
:param function_id:
Function name.
:type function_id: str
:param inputs:
Input data nodes.
:type inputs: list[str], iterable
:param outputs:
Ending data nodes.
:type outputs: list[str], iterable, optional
:param inputs_dist:
Initial distances of input data nodes.
:type inputs_dist: dict[str, int | float], optional
:param no_domain:
Skip the domain check.
:type no_domain: bool, optional
:param shrink:
If True the dispatcher is shrunk before dispatching.
:type shrink: bool, optional
:param wildcard:
If True, when the data node is used as input and target in the
ArciDispatch algorithm, the input value will be used as input for
the connected functions, but not as output.
:type wildcard: bool, optional
:param output_type:
Type of function output:
+ 'all': a dictionary with all dispatch outputs.
+ 'list': a list with all outputs listed in `outputs`.
+ 'dict': a dictionary with any outputs listed in `outputs`.
:type output_type: str, optional
:param output_type_kw:
Extra kwargs to pass to the `selector` function.
:type output_type_kw: dict, optional
:param first_arg_as_kw:
Uses the first argument of the __call__ method as `kwargs`.
:type first_arg_as_kw: bool
"""
self.solution = sol = dsp.solution.__class__(
dsp, inputs, outputs, wildcard, inputs_dist, True, True,
no_domain=no_domain
)
sol._run()
if shrink:
from .alg import _union_workflow, _convert_bfs
bfs = _union_workflow(sol)
o, bfs = outputs or sol, _convert_bfs(bfs)
dsp = dsp._get_dsp_from_bfs(o, bfs_graphs=bfs)
super(SubDispatchPipe, self).__init__(
dsp, function_id, inputs, outputs=outputs, inputs_dist=inputs_dist,
shrink=False, wildcard=wildcard, output_type=output_type,
output_type_kw=output_type_kw, first_arg_as_kw=first_arg_as_kw
)
self._reset_sol()
self.pipe = self._set_pipe()
def _reset_sol(self):
self._sol.no_call = True
self._sol._init_workflow()
self._sol._run()
self._sol.no_call = False
def _set_pipe(self):
def _make_tks(task):
v, s = task[-1]
if v is START:
nxt_nds = s.dsp.dmap[v]
else:
nxt_nds = s.workflow[v]
nxt_dsp = [n for n in nxt_nds if s.nodes[n]['type'] == 'dispatcher']
nxt_dsp = [(n, s._edge_length(s.dmap[v][n], s.nodes[n]))
for n in nxt_dsp]
return (task[0], task[1], (v, s)), nxt_nds, nxt_dsp
return [_make_tks(v['task']) for v in self._sol.pipe.values()]
def _init_new_solution(self, full_name, verbose):
key_map, sub_sol = {}, {}
for k, s in self._sol.sub_sol.items():
ns = s._copy_structure(dist=1)
ns.verbose = verbose
ns.fringe = None
ns.sub_sol = sub_sol
ns.full_name = full_name + s.full_name
key_map[s] = ns
sub_sol[ns.index] = ns
return key_map[self._sol], lambda x: key_map[x]
def _init_workflows(self, inputs):
self.solution.inputs.update(inputs)
for s in self.solution.sub_sol.values():
s._init_workflow(clean=False)
def _callback_pipe_failure(self):
pass
def _pipe_append(self):
return self.solution._pipe.append
def __call__(self, *args, _stopper=None, _executor=False, _sol_name=(),
_verbose=False, **kw):
self.solution, key_map = self._init_new_solution(_sol_name, _verbose)
pipe_append = self._pipe_append()
self._init_workflows(self._parse_inputs(*args, **kw))
for x, nxt_nds, nxt_dsp in self.pipe:
v, s = x[-1]
s = key_map(s)
pipe_append(x[:2] + ((v, s),))
if not s._set_node_output(
v, False, next_nds=nxt_nds, stopper=_stopper,
executor=_executor):
self._callback_pipe_failure()
break
for n, vw_d in nxt_dsp:
s._set_sub_dsp_node_input(v, n, [], False, vw_d)
s._see_remote_link_node(v)
# Return outputs sorted.
return self._return(self.solution)
| null |
24,702 | schedula.utils.dsp | _callback_pipe_failure | null | def _callback_pipe_failure(self):
pass
| (self) |
24,703 | schedula.utils.dsp | _init_new_solution | null | def _init_new_solution(self, full_name, verbose):
key_map, sub_sol = {}, {}
for k, s in self._sol.sub_sol.items():
ns = s._copy_structure(dist=1)
ns.verbose = verbose
ns.fringe = None
ns.sub_sol = sub_sol
ns.full_name = full_name + s.full_name
key_map[s] = ns
sub_sol[ns.index] = ns
return key_map[self._sol], lambda x: key_map[x]
| (self, full_name, verbose) |
24,704 | schedula.utils.dsp | _init_workflows | null | def _init_workflows(self, inputs):
self.solution.inputs.update(inputs)
for s in self.solution.sub_sol.values():
s._init_workflow(clean=False)
| (self, inputs) |
24,706 | schedula.utils.dsp | _pipe_append | null | def _pipe_append(self):
return self.solution._pipe.append
| (self) |
24,716 | schedula.utils.asy.executors | ThreadExecutor | Multi Thread Executor | class ThreadExecutor(Executor):
"""Multi Thread Executor"""
def submit(self, func, *args, **kwargs):
import threading
fut, send = Future(), lambda res: self._set_future(fut, res)
task = threading.Thread(
target=self._target, args=(send, func, args, kwargs)
)
self.tasks[fut], task.daemon = task, True
task.start()
return fut
| () |
24,717 | schedula.utils.asy.executors | __init__ | null | def __init__(self):
self.tasks = {}
finalize(self, self.shutdown, False)
| (self) |
24,718 | schedula.utils.asy.executors | __reduce__ | null | def __reduce__(self):
return self.__class__, ()
| (self) |
24,721 | schedula.utils.asy.executors | shutdown | null | def shutdown(self, wait=True):
tasks = dict(self.tasks)
if wait:
from concurrent.futures import wait as _wait_fut
# noinspection PyCallingNonCallable
_wait_fut(tasks)
for fut, task in tasks.items():
_safe_set_exception(fut, ExecutorShutdown)
for _ in range(100):
try:
hasattr(task, 'terminate') and task.terminate()
break
except AttributeError:
time.sleep(0.01)
pass
else:
raise ValueError('Task could not terminate!')
return tasks
| (self, wait=True) |
24,722 | schedula.utils.asy.executors | submit | null | def submit(self, func, *args, **kwargs):
import threading
fut, send = Future(), lambda res: self._set_future(fut, res)
task = threading.Thread(
target=self._target, args=(send, func, args, kwargs)
)
self.tasks[fut], task.daemon = task, True
task.start()
return fut
| (self, func, *args, **kwargs) |
24,723 | schedula.utils.gen | Token |
It constructs a unique constant that behaves like a string.
Example::
>>> s = Token('string')
>>> s
string
>>> s == 'string'
False
>>> s == Token('string')
False
>>> {s: 1, Token('string'): 1}
{string: 1, string: 1}
>>> s.capitalize()
'String'
| class Token(_Token, str):
"""
It constructs a unique constant that behaves like a string.
Example::
>>> s = Token('string')
>>> s
string
>>> s == 'string'
False
>>> s == Token('string')
False
>>> {s: 1, Token('string'): 1}
{string: 1, string: 1}
>>> s.capitalize()
'String'
"""
| (*args) |
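The identity-based equality shown in the doctest above can be reproduced with a small `str` subclass. A minimal sketch of the idea (the real `Token` also records its defining module so instances survive pickling via `__reduce__`):

```python
class Token(str):
    """Minimal sketch: a str subclass whose equality is identity,
    so each instance is a unique constant."""

    def __eq__(self, other):
        return self is other  # only ever equal to itself

    def __ne__(self, other):
        return self is not other

    # keep str hashing so tokens can still be used as dict/set keys
    __hash__ = str.__hash__


s = Token('string')
assert s != 'string'                          # not equal to the plain string
assert s != Token('string')                   # nor to another token
assert len({s: 1, Token('string'): 1}) == 2   # distinct dict keys
assert s.capitalize() == 'String'             # str methods still work
```

Because `__hash__` stays the string hash, two tokens built from the same text collide in a dict bucket but remain distinct keys, since the equality check is identity.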
24,726 | schedula.utils.gen | __eq__ | null | def __eq__(self, other):
return self is other
| (self, other) |
24,728 | schedula.utils.gen | __init__ | null | def __init__(self, *args):
import inspect
f = inspect.currentframe()
self.module_name = f and f.f_back and f.f_back.f_globals.get('__name__')
del f
| (self, *args) |
24,729 | schedula.utils.gen | __ne__ | null | def __ne__(self, other):
return self is not other
| (self, other) |
24,730 | schedula.utils.gen | __reduce__ | null | def __reduce__(self):
if self.module_name:
import importlib
# noinspection PyTypeChecker
mdl = importlib.import_module(self.module_name)
for name, obj in mdl.__dict__.items():
if obj is self:
return getattr, (mdl, name)
return super(_Token, self).__reduce__()
| (self) |
24,731 | schedula.utils.gen | __repr__ | null | def __repr__(self):
return self
| (self) |
24,732 | schedula.utils.exc | WebResponse | null | class WebResponse(BaseException):
def __init__(self, response):
self.response = response
| (response) |
24,733 | schedula.utils.exc | __init__ | null | def __init__(self, response):
self.response = response
| (self, response) |
24,734 | schedula.utils.dsp | add_args | null | class add_args:
"""
Adds arguments to a function (left side).
:param func:
Function to wrap.
:type func: callable
:param n:
Number of unused arguments to add to the left side.
:type n: int
:return:
Wrapped function.
:rtype: callable
Example::
>>> import inspect
>>> def original_func(a, b, *args, c=0):
... '''Doc'''
... return a + b + c
>>> func = add_args(original_func, n=2)
>>> func.__name__, func.__doc__
('original_func', 'Doc')
>>> func(1, 2, 3, 4, c=5)
12
>>> str(inspect.signature(func))
'(none, none, a, b, *args, c=0)'
"""
__name__ = __doc__ = None
_args = ('func', 'n', 'callback')
def __init__(self, func, n=1, callback=None):
self.n = n
self.callback = callback
self.func = func
for i in range(2):
# noinspection PyBroadException
try:
self.__name__ = func.__name__
self.__doc__ = func.__doc__
break
except AttributeError:
func = parent_func(func)
@property
def __signature__(self):
return _get_signature(self.func, self.n)
def __call__(self, *args, **kwargs):
res = self.func(*args[self.n:], **kwargs)
if self.callback:
self.callback(res, *args, **kwargs)
return res
| null |
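The core behavior of `add_args` (accept and ignore `n` extra leading positional arguments) can be sketched with a plain wrapper; the real class additionally rebuilds the signature via `__signature__` and supports a `callback`:

```python
import functools

def add_args(func, n=1):
    """Sketch: wrap func so the first n positional args are ignored."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args[n:], **kwargs)  # drop the n leading arguments
    return wrapper

def original_func(a, b, c=0):
    return a + b + c

func = add_args(original_func, n=2)
assert func.__name__ == 'original_func'   # name preserved by wraps
assert func('x', 'y', 1, 2, c=5) == 8     # 'x' and 'y' are ignored
```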
24,735 | schedula.utils.dsp | __call__ | null | def __call__(self, *args, **kwargs):
res = self.func(*args[self.n:], **kwargs)
if self.callback:
self.callback(res, *args, **kwargs)
return res
| (self, *args, **kwargs) |
24,736 | schedula.utils.dsp | __init__ | null | def __init__(self, func, n=1, callback=None):
self.n = n
self.callback = callback
self.func = func
for i in range(2):
# noinspection PyBroadException
try:
self.__name__ = func.__name__
self.__doc__ = func.__doc__
break
except AttributeError:
func = parent_func(func)
| (self, func, n=1, callback=None) |
24,737 | schedula.utils.dsp | add_function |
Decorator to add a function to a dispatcher.
:param dsp:
A dispatcher.
:type dsp: schedula.Dispatcher | schedula.blue.BlueDispatcher
:param inputs_kwargs:
Do you want to include kwargs as inputs?
:type inputs_kwargs: bool
:param inputs_defaults:
Do you want to set default values?
:type inputs_defaults: bool
:param kw:
See :func:~`schedula.dispatcher.Dispatcher.add_function`.
:return:
Decorator.
:rtype: callable
**------------------------------------------------------------------------**
**Example**:
.. dispatcher:: sol
:opt: graph_attr={'ratio': '1'}
:code:
>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='Dispatcher')
>>> @sh.add_function(dsp, outputs=['e'])
... @sh.add_function(dsp, False, True, outputs=['i'], inputs='ecah')
... @sh.add_function(dsp, True, outputs=['l'])
... def f(a, b, c, d=1):
... return (a + b) - c + d
>>> @sh.add_function(dsp, True, outputs=['d'])
... def g(e, i, *args, d=0):
... return e + i + d
>>> sol = dsp({'a': 1, 'b': 2, 'c': 3}); sol
Solution([('a', 1), ('b', 2), ('c', 3), ('h', 1), ('e', 1), ('i', 4),
('d', 5), ('l', 5)])
| def add_function(dsp, inputs_kwargs=False, inputs_defaults=False, **kw):
"""
Decorator to add a function to a dispatcher.
:param dsp:
A dispatcher.
:type dsp: schedula.Dispatcher | schedula.blue.BlueDispatcher
:param inputs_kwargs:
Do you want to include kwargs as inputs?
:type inputs_kwargs: bool
:param inputs_defaults:
Do you want to set default values?
:type inputs_defaults: bool
:param kw:
See :func:~`schedula.dispatcher.Dispatcher.add_function`.
:return:
Decorator.
:rtype: callable
**------------------------------------------------------------------------**
**Example**:
.. dispatcher:: sol
:opt: graph_attr={'ratio': '1'}
:code:
>>> import schedula as sh
>>> dsp = sh.Dispatcher(name='Dispatcher')
>>> @sh.add_function(dsp, outputs=['e'])
... @sh.add_function(dsp, False, True, outputs=['i'], inputs='ecah')
... @sh.add_function(dsp, True, outputs=['l'])
... def f(a, b, c, d=1):
... return (a + b) - c + d
>>> @sh.add_function(dsp, True, outputs=['d'])
... def g(e, i, *args, d=0):
... return e + i + d
>>> sol = dsp({'a': 1, 'b': 2, 'c': 3}); sol
Solution([('a', 1), ('b', 2), ('c', 3), ('h', 1), ('e', 1), ('i', 4),
('d', 5), ('l', 5)])
"""
def decorator(f):
dsp.add_func(
f, inputs_kwargs=inputs_kwargs, inputs_defaults=inputs_defaults,
**kw
)
return f
return decorator
| (dsp, inputs_kwargs=False, inputs_defaults=False, **kw) |
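`add_function` is a decorator factory: it returns a decorator that registers the function on the dispatcher and hands the original function back unchanged, which is what lets the calls stack in the doctest above. A standalone sketch of that pattern, using a plain dict in place of the Dispatcher (illustrative, not schedula's API):

```python
registry = {}

def add_function(registry, outputs=None, **kw):
    """Decorator factory: register f in `registry`, return f unchanged."""
    def decorator(f):
        registry[f.__name__] = {'func': f, 'outputs': outputs, **kw}
        return f  # returning f lets several decorators stack on one def
    return decorator

@add_function(registry, outputs=['c'])
def add(a, b):
    return a + b

assert registry['add']['outputs'] == ['c']
assert add(1, 2) == 3  # the function itself is untouched
```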
24,738 | schedula.utils.dsp | are_in_nested_dicts |
Nested keys are inside of nested-dictionaries.
:param nested_dict:
Nested dictionary.
:type nested_dict: dict
:param keys:
Nested keys.
:type keys: object
:return:
True if nested keys are inside of nested-dictionaries, otherwise False.
:rtype: bool
| def are_in_nested_dicts(nested_dict, *keys):
"""
Nested keys are inside of nested-dictionaries.
:param nested_dict:
Nested dictionary.
:type nested_dict: dict
:param keys:
Nested keys.
:type keys: object
:return:
True if nested keys are inside of nested-dictionaries, otherwise False.
:rtype: bool
"""
if keys:
# noinspection PyBroadException
try:
return are_in_nested_dicts(nested_dict[keys[0]], *keys[1:])
except Exception: # Key error or not a dict.
return False
return True
| (nested_dict, *keys) |
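Since `are_in_nested_dicts` depends only on the standard library, it can be exercised standalone; note that it returns False both for a missing key and for a level that is not a dict:

```python
def are_in_nested_dicts(nested_dict, *keys):
    """Return True if the whole chain of keys exists in the nested dict."""
    if keys:
        try:
            return are_in_nested_dicts(nested_dict[keys[0]], *keys[1:])
        except Exception:  # missing key, or current level is not a dict
            return False
    return True

d = {'a': {'b': {'c': 1}}}
assert are_in_nested_dicts(d, 'a', 'b')
assert are_in_nested_dicts(d, 'a', 'b', 'c')
assert not are_in_nested_dicts(d, 'a', 'x')             # missing key
assert not are_in_nested_dicts(d, 'a', 'b', 'c', 'd')   # 1 is not a dict
```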
24,739 | schedula.utils.asy | await_result |
Return the result of a `Future` object.
:param obj:
Value object.
:type obj: concurrent.futures.Future | object
:param timeout:
The number of seconds to wait for the result if the future isn't done.
If None, then there is no limit on the wait time.
:type timeout: int
:return:
Result.
:rtype: object
Example::
>>> from concurrent.futures import Future
>>> fut = Future()
>>> fut.set_result(3)
>>> await_result(fut), await_result(4)
(3, 4)
| def await_result(obj, timeout=None):
"""
Return the result of a `Future` object.
:param obj:
Value object.
:type obj: concurrent.futures.Future | object
:param timeout:
The number of seconds to wait for the result if the future isn't done.
If None, then there is no limit on the wait time.
:type timeout: int
:return:
Result.
:rtype: object
Example::
>>> from concurrent.futures import Future
>>> fut = Future()
>>> fut.set_result(3)
>>> await_result(fut), await_result(4)
(3, 4)
"""
return obj.result(timeout) if isinstance(obj, Future) else obj
| (obj, timeout=None) |
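The doctest above runs as-is; for completeness, a self-contained version showing both the Future and the pass-through branch:

```python
from concurrent.futures import Future

def await_result(obj, timeout=None):
    """Unwrap a Future if given one, otherwise pass the value through."""
    return obj.result(timeout) if isinstance(obj, Future) else obj

fut = Future()
fut.set_result(3)
assert (await_result(fut), await_result(4)) == (3, 4)
```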
24,740 | schedula.utils.dsp | bypass |
Returns the same arguments.
:param inputs:
Inputs values.
:type inputs: T
:param copy:
If True, it returns a deepcopy of input values.
:type copy: bool, optional
:return:
Same input values.
:rtype: (T, ...), T
Example::
>>> bypass('a', 'b', 'c')
('a', 'b', 'c')
>>> bypass('a')
'a'
| def bypass(*inputs, copy=False):
"""
Returns the same arguments.
:param inputs:
Inputs values.
:type inputs: T
:param copy:
If True, it returns a deepcopy of input values.
:type copy: bool, optional
:return:
Same input values.
:rtype: (T, ...), T
Example::
>>> bypass('a', 'b', 'c')
('a', 'b', 'c')
>>> bypass('a')
'a'
"""
if len(inputs) == 1:
inputs = inputs[0] # Same inputs.
return _copy.deepcopy(inputs) if copy else inputs # Return inputs.
| (*inputs, copy=False) |
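`bypass` unwraps a single argument and can optionally deep-copy; a runnable version of the examples above, extended with the `copy=True` branch:

```python
import copy as _copy

def bypass(*inputs, copy=False):
    """Return the arguments unchanged (single values are unwrapped)."""
    if len(inputs) == 1:
        inputs = inputs[0]  # unwrap a lone argument
    return _copy.deepcopy(inputs) if copy else inputs

assert bypass('a', 'b', 'c') == ('a', 'b', 'c')
assert bypass('a') == 'a'
src = {'k': [1, 2]}
out = bypass(src, copy=True)
assert out == src and out is not src  # deep copy, not the same object
```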
24,741 | schedula.utils.dsp | combine_dicts |
Combines multiple dicts in one.
:param dicts:
A sequence of dicts.
:type dicts: dict
:param copy:
If True, it returns a deepcopy of input values.
:type copy: bool, optional
:param base:
Base dict into which the multiple dicts are combined.
:type base: dict, optional
:return:
A unique dict.
:rtype: dict
Example::
>>> sorted(combine_dicts({'a': 3, 'c': 3}, {'a': 1, 'b': 2}).items())
[('a', 1), ('b', 2), ('c', 3)]
| def combine_dicts(*dicts, copy=False, base=None):
"""
Combines multiple dicts in one.
:param dicts:
A sequence of dicts.
:type dicts: dict
:param copy:
If True, it returns a deepcopy of input values.
:type copy: bool, optional
:param base:
Base dict into which the multiple dicts are combined.
:type base: dict, optional
:return:
A unique dict.
:rtype: dict
Example::
>>> sorted(combine_dicts({'a': 3, 'c': 3}, {'a': 1, 'b': 2}).items())
[('a', 1), ('b', 2), ('c', 3)]
"""
if len(dicts) == 1 and base is None: # Only one input dict.
cd = dicts[0].copy()
else:
cd = {} if base is None else base # Initialize empty dict.
for d in dicts: # Combine dicts.
if d:
# noinspection PyTypeChecker
cd.update(d)
# Return combined dict.
return {k: _copy.deepcopy(v) for k, v in cd.items()} if copy else cd
| (*dicts, copy=False, base=None) |
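The merge is left-to-right, so later dicts override earlier ones; a simplified sketch (dropping the single-dict fast path and the `copy` option of the full implementation):

```python
def combine_dicts(*dicts, base=None):
    """Simplified sketch: merge dicts left to right, later values win."""
    cd = {} if base is None else base
    for d in dicts:
        if d:
            cd.update(d)
    return cd

merged = combine_dicts({'a': 3, 'c': 3}, {'a': 1, 'b': 2})
assert merged == {'a': 1, 'b': 2, 'c': 3}  # {'a': 1} overrides {'a': 3}
```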
24,742 | schedula.utils.dsp | combine_nested_dicts |
Merge nested-dictionaries.
:param nested_dicts:
Nested dictionaries.
:type nested_dicts: dict
:param depth:
Maximum keys depth.
:type depth: int, optional
:param base:
Base dict into which the multiple dicts are combined.
:type base: dict, optional
:return:
Combined nested-dictionary.
:rtype: dict
| def combine_nested_dicts(*nested_dicts, depth=-1, base=None):
"""
Merge nested-dictionaries.
:param nested_dicts:
Nested dictionaries.
:type nested_dicts: dict
:param depth:
Maximum keys depth.
:type depth: int, optional
:param base:
Base dict into which the multiple dicts are combined.
:type base: dict, optional
:return:
Combined nested-dictionary.
:rtype: dict
"""
if base is None:
base = {}
for nested_dict in nested_dicts:
for k, v in stack_nested_keys(nested_dict, depth=depth):
while k:
# noinspection PyBroadException
try:
get_nested_dicts(base, *k[:-1])[k[-1]] = v
break
except Exception:
# A branch of the nested_dict is longer than the base.
k = k[:-1]
v = get_nested_dicts(nested_dict, *k)
return base
| (*nested_dicts, depth=-1, base=None) |
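The behavior can be illustrated with a simplified recursive merge; this is not the actual implementation above (which walks leaves via `stack_nested_keys` and honors a `depth` limit), but it captures the merged result for plain nested dicts:

```python
def merge_nested(base, other):
    """Simplified sketch of nested-dict merging: descend where both
    sides are dicts, otherwise the value from `other` wins."""
    for k, v in other.items():
        if isinstance(v, dict) and isinstance(base.get(k), dict):
            merge_nested(base[k], v)  # recurse into matching sub-dicts
        else:
            base[k] = v
    return base

a = {'x': {'y': 1, 'z': 2}}
b = {'x': {'y': 9}, 'w': 3}
assert merge_nested(a, b) == {'x': {'y': 9, 'z': 2}, 'w': 3}
```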
24,743 | schedula.utils.gen | counter |
Return an object whose .__call__() method returns consecutive values.
:param start:
Start value.
:type start: int, float, optional
:param step:
Step value.
:type step: int, float, optional
| def counter(start=0, step=1):
"""
Return an object whose .__call__() method returns consecutive values.
:param start:
Start value.
:type start: int, float, optional
:param step:
Step value.
:type step: int, float, optional
"""
return itertools.count(start, step).__next__
| (start=0, step=1) |
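The trick here is returning the bound `__next__` of an `itertools.count`, so each plain call yields the next value:

```python
import itertools

def counter(start=0, step=1):
    # bound __next__ of itertools.count: each call returns the next value
    return itertools.count(start, step).__next__

c = counter(5, 2)
assert [c(), c(), c()] == [5, 7, 9]
```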
24,744 | schedula.utils.dsp | get_nested_dicts |
Get/Initialize the value of nested-dictionaries.
:param nested_dict:
Nested dictionary.
:type nested_dict: dict
:param keys:
Nested keys.
:type keys: object
:param default:
Function used to initialize a new value.
:type default: callable, optional
:param init_nesting:
Function used to initialize a new intermediate nesting dict.
:type init_nesting: callable, optional
:return:
Value of nested-dictionary.
:rtype: generator
| def get_nested_dicts(nested_dict, *keys, default=None, init_nesting=dict):
"""
Get/Initialize the value of nested-dictionaries.
:param nested_dict:
Nested dictionary.
:type nested_dict: dict
:param keys:
Nested keys.
:type keys: object
:param default:
Function used to initialize a new value.
:type default: callable, optional
:param init_nesting:
Function used to initialize a new intermediate nesting dict.
:type init_nesting: callable, optional
:return:
Value of nested-dictionary.
:rtype: generator
"""
if keys:
default = default or init_nesting
if keys[0] in nested_dict:
nd = nested_dict[keys[0]]
else:
d = default() if len(keys) == 1 else init_nesting()
nd = nested_dict[keys[0]] = d
return get_nested_dicts(nd, *keys[1:], default=default,
init_nesting=init_nesting)
return nested_dict
| (nested_dict, *keys, default=None, init_nesting=<class 'dict'>) |
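`get_nested_dicts` both walks and auto-creates the key path, initializing the leaf with `default` and intermediate levels with `init_nesting`; it runs standalone:

```python
def get_nested_dicts(nested_dict, *keys, default=None, init_nesting=dict):
    """Walk (creating missing levels) and return the innermost value."""
    if keys:
        default = default or init_nesting
        if keys[0] in nested_dict:
            nd = nested_dict[keys[0]]
        else:  # leaf gets default(), intermediate levels get init_nesting()
            d = default() if len(keys) == 1 else init_nesting()
            nd = nested_dict[keys[0]] = d
        return get_nested_dicts(nd, *keys[1:], default=default,
                                init_nesting=init_nesting)
    return nested_dict

d = {}
get_nested_dicts(d, 'a', 'b', default=list).append(1)  # auto-creates the path
assert d == {'a': {'b': [1]}}
assert get_nested_dicts(d, 'a', 'b') == [1]            # existing path reused
```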
24,745 | schedula.utils.dsp | inf | Class to model infinite numbers for workflow distance. | class inf:
"""Class to model infinite numbers for workflow distance."""
_inf: float = 0
_num: float = 0
def __iter__(self):
yield self._inf
yield self._num
@staticmethod
def format(val):
if not isinstance(val, tuple):
val = 0, val
return inf(*val)
def __repr__(self):
if self._inf == 0:
return str(self._num)
return 'inf(inf={}, num={})'.format(*self)
def __add__(self, other):
if isinstance(other, self.__class__):
return inf(self._inf + other._inf, self._num + other._num)
return inf(self._inf, self._num + other)
def __sub__(self, other):
other = isinstance(other, self.__class__) and other or (0, other)
return inf(*(x - y for x, y in zip(self, other)))
def __rsub__(self, other):
other = isinstance(other, self.__class__) and other or (0, other)
return inf(*(x - y for x, y in zip(other, self)))
def __mul__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x * y for x, y in zip(self, other)))
def __truediv__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x / y for x, y in zip(self, other)))
def __rtruediv__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x / y for x, y in zip(other, self)))
def __pow__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x ** y for x, y in zip(self, other)))
def __rpow__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x ** y for x, y in zip(other, self)))
def __mod__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x % y for x, y in zip(self, other)))
def __rmod__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x % y for x, y in zip(other, self)))
def __floordiv__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x // y for x, y in zip(self, other)))
def __rfloordiv__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x // y for x, y in zip(other, self)))
def __neg__(self):
return inf(*(-x for x in self))
def __pos__(self):
return inf(*(+x for x in self))
def __abs__(self):
return inf(*(map(abs, self)))
def __trunc__(self):
return inf(*(map(math.trunc, self)))
def __floor__(self):
return inf(*(map(math.floor, self)))
def __ceil__(self):
return inf(*(map(math.ceil, self)))
def __round__(self, n=None):
return inf(*(round(x, n) for x in self))
__radd__ = __add__
__rmul__ = __mul__
def __ge__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) >= other
def __gt__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) > other
def __eq__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) == other
def __le__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) <= other
def __lt__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) < other
def __ne__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) != other
| (_inf: float = 0, _num: float = 0) -> None |
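The key property of `inf` is that comparisons are lexicographic over the pair `(_inf, _num)`, so any "infinite" component dominates the finite one. A trimmed sketch keeping only addition and one comparison (the real class implements the full arithmetic and comparison suite shown above):

```python
from dataclasses import dataclass

@dataclass
class inf:
    """Two-component workflow distance: compared lexicographically."""
    _inf: float = 0
    _num: float = 0

    def __iter__(self):
        yield self._inf
        yield self._num

    def __add__(self, other):
        if isinstance(other, inf):
            return inf(self._inf + other._inf, self._num + other._num)
        return inf(self._inf, self._num + other)  # plain numbers are finite

    def __gt__(self, other):
        other = tuple(other) if isinstance(other, inf) else (0, other)
        return tuple(self) > other

assert inf(1, 0) > inf(0, 10 ** 9)        # one "infinity" beats any number
assert inf(0, 5) > 3                      # plain numbers compare as (0, n)
assert tuple(inf(0, 1) + inf(1, 2)) == (1, 3)
```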
24,746 | schedula.utils.dsp | __abs__ | null | def __abs__(self):
return inf(*(map(abs, self)))
| (self) |
24,747 | schedula.utils.dsp | __add__ | null | def __add__(self, other):
if isinstance(other, self.__class__):
return inf(self._inf + other._inf, self._num + other._num)
return inf(self._inf, self._num + other)
| (self, other) |
24,748 | schedula.utils.dsp | __ceil__ | null | def __ceil__(self):
return inf(*(map(math.ceil, self)))
| (self) |
24,749 | schedula.utils.dsp | __delattr__ | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
#
# Copyright 2015-2024, Vincenzo Arcidiacono;
# Licensed under the EUPL (the 'Licence');
# You may not use this work except in compliance with the Licence.
# You may obtain a copy of the Licence at: http://ec.europa.eu/idabc/eupl
"""
It provides tools to create models with the
:class:`~schedula.dispatcher.Dispatcher`.
"""
import math
import inspect
import functools
import itertools
import collections
import copy as _copy
from .cst import START
from .gen import Token
from .base import Base
from .exc import DispatcherError
from dataclasses import dataclass
__author__ = 'Vincenzo Arcidiacono <[email protected]>'
def stlp(s):
"""
Converts a string in a tuple.
"""
if isinstance(s, str):
return s,
return s
| (self, name) |
24,750 | schedula.utils.dsp | __eq__ | null | def __eq__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) == other
| (self, other) |
24,751 | schedula.utils.dsp | __floor__ | null | def __floor__(self):
return inf(*(map(math.floor, self)))
| (self) |
24,752 | schedula.utils.dsp | __floordiv__ | null | def __floordiv__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x // y for x, y in zip(self, other)))
| (self, other) |
24,753 | schedula.utils.dsp | __ge__ | null | def __ge__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) >= other
| (self, other) |
24,754 | schedula.utils.dsp | __gt__ | null | def __gt__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) > other
| (self, other) |
24,756 | schedula.utils.dsp | __iter__ | null | def __iter__(self):
yield self._inf
yield self._num
| (self) |
24,757 | schedula.utils.dsp | __le__ | null | def __le__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) <= other
| (self, other) |
24,758 | schedula.utils.dsp | __lt__ | null | def __lt__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) < other
| (self, other) |
24,759 | schedula.utils.dsp | __mod__ | null | def __mod__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x % y for x, y in zip(self, other)))
| (self, other) |
24,760 | schedula.utils.dsp | __mul__ | null | def __mul__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x * y for x, y in zip(self, other)))
| (self, other) |
24,761 | schedula.utils.dsp | __ne__ | null | def __ne__(self, other):
other = isinstance(other, self.__class__) and tuple(other) or (0, other)
return tuple(self) != other
| (self, other) |
24,762 | schedula.utils.dsp | __neg__ | null | def __neg__(self):
return inf(*(-x for x in self))
| (self) |
24,763 | schedula.utils.dsp | __pos__ | null | def __pos__(self):
return inf(*(+x for x in self))
| (self) |
24,764 | schedula.utils.dsp | __pow__ | null | def __pow__(self, other):
other = isinstance(other, self.__class__) and other or (other, other)
return inf(*(x ** y for x, y in zip(self, other)))
| (self, other) |
24,766 | schedula.utils.dsp | __repr__ | null | def __repr__(self):
if self._inf == 0:
return str(self._num)
return 'inf(inf={}, num={})'.format(*self)
| (self) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.