code | docstring | func_name | language | repo | path | url | license |
---|---|---|---|---|---|---|---|
def _clear_metadata(self):
"""
Getting product status (outdated/up-to-date) is slow, especially for
products whose metadata is stored remotely. This is critical when
rendering because we need to do a forward pass to know which tasks to
run, a product's status depends on its upstream products' status,
and we have to make sure we only retrieve metadata once, so we save a
local copy. But even with this implementation, we don't throw away
product status after rendering; otherwise calls that need product
status (like DAG.plot, DAG.status, DAG.to_markup) would have to get
product status before running their logic, so once we get it, we stick
with it. The only caveat is that status updates won't be reflected
immediately (e.g. if the user manually deletes a product's metadata),
but that's a small price to pay given that this is not expected to
happen often. The only case when we *must* be sure that we have
up-to-date metadata is when calling DAG.build(), so we call this
method before building, which forces a metadata reload.
"""
self._logger.debug("Clearing product status")
# clearing out this way is only useful after building, but not
# if the metadata changed, since it won't be reloaded
for task in self.values():
task.product.metadata.clear() |
| _clear_metadata | python | ploomber/ploomber | src/ploomber/dag/dag.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/dag.py | Apache-2.0 |
def __iter__(self):
"""
Iterate task names in topological order. Topological order is
desirable in many situations: it guarantees that for any given
task, its dependencies are executed first, and it's also useful for
other purposes, such as listing tasks, because it shows a more natural
start-to-finish data flow. For cases where this
sorting is not required, use the DAG._iter() method instead.
Notes
-----
https://en.wikipedia.org/wiki/Topological_sorting
"""
# TODO: raise a warning if any of this DAG's tasks depend on tasks
# from other DAGs (they won't show up here)
try:
for name in nx.algorithms.topological_sort(self._G):
yield name
except nx.NetworkXUnfeasible:
raise DAGCycle |
| __iter__ | python | ploomber/ploomber | src/ploomber/dag/dag.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/dag.py | Apache-2.0 |
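For illustration, a minimal stand-alone sketch of the underlying behavior using networkx directly; the three task names are hypothetical:

```python
import networkx as nx

# hypothetical dependency graph: load -> clean -> plot
G = nx.DiGraph()
G.add_edge("load", "clean")
G.add_edge("clean", "plot")

print(list(nx.algorithms.topological_sort(G)))  # ['load', 'clean', 'plot']

# adding a cycle makes topological_sort raise NetworkXUnfeasible,
# which DAG.__iter__ re-raises as DAGCycle
G.add_edge("plot", "load")
try:
    list(nx.algorithms.topological_sort(G))
except nx.NetworkXUnfeasible:
    print("cycle detected")
```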
def create(self, *args, **kwargs):
"""Return a DAG with the given parameters
*args, **kwargs
Parameters to pass to the DAG constructor
"""
dag = DAG(*args, **kwargs)
dag._params = copy(self.params)
return dag |
| create | python | ploomber/ploomber | src/ploomber/dag/dagconfigurator.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/dagconfigurator.py | Apache-2.0 |
def build(self, input_data, copy=False):
"""Run the DAG
Parameters
----------
input_data : dict
A dictionary mapping root tasks (names) to dict params. Root tasks
are tasks in the DAG that do not have upstream dependencies,
the corresponding dictionary is passed to the respective task
source function as keyword arguments
copy : bool or callable
Whether to copy the output of an upstream task before passing it
to the task being processed. It is recommended to turn this off
for memory efficiency, but if the tasks are not pure functions
(i.e., they mutate their inputs) this might lead to bugs; in that
case, the best fix is to make all your tasks pure functions, but
you can enable this option if memory consumption is not a problem.
If True, it uses the ``copy.copy`` function before passing the
upstream products; if you pass a callable instead, that function
is used (for example, you may pass ``copy.deepcopy``)
Returns
-------
dict
A dictionary mapping task names to their respective outputs
"""
outs = {}
input_data_names = set(self.root_nodes)
# FIXME: for this particular case, the error here should be TypeError,
# not KeyError (the former is the one used when calling functions with
# invalid arguments) - maybe add an argument to validate.keys to choose
# which error to raise?
validate.keys(
valid=input_data_names,
passed=set(input_data),
required=input_data_names,
name="input_data",
)
if copy is True:
copying_function = copy_module.copy
elif callable(copy):
copying_function = copy
else:
copying_function = _do_nothing
for task_name in self.dag:
task = self.dag[task_name]
params = task.params.to_dict()
if task_name in self.root_nodes:
params = {**params, "input_data": input_data[task_name]}
# replace params with the returned value from upstream tasks
if "upstream" in params:
params["upstream"] = {
k: copying_function(outs[k]) for k, v in params["upstream"].items()
}
params.pop("product", None)
output = self.return_postprocessor(task.source.primitive(**params))
if output is None:
raise ValueError(
"All callables in a {} must return a value. "
'Callable "{}", from task "{}" returned None'.format(
type(self).__name__, task.source.name, task_name
)
)
outs[task_name] = output
return outs |
| build | python | ploomber/ploomber | src/ploomber/dag/inmemorydag.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/inmemorydag.py | Apache-2.0 |
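A small stand-alone sketch of how the `copy` argument above resolves to a copying function before upstream outputs are passed along; the task name and outputs are hypothetical:

```python
import copy as copy_module

def _do_nothing(x):
    return x

def resolve_copying_function(copy):
    # mirrors the branching in build(): True -> shallow copy,
    # a callable -> used as-is, anything else -> pass through
    if copy is True:
        return copy_module.copy
    elif callable(copy):
        return copy
    return _do_nothing

upstream_outputs = {"load": [1, 2, 3]}
copying_function = resolve_copying_function(copy_module.deepcopy)
params = {k: copying_function(v) for k, v in upstream_outputs.items()}
assert params["load"] == [1, 2, 3]
assert params["load"] is not upstream_outputs["load"]  # a real copy
```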
def choose_backend(backend, path=None):
"""Determine which backend to use for plotting
Temporarily disable pygraphviz for Python 3.10 on Windows
"""
if (
(not check_pygraphviz_installed() and backend is None)
or (backend == "d3")
or (backend is None and path and Path(path).suffix == ".html")
):
return "d3"
elif backend == "mermaid":
return "mermaid"
return "pygraphviz" | Determine which backend to use for plotting
Temporarily disable pygraphviz for Python 3.10 on Windows
| choose_backend | python | ploomber/ploomber | src/ploomber/dag/plot.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/plot.py | Apache-2.0 |
def json_dag_parser(graph: dict):
"""Format dag dict so d3 can understand it"""
nodes = {}
for task in graph["nodes"]:
nodes[task["id"]] = task
# change name label to products for now
for node in nodes:
nodes[node]["products"] = nodes[node]["label"].replace("\n", "").split(",")
for link in graph["links"]:
node_links = nodes[link["target"]].get("parentIds", [])
node_links.append(link["source"])
nodes[link["target"]]["parentIds"] = node_links
return json.dumps(list(nodes.values())) | Format dag dict so d3 can understand it | json_dag_parser | python | ploomber/ploomber | src/ploomber/dag/plot.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/plot.py | Apache-2.0 |
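For illustration, a hypothetical two-node graph and the same transformation inlined, showing the `products` and `parentIds` keys the d3 template expects:

```python
import json

graph = {
    "nodes": [
        {"id": "load", "label": "data.csv"},
        {"id": "clean", "label": "clean.csv"},
    ],
    "links": [{"source": "load", "target": "clean"}],
}

# same steps as json_dag_parser, inlined
nodes = {n["id"]: n for n in graph["nodes"]}
for node in nodes.values():
    node["products"] = node["label"].replace("\n", "").split(",")
for link in graph["links"]:
    nodes[link["target"]].setdefault("parentIds", []).append(link["source"])

print(json.dumps(list(nodes.values())))
# the "clean" node ends up with parentIds == ["load"]
```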
def with_d3(graph, output, image_only=False):
"""Generates D3 Dag html output and return output file name"""
json_data = json_dag_parser(graph=graph)
if image_only:
Path(output).write_text(json_data)
else:
template = jinja2.Template(
importlib_resources.read_text(resources, "dag_template_d3.html")
)
rendered = template.render(json_data=json_data)
Path(output).write_text(rendered) | Generates D3 Dag html output and return output file name | with_d3 | python | ploomber/ploomber | src/ploomber/dag/plot.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/plot.py | Apache-2.0 |
def check_duplicated_products(dag):
"""
Raises an error if more than one task produces the same product.
Note that this relies on the __hash__ and __eq__ implementations of
each Product to determine whether they're the same or not. This
implies that a relative File and an absolute File pointing to the same file
are considered duplicates, and SQLRelations (in any of their flavors) are
the same when they resolve to the same (schema, name, type) tuple
(i.e., client is ignored). This is because, when using the generic SQLite
backend for storing SQL product metadata, the table only relies on schema
and name to retrieve metadata.
"""
prod2tasknames = defaultdict(lambda: [])
for name in dag._iter():
product = dag[name].product
if isinstance(product, MetaProduct):
for p in product.products:
prod2tasknames[p].append(name)
else:
prod2tasknames[product].append(name)
duplicated = {k: v for k, v in prod2tasknames.items() if len(v) > 1}
if duplicated:
raise DAGWithDuplicatedProducts(
"Tasks must generate unique products. "
"The following products appear in more than "
f"one task:\n{_generate_error_message(duplicated)}"
) |
| check_duplicated_products | python | ploomber/ploomber | src/ploomber/dag/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/util.py | Apache-2.0 |
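The core of the check is grouping task names by product and flagging products mapped to more than one task; a minimal sketch where plain strings stand in for Product objects (the real grouping relies on their `__hash__`/`__eq__`):

```python
from collections import defaultdict

# hypothetical task -> product mapping
task_to_product = {"load": "data.csv", "clean": "clean.csv", "clean_again": "clean.csv"}

prod2tasknames = defaultdict(list)
for name, product in task_to_product.items():
    prod2tasknames[product].append(name)

duplicated = {k: v for k, v in prod2tasknames.items() if len(v) > 1}
print(duplicated)  # {'clean.csv': ['clean', 'clean_again']}
```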
def fetch_remote_metadata_in_parallel(dag):
"""Fetches remote metadta in parallel from a list of Files"""
files = flatten_products(
dag[t].product
for t in dag._iter()
if isinstance(dag[t].product, File) or isinstance(dag[t].product, MetaProduct)
)
if files:
with ThreadPoolExecutor(max_workers=64) as executor:
future2file = {
executor.submit(file._remote._fetch_remote_metadata): file
for file in files
}
for future in as_completed(future2file):
exception = future.exception()
if exception:
local = future2file[future]
raise RuntimeError(
"An error occurred when fetching "
f"remote metadata for file {local!r}"
) from exception | Fetches remote metadata in parallel from a list of Files | fetch_remote_metadata_in_parallel | python | ploomber/ploomber | src/ploomber/dag/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/util.py | Apache-2.0 |
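The same submit/as_completed pattern in isolation, with a hypothetical `fetch` function standing in for `_fetch_remote_metadata`; the first failed future aborts the batch with a chained exception:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(name):
    # placeholder for a remote metadata download
    return f"metadata for {name}"

files = ["a.csv", "b.csv", "c.csv"]

with ThreadPoolExecutor(max_workers=4) as executor:
    future2file = {executor.submit(fetch, f): f for f in files}
    for future in as_completed(future2file):
        exception = future.exception()
        if exception:
            raise RuntimeError(
                f"An error occurred when fetching {future2file[future]!r}"
            ) from exception
        print(future.result())
```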
def _path_for_plot(path_to_plot, fmt):
"""Context manager to manage DAG.plot
Parameters
----------
path_to_plot : str
Where to store the plot. If 'embed', it returns a temporary empty file
and deletes it when exiting. Otherwise, it just passes the value
through.
"""
if path_to_plot == "embed":
fd, path = tempfile.mkstemp(suffix=f".{fmt}")
os.close(fd)
else:
path = str(path_to_plot)
try:
yield path
finally:
if path_to_plot == "embed" and fmt != "html":
Path(path).unlink() |
| _path_for_plot | python | ploomber/ploomber | src/ploomber/dag/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/dag/util.py | Apache-2.0 |
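This generator is presumably wrapped with `contextlib.contextmanager` in the original module; a self-contained sketch of the same idea and how it would be used:

```python
import os
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def path_for_plot(path_to_plot, fmt):
    # 'embed' -> temporary file, removed on exit (except for html)
    if path_to_plot == "embed":
        fd, path = tempfile.mkstemp(suffix=f".{fmt}")
        os.close(fd)
    else:
        path = str(path_to_plot)
    try:
        yield path
    finally:
        if path_to_plot == "embed" and fmt != "html":
            Path(path).unlink()

with path_for_plot("embed", "png") as path:
    Path(path).write_bytes(b"fake image bytes")
```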
def load_env(fn):
"""
A function decorated with @load_env will be called with the current
environment in an env keyword argument
"""
_validate_and_modify_signature(fn)
@wraps(fn)
def wrapper(*args, **kwargs):
return fn(Env.load(), *args, **kwargs)
return wrapper |
| load_env | python | ploomber/ploomber | src/ploomber/env/decorators.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/decorators.py | Apache-2.0 |
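A hedged usage sketch: the decorated function is defined with `env` as its first parameter but called without it; the `path.data` key assumes a matching `env.yaml` exists in the project:

```python
from ploomber.env.decorators import load_env

@load_env
def make_report(env, sample=False):
    # env is injected automatically from the current environment
    print(env.path.data, sample)

# called without passing env; requires an env.yaml to be discoverable
# make_report(sample=True)
```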
def with_env(source):
"""
A function decorated with @with_env starts an environment during
the execution of the function.
Notes
-----
The first argument of a function decorated with @with_env must be named
"env", the env will be passed automatically when calling the function. The
original function's signature is edited.
You can replace values in the environment, e.g. if you want to replace
env.key.another, you can call the decorated function with:
my_fn(env__key__another='my_new_value')
The environment is resolved at import time, changes to the working
directory will not affect initialization.
Examples
--------
.. literalinclude:: ../../examples/short/with_env.py
"""
def decorator(fn):
_validate_and_modify_signature(fn)
try:
# FIXME: we should deprecate initializing from a decorator
# with a dictionary, it isn't useful. leaving it for now
if isinstance(source, Mapping):
env_dict = EnvDict(source)
else:
# when the decorator is called without args, look for
# 'env.yaml'
env_dict = EnvDict.find(source or "env.yaml")
except Exception as e:
raise RuntimeError(
"Failed to resolve environment using "
'@with_env decorator in function "{}". '
"Tried to call Env with argument: {}".format(
_get_function_name_w_module(fn), source
)
) from e
fn._env_dict = env_dict
@wraps(fn)
def wrapper(*args, **kwargs):
to_replace = {k: v for k, v in kwargs.items() if k.startswith("env__")}
for key in to_replace.keys():
kwargs.pop(key)
env_dict_new = env_dict._replace_flatten_keys(to_replace)
try:
Env._init_from_decorator(env_dict_new, _get_function_name_w_module(fn))
except Exception as e:
current = Env.load()
raise RuntimeError(
"Failed to initialize environment using "
'@with_env decorator in function "{}". '
"Current environment: {}".format(
_get_function_name_w_module(fn), repr(current)
)
) from e
Env._ref = _get_function_name_w_module(fn)
try:
res = fn(Env.load(), *args, **kwargs)
except Exception as e:
Env.end()
raise e
Env.end()
return res
return wrapper
if isinstance(source, types.FunctionType):
fn = source
source = None
return decorator(fn)
return decorator |
| with_env | python | ploomber/ploomber | src/ploomber/env/decorators.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/decorators.py | Apache-2.0 |
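A hedged usage sketch: the first parameter must be named `env`, and `env__`-prefixed keyword arguments override values for a single call; the `path.data` key and values are hypothetical:

```python
from ploomber.env.decorators import with_env

@with_env({"path": {"data": "/tmp/data"}})
def make_pipeline(env, sample=False):
    print(env.path.data, sample)

# override env.path.data just for this call
# make_pipeline(sample=True, env__path__data="/tmp/other")
```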
def __init__(self, source="env.yaml"):
"""Start the environment
Parameters
----------
source: dict, pathlib.Path, str, optional
If dict, loads it directly, if pathlib.Path or path, reads the file
(assumes yaml format).
Raises
------
FileNotFoundError
If source is None and an environment file cannot be found
automatically
RuntimeError
If one environment has already started
Returns
-------
ploomber.Env
An environment object
"""
if not isinstance(source, EnvDict):
# try to initialize an EnvDict to perform validation, if any
# errors occur, discard object
try:
source = EnvDict(source)
except Exception:
Env.__instance = None
raise
self._data = source
self._fn_name = None |
| __init__ | python | ploomber/ploomber | src/ploomber/env/env.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/env.py | Apache-2.0 |
def find(cls, source):
"""
Find env file recursively, currently only used by the @with_env
decorator
"""
if not Path(source).exists():
source_found, _ = default.find_file_recursively(source)
if source_found is None:
raise FileNotFoundError(
'Could not find file "{}" in the '
"current working directory nor "
"6 levels up".format(source)
)
else:
source = source_found
return cls(source, path_to_here=Path(source).parent) |
| find | python | ploomber/ploomber | src/ploomber/env/envdict.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/envdict.py | Apache-2.0 |
def _replace_value(self, value, keys_all):
"""
Replace a value in the underlying dictionary, by passing a value and
a list of keys
e.g. given {'a': {'b': 1}}, we can replace 1 by doing
_replace_value(2, ['a', 'b'])
"""
keys_to_final_dict = keys_all[:-1]
key_to_edit = keys_all[-1]
dict_to_edit = self._data
for e in keys_to_final_dict:
dict_to_edit = dict_to_edit[e]
if key_to_edit not in dict_to_edit:
dotted_path = ".".join(keys_all)
raise KeyError(
'Trying to replace key "{}" in env, '
"but it does not exist".format(dotted_path)
)
dict_to_edit[key_to_edit] = self._expander.expand_raw_value(value, keys_all) |
| _replace_value | python | ploomber/ploomber | src/ploomber/env/envdict.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/envdict.py | Apache-2.0 |
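The traversal on its own, without the expansion step: walk with every key but the last, then assign. A minimal stand-alone sketch:

```python
def replace_value(data, value, keys_all):
    # walk to the dict holding the final key, then assign
    *keys_to_final_dict, key_to_edit = keys_all
    dict_to_edit = data
    for key in keys_to_final_dict:
        dict_to_edit = dict_to_edit[key]
    if key_to_edit not in dict_to_edit:
        raise KeyError(f"{'.'.join(keys_all)} does not exist")
    dict_to_edit[key_to_edit] = value

d = {"a": {"b": 1}}
replace_value(d, 2, ["a", "b"])
print(d)  # {'a': {'b': 2}}
```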
def _inplace_replace_flatten_key(self, value, key_flatten):
"""
Replace a value in the underlying dictionary, by passing a value and
a list of keys
e.g. given {'a': {'b': 1}}, we can replace 1 by doing
_replace_flatten_keys(2, 'env__a__b'). This function is used
internally to override env values when calling factories (functions
decorated with @with_env) or when doing so via the command line
interface - ploomber build pipeline.yaml --env--a--b 2
Returns a copy
"""
# convert env__a__b__c -> ['a', 'b', 'c']
parts = key_flatten.split("__")
if parts[0] != "env":
raise ValueError("keys_flatten must start with env__")
keys_all = parts[1:]
self._replace_value(value, keys_all) |
| _inplace_replace_flatten_key | python | ploomber/ploomber | src/ploomber/env/envdict.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/envdict.py | Apache-2.0 |
def load_from_source(source):
"""
Loads from a dictionary or a YAML file and applies preprocessing to the
dictionary
Returns
-------
dict
Raw dictionary
pathlib.Path
Path to the loaded file, None if source is a dict
str
Name, if loaded from a YAML file with the env.{name}.yaml format,
None if another format or if source is a dict
"""
if isinstance(source, Mapping):
# dictionary, path
return source, None
with open(str(source)) as f:
try:
raw = yaml.load(f, Loader=yaml.SafeLoader)
except Exception as e:
raise type(e)(
"yaml.load failed to parse your YAML file "
"fix syntax errors and try again"
) from e
finally:
# yaml.load returns None for empty files and str if file just
# contains a string - those aren't valid for our use case, raise
# an error
if not isinstance(raw, Mapping):
raise ValueError(
"Expected object loaded from '{}' to be "
"a dict but got '{}' instead, "
"verify the content".format(source, type(raw).__name__)
)
path = Path(source).resolve()
return raw, path |
| load_from_source | python | ploomber/ploomber | src/ploomber/env/envdict.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/envdict.py | Apache-2.0 |
def raw_preprocess(raw, path_to_raw):
"""
Preprocess a raw dictionary. If a '_module' key exists, it
will be expanded: first, try to locate a module with that name and resolve
to its location (root __init__.py parent); if no module is found,
interpret as a path to the project's root folder, checks that the folder
actually exists. '{{here}}' is also allowed, which resolves to the
path_to_raw, raises Exception if path_to_raw is None
Returns
-------
preprocessed : dict
Dict with preprocessed keys (empty dictionary if no special
keys exist in raw)
Parameters
----------
raw : dict
Raw data dictionary
path_to_raw : str
Path to the file the dict was read from; if read from a dict, pass None
"""
module = raw.get("_module")
preprocessed = {}
if module:
if raw["_module"] == "{{here}}":
if path_to_raw is not None:
preprocessed["_module"] = path_to_raw.parent
else:
raise ValueError(
"_module cannot be {{here}} if " "not loaded from a file"
)
else:
# check if it's a filesystem path
as_path = Path(module)
if as_path.exists():
if as_path.is_file():
raise ValueError(
'Could not resolve _module "{}", '
"expected a module or a directory but got a "
"file".format(module)
)
else:
path_to_module = as_path
# must be a dotted path
else:
module_spec = importlib.util.find_spec(module)
# package does not exist
if module_spec is None:
raise ValueError(
'Could not resolve _module "{}", '
"it is not a valid module "
"nor a directory".format(module)
)
else:
path_to_module = Path(module_spec.origin).parent
preprocessed["_module"] = path_to_module
return preprocessed |
| raw_preprocess | python | ploomber/ploomber | src/ploomber/env/envdict.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/envdict.py | Apache-2.0 |
def cast_if_possible(value):
"""
Reference to env in specs must be strings, but we would like the rendered
value to still have the appropriate type
"""
if isinstance(value, str):
value_lower = value.lower()
if value_lower == "false":
return False
elif value_lower == "true":
return True
elif value_lower in {"none", "null"}:
return None
try:
return ast.literal_eval(value)
except Exception:
pass
return value |
| cast_if_possible | python | ploomber/ploomber | src/ploomber/env/expand.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/expand.py | Apache-2.0 |
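A few example calls, assuming the function is importable from the module path shown above:

```python
from ploomber.env.expand import cast_if_possible

print(cast_if_possible("1"))       # 1 (int)
print(cast_if_possible("1.5"))     # 1.5 (float)
print(cast_if_possible("true"))    # True
print(cast_if_possible("null"))    # None
print(cast_if_possible("[1, 2]"))  # [1, 2]
print(cast_if_possible("hello"))   # 'hello' (left as a string)
```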
def expand_raw_value(self, raw_value, parents):
"""
Expand a string with placeholders
Parameters
----------
raw_value : str
The original value to expand
parents : list
The list of parents to get to this value in the dictionary
Notes
-----
If, for a given raw_value, the first parent is 'path', the expanded value
is cast to a pathlib.Path object and .expanduser() is called;
furthermore, if raw_value ends with '/', a directory is created if
it does not currently exist
"""
placeholders = util.get_tags_in_str(raw_value)
if not placeholders:
value = raw_value
else:
if "git" in placeholders:
if not shutil.which("git"):
raise BaseException(
"Found placeholder {{git}}, but "
"git is not installed. Please install "
"it and try again."
)
if not repo.is_repo(
self._preprocessed.get("_module", self._path_to_here)
):
raise BaseException(
"Found placeholder {{git}}, but could not "
"locate a git repository. Create a repository "
"or remove the {{git}} placeholder."
)
# get all required placeholders
params = {k: self.load_placeholder(k) for k in placeholders}
value = Template(raw_value).render(**params)
if parents:
if parents[0] == "path":
# value is a str (since it was loaded from a yaml file),
# if it has an explicit trailing slash, interpret it as
# a directory and create it, we have to do it at this point,
# because once we cast to Path, we lose the trailing slash
if str(value).endswith("/"):
self._try_create_dir(value)
value = Path(value).expanduser()
else:
value = cast_if_possible(value)
# store the rendered value so it's available for upcoming raw_values
# NOTE: the current implementation only works for non-nested keys
if len(parents) == 1 and isinstance(parents[0], str):
self._placeholders[parents[0]] = value
return value |
| expand_raw_value | python | ploomber/ploomber | src/ploomber/env/expand.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/expand.py | Apache-2.0 |
def iterate_nested_dict(d):
"""
Iterate over all values (possibly nested) in a dictionary
Yields: dict holding the value, current key, current value, list of keys
to get to this value
"""
for k, v in d.items():
for i in _iterate(d, k, v, preffix=[k]):
yield i |
| iterate_nested_dict | python | ploomber/ploomber | src/ploomber/env/expand.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/expand.py | Apache-2.0 |
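The `_iterate` helper is not part of this extract; a self-contained recursive sketch that yields the same tuple shape (holder dict, key, value, key path):

```python
from collections.abc import Mapping

def iterate_nested_dict(d, prefix=None):
    prefix = prefix or []
    for k, v in d.items():
        if isinstance(v, Mapping):
            yield from iterate_nested_dict(v, prefix + [k])
        else:
            yield d, k, v, prefix + [k]

for holder, key, value, path in iterate_nested_dict({"a": {"b": 1}, "c": 2}):
    print(key, value, path)
# b 1 ['a', 'b']
# c 2 ['c']
```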
def raw_data_keys(d):
"""
Validate raw dictionary, no top-level keys with leading underscores,
except for _module, and no keys with double underscores anywhere
"""
keys_all = get_keys_for_dict(d)
errors = []
try:
no_double_underscores(keys_all)
except ValueError as e:
errors.append(str(e))
try:
no_leading_underscore(d.keys())
except ValueError as e:
errors.append(str(e))
if errors:
msg = "Error validating env.\n" + "\n".join(errors)
raise ValueError(msg) |
| raw_data_keys | python | ploomber/ploomber | src/ploomber/env/validate.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/validate.py | Apache-2.0 |
def get_keys_for_dict(d):
"""
Get all (possibly nested) keys in a dictionary
"""
out = []
for k, v in d.items():
out.append(k)
if isinstance(v, Mapping):
out += get_keys_for_dict(v)
return out |
| get_keys_for_dict | python | ploomber/ploomber | src/ploomber/env/validate.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/env/validate.py | Apache-2.0 |
def next_task():
"""
Return the next Task to execute; returns None if no Tasks are
available for execution (because their dependencies are not done yet)
and raises a StopIteration exception if there are no more tasks to
run, which means the DAG is done
"""
for task in dag.values():
if task.exec_status in {TaskStatus.Aborted}:
if self._bar:
self._bar.update()
done.append(task)
elif task.exec_status == TaskStatus.BrokenProcessPool:
raise StopIteration
# iterate over tasks to find which is ready for execution
for task in dag.values():
# ignore tasks that are already started, I should probably add
# an executing status but that cannot exist in the task itself,
# maybe in the manager?
if (
task.exec_status
in {TaskStatus.WaitingExecution, TaskStatus.WaitingDownload}
and task not in started
):
return task
# there might be some up-to-date tasks, add them
set_done = set([t.name for t in done])
if not self._i % 50000:
click.clear()
if set_done:
_log(
f"Finished: {pretty_print.iterable(set_done)}",
self._logger.debug,
print_progress=self.print_progress,
)
remaining = pretty_print.iterable(set_all - set_done)
_log(
f"Remaining: {remaining}",
self._logger.debug,
print_progress=self.print_progress,
)
_log(
f"Finished {len(set_done)} out of {len(set_all)} tasks",
self._logger.info,
print_progress=self.print_progress,
)
if set_done == set_all:
self._logger.debug("All tasks done")
raise StopIteration
self._i += 1 |
| next_task | python | ploomber/ploomber | src/ploomber/executors/parallel.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/parallel.py | Apache-2.0 |
def next_task():
"""
Return the next Task to execute; returns None if no Tasks are
available for execution (because their dependencies are not done yet)
and raises a StopIteration exception if there are no more tasks to
run, which means the DAG is done
"""
for task in dag.values():
if task.exec_status in {TaskStatus.Aborted}:
if self._bar:
self._bar.update()
done.append(task)
elif task.exec_status == TaskStatus.BrokenProcessPool:
raise StopIteration
# iterate over tasks to find which is ready for execution
for task in dag.values():
# ignore tasks that are already started, I should probably add
# an executing status but that cannot exist in the task itself,
# maybe in the manager?
if (
task.exec_status
in {TaskStatus.WaitingExecution, TaskStatus.WaitingDownload}
and task not in started
):
return task
# there might be some up-to-date tasks, add them
set_done = set([t.name for t in done])
if not self._i % 50000:
click.clear()
if set_done:
_log(
f"Finished: {pretty_print.iterable(set_done)}",
self._logger.debug,
print_progress=self.print_progress,
)
remaining = pretty_print.iterable(set_all - set_done)
_log(
f"Remaining: {remaining}",
self._logger.debug,
print_progress=self.print_progress,
)
_log(
f"Finished {len(set_done)} out of {len(set_all)} tasks",
self._logger.info,
print_progress=self.print_progress,
)
if set_done == set_all:
self._logger.debug("All tasks done")
raise StopIteration
self._i += 1 |
| next_task | python | ploomber/ploomber | src/ploomber/executors/parallel_dill.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/parallel_dill.py | Apache-2.0 |
def catch_warnings(fn, warnings_all):
"""
Catch all warnings on the current task (except DeprecationWarning)
and append them to warnings_all. Runs if the parameter catch_warnings
is true.
Parameters
----------
fn : function
A LazyFunction that automatically calls catch_warnings, with parameters
from the main function (warnings_all) and the current scheduled task.
warnings_all: BuildWarningsCollector object
Collects all warnings.
"""
# TODO: we need a try catch in case fn() raises an exception
with warnings.catch_warnings(record=True) as warnings_current:
warnings.simplefilter("ignore", DeprecationWarning)
result = fn()
if warnings_current:
w = [str(a_warning.message) for a_warning in warnings_current]
warnings_all.append(task=fn.task, message="\n".join(w))
return result |
| catch_warnings | python | ploomber/ploomber | src/ploomber/executors/serial.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/serial.py | Apache-2.0 |
def catch_exceptions(fn, exceptions_all):
"""
If there is an exception, log it and append it to exceptions_all. Runs if
the parameter catch_exceptions is true.
Parameters
----------
fn : function
A LazyFunction that automatically calls catch_exceptions, with
parameters from the main function (exceptions_all) and the
current scheduled task.
exceptions_all: BuildExceptionsCollector object
Collects all exceptions.
"""
logger = logging.getLogger(__name__)
# NOTE: we are individually catching exceptions
# (see build_in_current_process and build_in_subprocess), would it be
# better to catch everything here and set
# task.exec_status = TaskStatus.Errored accordingly?
# TODO: setting exec_status can also raise exceptions if the hook fails
# add tests for that, and check the final task status,
try:
# try to run task build
fn()
except Exception as e:
# if running in a different process, logger.exception inside Task.build
# won't show up. So we do it here.
# FIXME: this is going to cause duplicates if not running in a
# subprocess
logger.exception(str(e))
tr = _format.exception(e)
exceptions_all.append(task=fn.task, message=tr, obj=e) |
| catch_exceptions | python | ploomber/ploomber | src/ploomber/executors/serial.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/serial.py | Apache-2.0 |
def pass_exceptions(fn):
"""
Run the current task without catching exceptions or warnings. Runs if
both parameters catch_exceptions and catch_warnings are false.
Parameters
----------
fn : function
A LazyFunction that automatically calls pass_exceptions on the
current scheduled task.
"""
# should i still check here for DAGBuildEarlyStop? is it worth
# for returning accurate task status?
fn() |
| pass_exceptions | python | ploomber/ploomber | src/ploomber/executors/serial.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/serial.py | Apache-2.0 |
def build_in_subprocess(task, build_kwargs, reports_all):
"""
Execute the current task in a subprocess. Runs if the parameter
build_in_subprocess is true.
Parameters
----------
task : Task object
The current task.
build_kwargs: dict
Contains bool catch_exceptions and bool catch_warnings, checks
whether to catch exceptions and warnings on the current task.
reports_all: list
Collects the build report when executing the current DAG.
"""
if callable(task.source.primitive):
try:
p = Pool(processes=1)
except RuntimeError as e:
if "An attempt has been made to start a new process" in str(e):
# this is most likely due to child processes created with
# spawn (mac/windows) outside if __name__ == '__main__'
raise RuntimeError(
"Press ctrl + c to exit. "
"For help solving this, go to: "
"https://ploomber.io/s/mp"
) from e
else:
raise
res = p.apply_async(func=task._build, kwds=build_kwargs)
# calling this make sure we catch the exception, from the docs:
# Return the result when it arrives. If timeout is not None and
# the result does not arrive within timeout seconds then
# multiprocessing.TimeoutError is raised. If the remote call
# raised an exception then that exception will be reraised by
# get().
# https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.AsyncResult.get
try:
report, meta = res.get()
except Exception:
# we have to update status since this is ran in a subprocess
task.exec_status = TaskStatus.Errored
raise
else:
task.product.metadata.update_locally(meta)
task.exec_status = TaskStatus.Executed
reports_all.append(report)
finally:
p.close()
p.join()
else:
# we run other tasks in the same process
report, meta = task._build(**build_kwargs)
task.product.metadata.update_locally(meta)
reports_all.append(report) |
| build_in_subprocess | python | ploomber/ploomber | src/ploomber/executors/serial.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/serial.py | Apache-2.0 |
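The error-propagation pattern in isolation: `apply_async` returns an `AsyncResult` and `.get()` re-raises any exception from the child process. The `work` function is hypothetical; the `__main__` guard matters on platforms that spawn processes:

```python
from multiprocessing import Pool

def work(x):
    if x < 0:
        raise ValueError("negative input")
    return x * 2

if __name__ == "__main__":
    p = Pool(processes=1)
    try:
        res = p.apply_async(func=work, kwds={"x": 21})
        print(res.get())  # 42; get() re-raises child exceptions
    finally:
        p.close()
        p.join()
```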
def exception(exc):
"""Formats an exception into a more concise traceback
Parameters
----------
"""
# extract all the exception objects, in case this is a chained exception
exceptions = [exc]
while exc.__cause__:
exceptions.append(exc.__cause__)
exc = exc.__cause__
# reverse to get the most specific error first
exceptions.reverse()
# find the first instance of TaskBuildError
breakpoint = None
for i, exc in enumerate(exceptions):
if isinstance(exc, (TaskBuildError)):
breakpoint = i
break
# using the breakpoint, find the exception where we'll only display the
# error message, not the traceback. That is, the first TaskBuildError
# exception
exc_short = exceptions[breakpoint:]
if breakpoint is not None:
# this happens when running a single task, all exception can be
# TaskBuildError
if breakpoint != 0:
# traceback info applies to non-TaskBuildError
# (this takes care of chained exceptions as well)
tr = _format_exception(exceptions[breakpoint - 1])
else:
tr = ""
# append the short exceptions (only error message)
tr = (
tr
+ "\n"
+ "\n".join(f"{_get_exc_name(exc)}: {str(exc)}" for exc in exc_short)
)
else:
# if not breakpoint, take the outermost exception and show it.
# this ensures we show the full traceback in case there are chained
# exceptions
tr = _format_exception(exceptions[-1])
return tr |
| exception | python | ploomber/ploomber | src/ploomber/executors/_format.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/executors/_format.py | Apache-2.0 |
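The chain walk on its own: collect the exception plus every `__cause__`, then reverse so the most specific error comes first. A hypothetical chained exception:

```python
def unchain(exc):
    exceptions = [exc]
    while exc.__cause__:
        exceptions.append(exc.__cause__)
        exc = exc.__cause__
    exceptions.reverse()  # most specific error first
    return exceptions

try:
    try:
        raise KeyError("missing column")
    except KeyError as e:
        raise RuntimeError("task failed") from e
except RuntimeError as chained:
    print([type(e).__name__ for e in unchain(chained)])
    # ['KeyError', 'RuntimeError']
```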
def serializer(extension_mapping=None, *, fallback=False, defaults=None, unpack=False):
"""Decorator for serializing functions
Parameters
----------
extension_mapping : dict, default=None
An extension -> function mapping. Calling the decorated function with a
File of a given extension will use the one in the mapping if it exists,
e.g., {'.csv': to_csv, '.json': to_json}.
fallback : bool or str, default=False
Determines what method to use if extension_mapping does not match the
product to serialize. Valid values are True (uses the pickle module),
'joblib', and 'cloudpickle'. If you use either of the last two, the
corresponding module must be installed. If this is enabled, the
body of the decorated function is never executed. To turn it off
pass False.
defaults : list, default=None
Built-in serializing functions to use. Must be a list with any
combinations of values: '.txt', '.json', '.csv', '.parquet'. To save
to .txt, the returned object must be a string, for .json it must be
a json serializable object (e.g., a list or a dict), for .csv and
.parquet it must be a pandas.DataFrame. If using .parquet, a parquet
library must be installed (e.g., pyarrow). If extension_mapping
and defaults contain overlapping keys, an error is raised
unpack : bool, default=False
If True, it treats every element in a dictionary as a different
file, calling the serializing function one per (key, value) pair and
using the key as filename.
"""
def _serializer(fn):
extension_mapping_final = _build_extension_mapping_final(
extension_mapping, defaults, fn, _DEFAULTS, "serializer"
)
try:
serializer_fallback = _EXTERNAL[fallback]
except KeyError:
error = True
else:
error = False
if error:
raise ValueError(
f"Invalid fallback argument {fallback!r} "
f"in function {fn.__name__!r}. Must be one of "
"True, 'joblib', or 'cloudpickle'"
)
if serializer_fallback is None and fallback in {"cloudpickle", "joblib"}:
raise ModuleNotFoundError(
f"Error serializing with function {fn.__name__!r}. "
f"{fallback} is not installed"
)
n_params = len(signature(fn).parameters)
if n_params != 2:
raise TypeError(
f"Expected serializer {fn.__name__!r} "
f"to take 2 arguments, but it takes {n_params!r}"
)
@wraps(fn)
def wrapper(obj, product):
if isinstance(product, MetaProduct):
_validate_obj(obj, product)
for key, value in obj.items():
_serialize_product(
value,
product[key],
extension_mapping_final,
fallback,
serializer_fallback,
fn,
unpack,
)
else:
_serialize_product(
obj,
product,
extension_mapping_final,
fallback,
serializer_fallback,
fn,
unpack,
)
return wrapper
return _serializer |
| serializer | python | ploomber/ploomber | src/ploomber/io/serialize.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/serialize.py | Apache-2.0 |
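A hedged usage sketch of the decorator: map one extension to a custom function and fall back to pickle for everything else; `to_csv` and the assumption that the object is a DataFrame are illustrative, not part of the library:

```python
from ploomber.io.serialize import serializer

def to_csv(obj, product):
    # hypothetical: obj is a pandas.DataFrame, product is path-like
    obj.to_csv(str(product), index=False)

@serializer({".csv": to_csv}, fallback=True)
def my_serializer(obj, product):
    pass  # never runs: extension_mapping or the pickle fallback handles it
```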
def _serialize_product(
obj, product, extension_mapping, fallback, serializer_fallback, fn, unpack
):
"""
Determine which function to use for serialization. Note that this
function operates on single products. If the task generates multiple
products, this function is called multiple times.
"""
suffix = Path(product).suffix
if unpack and isinstance(obj, Mapping):
parent = Path(product)
# if the directory exists, delete it, otherwise old files will
# mix with the new ones
if parent.is_dir():
shutil.rmtree(product)
# if it's a file, delete it as well
elif parent.is_file():
parent.unlink()
parent.mkdir(exist_ok=True, parents=True)
for filename, o in obj.items():
out_path = _Path(product, filename)
suffix_current = Path(filename).suffix
serializer = _determine_serializer(
suffix_current, extension_mapping, fallback, serializer_fallback, fn
)
serializer(o, out_path)
else:
serializer = _determine_serializer(
suffix, extension_mapping, fallback, serializer_fallback, fn
)
serializer(obj, product) |
| _serialize_product | python | ploomber/ploomber | src/ploomber/io/serialize.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/serialize.py | Apache-2.0 |
def _write_source(
self, lines: Sequence[str], indents: Sequence[str] = (), lexer: str = "pytb"
) -> None:
"""Write lines of source code possibly highlighted.
Keeping this private for now because the API is clunky. We should
discuss how to evolve the terminal writer so we can have more precise
color support, for example being able to write part of a line in one
color and the rest in another, and so on.
"""
if indents and len(indents) != len(lines):
raise ValueError(
"indents size ({}) should have same size as lines ({})".format(
len(indents), len(lines)
)
)
if not indents:
indents = [""] * len(lines)
source = "\n".join(lines)
new_lines = self._highlight(source, lexer=lexer).splitlines()
for indent, new_line in zip(indents, new_lines):
self.line(indent + new_line) |
| _write_source | python | ploomber/ploomber | src/ploomber/io/terminalwriter.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/terminalwriter.py | Apache-2.0 |
def _highlight(self, source: str, lexer: str) -> str:
"""Highlight the given source code if we have markup support."""
if lexer not in {"py", "pytb"}:
raise ValueError(f'lexer must be "py" or "pytb", got: {lexer!r}')
if not self.hasmarkup or not self.code_highlight:
return source
try:
from pygments.formatters.terminal import TerminalFormatter
from pygments.lexers.python import PythonLexer, PythonTracebackLexer
from pygments import highlight
except ImportError:
return source
else:
Lexer = PythonLexer if lexer == "py" else PythonTracebackLexer
highlighted = highlight(
source, Lexer(), TerminalFormatter(bg="dark")
) # type: str
return highlighted | Highlight the given source code if we have markup support. | _highlight | python | ploomber/ploomber | src/ploomber/io/terminalwriter.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/terminalwriter.py | Apache-2.0 |
def unserializer(
extension_mapping=None, *, fallback=False, defaults=None, unpack=False
):
"""Decorator for unserializing functions
Parameters
----------
extension_mapping : dict, default=None
An extension -> function mapping. Calling the decorated function with a
File of a given extension will use the one in the mapping if it exists,
e.g., {'.csv': from_csv, '.json': from_json}.
fallback : bool or str, default=False
Determines what method to use if extension_mapping does not match the
product to unserialize. Valid values are True (uses the pickle module),
'joblib', and 'cloudpickle'. If you use either of the last two, the
corresponding module must be installed. If this is enabled, the
body of the decorated function is never executed. To turn it off
pass False.
defaults : list, default=None
Built-in unserializing functions to use. Must be a list with any
combinations of values: '.txt', '.json', '.csv', '.parquet'.
Unserializing .txt, returns a string, for .json returns any
JSON-unserializable object (e.g., a list or a dict), .csv and
.parquet return a pandas.DataFrame. If using .parquet, a parquet
library must be installed (e.g., pyarrow). If extension_mapping
and defaults contain overlapping keys, an error is raised
unpack : bool, default=False
If True and the task product points to a directory, it will call
the unserializer one time per file in the directory. The unserialized
object will be a dictionary where keys are the filenames and values
are the unserialized objects. Note that this isn't recursive, it only
looks at files that are immediate children of the product directory.
"""
def _unserializer(fn):
extension_mapping_final = _build_extension_mapping_final(
extension_mapping, defaults, fn, _DEFAULTS, "unserializer"
)
try:
unserializer_fallback = _EXTERNAL[fallback]
except KeyError:
error = True
else:
error = False
if error:
raise ValueError(
f"Invalid fallback argument {fallback!r} "
f"in function {fn.__name__!r}. Must be one of "
"True, 'joblib', or 'cloudpickle'"
)
if unserializer_fallback is None and fallback in {"cloudpickle", "joblib"}:
raise ModuleNotFoundError(
f"Error unserializing with function {fn.__name__!r}. "
f"{fallback} is not installed"
)
n_params = len(signature(fn).parameters)
if n_params != 1:
raise TypeError(
f"Expected unserializer {fn.__name__!r} "
f"to take 1 argument, but it takes {n_params!r}"
)
@wraps(fn)
def wrapper(product):
if isinstance(product, MetaProduct):
return {
key: _unserialize_product(
value,
extension_mapping_final,
fallback,
unserializer_fallback,
fn,
unpack,
)
for key, value in product.products.products.items()
}
else:
return _unserialize_product(
product,
extension_mapping_final,
fallback,
unserializer_fallback,
fn,
unpack,
)
return wrapper
return _unserializer |
| unserializer | python | ploomber/ploomber | src/ploomber/io/unserialize.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/unserialize.py | Apache-2.0 |
def wcwidth(c: str) -> int:
"""Determine how many columns are needed to display a character in a
terminal. Returns -1 if the character is not printable.
Returns 0, 1 or 2 for other characters.
"""
o = ord(c)
# ASCII fast path.
if 0x20 <= o < 0x07F:
return 1
# Some Cf/Zp/Zl characters which should be zero-width.
if (
o == 0x0000
or 0x200B <= o <= 0x200F
or 0x2028 <= o <= 0x202E
or 0x2060 <= o <= 0x2063
):
return 0
category = unicodedata.category(c)
# Control characters.
if category == "Cc":
return -1
# Combining characters with zero width.
if category in ("Me", "Mn"):
return 0
# Full/Wide east asian characters.
if unicodedata.east_asian_width(c) in ("F", "W"):
return 2
return 1 |
| wcwidth | python | ploomber/ploomber | src/ploomber/io/wcwidth.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/wcwidth.py | Apache-2.0 |
def wcswidth(s: str) -> int:
"""Determine how many columns are needed to display a string in a terminal.
Returns -1 if the string contains non-printable characters.
"""
width = 0
for c in unicodedata.normalize("NFC", s):
wc = wcwidth(c)
if wc < 0:
return -1
width += wc
return width |
| wcswidth | python | ploomber/ploomber | src/ploomber/io/wcwidth.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/wcwidth.py | Apache-2.0 |
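A few example calls, assuming both functions are importable from the module path shown above:

```python
from ploomber.io.wcwidth import wcwidth, wcswidth

print(wcwidth("a"))     # 1  (regular ASCII)
print(wcwidth("漢"))    # 2  (wide east-asian character)
print(wcwidth("\n"))    # -1 (control character, not printable)
print(wcswidth("a漢"))  # 3
```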
def run(
self,
*cmd,
description=None,
capture_output=False,
expected_output=None,
error_message=None,
hint=None,
show_cmd=True,
):
"""Execute a command in a subprocess
Parameters
----------
*cmd
Command to execute
description: str, default=None
Label to display before executing the command
capture_output: bool, default=False
Captures output, otherwise prints to standard output and standard
error
expected_output: str, default=None
Raises a RuntimeError if the output is different than this value.
Only valid when capture_output=True
error_message: str, default=None
Error to display when expected_output does not match. If None,
a generic message is shown
hint: str, default=None
An optional string to show at the end of the error when
the expected_output does not match. Used to hint the user how
to fix the problem
show_cmd : bool, default=True
Whether to display the command next to the description
(and error message if it fails) or not. Only valid when
description is not None
"""
cmd_str = " ".join(cmd)
if expected_output is not None and not capture_output:
raise RuntimeError(
"capture_output must be True when " "expected_output is not None"
)
if description:
header = f"{description}: {cmd_str}" if show_cmd else description
self.tw.sep("=", header, blue=True)
error = None
result = None
try:
result = subprocess.run(cmd, capture_output=capture_output)
# throw error if return code is not 0
result.check_returncode()
except Exception as e:
error = e
else:
# result.stdout is None if run was called with capture_output=False
if result.stdout is not None:
result = result.stdout.decode(sys.stdout.encoding)
if expected_output is not None:
error = result != expected_output
if error:
lines = []
if error_message:
line_first = error_message
else:
if show_cmd:
cmd_str = " ".join(cmd)
line_first = (
"An error occurred when executing " f"command: {cmd_str}"
)
else:
line_first = "An error occurred."
lines.append(line_first)
if not capture_output:
lines.append(f"Original error message: {error}")
if hint:
lines.append(f"Hint: {hint}.")
raise CommanderException("\n".join(lines))
else:
return result |
| run | python | ploomber/ploomber | src/ploomber/io/_commander.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/_commander.py | Apache-2.0 |
def copy_template(self, path, **render_kwargs):
"""Copy template to the workspace
Parameters
----------
path : str
Path to template (relative to templates path)
**render_kwargs
Keyword arguments passed to the template
Examples
--------
>>> # copies template in {templates-path}/directory/template.yaml
>>> # to {workspace}/template.yaml
>>> cmdr.copy_template('directory/template.yaml') # doctest: +SKIP
"""
dst = Path(self.workspace, PurePosixPath(path).name)
# This message is no longer valid since this is only called
# when there is no env yet
if dst.exists():
self.success(f"Using existing {path!s}...")
else:
self.info(f"Adding {dst!s}...")
dst.parent.mkdir(exist_ok=True, parents=True)
content = self._env.get_template(str(path)).render(**render_kwargs)
dst.write_text(content) | Copy template to the workspace
Parameters
----------
path : str
Path to template (relative to templates path)
**render_kwargs
Keyword arguments passed to the template
Examples
--------
>>> # copies template in {templates-path}/directory/template.yaml
>>> # to {workspace}/template.yaml
>>> cmdr.copy_template('directory/template.yaml') # doctest: +SKIP
| copy_template | python | ploomber/ploomber | src/ploomber/io/_commander.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/_commander.py | Apache-2.0 |
def cp(self, src):
"""
Copies a file or directory to the workspace, replacing it if necessary.
Deleted on exit.
Notes
-----
Used mainly for preparing Dockerfiles since they can only
copy from the current working directory
Examples
--------
>>> # copies dir/file to {workspace}/file
>>> cmdr.cp('dir/file') # doctest: +SKIP
"""
path = Path(src)
if not path.exists():
raise CommanderException(f"Missing {src} file. Add it and try again.")
# convert to absolute to ensure we delete the right file on __exit__
dst = Path(self.workspace, path.name).resolve()
self._to_delete.append(dst)
_delete(dst)
if path.is_file():
shutil.copy(src, dst)
else:
shutil.copytree(src, dst) |
Copies a file or directory to the workspace, replacing it if necessary.
Deleted on exit.
Notes
-----
Used mainly for preparing Dockerfiles since they can only
copy from the current working directory
Examples
--------
>>> # copies dir/file to {workspace}/file
>>> cmdr.cp('dir/file') # doctest: +SKIP
| cp | python | ploomber/ploomber | src/ploomber/io/_commander.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/_commander.py | Apache-2.0 |
def append_inline(self, line, dst):
"""Append line to a file
Parameters
----------
line : str
Line to append
dst : str
File to append (can be outside the workspace)
Examples
--------
>>> cmdr.append_inline('*.csv', '.gitignore') # doctest: +SKIP
"""
if not Path(dst).exists():
Path(dst).touch()
original = Path(dst).read_text()
Path(dst).write_text(original + "\n" + line + "\n") | Append line to a file
Parameters
----------
line : str
Line to append
dst : str
File to append (can be outside the workspace)
Examples
--------
>>> cmdr.append_inline('*.csv', '.gitignore') # doctest: +SKIP
| append_inline | python | ploomber/ploomber | src/ploomber/io/_commander.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/_commander.py | Apache-2.0 |
def _warn_show(self):
"""Display accumulated warning messages (added via .warn_on_exit)"""
if self._warnings:
self.tw.sep("=", "Warnings", yellow=True)
self.tw.write("\n\n".join(self._warnings) + "\n")
self.tw.sep("=", yellow=True) | Display accumulated warning messages (added via .warn_on_exit) | _warn_show | python | ploomber/ploomber | src/ploomber/io/_commander.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/io/_commander.py | Apache-2.0 |
def remove_line_number(path):
"""
Takes a path/to/file:line path and returns path/to/file path object
"""
parts = list(Path(path).parts)
parts[-1] = parts[-1].split(":")[0]
return Path(*parts) |
Takes a path/to/file:line path and returns path/to/file path object
| remove_line_number | python | ploomber/ploomber | src/ploomber/jupyter/dag.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/dag.py | Apache-2.0 |
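Example (not part of the original source): a quick behavior sketch for the helper above; the example paths are assumptions. It strips the trailing :line suffix from a path.

from pathlib import Path

assert remove_line_number("src/tasks/load.py:25") == Path("src/tasks/load.py")
assert remove_line_number("src/tasks/load.py") == Path("src/tasks/load.py")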
def overwrite(self, model, path):
"""Overwrite a model back to the original function"""
resource = self._get(path)
resource.interactive.overwrite(nbformat.from_dict(model["content"]))
return {
"name": resource.task.name,
"type": "notebook",
"path": path,
"writable": True,
"created": datetime.datetime.now(),
"last_modified": datetime.datetime.now(),
"content": None,
"mimetype": "text/x-python",
"format": None,
} | Overwrite a model back to the original function | overwrite | python | ploomber/ploomber | src/ploomber/jupyter/dag.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/dag.py | Apache-2.0 |
def _mapping(self):
"""Returns the corresponding DAGMapping instance for the current DAG"""
if self.__mapping is None:
notebook_tasks = [
task for task in self._dag.values() if isinstance(task, NotebookMixin)
]
pairs = [
(resolve_path(Path(self._root_dir).resolve(), t.source.loc), t)
for t in notebook_tasks
if t.source.loc is not None
]
self.__mapping = DAGMapping(pairs)
return self.__mapping | Returns the corresponding DAGMapping instance for the current DAG | _mapping | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def resolve_path(parent, path):
"""
This function resolves paths to make the {source} -> {task} mapping
work even when `jupyter notebook` is initialized from a subdirectory
of pipeline.yaml
"""
try:
# FIXME: remove :linenumber
return Path(path).resolve().relative_to(parent).as_posix().strip()
except ValueError:
return None |
This function resolves paths to make the {source} -> {task} mapping
work even when `jupyter notebook` is initialized from a subdirectory
of pipeline.yaml
| resolve_path | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def __init__(self, *args, **kwargs):
"""
Initialize the content manager, look for a pipeline.yaml file in the
current directory, if there is one, load it, if there isn't one
don't do anything
"""
# NOTE: there is some strange behavior in the Jupyter contents
# manager. We should not access attributes here (e.g.,
# self.root_dir) otherwise they aren't correctly initialized.
# We have a test that checks this:
# test_manager_initialization in test_jupyter.py
self.reset_dag()
super().__init__(*args, **kwargs)
self.__dag_loader = None |
Initialize the content manager, look for a pipeline.yaml file in the
current directory, if there is one, load it, if there isn't one
don't do anything
| __init__ | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def get(self, path, content=True, type=None, format=None, *args, **kwargs):
"""
This is called when opening a file (content=True) and when listing
files. When listing it's called once per file with (content=False).
Also called for directories. When the directory is part of the
listing (content=False) and when opened (content=True)
"""
# FIXME: reloading inside a (functions) folder causes 404
if content and (
self.spec is None or self.spec["meta"]["jupyter_functions_as_notebooks"]
):
# this load_dag() is required to list the folder that contains
# the notebooks exported from functions, however, since jupyter
# continuously calls this for the current directory it gets
# too verbose so we skip showing the log message
self.load_dag(log=False)
if self.manager and path in self.manager:
return self.manager.get(path, content)
# get the model contents (e.g. notebook content)
model = super().get(
path,
content=content,
type=type,
format=format,
*args,
**kwargs,
)
# user requested directory listing, check if there are task
# functions defined here
if model["type"] == "directory" and self.manager:
if model["content"]:
model["content"].extend(self.manager.get_by_parent(path))
check_metadata_filter(self.log, model)
# if opening a file
if model["content"] and model["type"] == "notebook":
# instantiate the dag starting at the current folder
self.log.info(
f'[Ploomber] Requested model: {model["path"]}. '
f"Looking for DAG with root dir: {self.root_dir}"
)
self.load_dag(
starting_dir=Path(self.root_dir, model["path"]).parent, model=model
)
if self._model_in_dag(model):
self.log.info("[Ploomber] Injecting cell...")
inject_cell(
model=model, params=self.dag_mapping[model["path"]]._params
)
return model |
This is called when opening a file (content=True) and when listing
files. When listing, it's called once per file with content=False.
It's also called for directories: when the directory is part of the
listing (content=False) and when it is opened (content=True)
| get | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def save(self, model, path=""):
"""
This is called when a file is saved
"""
if self.manager and path in self.manager:
out = self.manager.overwrite(model, path)
return out
else:
check_metadata_filter(self.log, model)
# not sure what's the difference between model['path'] and path
# but path has leading "/", _model_in_dag strips it
key = self._model_in_dag(model, path)
if key:
content = model["content"]
metadata = content.get("metadata", {}).get("ploomber", {})
if not metadata.get("injected_manually"):
self.log.info(
"[Ploomber] Cleaning up injected cell in {}...".format(
model.get("name") or ""
)
)
model["content"] = _cleanup_rendered_nb(content)
self.log.info("[Ploomber] Deleting product's metadata...")
self.dag_mapping.delete_metadata(key)
return super().save(model, path) |
This is called when a file is saved
| save | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def _model_in_dag(self, model, path=None):
"""Determine if the model is part of the pipeline"""
model_in_dag = False
if path is None:
path = model["path"]
else:
path = path.strip("/")
if self.dag:
if "content" in model and model["type"] == "notebook":
if path in self.dag_mapping:
# NOTE: not sure why sometimes the model comes with a
# name and sometimes it doesn't
self.log.info(
"[Ploomber] {} is part of the pipeline... ".format(
model.get("name") or ""
)
)
model_in_dag = True
else:
self.log.info(
"[Ploomber] {} is not part of the pipeline, "
"skipping...".format(model.get("name") or "")
)
return path if model_in_dag else False | Determine if the model is part of the pipeline | _model_in_dag | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def _load_jupyter_server_extension(app):
"""
This function is called to configure the new content manager; there are a
lot of quirks that the jupytext maintainers had to solve to make it work, so
we base our implementation on theirs:
https://github.com/mwouts/jupytext/blob/bc1b15935e096c280b6630f45e65c331f04f7d9c/jupytext/__init__.py#L19
"""
if hasattr(app.contents_manager_class, "load_dag"):
app.log.info(
"[Ploomber] NotebookApp.contents_manager_class "
"is a subclass of PloomberContentsManager already - OK"
)
return
# The server extension call is too late!
# The contents manager was set at NotebookApp.init_configurables
# Let's change the contents manager class
app.log.info("[Ploomber] setting content manager " "to PloomberContentsManager")
app.contents_manager_class = derive_class(app.contents_manager_class)
try:
# And re-run selected init steps from:
# https://github.com/jupyter/notebook/blob/
# 132f27306522b32fa667a6b208034cb7a04025c9/notebook/notebookapp.py#L1634-L1638
app.contents_manager = app.contents_manager_class(parent=app, log=app.log)
app.session_manager.contents_manager = app.contents_manager
app.web_app.settings["contents_manager"] = app.contents_manager
except Exception:
error = """[Ploomber] An error occured. Please
deactivate the server extension with "jupyter serverextension disable ploomber"
and configure the contents manager manually by adding
c.NotebookApp.contents_manager_class = "ploomber.jupyter.PloomberContentsManager"
to your .jupyter/jupyter_notebook_config.py file.
""" # noqa
app.log.error(error)
raise |
This function is called to configure the new content manager; there are a
lot of quirks that the jupytext maintainers had to solve to make it work, so
we base our implementation on theirs:
https://github.com/mwouts/jupytext/blob/bc1b15935e096c280b6630f45e65c331f04f7d9c/jupytext/__init__.py#L19
| _load_jupyter_server_extension | python | ploomber/ploomber | src/ploomber/jupyter/manager.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/jupyter/manager.py | Apache-2.0 |
def grid(**params):
"""A decorator to create multiple tasks, one per parameter combination"""
def decorator(f):
if not hasattr(f, "__ploomber_grid__"):
f.__ploomber_grid__ = []
# TODO: validate they have the same keys as the earlier ones
f.__ploomber_grid__.append(params)
return f
return decorator | A decorator to create multiple tasks, one per parameter combination | grid | python | ploomber/ploomber | src/ploomber/micro/_micro.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/micro/_micro.py | Apache-2.0 |
def capture(f):
"""A decorator to capture outputs in a function"""
f.__ploomber_capture__ = True
f.__ploomber_globals__ = f.__globals__
return f | A decorator to capture outputs in a function | capture | python | ploomber/ploomber | src/ploomber/micro/_micro.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/micro/_micro.py | Apache-2.0 |
def _signature_wrapper(f, call_with_args):
"""An internal wrapper so functions don't need the upstream and product
arguments
"""
# store the wrapper, we'll need this for hot_reload to work, see
# the constructor of CallableLoader in
# ploomber.sources.pythoncallablesource for details
f.__ploomber_wrapper_factory__ = partial(
_signature_wrapper, call_with_args=call_with_args
)
@wraps(f)
def wrapper_args(upstream, **kwargs):
return f(*upstream.values(), **kwargs)
@wraps(f)
def wrapper_kwargs(upstream, **kwargs):
return f(**upstream, **kwargs)
return wrapper_args if call_with_args else wrapper_kwargs | An internal wrapper so functions don't need the upstream and product
arguments
| _signature_wrapper | python | ploomber/ploomber | src/ploomber/micro/_micro.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/micro/_micro.py | Apache-2.0 |
def _get_upstream(fn):
"""
Get upstream tasks for a given function by looking at the signature
arguments
"""
if hasattr(fn, "__wrapped__"):
grid = getattr(fn, "__ploomber_grid__", None)
if grid is not None:
ignore = set(grid[0])
else:
ignore = set()
return set(signature(fn.__wrapped__).parameters) - ignore
else:
return set(signature(fn).parameters) - {"input_data"} |
Get upstream tasks for a given function by looking at the signature
arguments
| _get_upstream | python | ploomber/ploomber | src/ploomber/micro/_micro.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/micro/_micro.py | Apache-2.0 |
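Example (not part of the original source): a behavior sketch for the helper above; the function bodies are placeholders. Upstream tasks are inferred from argument names, and 'input_data' is never treated as an upstream task.

def features(raw, clean):
    ...

def fit(features, input_data=None):
    ...

assert _get_upstream(features) == {"raw", "clean"}
assert _get_upstream(fit) == {"features"}   # 'input_data' is ignored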
def dag_from_functions(
functions,
output="output",
params=None,
parallel=False,
dependencies=None,
hot_reload=True,
):
"""Create a DAG from a list of functions
Parameters
----------
functions : list
List of functions
output : str, default='output'
Directory to store outputs and metadata from each task
params : dict, default None
Parameters to pass to each task, it must be a dictionary with task
names as keys and parameters (dict) as values
parallel : bool, default=False
If True, the dag will run tasks in parallel when calling
``dag.build()``, note that this requires the 'multiprocess' package:
``pip install multiprocess``
dependencies : dict, default=None
A mapping with functions names to their dependencies. Use it if
the arguments in the function do not match the names of its
dependencies.
hot_reload : bool, default=True
If True, the dag will automatically detect changes in the source code.
However, this won't work if tasks are defined inside functions
"""
dependencies = dependencies or dict()
params = params or dict()
# NOTE: cache_rendered_status isn't doing anything. I modified Product
# to look at the hot_reload flag for the hot reloading to work. I don't
# think we're using the cache_rendered_status anywhere, so we should
# delete it.
configurator = DAGConfigurator()
configurator.params.hot_reload = hot_reload
dag = configurator.create()
if parallel:
dag.executor = ParallelDill()
else:
# need to disable subprocess, otherwise pickling will fail since
# functions might be defined in the __main__ module
dag.executor = Serial(build_in_subprocess=False)
for callable_ in functions:
if callable_.__name__ in params:
params_task = params[callable_.__name__]
else:
params_task = dict()
# if decorated, call with grid
if hasattr(callable_, "__ploomber_grid__"):
for i, items in enumerate(
chain(
*(ParamGrid(grid).product() for grid in callable_.__ploomber_grid__)
)
):
_make_task(
callable_,
dag=dag,
params={**params_task, **items},
output=output,
call_with_args=callable_.__name__ in dependencies,
suffix=i,
)
else:
_make_task(
callable_,
dag=dag,
params=params_task,
output=output,
call_with_args=callable_.__name__ in dependencies,
)
for name in dag._iter():
# check if there are manually declared dependencies
if name in dependencies:
upstream = dependencies[name]
else:
upstream = _get_upstream(dag[name].source.primitive)
for up in upstream:
dag[name].set_upstream(dag[up])
return dag | Create a DAG from a list of functions
Parameters
----------
functions : list
List of functions
output : str, default='output'
Directory to store outputs and metadata from each task
params : dict, default None
Parameters to pass to each task, it must be a dictionary with task
names as keys and parameters (dict) as values
parallel : bool, default=False
If True, the dag will run tasks in parallel when calling
``dag.build()``, note that this requires the 'multiprocess' package:
``pip install multiprocess``
dependencies : dict, default=None
A mapping with functions names to their dependencies. Use it if
the arguments in the function do not match the names of its
dependencies.
hot_reload : bool, default=True
If True, the dag will automatically detect changes in the source code.
However, this won't work if tasks are defined inside functions
| dag_from_functions | python | ploomber/ploomber | src/ploomber/micro/_micro.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/micro/_micro.py | Apache-2.0 |
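Example (not part of the original source): a minimal usage sketch based on the docstring above. The task bodies are assumptions, and the import path assumes the module location shown for this entry (ploomber.micro._micro); whether a shorter re-exported path exists is not verified here.

from ploomber.micro._micro import dag_from_functions

def raw():
    return [1, 2, 3]

def doubled(raw):
    # 'raw' is inferred as an upstream dependency from the argument name
    return [x * 2 for x in raw]

dag = dag_from_functions([raw, doubled], output="output")
dag.build()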
def _raw(self):
"""A string with the raw jinja2.Template contents"""
if self._hot_reload:
self.__raw = self._path.read_text()
return self.__raw | A string with the raw jinja2.Template contents | _raw | python | ploomber/ploomber | src/ploomber/placeholders/placeholder.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/placeholder.py | Apache-2.0 |
def _needs_render(self):
"""
Returns True if the template contains variables or blocks and
therefore needs parameters to render
"""
env = self._template.environment
# check if the template has the variable or block start string
# is there any better way of checking this?
needs_variables = (
env.variable_start_string in self._raw
and env.variable_end_string in self._raw
)
needs_blocks = (
env.block_start_string in self._raw and env.block_end_string in self._raw
)
return needs_variables or needs_blocks |
Returns True if the template contains variables or blocks and
therefore needs parameters to render
| _needs_render | python | ploomber/ploomber | src/ploomber/placeholders/placeholder.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/placeholder.py | Apache-2.0 |
def best_repr(self, shorten):
"""
Returns the rendered version (if available), otherwise the raw version
"""
best = self._raw if self._str is None else self._str
if shorten:
best = self._repr.repr(best)
return best |
Returns the rendered version (if available), otherwise the raw version
| best_repr | python | ploomber/ploomber | src/ploomber/placeholders/placeholder.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/placeholder.py | Apache-2.0 |
def variables(self):
"""Returns declared variables in the template"""
# this requires parsing the raw template, do lazy load, but override
# it if hot_reload is True
if self._variables is None or self._hot_reload:
self._variables = util.get_tags_in_str(self._raw)
return self._variables | Returns declared variables in the template | variables | python | ploomber/ploomber | src/ploomber/placeholders/placeholder.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/placeholder.py | Apache-2.0 |
def _init_template(raw, loader_init):
"""
Initializes template, taking care of configuring the loader environment
if needed
This helps prevent errors when using copy or pickling (the copied or
unpickled object won't have access to the environment.loader, which breaks
macros and anything that needs access to the jinja environment.loader
object)
"""
if loader_init is None:
template = Template(
raw, undefined=StrictUndefined, extensions=(extensions.RaiseExtension,)
)
_add_globals(template.environment)
return template
else:
if loader_init["class"] == "FileSystemLoader":
loader = FileSystemLoader(**loader_init["kwargs"])
elif loader_init["class"] == "PackageLoader":
loader = PackageLoader(**loader_init["kwargs"])
else:
raise TypeError(
"Error setting state for Placeholder, "
"expected the loader to be FileSystemLoader "
"or PackageLoader"
)
env = Environment(
loader=loader,
undefined=StrictUndefined,
extensions=(extensions.RaiseExtension,),
)
_add_globals(env)
return env.from_string(raw) |
Initializes template, taking care of configuring the loader environment
if needed
This helps prevent errors when using copy or pickling (the copied or
unpickled object won't have access to the environment.loader, which breaks
macros and anything that needs access to the jinja environment.loader
object)
| _init_template | python | ploomber/ploomber | src/ploomber/placeholders/placeholder.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/placeholder.py | Apache-2.0 |
def get(self, key):
"""Load template, returns None if it doesn' exist"""
try:
return self[key]
except exceptions.TemplateNotFound:
return None | Load template, returns None if it doesn't exist | get | python | ploomber/ploomber | src/ploomber/placeholders/sourceloader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/sourceloader.py | Apache-2.0
def path_to(self, key):
"""Return the path to a template, even if it doesn't exist"""
try:
return self[key].path
except exceptions.TemplateNotFound:
return Path(self.path_full, key) | Return the path to a template, even if it doesn't exist | path_to | python | ploomber/ploomber | src/ploomber/placeholders/sourceloader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/sourceloader.py | Apache-2.0 |
def get_template(self, name):
"""Load a template by name
Parameters
----------
name : str or pathlib.Path
Template to load
"""
# if name is a nested path, this will return an appropriate
# a/b/c string even on Windows
path = str(PurePosixPath(*Path(name).parts))
try:
template = self.env.get_template(path)
except exceptions.TemplateNotFound as e:
exception = e
else:
exception = None
if exception is not None:
expected_path = str(Path(self.path_full, name))
# user saved the template locally, but the source loader is
# configured to load from a different place
if Path(name).exists():
raise exceptions.TemplateNotFound(
f"{str(name)!r} template does not exist. "
"However such a file exists in the current working "
"directory, if you want to load it as a template, move it "
f"to {self.path_full!r} or remove the source_loader"
)
# no template and the file does not exist, raise a generic message
else:
raise exceptions.TemplateNotFound(
f"{str(name)!r} template does not exist. "
"Based on your configuration, if should be located "
f"at: {expected_path!r}"
)
return Placeholder(template) | Load a template by name
Parameters
----------
name : str or pathlib.Path
Template to load
| get_template | python | ploomber/ploomber | src/ploomber/placeholders/sourceloader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/sourceloader.py | Apache-2.0 |
def get_tags_in_str(s, require_runtime_placeholders=True):
"""
Returns tags (e.g. {{variable}}) in a given string as a set, returns an
empty set for None
Parameters
----------
require_runtime_placeholders : bool, default=True
Also check runtime tags - the ones in square brackets
(e.g. [[placeholder]])
"""
# render placeholders
vars_render = meta.find_undeclared_variables(env_render.parse(s))
# runtime placeholders
if require_runtime_placeholders:
vars_runtime = meta.find_undeclared_variables(env_runtime.parse(s))
else:
vars_runtime = set()
return vars_render | vars_runtime |
Returns tags (e.g. {{variable}}) in a given string as a set, returns an
empty set for None
Parameters
----------
require_runtime_placeholders : bool, default=True
Also check runtime tags - the ones in square brackets
(e.g. [[placeholder]])
| get_tags_in_str | python | ploomber/ploomber | src/ploomber/placeholders/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/placeholders/util.py | Apache-2.0 |
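Example (not part of the original source): a behavior sketch for the function above, assuming env_runtime treats double square brackets as variable delimiters, as the docstring describes; the query string is an illustrative assumption.

tags = get_tags_in_str("SELECT * FROM {{table}} LIMIT {{n}} -- [[run_id]]")
assert tags == {"table", "n", "run_id"}

assert get_tags_in_str("no placeholders here") == set()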
def _check_is_outdated(self, outdated_by_code):
"""
Unlike other Product implementations that only have to check the
current metadata, File has to check whether there is a remote copy of the
metadata and download it to decide outdated status, which leads to task
execution or product downloading
"""
should_download = False
if self._remote.exists():
if self._remote._is_equal_to_local_copy():
return self._remote._is_outdated(with_respect_to_local=True)
else:
# download when doing so will bring the product
# up-to-date (this takes into account upstream
# timestamps)
should_download = not self._remote._is_outdated(
with_respect_to_local=True, outdated_by_code=outdated_by_code
)
if should_download:
return TaskStatus.WaitingDownload
# no need to download, check status using local metadata
return super()._check_is_outdated(outdated_by_code=outdated_by_code) |
Unlike other Product implementations that only have to check the
current metadata, File has to check whether there is a remote copy of the
metadata and download it to decide outdated status, which leads to task
execution or product downloading
| _check_is_outdated | python | ploomber/ploomber | src/ploomber/products/file.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/file.py | Apache-2.0 |
def _is_remote_outdated(self, outdated_by_code):
"""
Check status using remote metadata, if no remote is available
(or remote metadata is corrupted) returns True
"""
if self._remote.exists():
return self._remote._is_outdated(
with_respect_to_local=False, outdated_by_code=outdated_by_code
)
else:
# if no remote, return True. This is the least destructive option
# since we don't know what will be available and what not when this
# executes
return True |
Check status using remote metadata, if no remote is available
(or remote metadata is corrupted) returns True
| _is_remote_outdated | python | ploomber/ploomber | src/ploomber/products/file.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/file.py | Apache-2.0 |
def _get(self):
"""
Get the "true metadata", ignores actual metadata if the product does
not exist. It is lazily called by the timestamp and stored_source_code
attributes. Ignores fetched metadata and replaces it with the stored
metadata
"""
# if the product does not exist, ignore metadata in backend storage
# FIXME: cache the output of this command, we are using it in several
# places, sometimes we have to re-fetch but sometimes we can cache,
# look for product.exists() references and .exists() references
# in the Product definition
if not self._product.exists():
metadata = self._default_metadata()
else:
# FIXME: if anything goes wrong when fetching metadata, warn
# and set it to a valid dictionary with None values, validation
# should happen here, not in the fetch_metadata method, but we
# shouldn't catch all exceptions. Create a new one in
# ploomber.exceptions and raise it on each fetch_metadata
# implementation when failing to unserialize
metadata_fetched = self._product.fetch_metadata()
if metadata_fetched is None:
self._logger.debug(
"fetch_metadata for product %s returned " "None", self._product
)
metadata = self._default_metadata()
else:
# FIXME: we need to further validate this, need to check
# that this is an instance of mapping, if yes, then
# check keys [timestamp, stored_source_code], check
# types and fill with None if any of the keys is missing
metadata = metadata_fetched
self._did_fetch = True
self._data = metadata |
Get the "true metadata", ignores actual metadata if the product does
not exist. It is lazily called by the timestamp and stored_source_code
attributes. Ignores fetched metadata and replaces it with the stored
metadata
| _get | python | ploomber/ploomber | src/ploomber/products/metadata.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py | Apache-2.0 |
def update(self, source_code, params):
"""
Update metadata in the storage backend, this should be called by
Task objects when running successfully to update metadata in the
backend storage. If saving in the backend storage succeeds the local
copy is updated as well
Parameters
----------
source_code : str
Task's source code
params : dict
Task's params
"""
# remove any unserializable parameters
params = remove_non_serializable_top_keys(params)
new_data = dict(
timestamp=datetime.now().timestamp(),
stored_source_code=source_code,
# process params to store hashes in case they're
# declared as resources
params=process_resources(params),
)
kwargs = callback_check(
self._product.prepare_metadata,
available={"metadata": new_data, "product": self._product},
)
data = self._product.prepare_metadata(**kwargs)
self._product.save_metadata(data)
# if saving worked, we can update the local in-memory copy
self.update_locally(new_data) |
Update metadata in the storage backend, this should be called by
Task objects when running successfully to update metadata in the
backend storage. If saving in the backend storage succeeds the local
copy is updated as well
Parameters
----------
source_code : str
Task's source code
params : dict
Task's params
| update | python | ploomber/ploomber | src/ploomber/products/metadata.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py | Apache-2.0 |
def update_locally(self, data):
"""Updates the in-memory copy, does not update persistent copy"""
# could be the case that we haven't fetched metadata yet. since this
# overwrites existing metadata. we no longer have to fetch
self._did_fetch = True
self._data = deepcopy(data) | Updates the in-memory copy, does not update persistent copy | update_locally | python | ploomber/ploomber | src/ploomber/products/metadata.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py | Apache-2.0 |
def clear(self):
"""
Clears up metadata local copy, next time the timestamp or
stored_source_code are needed, this will trigger another call to
._get(). Should be called only when the local copy might be outdated
due to external execution. Currently, we are only using this when running
DAG.build_partially, because that triggers a deep copy of the original
DAG, hence our local copy in the original DAG is not valid anymore
"""
self._did_fetch = False
self._data = self._default_metadata() |
Clears up metadata local copy, next time the timestamp or
stored_source_code are needed, this will trigger another call to
._get(). Should be called only when the local copy might be outdated
due to external execution. Currently, we are only using this when running
DAG.build_partially, because that triggers a deep copy of the original
DAG, hence our local copy in the original DAG is not valid anymore
| clear | python | ploomber/ploomber | src/ploomber/products/metadata.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py | Apache-2.0 |
def large_timestamp_difference(timestamps):
"""Returns True if there is at least one timestamp difference > 5 seconds"""
dts = [datetime.fromtimestamp(ts) for ts in timestamps]
for i in range(len(dts)):
for j in range(len(dts)):
if i != j:
diff = (dts[i] - dts[j]).total_seconds()
if abs(diff) > 5:
return True
return False | Returns True if there is at least one timestamp difference > 5 seconds | large_timestamp_difference | python | ploomber/ploomber | src/ploomber/products/metadata.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py | Apache-2.0 |
def to_json_serializable(self):
"""Returns a JSON serializable version of this product"""
if isinstance(self.products, Mapping):
return {name: str(product) for name, product in self.products.items()}
else:
return list(str(product) for product in self.products) | Returns a JSON serializable version of this product | to_json_serializable | python | ploomber/ploomber | src/ploomber/products/metaproduct.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metaproduct.py | Apache-2.0 |
def to_json_serializable(self):
"""Returns a JSON serializable version of this product"""
# NOTE: this is used in tasks where only JSON serializable parameters
# are supported such as NotebookRunner that depends on papermill
return self.products.to_json_serializable() | Returns a JSON serializable version of this product | to_json_serializable | python | ploomber/ploomber | src/ploomber/products/metaproduct.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metaproduct.py | Apache-2.0 |
def _is_outdated(self, outdated_by_code=True):
"""
Given current conditions, determine if the Task that holds this
Product should be executed
Returns
-------
bool
True if the Task should execute
"""
# if hot_reload is enable, we should not cache the status
if self.task.source.hot_reload:
self._reset_cached_outdated_status()
if self._is_outdated_status is None:
self._is_outdated_status = self._check_is_outdated(outdated_by_code)
return self._is_outdated_status |
Given current conditions, determine if the Task that holds this
Product should be executed
Returns
-------
bool
True if the Task should execute
| _is_outdated | python | ploomber/ploomber | src/ploomber/products/product.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/product.py | Apache-2.0 |
def _outdated_data_dependencies(self):
"""
Determine if the product is outdated by checking upstream timestamps
"""
if self._outdated_data_dependencies_status is not None:
self.logger.debug(
("Returning cached data dependencies status. " "Outdated? %s"),
self._outdated_data_dependencies_status,
)
return self._outdated_data_dependencies_status
outdated = any(
[
self._is_outdated_due_to_upstream(up.product)
for up in self.task.upstream.values()
]
)
self._outdated_data_dependencies_status = outdated
self.logger.debug(
("Finished checking data dependencies status. " "Outdated? %s"),
self._outdated_data_dependencies_status,
)
return self._outdated_data_dependencies_status |
Determine if the product is outdated by checking upstream timestamps
| _outdated_data_dependencies | python | ploomber/ploomber | src/ploomber/products/product.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/product.py | Apache-2.0 |
def _is_outdated_due_to_upstream(self, up_prod):
"""
A task becomes data outdated if an upstream product has a higher
timestamp or if an upstream product is outdated
"""
if self.metadata.timestamp is None or up_prod.metadata.timestamp is None:
return True
else:
return (
(up_prod.metadata.timestamp > self.metadata.timestamp)
# this second condition propagates outdated status
# from indirect upstream dependencies. e.g., a -> b -> c
# user runs in order but then it only runs a. Since a is
# outdated, so should c
or up_prod._is_outdated()
) |
A task becomes data outdated if an upstream product has a higher
timestamp or if an upstream product is outdated
| _is_outdated_due_to_upstream | python | ploomber/ploomber | src/ploomber/products/product.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/product.py | Apache-2.0 |
def _outdated_code_dependency(self):
"""
Determine if the product is outdated by checking the source code that
generated it
"""
if self._outdated_code_dependency_status is not None:
self.logger.debug(
("Returning cached code dependencies status. " "Outdated? %s"),
self._outdated_code_dependency_status,
)
return self._outdated_code_dependency_status
outdated, diff = self.task.dag.differ.is_different(
a=self.metadata.stored_source_code,
b=str(self.task.source),
a_params=self.metadata.params,
# process resource params to compare the file hash instead of
# the path to the file
b_params=process_resources(
self.task.params.to_json_serializable(params_only=True)
),
extension=self.task.source.extension,
)
self._outdated_code_dependency_status = outdated
self.logger.debug(
('Finished checking code status for task "%s" ' "Outdated? %s"),
self.task.name,
self._outdated_code_dependency_status,
)
if outdated:
self.logger.info(
'Task "%s" has outdated code. Diff:\n%s', self.task.name, diff
)
return self._outdated_code_dependency_status |
Determine if the product is outdated by checking the source code that
generated it
| _outdated_code_dependency | python | ploomber/ploomber | src/ploomber/products/product.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/product.py | Apache-2.0 |
def to_json_serializable(self):
"""Returns a JSON serializable version of this product"""
# NOTE: this is used in tasks where only JSON serializable parameters
# are supported such as NotebookRunner that depends on papermill
return str(self) | Returns a JSON serializable version of this product | to_json_serializable | python | ploomber/ploomber | src/ploomber/products/product.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/product.py | Apache-2.0 |
def remove_non_serializable_top_keys(obj):
"""
Remove top-level keys with unserializable objects, warning if necessary
Parameters
----------
obj: a dictionary containing parameters
"""
out = copy(obj)
for _, _, current_val, preffix in iterate_nested_dict(obj):
if not is_json_serializable(current_val):
top_key = preffix[0]
if top_key in out:
del out[top_key]
warnings.warn(
f"Param {top_key!r} contains an unserializable "
f"object: {current_val!r}, it will be ignored. "
"Changes to it will not trigger task execution."
)
return out |
Remove top-level keys with unserializable objects, warning if necessary
Parameters
----------
obj: a dictionary containing parameters
| remove_non_serializable_top_keys | python | ploomber/ploomber | src/ploomber/products/serializeparams.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/serializeparams.py | Apache-2.0 |
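Example (not part of the original source): a behavior sketch for the function above, assuming its module-level companions (iterate_nested_dict, is_json_serializable) are in scope; the params dict is an illustrative assumption.

import warnings

params = {"n_estimators": 10, "model": object()}   # object() is not JSON serializable

with warnings.catch_warnings():
    warnings.simplefilter("ignore")                # the function warns about dropped keys
    out = remove_non_serializable_top_keys(params)

assert out == {"n_estimators": 10}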
def exists(self):
"""
Checks if remote File exists. This is used by Metadata to determine
whether to use the existing remote metadat (if any) or ignore it: if
this returns False, remote metadata is ignored even if it exists
"""
if self._exists is None:
# TODO remove checking if file exists and just make the API
# call directly
self._exists = (
self._local_file.client is not None
and self._local_file.client._remote_exists(
self._local_file._path_to_metadata
)
and self._local_file.client._remote_exists(
self._local_file._path_to_file
)
)
return self._exists |
Checks if remote File exists. This is used by Metadata to determine
whether to use the existing remote metadata (if any) or ignore it: if
this returns False, remote metadata is ignored even if it exists
| exists | python | ploomber/ploomber | src/ploomber/products/_remotefile.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_remotefile.py | Apache-2.0 |
def _is_outdated(self, with_respect_to_local, outdated_by_code=True):
"""
Determines outdated status using remote metadata, to decide
whether to download the remote file or not
with_respect_to_local : bool
If True, determines status by comparing timestamps with upstream
local metadata, otherwise it uses upstream remote metadata
"""
if with_respect_to_local:
if self._is_outdated_status_local is None:
self._is_outdated_status_local = self._check_is_outdated(
with_respect_to_local, outdated_by_code
)
return self._is_outdated_status_local
else:
if self._is_outdated_status_remote is None:
self._is_outdated_status_remote = self._check_is_outdated(
with_respect_to_local, outdated_by_code
)
return self._is_outdated_status_remote |
Determines outdated status using remote metadata, to decide
whether to download the remote file or not
with_respect_to_local : bool
If True, determines status by comparing timestamps with upstream
local metadata, otherwise it uses upstream remote metadata
| _is_outdated | python | ploomber/ploomber | src/ploomber/products/_remotefile.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_remotefile.py | Apache-2.0 |
def _outdated_code_dependency(self):
"""
Determine if the source code has changed by looking at the remote
metadata
"""
outdated, _ = self._local_file.task.dag.differ.is_different(
a=self.metadata.stored_source_code,
b=str(self._local_file.task.source),
a_params=self.metadata.params,
b_params=self._local_file.task.params.to_json_serializable(
params_only=True
),
extension=self._local_file.task.source.extension,
)
return outdated |
Determine if the source code has changed by looking at the remote
metadata
| _outdated_code_dependency | python | ploomber/ploomber | src/ploomber/products/_remotefile.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_remotefile.py | Apache-2.0 |
def _outdated_data_dependencies(self, with_respect_to_local):
"""
Determine if the product is outdated by checking upstream timestamps
"""
upstream_outdated = [
self._is_outdated_due_to_upstream(up, with_respect_to_local)
for up in self._local_file.task.upstream.values()
]
# special case: if all upstream dependencies are waiting for download
# or up-to-date, mark this as up-to-date
if set(upstream_outdated) <= {TaskStatus.WaitingDownload, False}:
return False
return any(upstream_outdated) |
Determine if the product is outdated by checking upstream timestamps
| _outdated_data_dependencies | python | ploomber/ploomber | src/ploomber/products/_remotefile.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_remotefile.py | Apache-2.0 |
def _is_outdated_due_to_upstream(self, upstream, with_respect_to_local):
"""
A task becomes data outdated if an upstream product has a higher
timestamp or if an upstream product is outdated
"""
if (
upstream.exec_status == TaskStatus.WaitingDownload
or not with_respect_to_local
):
# TODO: delete ._remote will never be None
if upstream.product._remote:
upstream_timestamp = upstream.product._remote.metadata.timestamp
else:
upstream_timestamp = None
else:
upstream_timestamp = upstream.product.metadata.timestamp
if self.metadata.timestamp is None or upstream_timestamp is None:
return True
else:
more_recent_upstream = upstream_timestamp > self.metadata.timestamp
if with_respect_to_local:
outdated_upstream_prod = upstream.product._is_outdated()
else:
outdated_upstream_prod = upstream.product._is_remote_outdated(True)
return more_recent_upstream or outdated_upstream_prod |
A task becomes data outdated if an upstream product has a higher
timestamp or if an upstream product is outdated
| _is_outdated_due_to_upstream | python | ploomber/ploomber | src/ploomber/products/_remotefile.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_remotefile.py | Apache-2.0 |
def process_resources(params):
"""
Process resources in a parameters dict, computes the hash of the file for
resources (i.e., params with the resource__ prefix)
Parameters
----------
params : dict
Task parameters
"""
# params can be None
if params is None:
return None
if _KEY not in params:
return deepcopy(params)
_validate(params)
resources = {}
for key, value in params[_KEY].items():
path = _cast_to_path(value, key)
_check_is_file(path, key)
_check_file_size(path)
digest = hashlib.md5(path.read_bytes()).hexdigest()
resources[key] = digest
params_out = deepcopy(params)
params_out[_KEY] = resources
return params_out |
Process resources in a parameters dict, computes the hash of the file for
resources (i.e., params with the resource__ prefix)
Parameters
----------
params : dict
Task parameters
| process_resources | python | ploomber/ploomber | src/ploomber/products/_resources.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/_resources.py | Apache-2.0 |
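Example (not part of the original source): a usage sketch for the function above, reusing the module-level _KEY constant (the section name under which resources are declared) rather than hard-coding it; the file name and contents are assumptions.

from pathlib import Path

Path("schema.sql").write_text("CREATE TABLE t (x INT);")

params = {"n_rows": 10, _KEY: {"schema": "schema.sql"}}
out = process_resources(params)

# the path is replaced by the MD5 digest of the file contents, so editing the
# file changes the stored params and marks the task as outdated
assert out[_KEY]["schema"] != "schema.sql"
assert out["n_rows"] == 10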
def create(self, source, params, class_):
"""Scaffold a task if they don't exist
Returns
-------
bool
True if it created a task, False if it didn't
"""
did_create = False
if class_ is tasks.PythonCallable:
source_parts = source.split(".")
(*module_parts, fn_name) = source_parts
params["function_name"] = fn_name
try:
spec = locate_dotted_path(source)
except ModuleNotFoundError:
create_intermediate_modules(module_parts)
spec = locate_dotted_path(source)
source = Path(spec.origin)
source.parent.mkdir(parents=True, exist_ok=True)
original = source.read_text()
module = ast.parse(original)
names = {
element.name for element in module.body if hasattr(element, "name")
}
if fn_name not in names:
print(f"Adding {fn_name!r} to module {source!s}...")
fn_str = self.render("function.py", params=params)
source.write_text(original + fn_str)
did_create = True
# script task...
else:
path = Path(source)
if not path.exists():
if path.suffix in {".py", ".sql", ".ipynb", ".R", ".Rmd"}:
# create parent folders if needed
source.parent.mkdir(parents=True, exist_ok=True)
content = self.render("task" + source.suffix, params=params)
print("Adding {}...".format(source))
source.write_text(content)
did_create = True
else:
print(
"Error: This command does not support adding "
'tasks with extension "{}", valid ones are '
".py and .sql. Skipping {}".format(path.suffix, path)
)
return did_create | Scaffold a task if it doesn't exist
Returns
-------
bool
True if it created a task, False if it didn't
| create | python | ploomber/ploomber | src/ploomber/scaffold/scaffoldloader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/scaffold/scaffoldloader.py | Apache-2.0 |
def getfile(fn):
"""
Returns the file where the function is defined. Works even in wrapped
functions
"""
if hasattr(fn, "__wrapped__"):
return getfile(fn.__wrapped__)
else:
return inspect.getfile(fn) |
Returns the file where the function is defined. Works even in wrapped
functions
| getfile | python | ploomber/ploomber | src/ploomber/sources/inspect.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/inspect.py | Apache-2.0 |
def to_nb(self, path=None):
"""
Converts the function to its notebook representation. Returns a
notebook object; if path is passed, it saves the notebook as well.
Returns the function's body in a notebook (tmp location), inserting
params as variables at the top
"""
self._reload_fn()
body_elements, _ = parse_function(self.fn)
top, local, bottom = extract_imports(self.fn)
return function_to_nb(
body_elements, top, local, bottom, self.params, self.fn, path
) |
Converts the function to its notebook representation. Returns a
notebook object; if path is passed, it saves the notebook as well.
Returns the function's body in a notebook (tmp location), inserting
params as variables at the top
| to_nb | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
def overwrite(self, obj):
"""
Overwrite the function's body with the notebook contents, excluding
injected parameters and cells whose first line is "#". obj can be
either a notebook object or a path
"""
self._reload_fn()
if isinstance(obj, (str, Path)):
nb = nbformat.read(obj, as_version=nbformat.NO_CONVERT)
else:
nb = obj
nb.cells = nb.cells[: last_non_empty_cell(nb.cells)]
# remove cells that are only needed for the nb but not for the function
code_cells = [c["source"] for c in nb.cells if keep_cell(c)]
# add 4 spaces to each code cell, exclude white space lines
code_cells = [indent_cell(code) for code in code_cells]
# get the original file where the function is defined
content = self.path_to_source.read_text()
content_lines = content.splitlines()
trailing_newline = content[-1] == "\n"
# an upstream parameter
fn_starts, fn_ends = function_lines(self.fn)
# keep the file the same until you reach the function definition plus
# an offset to account for the signature (which might span >1 line)
_, body_start = parse_function(self.fn)
keep_until = fn_starts + body_start
header = content_lines[:keep_until]
# the footer is everything below the end of the original definition
footer = content_lines[fn_ends:]
# if there is anything at the end, we have to add an empty line to
# properly end the function definition, if this is the last definition
# in the file, we don't have to add this
if footer:
footer = [""] + footer
new_content = "\n".join(header + code_cells + footer)
# replace old top imports with new ones
new_content_lines = new_content.splitlines()
_, line = extract_imports_top(parso.parse(new_content), new_content_lines)
imports_top_cell, _ = find_cell_with_tag(nb, "imports-top")
# ignore trailing whitespace in top imports cell but keep original
# amount of whitespace separating the last import and the first name
# definition
content_to_write = (
imports_top_cell["source"].rstrip()
+ "\n"
+ "\n".join(new_content_lines[line - 1 :])
)
# if the original file had a trailing newline, keep it
if trailing_newline:
content_to_write += "\n"
# NOTE: this last part parses the code several times, we can improve
# performance by only parsing once
m = parso.parse(content_to_write)
fn_def = find_function_with_name(m, self.fn.__name__)
fn_code = fn_def.get_code()
has_upstream_dependencies = PythonCallableExtractor(fn_code).extract_upstream()
upstream_in_func_sig = upstream_in_func_signature(fn_code)
if not upstream_in_func_sig and has_upstream_dependencies:
fn_code_new = add_upstream_to_func_signature(fn_code)
content_to_write = _replace_fn_source(content_to_write, fn_def, fn_code_new)
elif upstream_in_func_sig and not has_upstream_dependencies:
fn_code_new = remove_upstream_to_func_signature(fn_code)
content_to_write = _replace_fn_source(content_to_write, fn_def, fn_code_new)
self.path_to_source.write_text(content_to_write) |
Overwrite the function's body with the notebook contents, excluding
injected parameters and cells whose first line is "#". obj can be
either a notebook object or a path
| overwrite | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
def last_non_empty_cell(cells):
"""Returns the index + 1 for the last non-empty cell"""
idx = len(cells)
for cell in cells[::-1]:
if cell.source:
return idx
idx -= 1
return idx | Returns the index + 1 for the last non-empty cell | last_non_empty_cell | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
def keep_cell(cell):
"""
Rule to decide whether to keep a cell or not. This is executed before
converting the notebook back to a function
"""
cell_tags = set(cell["metadata"].get("tags", {}))
# remove cell with this tag, they are not part of the function body
tags_to_remove = {
"injected-parameters",
"imports-top",
"imports-local",
"imports-bottom",
"debugging-settings",
}
has_tags_to_remove = len(cell_tags & tags_to_remove)
return (
cell["cell_type"] == "code"
and not has_tags_to_remove
and cell["source"][:2] != "#\n"
) |
Rule to decide whether to keep a cell or not. This is executed before
converting the notebook back to a function
| keep_cell | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
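Example (not part of the original source): a behavior sketch for the rule above; cells are plain dicts shaped like notebook cells.

injected = {"cell_type": "code", "metadata": {"tags": ["injected-parameters"]}, "source": "x = 1"}
regular = {"cell_type": "code", "metadata": {}, "source": "df = load()"}
scratch = {"cell_type": "code", "metadata": {}, "source": "#\nscratch work"}

assert not keep_cell(injected)   # tagged scaffolding is dropped
assert keep_cell(regular)        # normal code cells are kept
assert not keep_cell(scratch)    # cells whose first line is "#" are dropped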
def parse_function(fn):
"""
Extract the function's source code, parse it, and return the function body
elements along with the offset of the last line of the signature (which
marks the beginning of the function's body)
"""
# TODO: exclude return at the end, what if we find more than one?
# maybe do not support functions with return statements for now
source = inspect.getsource(fn).rstrip()
body_elements, start_pos = body_elements_from_source(source)
return body_elements, start_pos |
Extract the function's source code, parse it, and return the function body
elements along with the offset of the last line of the signature (which
marks the beginning of the function's body)
| parse_function | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
def has_import(stmt):
"""
Check if statement contains an import
"""
for ch in stmt.children:
if ch.type in {"import_name", "import_from"}:
return True
return False |
Check if statement contains an import
| has_import | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
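Example (not part of the original source): a behavior sketch using parso directly (the parser this module relies on); the parsed snippet is an assumption.

import parso

module = parso.parse("import os\nx = 1\n")
import_stmt, assignment = module.children[0], module.children[1]

assert has_import(import_stmt)    # the simple_stmt wrapping 'import os'
assert not has_import(assignment)  # the simple_stmt wrapping 'x = 1'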
def function_to_nb(
    body_elements, imports_top, imports_local, imports_bottom, params, fn, path
):
    """
    Save function body elements to a notebook
    """
    # TODO: Params should implement an option to call to_json_serializable
    # on product to avoid repetition; I'm using this same code in the
    # notebook runner. Also raise an error if any of the params is not
    # json serializable
    try:
        params = params.to_json_serializable()
        params["product"] = params["product"].to_json_serializable()
    except AttributeError:
        pass

    nb_format = nbformat.versions[nbformat.current_nbformat]
    nb = nb_format.new_notebook()

    # get the module where the function is declared
    tokens = inspect.getmodule(fn).__name__.split(".")
    module_name = ".".join(tokens[:-1])

    # add a cell that chdirs to the current working directory and sets
    # __package__, which we need for relative imports to work
    # see: https://www.python.org/dev/peps/pep-0366/ for details
    source = """
# Debugging settings (this cell will be removed before saving)
# change the current working directory to the one active when .debug() happened
# to make relative paths work
import os
{}
__package__ = "{}"
""".format(
        chdir_code(Path(".").resolve()), module_name
    )
    cell = nb_format.new_code_cell(source, metadata={"tags": ["debugging-settings"]})
    nb.cells.append(cell)

    # then add params passed to the function
    cell = nb_format.new_code_cell(
        PythonTranslator.codify(params), metadata={"tags": ["injected-parameters"]}
    )
    nb.cells.append(cell)

    # next (up to three) cells: imports
    for code, tag in (
        (imports_top, "imports-top"),
        (imports_local, "imports-local"),
        (imports_bottom, "imports-bottom"),
    ):
        if code:
            nb.cells.append(
                nb_format.new_code_cell(source=code, metadata=dict(tags=[tag]))
            )

    for statement in body_elements:
        lines, newlines = split_statement(statement)
        # find the number of indentation characters using the first line
        idx = indentation_idx(lines[0])
        # remove indentation from all function body lines
        lines = [line[idx:] for line in lines]
        # add one empty cell per leading new line
        nb.cells.extend([nb_format.new_code_cell(source="") for _ in range(newlines)])
        # add actual code as a single string
        cell = nb_format.new_code_cell(source="\n".join(lines))
        nb.cells.append(cell)

    k = jupyter_client.kernelspec.get_kernel_spec("python3")
    nb.metadata.kernelspec = {
        "display_name": k.display_name,
        "language": k.language,
        "name": "python3",
    }

    if path:
        nbformat.write(nb, path)

    return nb |
Save function body elements to a notebook
| function_to_nb | python | ploomber/ploomber | src/ploomber/sources/interact.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/interact.py | Apache-2.0 |
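
These helpers are wired together elsewhere in this module; the sketch below is only a rough illustration of how parse_function output and plain strings could feed function_to_nb. The task function, the import string, and the params dict are made up, and it assumes a python3 Jupyter kernel is installed and that the code runs from a file:

def my_task(product):
    # hypothetical task body; no return statement, per the TODO above
    df = [1, 2, 3]
    print(product, df)

body_elements, _ = parse_function(my_task)
nb = function_to_nb(
    body_elements,
    imports_top="import math",   # assumption: normally produced by other helpers
    imports_local=None,
    imports_bottom=None,
    params={"product": "output.csv"},  # plain dict; to_json_serializable is optional
    fn=my_task,
    path=None,  # pass a path such as "debug.ipynb" to also write the notebook
)
print(len(nb.cells))  # debugging-settings + injected-parameters + imports + body
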
def find_cell_with_tags(nb, tags):
    """
    Find the first cell for each of the given tags. Returns a dictionary
    mapping each tag found to a dictionary with 'cell' (the cell object)
    and 'index' (the cell's position in the notebook).
    """
    tags_to_find = list(tags)
    tags_found = {}

    for index, cell in enumerate(nb["cells"]):
        for tag in cell["metadata"].get("tags", []):
            if tag in tags_to_find:
                tags_found[tag] = dict(cell=cell, index=index)
                tags_to_find.remove(tag)

                if not tags_to_find:
                    break

    return tags_found |
Find the first cell for each of the given tags. Returns a dictionary
mapping each tag found to a dictionary with 'cell' (the cell object)
and 'index' (the cell's position in the notebook).
| find_cell_with_tags | python | ploomber/ploomber | src/ploomber/sources/nb_utils.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/nb_utils.py | Apache-2.0 |
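
A small, self-contained sketch using nbformat to build a notebook with a tagged cell; the tag names are the ones used earlier in this module:

import nbformat

nb = nbformat.v4.new_notebook()
nb.cells = [
    nbformat.v4.new_code_cell("x = 1"),
    nbformat.v4.new_code_cell("1 + 1", metadata={"tags": ["injected-parameters"]}),
]
found = find_cell_with_tags(nb, ["injected-parameters", "debugging-settings"])
print(found["injected-parameters"]["index"])  # -> 1
print("debugging-settings" in found)          # -> False, that tag is absent
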
def find_cell_with_tag(nb, tag):
    """
    Find the first cell with the given tag. Returns a (cell, index) tuple,
    or (None, None) if no cell has the tag.
    """
    out = find_cell_with_tags(nb, [tag])

    if out:
        located = out[tag]
        return located["cell"], located["index"]
    else:
        return None, None |
Find the first cell with the given tag. Returns a (cell, index) tuple,
or (None, None) if no cell has the tag.
| find_cell_with_tag | python | ploomber/ploomber | src/ploomber/sources/nb_utils.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/nb_utils.py | Apache-2.0 |
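
Continuing with the notebook built in the previous sketch, the single-tag wrapper returns a (cell, index) pair, or (None, None) when nothing matches:

cell, index = find_cell_with_tag(nb, "injected-parameters")
print(index)                                  # -> 1
print(find_cell_with_tag(nb, "no-such-tag"))  # -> (None, None)
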
def _jupytext_fmt(primitive, extension):
    """
    Determine the jupytext fmt string to use based on the content and extension
    """
    if extension.startswith("."):
        extension = extension[1:]

    if extension != "ipynb":
        fmt, _ = jupytext.guess_format(primitive, f".{extension}")
        fmt_final = f"{extension}:{fmt}"
    else:
        fmt_final = ".ipynb"

    return fmt_final |
Determine the jupytext fmt string to use based on the content and extension
| _jupytext_fmt | python | ploomber/ploomber | src/ploomber/sources/notebooksource.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/notebooksource.py | Apache-2.0 |
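
A rough sketch of the resulting fmt strings; the exact format jupytext guesses depends on the content, so the first two results are only the likely ones:

percent_script = "# %%\nx = 1\n"
print(_jupytext_fmt(percent_script, "py"))   # likely "py:percent"
print(_jupytext_fmt(percent_script, ".py"))  # leading dot is stripped first
print(_jupytext_fmt("", "ipynb"))            # always ".ipynb" for notebooks
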
def _get_last_cell(nb):
    """
    Get the last cell with non-empty source; if every cell is empty,
    return the first cell
    """
    # iterate in reverse order
    for idx in range(-1, -len(nb.cells) - 1, -1):
        cell = nb.cells[idx]

        # only return it if it has some code
        if cell["source"].strip():
            return cell

    # otherwise return the first cell
    return nb.cells[0] |
Get the last cell with non-empty source; if every cell is empty,
return the first cell
| _get_last_cell | python | ploomber/ploomber | src/ploomber/sources/notebooksource.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/notebooksource.py | Apache-2.0 |
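
A quick sketch with nbformat showing that trailing whitespace-only cells are skipped:

import nbformat

nb = nbformat.v4.new_notebook()
nb.cells = [
    nbformat.v4.new_code_cell("x = 1"),
    nbformat.v4.new_code_cell("y = 2"),
    nbformat.v4.new_code_cell("   "),  # empty source, ignored
]
print(_get_last_cell(nb)["source"])  # -> "y = 2"
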
def requires_path(func):
    """
    Decorator that checks whether the NotebookSource instance was
    initialized from a file and raises an error if not
    """

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if self._path is None:
            raise ValueError(
                f"Cannot use {func.__name__!r} if notebook was "
                "not initialized from a file"
            )

        return func(self, *args, **kwargs)

    return wrapper |
Decorator that checks whether the NotebookSource instance was
initialized from a file and raises an error if not
| requires_path | python | ploomber/ploomber | src/ploomber/sources/notebooksource.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/sources/notebooksource.py | Apache-2.0 |
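
requires_path is meant to decorate NotebookSource methods; the toy class below only mimics the _path attribute it checks, so this is an illustration rather than real usage:

class FakeSource:
    # stand-in for NotebookSource: only the _path attribute matters here
    def __init__(self, path=None):
        self._path = path

    @requires_path
    def path_to_file(self):
        return self._path

print(FakeSource(path="task.py").path_to_file())  # -> task.py
FakeSource().path_to_file()  # raises ValueError: Cannot use 'path_to_file' ...
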