code | docstring | func_name | language | repo | path | url | license |
---|---|---|---|---|---|---|---|
def entry_point_relative(name=None):
"""
Returns a relative path to the entry point with the given name.
Raises
------
DAGSpecInvalidError
If the requested file cannot be located
Notes
-----
This is used by Soopervisor when loading dags. We must ensure it gets
a relative path since such file is used for loading the spec in the client
and in the hosted container (which has a different filesystem structure)
"""
FILENAME = "pipeline.yaml" if name is None else f"pipeline.{name}.yaml"
location_pkg = _package_location(root_path=".", name=FILENAME)
location = FILENAME if Path(FILENAME).exists() else None
if location_pkg and location:
raise DAGSpecInvalidError(
f"Error loading {FILENAME}, both {location} "
"(relative to the current working directory) "
f"and {location_pkg} exist, but expected only one. "
f"If your project is a package, keep {location_pkg}, "
f"if it's not, keep {location}"
)
if location_pkg is None and location is None:
raise DAGSpecInvalidError(
f"Could not find dag spec with name {FILENAME}, "
"make sure the file is located relative to the working directory "
f"or in src/pkg_name/{FILENAME} (where pkg_name is the name of "
"your package if your project is one)"
)
return location_pkg or location |
Returns a relative path to the entry point with the given name.
Raises
------
DAGSpecInvalidError
If the requested file cannot be located
Notes
-----
This is used by Soopervisor when loading dags. We must ensure it gets
a relative path since such file is used for loading the spec in the client
and in the hosted container (which has a different filesystem structure)
| entry_point_relative | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def try_to_find_env_yml(path_to_spec):
"""
Check whether there is an env file with a .yml extension. It returns that
file if it exists, otherwise it returns None.
This function will only be called right after path_to_env_from_spec.
"""
# FIXME: delete this
if path_to_spec is None:
return None
path_to_parent = Path(path_to_spec).parent
name = extract_name(path_to_spec)
filename = "env.yml" if name is None else f"env.{name}.yml"
return _path_to_filename_in_cwd_or_with_parent(
filename=filename, path_to_parent=path_to_parent, raise_=False
) |
Check whether there is an env file with a .yml extension. It returns that
file if it exists, otherwise it returns None.
This function will only be called right after path_to_env_from_spec.
| try_to_find_env_yml | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def path_to_env_from_spec(path_to_spec):
"""
It first looks up the PLOOMBER_ENV_FILENAME env var; if it exists, it uses
the filename defined there. If not, it looks for an env.yaml file. Prefers
a file in the working directory, otherwise, one relative to the spec's
parent. If the appropriate env.yaml file does not exist, returns None.
If path to spec has a pipeline.{name}.yaml format, it tries to look up an
env.{name}.yaml first.
It returns None if none of those files exist, except when
PLOOMBER_ENV_FILENAME is set, in which case it raises an error.
Parameters
----------
path_to_spec : str or pathlib.Path
Path to YAML spec
Raises
------
FileNotFoundError
If PLOOMBER_ENV_FILENAME is defined but doesn't exist
ValueError
If PLOOMBER_ENV_FILENAME is defined and contains a path with
directory components.
If path_to_spec does not have an extension or if it's a directory
"""
# FIXME: delete this
if path_to_spec is None:
return None
if Path(path_to_spec).is_dir():
raise ValueError(
f"Expected path to spec {str(path_to_spec)!r} to be a "
"file but got a directory instead"
)
if not Path(path_to_spec).suffix:
raise ValueError(
"Expected path to spec to have a file extension "
f"but got: {str(path_to_spec)!r}"
)
path_to_parent = Path(path_to_spec).parent
environ = _get_env_filename_environment_variable(path_to_parent)
if environ:
filename = environ
else:
name = environ or extract_name(path_to_spec)
filename = "env.yaml" if name is None else f"env.{name}.yaml"
return _search_for_env_with_name_and_parent(
filename, path_to_parent, raise_=environ is not None
) |
It first looks up the PLOOMBER_ENV_FILENAME env var; if it exists, it uses
the filename defined there. If not, it looks for an env.yaml file. Prefers
a file in the working directory, otherwise, one relative to the spec's
parent. If the appropriate env.yaml file does not exist, returns None.
If path to spec has a pipeline.{name}.yaml format, it tries to look up an
env.{name}.yaml first.
It returns None if none of those files exist, except when
PLOOMBER_ENV_FILENAME is set, in which case it raises an error.
Parameters
----------
path_to_spec : str or pathlib.Path
Path to YAML spec
Raises
------
FileNotFoundError
If PLOOMBER_ENV_FILENAME is defined but doesn't exist
ValueError
If PLOOMBER_ENV_FILENAME is defined and contains a path with
directory components.
If path_to_spec does not have an extension or if it's a directory
| path_to_env_from_spec | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def _path_to_filename_in_cwd_or_with_parent(filename, path_to_parent, raise_):
"""
Looks for a file with the given filename in the current working directory; if it
doesn't exist, it looks for the file under the parent directory
For example:
project/
pipeline.yaml
another/ <- assume this is the current working directory
env.yaml <- this gets loaded
And:
project/ <- this is path_to_parent
pipeline.yaml
env.yaml <- this gets loaded
another/ <- assume this is the current working directory
sibling/
env.yaml <- this will never get loaded
(under sibling rather than project)
Parameters
----------
filename : str
Filename to search for
path_to_parent : str or pathlib.Path
If filename does not exist in the current working directory, look
relative to this path
raise_ : bool
If True, raises an error if the file doesn't exist
"""
local_env = Path(".", filename).resolve()
if local_env.exists():
return str(local_env)
if path_to_parent:
sibling_env = Path(path_to_parent, filename).resolve()
if sibling_env.exists():
return str(sibling_env)
if raise_:
raise FileNotFoundError(
"Failed to load env: PLOOMBER_ENV_FILENAME "
f"has value {filename!r} but "
"there isn't a file with such name. "
"Tried looking it up relative to the "
"current working directory "
f"({str(local_env)!r}) and relative "
f"to the YAML spec ({str(sibling_env)!r})"
) |
Looks for a file with the given filename in the current working directory; if it
doesn't exist, it looks for the file under the parent directory
For example:
project/
pipeline.yaml
another/ <- assume this is the current working directory
env.yaml <- this gets loaded
And:
project/ <- this is path_to_parent
pipeline.yaml
env.yaml <- this gets loaded
another/ <- assume this is the current working directory
sibling/
env.yaml <- this will never get loaded
(under sibling rather than project)
Parameters
----------
filename : str
Filename to search for
path_to_parent : str or pathlib.Path
If filename does not exist in the current working directory, look
relative to this path
raise_ : bool
If True, raises an error if the file doesn't exist
| _path_to_filename_in_cwd_or_with_parent | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def extract_name(path):
"""
Extract name from a path whose filename is something.{name}.{extension}.
Returns None if the file doesn't follow the naming convention
"""
name = Path(path).name
parts = name.split(".")
if len(parts) < 3:
return None
else:
return parts[1] |
Extract name from a path whose filename is something.{name}.{extension}.
Returns None if the file doesn't follow the naming convention
| extract_name | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
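The naming convention is easiest to see with a couple of direct calls (derived from the split logic above; the import path is assumed from the path column):
from ploomber.util.default import extract_name

print(extract_name("pipeline.train.yaml"))      # 'train'
print(extract_name("some/dir/env.serve.yaml"))  # 'serve'
print(extract_name("pipeline.yaml"))            # None (fewer than three dot-separated parts)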
def find_file_recursively(name, max_levels_up=6, starting_dir=None):
"""
Find a file by looking into the current folder and parent folders,
returns None if no file was found, otherwise a pathlib.Path to the file
Parameters
----------
name : str
Filename
Returns
-------
path : str
Absolute path to the file
levels : int
How many levels up the file is located
"""
current_dir = starting_dir or os.getcwd()
current_dir = Path(current_dir).resolve()
path_to_file = None
levels = None
for levels in range(max_levels_up):
current_path = Path(current_dir, name)
if current_path.exists():
path_to_file = current_path.resolve()
break
current_dir = current_dir.parent
return path_to_file, levels |
Find a file by looking into the current folder and parent folders,
returns None if no file was found, otherwise a pathlib.Path to the file
Parameters
----------
name : str
Filename
Returns
-------
path : str
Absolute path to the file
levels : int
How many levels up the file is located
| find_file_recursively | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
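A hedged usage sketch, assuming a layout where the file lives one level above the starting directory:
from pathlib import Path
from ploomber.util.default import find_file_recursively

Path("project/subdir").mkdir(parents=True)
Path("project/pipeline.yaml").touch()

path, levels = find_file_recursively("pipeline.yaml", starting_dir="project/subdir")
print(path)    # absolute path to project/pipeline.yaml
print(levels)  # 1, i.e., found one level up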
def find_root_recursively(starting_dir=None, filename=None, check_parents=True):
"""
Finds a project root by looking recursively for pipeline.yaml or a setup.py
file. Ignores pipeline.yaml if located in src/*/pipeline.yaml.
Parameters
----------
starting_dir : str or pathlib.Path
The directory to start the search
Raises
------
DAGSpecInvalidError
If it fails to determine a valid project root
"""
filename = filename or "pipeline.yaml"
if len(Path(filename).parts) > 1:
raise ValueError(
f"{filename!r} should be a filename "
"(e.g., pipeline.yaml), not a path "
"(e.g., path/to/pipeline.yaml)"
)
root_by_setup, setup_levels = find_parent_of_file_recursively(
"setup.py", max_levels_up=6, starting_dir=starting_dir
)
root_by_pipeline, pipeline_levels = find_parent_of_file_recursively(
filename,
max_levels_up=6,
starting_dir=starting_dir,
)
if root_by_pipeline == _filesystem_root():
raise DAGSpecInvalidError(
f"{filename} cannot be in the filesystem root. "
"Please add it inside a directory like "
f"project-name/{filename}. "
)
root_found = None
# use found pipeline.yaml if not in a src/*/pipeline.yaml structure when
# there is no setup.py
# OR
# if the pipeline.yaml is closer to the starting_dir than the setup.py.
# e.g., project/some/pipeline.yaml vs project/setup.py
if root_by_pipeline and not root_by_setup:
if root_by_pipeline.parents[0].name == "src":
pkg_portion = str(Path(*root_by_pipeline.parts[-2:]))
raise DAGSpecInvalidError(
"Invalid project layout. Found project root at "
f"{root_by_pipeline} under a parent with name {pkg_portion}. "
"This suggests a package "
"structure but no setup.py exists. If your project is a "
"package create a setup.py file, otherwise rename the "
"src directory"
)
root_found = root_by_pipeline
elif root_by_pipeline and root_by_setup:
if pipeline_levels < setup_levels and root_by_pipeline.parents[0].name != "src":
root_found = root_by_pipeline
if root_by_setup and (
not root_by_pipeline
or setup_levels <= pipeline_levels
or root_by_pipeline.parents[0].name == "src"
):
pipeline_yaml = Path(root_by_setup, filename)
# pkg_location checks if there is a src/{package-name}/{filename}
# e.g., src/my_pkg/pipeline.yaml
pkg_location = _package_location(root_path=root_by_setup, name=filename)
if not pkg_location and not pipeline_yaml.exists():
raise DAGSpecInvalidError(
"Failed to determine project root. Found "
"a setup.py file at "
f"{str(root_by_setup)!r} and expected "
f"to find a {filename} file at "
f"src/*/{filename} (relative to "
"setup.py parent) but no such file was "
"found"
)
if pkg_location and pipeline_yaml.exists():
pkg = Path(*Path(pkg_location).parts[-3:-1])
example = str(pkg / "pipeline.another.yaml")
raise DAGSpecInvalidError(
"Failed to determine project root: found "
f"two {filename} files: {pkg_location} "
f"and {pipeline_yaml}. To fix it, move "
"and rename the second file "
f"under {str(pkg)} (e.g., {example}) or move {pkg_location} "
"to your root directory"
)
root_found = root_by_setup
if root_found:
if check_parents:
try:
another_found = find_root_recursively(
starting_dir=Path(root_found).parent,
filename=filename,
check_parents=False,
)
except DAGSpecInvalidError:
pass
else:
warnings.warn(
f"Found project root with filename {filename!r} "
f"at {str(root_found)!r}, but "
"found another one in a parent directory "
f"({str(another_found)!r}). The former will be used. "
"Nested YAML specs are not recommended, "
"consider moving them to the same folder and "
"renaming them (e.g., project/pipeline.yaml "
"and project/pipeline.serve.yaml) or store them "
"in separate folders (e.g., "
"project1/pipeline.yaml and "
"project2/pipeline.yaml)"
)
return root_found
else:
raise DAGSpecInvalidError(
"Failed to determine project root. Looked "
"recursively for a setup.py or "
f"{filename} in parent folders but none of "
"those files exist"
) |
Finds a project root by looking recursively for pipeline.yaml or a setup.py
file. Ignores pipeline.yaml if located in src/*/pipeline.yaml.
Parameters
----------
starting_dir : str or pathlib.Path
The directory to start the search
Raises
------
DAGSpecInvalidError
If it fails to determine a valid project root
| find_root_recursively | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def find_package_name(starting_dir=None):
"""
Find package name for this project. Raises an error if it cannot determine
a valid root path
"""
root = find_root_recursively(starting_dir=starting_dir)
pkg = _package_location(root_path=root)
if not pkg:
raise ValueError(
"Could not find a valid package. Make sure "
"there is a src/package-name/pipeline.yaml file relative "
f"to your project root ({root})"
)
return Path(pkg).parent.name |
Find package name for this project. Raises an error if it cannot determine
a valid root path
| find_package_name | python | ploomber/ploomber | src/ploomber/util/default.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/default.py | Apache-2.0 |
def load_dotted_path(dotted_path, raise_=True, reload=False):
"""Load an object/function/module by passing a dotted path
Parameters
----------
dotted_path : str
Dotted path to a module, e.g. ploomber.tasks.NotebookRunner
raise_ : bool, default=True
If True, an exception is raised if the module can't be imported,
otherwise return None if that happens
reload : bool, default=False
Reloads the module after importing it
"""
obj, module = None, None
parsed = _validate_dotted_path(dotted_path, raise_=raise_)
if parsed:
mod, name = parsed
main_mod = str(mod.split(".")[0])
try:
module = importlib.import_module(mod)
except ModuleNotFoundError as e:
if raise_:
spec = importlib.util.find_spec(main_mod)
msg = (
"An error occurred when trying to import dotted "
f"path {dotted_path!r}: {e}"
)
if spec is not None:
msg = msg + f" (loaded {main_mod!r} from {spec.origin!r})"
e.msg = msg
raise
if module:
if reload:
module = importlib.reload(module)
try:
obj = getattr(module, name)
except AttributeError as e:
if raise_:
e.args = (
(
f"Could not get {name!r} from module {mod!r} "
f"(loaded {mod!r} from {module.__file__!r}). "
"Ensure it is defined in such module"
),
)
raise
return obj
else:
if raise_:
raise ValueError(
'Invalid dotted path value "{}", must be a dot separated '
"string, with at least "
"[module_name].[function_name]".format(dotted_path)
) | Load an object/function/module by passing a dotted path
Parameters
----------
dotted_path : str
Dotted path to a module, e.g. ploomber.tasks.NotebookRunner
raise_ : bool, default=True
If True, an exception is raised if the module can't be imported,
otherwise return None if that happens
reload : bool, default=False
Reloads the module after importing it
| load_dotted_path | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
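Because the function just imports the module and grabs the attribute, a standard-library dotted path works as a quick check (the import path is assumed from the path column):
from ploomber.util.dotted_path import load_dotted_path

sqrt = load_dotted_path("math.sqrt")
print(sqrt(9))  # 3.0

# with raise_=False a missing module returns None instead of raising
print(load_dotted_path("not_a_module.fn", raise_=False))  # None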
def load_callable_dotted_path(dotted_path, raise_=True, reload=False):
"""
Like load_dotted_path but verifies the loaded object is a callable
"""
loaded_object = load_dotted_path(
dotted_path=dotted_path, raise_=raise_, reload=reload
)
if not callable(loaded_object):
raise TypeError(
f"Error loading dotted path {dotted_path!r}. "
"Expected a callable object (i.e., some kind "
f"of function). Got {loaded_object!r} "
f"(an object of type: {type(loaded_object).__name__})"
)
return loaded_object |
Like load_dotted_path but verifies the loaded object is a callable
| load_callable_dotted_path | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
def call_dotted_path(dotted_path, raise_=True, reload=False, kwargs=None):
"""
Load dotted path (using load_callable_dotted_path), and call it with
kwargs arguments, raises an exception if returns None
Parameters
----------
dotted_path : str
Dotted path to call
kwargs : dict, default=None
Keyword arguments to call the dotted path
"""
callable_ = load_callable_dotted_path(
dotted_path=dotted_path, raise_=raise_, reload=reload
)
kwargs = kwargs or dict()
try:
out = callable_(**kwargs)
except Exception as e:
origin = locate_dotted_path(dotted_path).origin
msg = str(e) + f" (Loaded from: {origin})"
e.args = (msg,)
raise
if out is None:
raise TypeError(
f"Error calling dotted path {dotted_path!r}. "
"Expected a value but got None"
)
return out |
Load dotted path (using load_callable_dotted_path), and call it with
kwargs arguments, raises an exception if returns None
Parameters
----------
dotted_path : str
Dotted path to call
kwargs : dict, default=None
Keyword arguments to call the dotted path
| call_dotted_path | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
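Usage sketch: the kwargs dictionary is forwarded to whatever the dotted path resolves to, and a non-None return value is required:
from ploomber.util.dotted_path import call_dotted_path

out = call_dotted_path("json.dumps", kwargs={"obj": {"a": 1}})
print(out)  # '{"a": 1}'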
def locate_dotted_path(dotted_path):
"""
Locates a dotted path, returns the spec for the module where the attribute
is defined
"""
tokens = dotted_path.split(".")
module = ".".join(tokens[:-1])
# NOTE: if importing a sub-module (e.g., something.another), this will
# import some modules (rather than just locating them) - I think we
# should remove them to prevent import clashes
spec = importlib.util.find_spec(module)
if spec is None:
raise ModuleNotFoundError(f"Module {module!r} does not exist")
return spec |
Locates a dotted path, returns the spec for the module where the attribute
is defined
| locate_dotted_path | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
def locate_dotted_path_root(dotted_path):
"""
Returns the module spec for a given dotted path.
e.g. module.sub.another, checks that module exists
"""
tokens = dotted_path.split(".")
spec = importlib.util.find_spec(tokens[0])
if spec is None:
raise ModuleNotFoundError(f"Module {tokens[0]!r} does not exist")
return spec |
Returns the module spec for a given dotted path.
e.g. module.sub.another, checks that module exists
| locate_dotted_path_root | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
def lazily_locate_dotted_path(dotted_path):
"""
Locates a dotted path, but unlike importlib.util.find_spec, it does not
import submodules
"""
_validate_dotted_path(dotted_path)
parts = dotted_path.split(".")
module_name = ".".join(parts[:-1])
first, middle, mod, symbol = parts[0], parts[1:-2], parts[-2], parts[-1]
spec = importlib.util.find_spec(first)
if spec is None:
raise ModuleNotFoundError(
"Error processing dotted "
f"path {dotted_path!r}, "
f"no module named {first!r}"
)
if spec.origin is None:
raise ModuleNotFoundError(
"Error processing dotted "
f"path {dotted_path!r}: "
f"{first!r} appears to be a namespace "
"package, which are not supported"
)
origin = Path(spec.origin)
location = origin.parent
# a.b.c.d.e.f
# a/__init__.py or a.py must exist
# from b until d, there must be {name}/__init__.py
# there must be e/__init__.py or e.py
# f must be a symbol defined at e.py or e/__init__.py
if len(parts) == 2:
return _check_defines_function_with_name(origin, symbol, dotted_path)
location = reduce(lambda x, y: x / y, [location] + middle)
init = location / mod / "__init__.py"
file_ = location / f"{mod}.py"
if init.exists():
return _check_defines_function_with_name(init, symbol, dotted_path)
elif file_.exists():
return _check_defines_function_with_name(file_, symbol, dotted_path)
else:
raise ModuleNotFoundError(
f"No module named {module_name!r}. "
f"Expected to find one of {str(init)!r} or "
f"{str(file_)!r}, but none of those exist"
) |
Locates a dotted path, but unlike importlib.util.find_spec, it does not
import submodules
| lazily_locate_dotted_path | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
def create_intermediate_modules(module_parts):
"""
Creates the folder structure needed for a module specified
by the parts of its name
Parameters
----------
module_parts : list
A list of strings with the module elements.
Example: ['module', 'sub_module']
Raises
------
ValueError
If the module already exists
"""
dotted_path_to_module = ".".join(module_parts)
*inner, last = module_parts
# check if it already exists
if len(module_parts) >= 2:
fn_check = dotted_path_exists
else:
fn_check = importlib.util.find_spec
if fn_check(dotted_path_to_module):
raise ValueError(f"Module {dotted_path_to_module!r} already exists")
# if the root module already exists, we should create the missing files
# in the existing location
spec = importlib.util.find_spec(module_parts[0])
# .origin will be None for namespace packages
if spec and spec.origin is not None:
inner[0] = Path(spec.origin).parent
parent = Path(*inner)
parent.mkdir(parents=True, exist_ok=True)
for idx in range(len(inner)):
init_file = Path(*inner[: idx + 1], "__init__.py")
if not init_file.exists():
init_file.touch()
Path(parent, f"{last}.py").touch() |
Creates the folder structure needed for a module specified
by the parts of its name
Parameters
----------
module_parts : list
A list of strings with the module elements.
Example: ['module', 'sub_module']
Raises
------
ValueError
If the module already exists
| create_intermediate_modules | python | ploomber/ploomber | src/ploomber/util/dotted_path.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/dotted_path.py | Apache-2.0 |
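A minimal sketch of the layout it creates, assuming 'my_new_pkg' is not an importable module yet:
from pathlib import Path
from ploomber.util.dotted_path import create_intermediate_modules

create_intermediate_modules(["my_new_pkg", "tasks", "load"])

# creates my_new_pkg/__init__.py, my_new_pkg/tasks/__init__.py and
# my_new_pkg/tasks/load.py
print(Path("my_new_pkg", "tasks", "load.py").exists())  # True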
def lazily_load_entry_point(starting_dir=None, reload=False):
"""
Lazily loads entry point by recursively looking in starting_dir directory
and parent directories.
"""
generator = _lazily_load_entry_point_generator(
starting_dir=starting_dir, reload=reload
)
_ = next(generator)
return next(generator) |
Lazily loads entry point by recursively looking in starting_dir directory
and parent directories.
| lazily_load_entry_point | python | ploomber/ploomber | src/ploomber/util/loader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/loader.py | Apache-2.0 |
def _default_spec_load_generator(starting_dir=None, lazy_import=False, reload=False):
"""
Similar to _default_spec_load, but this one returns a generator. The
first element is the path to the entry point; the second one is a tuple
with the spec, the path to the spec's parent, and the path to the spec
"""
root_path = starting_dir or os.getcwd()
path_to_entry_point = default.entry_point(root_path=root_path)
yield path_to_entry_point
try:
spec = DAGSpec(
path_to_entry_point, env=None, lazy_import=lazy_import, reload=reload
)
path_to_spec = Path(path_to_entry_point)
yield spec, path_to_spec.parent, path_to_spec
except Exception as e:
exc = DAGSpecInitializationError(
"Error initializing DAG from " f"{path_to_entry_point!s}"
)
raise exc from e |
Similar to _default_spec_load, but this one returns a generator. The
first element is the path to the entry point; the second one is a tuple
with the spec, the path to the spec's parent, and the path to the spec
| _default_spec_load_generator | python | ploomber/ploomber | src/ploomber/util/loader.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/loader.py | Apache-2.0 |
def markdown_to_html(md):
"""
Convert markdown to HTML with syntax highlighting, works with old and
new versions of mistune
"""
if mistune_recent:
class HighlightRenderer(mistune.HTMLRenderer):
def block_code(self, code, lang=None):
if lang:
lexer = get_lexer_by_name(lang, stripall=True)
formatter = html.HtmlFormatter()
return highlight(code, lexer, formatter)
return "<pre><code>" + mistune.escape(code) + "</code></pre>"
markdown = mistune.create_markdown(renderer=HighlightRenderer(escape=False))
return markdown(md)
else:
class HighlightRenderer(mistune.Renderer):
"""mistune renderer with syntax highlighting
Notes
-----
Source: https://github.com/lepture/mistune#renderer
"""
def block_code(self, code, lang):
if not lang:
return "\n<pre><code>%s</code></pre>\n" % mistune.escape(code)
lexer = get_lexer_by_name(lang, stripall=True)
formatter = html.HtmlFormatter()
return highlight(code, lexer, formatter)
renderer = HighlightRenderer()
return mistune.markdown(md, escape=False, renderer=renderer) |
Convert markdown to HTML with syntax highlighting, works with old and
new versions of mistune
| markdown_to_html | python | ploomber/ploomber | src/ploomber/util/markup.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/markup.py | Apache-2.0 |
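Quick usage sketch; fenced blocks with a language tag go through pygments, though the exact wrapper markup depends on the installed mistune version (assumption):
from ploomber.util.markup import markdown_to_html

md = "# Report\n\n```python\nprint('hello')\n```\n"
html_out = markdown_to_html(md)
print("<h1>" in html_out)       # True: the heading is rendered
print("highlight" in html_out)  # True: pygments' default HtmlFormatter wraps the code block (assumption)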
def callback_check(fn, available, allow_default=True):
"""
Check if a callback function signature requests available parameters
Parameters
----------
fn : callable
Callable (e.g. a function) to check
available : dict
All available params
allow_default : bool, optional
Whether to allow arguments with default values in "fn" or not
Returns
-------
dict
Dictionary with requested parameters
Raises
------
ploomber.exceptions.CallbackCheckAborted
When passing a dotted path whose underlying function hasn't been
imported
ploomber.exceptions.CallbackSignatureError
When fn does not have the required signature
"""
# keep a copy of the original value because we'll modified it if this is
# a DottedPath
available_raw = available
if isinstance(fn, DottedPath):
available = {**fn._spec.get_kwargs(), **available}
if fn.callable is None:
raise CallbackCheckAborted(
"Cannot check callback because function "
"is a dotted path whose function has not been imported yet"
)
else:
fn = fn.callable
parameters = inspect.signature(fn).parameters
optional = {
name for name, param in parameters.items() if param.default != inspect._empty
}
# not all functions have __name__ (e.g. partials)
fn_name = getattr(fn, "__name__", fn)
if optional and not allow_default:
raise CallbackSignatureError(
"Callback functions cannot have "
"parameters with default values, "
'got: {} in "{}"'.format(optional, fn_name)
)
required = {
name for name, param in parameters.items() if param.default == inspect._empty
}
available_set = set(available)
extra = required - available_set
if extra:
raise CallbackSignatureError(
'Callback function "{}" unknown '
"parameter(s): {}, available ones are: "
"{}".format(fn_name, extra, available_set)
)
return {k: v for k, v in available_raw.items() if k in required} |
Check if a callback function signature requests available parameters
Parameters
----------
fn : callable
Callable (e.g. a function) to check
available : dict
All available params
allow_default : bool, optional
Whether to allow arguments with default values in "fn" or not
Returns
-------
dict
Dictionary with requested parameters
Raises
------
ploomber.exceptions.CallbackCheckAborted
When passing a dotted path whose underlying function hasn't been
imported
ploomber.exceptions.CallbackSignatureError
When fn does not have the required signature
| callback_check | python | ploomber/ploomber | src/ploomber/util/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/util.py | Apache-2.0 |
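A small sketch of the signature inspection for a plain function (not a DottedPath); the import path is assumed from the path column:
from ploomber.util.util import callback_check

def on_finish(product):
    print(f"finished, product at {product}")

available = {"product": "out.csv", "dag": None, "task": None}

# only the parameters the callback actually requests are returned
print(callback_check(on_finish, available))  # {'product': 'out.csv'}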
def signature_check(fn, params, task_name):
"""
Verify if the function signature used as source in a PythonCallable
task matches available params
"""
params = set(params)
parameters = inspect.signature(fn).parameters
required = {
name for name, param in parameters.items() if param.default == inspect._empty
}
extra = params - set(parameters.keys())
missing = set(required) - params
errors = []
if extra:
msg = f"Got unexpected arguments: {sorted(extra)}"
errors.append(msg)
if missing:
msg = f"Missing arguments: {sorted(missing)}"
errors.append(msg)
if "upstream" in missing:
errors.append(
"Verify this task declared upstream dependencies or "
'remove the "upstream" argument from the function'
)
missing_except_upstream = sorted(missing - {"upstream"})
if missing_except_upstream:
errors.append(f'Pass {missing_except_upstream} in "params"')
if extra or missing:
msg = ". ".join(errors)
# not all functions have __name__ (e.g. partials)
fn_name = getattr(fn, "__name__", fn)
raise TaskRenderError(
'Error rendering task "{}" initialized with '
'function "{}". {}'.format(task_name, fn_name, msg)
)
return True |
Verify if the function signature used as source in a PythonCallable
task matches available params
| signature_check | python | ploomber/ploomber | src/ploomber/util/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/util.py | Apache-2.0 |
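Sketch of the check against the params a pipeline would pass to a PythonCallable source (import path assumed):
from ploomber.util.util import signature_check

def clean(upstream, product):
    pass

# everything the function requires is available, so the check passes
print(signature_check(clean, params={"upstream", "product"}, task_name="clean"))  # True

# omitting 'upstream' raises TaskRenderError with a hint about upstream dependencies
# signature_check(clean, params={"product"}, task_name="clean")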
def call_with_dictionary(fn, kwargs):
"""
Call a function by passing elements from a dictionary that appear in the
function signature
"""
parameters = inspect.signature(fn).parameters
common = set(parameters) & set(kwargs)
sub_kwargs = {k: kwargs[k] for k in common}
return fn(**sub_kwargs) |
Call a function by passing elements from a dictionary that appear in the
function signature
| call_with_dictionary | python | ploomber/ploomber | src/ploomber/util/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/util.py | Apache-2.0 |
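Minimal example: dictionary keys that are not in the function signature are silently dropped (import path assumed):
from ploomber.util.util import call_with_dictionary

def add(a, b=1):
    return a + b

print(call_with_dictionary(add, {"a": 2, "b": 3, "ignored": 10}))  # 5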
def chdir_code(path):
"""
Returns a string with valid code to chdir to the passed path
"""
path = Path(path).resolve()
if isinstance(path, WindowsPath):
path = str(path).replace("\\", "\\\\")
return f'os.chdir("{path}")' |
Returns a string with valid code to chdir to the passed path
| chdir_code | python | ploomber/ploomber | src/ploomber/util/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/util.py | Apache-2.0 |
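Usage sketch; note the path is resolved first, so the exact output depends on the filesystem (this assumes '/tmp/project' resolves to itself):
from ploomber.util.util import chdir_code

print(chdir_code("/tmp/project"))  # os.chdir("/tmp/project")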
def _python_bin():
"""
Get the path to the Python executable, return 'python' if unable to get it
"""
executable = sys.executable
return executable if executable else "python" |
Get the path to the Python executable, return 'python' if unable to get it
| _python_bin | python | ploomber/ploomber | src/ploomber/util/_sys.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/util/_sys.py | Apache-2.0 |
def validate_task_class_name(value):
"""
Validates if a string is a valid Task class name (e.g., SQLScript).
Raises a ValueError if not.
"""
if value not in _KEY2CLASS_TASKS:
suggestion = get_suggestion(value, mapping=_NORMALIZED_TASKS)
msg = f"{value!r} is not a valid Task class name"
if suggestion:
msg += f". Did you mean {suggestion!r}?"
raise ValueError(msg)
return _KEY2CLASS_TASKS[value] |
Validates if a string is a valid Task class name (e.g., SQLScript).
Raises a ValueError if not.
| validate_task_class_name | python | ploomber/ploomber | src/ploomber/validators/string.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/validators/string.py | Apache-2.0 |
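Usage sketch, assuming 'NotebookRunner' is one of the registered Task classes (it appears in the load_dotted_path docstring above); the suggestion behavior is shown as a comment:
from ploomber.validators.string import validate_task_class_name

cls = validate_task_class_name("NotebookRunner")
print(cls.__name__)  # NotebookRunner

# a close misspelling raises ValueError with a "Did you mean ...?" suggestion
# validate_task_class_name("NotebookRuner")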
def validate_product_class_name(value):
"""
Validates if a string is a valid Product class name (e.g., File).
Raises a ValueError if not.
"""
if value not in _KEY2CLASS_PRODUCTS:
suggestion = get_suggestion(value, mapping=_NORMALIZED_PRODUCTS)
msg = f"{value!r} is not a valid Product class name"
if suggestion:
msg += f". Did you mean {suggestion!r}?"
raise ValueError(msg)
return _KEY2CLASS_PRODUCTS[value] |
Validates if a string is a valid Product class name (e.g., File).
Raises a ValueError if not.
| validate_product_class_name | python | ploomber/ploomber | src/ploomber/validators/string.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/validators/string.py | Apache-2.0 |
def validate_schema(assert_, data, schema, optional=None, on_unexpected_cols="raise"):
"""Check if a data frame complies with a schema
Parameters
----------
data : pandas.DataFrame
Data frame to test
schema : list or dict
List with column names (will only validate names)
or dict with column names as keys, dtypes as values (will validate
names and dtypes)
optional : list, optional
List of optional column names; no warnings or errors if they appear
on_unexpected_cols : str, optional
One of 'warn', 'raise' or None. If 'warn', it will warn on extra
columns, if 'raise' it will raise an error, if None it will completely
ignore extra columns
"""
if on_unexpected_cols not in {"warn", "raise", None}:
raise ValueError(
"'on_unexpected_cols' must be one of 'warn', 'raise' " "or None"
)
optional = optional or {}
cols = set(data.columns)
expected = set(schema)
missing = expected - cols
unexpected = cols - expected - set(optional)
msg = "(validate_schema) Missing columns {missing}.".format(missing=missing)
assert_(not missing, msg)
if on_unexpected_cols is not None:
msg = "(validate_schema) Unexpected columns {unexpected}".format(
unexpected=unexpected
)
caller = assert_ if on_unexpected_cols == "raise" else assert_.warn
caller(not unexpected, msg)
# if passing a mapping, schema is validated (even for optional columns)
for schema_to_validate in [schema, optional]:
if isinstance(schema_to_validate, Mapping):
# validate column types (as many as you can)
dtypes = data.dtypes.astype(str).to_dict()
for name, dtype in dtypes.items():
expected = schema_to_validate.get(name)
if expected is not None:
msg = (
'(validate_schema) Wrong dtype for column "{name}". '
'Expected: "{expected}". Got: "{dtype}"'.format(
name=name, expected=expected, dtype=dtype
)
)
assert_(dtype == expected, msg)
return assert_ | Check if a data frame complies with a schema
Parameters
----------
data : pandas.DataFrame
Data frame to test
schema : list or dict
List with column names (will only validate names)
or dict with column names as keys, dtypes as values (will validate
names and dtypes)
optional : list, optional
List of optional column names, no warns nor errors if they appear
on_unexpected_cols : str, optional
One of 'warn', 'raise' or None. If 'warn', it will warn on extra
columns, if 'raise' it will raise an error, if None it will completely
ignore extra columns
| validate_schema | python | ploomber/ploomber | src/ploomber/validators/validators.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/validators/validators.py | Apache-2.0 |
def data_frame_validator(df, validators):
"""
Examples
--------
>>> from ploomber.validators import data_frame_validator
>>> from ploomber.validators import validate_schema, validate_values
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({'x': np.random.rand(3), 'y': np.random.rand(3),
... 'z': [0, 1, 2], 'i': ['a', 'b', 'c']})
>>> data_frame_validator(df,
... [validate_schema(schema={'x': 'int', 'z': 'int'}),
... validate_values(values={'z': ('range', (0, 1)),
... 'i': ('unique', {'a'}),
... 'j': ('unique', {'b'})}
... )]) # doctest: +SKIP
"""
assert_ = Assert()
for validator in validators:
validator(assert_=assert_, data=df)
if len(assert_):
raise AssertionError("Data frame validation failed. " + str(assert_))
return True |
Examples
--------
>>> from ploomber.validators import data_frame_validator
>>> from ploomber.validators import validate_schema, validate_values
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({'x': np.random.rand(3), 'y': np.random.rand(3),
... 'z': [0, 1, 2], 'i': ['a', 'b', 'c']})
>>> data_frame_validator(df,
... [validate_schema(schema={'x': 'int', 'z': 'int'}),
... validate_values(values={'z': ('range', (0, 1)),
... 'i': ('unique', {'a'}),
... 'j': ('unique', {'b'})}
... )]) # doctest: +SKIP
| data_frame_validator | python | ploomber/ploomber | src/ploomber/validators/validators.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/validators/validators.py | Apache-2.0 |
def scaffold(name, conda, package, entry_point, empty):
"""
Create a new project and task source files
Step 1. Create new project
$ ploomber scaffold myproject
$ cd myproject
Step 2. Add tasks to the pipeline.yaml file
Step 3. Create source files
$ ploomber scaffold
Need help? https://ploomber.io/community
"""
from ploomber import scaffold as _scaffold
template = "-e/--entry-point is not compatible with {flag}"
user_passed_name = name is not None
if entry_point and name:
err = '-e/--entry-point is not compatible with the "name" argument'
raise click.ClickException(err)
if entry_point and conda:
err = template.format(flag="--conda")
raise click.ClickException(err)
if entry_point and package:
err = template.format(flag="--package")
raise click.ClickException(err)
if entry_point and empty:
err = template.format(flag="--empty")
raise click.ClickException(err)
# try to load a dag by looking in default places
if entry_point is None:
loaded = _scaffold.load_dag()
else:
from ploomber.spec import DAGSpec
try:
loaded = (
DAGSpec(entry_point, lazy_import="skip"),
Path(entry_point).parent,
Path(entry_point),
)
except Exception as e:
raise click.ClickException(e) from e
if loaded:
if user_passed_name:
click.secho(
"The 'name' positional argument is "
"only valid for creating new projects, ignoring...",
fg="yellow",
)
# existing pipeline, add tasks
spec, _, path_to_spec = loaded
_scaffold.add(spec, path_to_spec)
else:
scaffold_project.cli(
project_path=name, conda=conda, package=package, empty=empty
) |
Create a new project and task source files
Step 1. Create new project
$ ploomber scaffold myproject
$ cd myproject
Step 2. Add tasks to the pipeline.yaml file
Step 3. Create source files
$ ploomber scaffold
Need help? https://ploomber.io/community
| scaffold | python | ploomber/ploomber | src/ploomber_cli/cli.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber_cli/cli.py | Apache-2.0 |
def examples(name, force, branch, output):
"""
Download examples
Step 1. List examples
$ ploomber examples
Step 2. Download an example
$ ploomber examples -n templates/ml-basic -o my-pipeline
Need help? https://ploomber.io/community
"""
click.echo("Loading examples...")
from ploomber import cli as cli_module
try:
cli_module.examples.main(name=name, force=force, branch=branch, output=output)
except click.ClickException:
raise
except Exception as e:
raise RuntimeError(
"An error happened when executing the examples command. Check out "
"the full error message for details. Downloading the examples "
"again or upgrading Ploomber may fix the "
"issue.\nDownload: ploomber examples -f\n"
"Update: pip install ploomber -U\n"
"Update [conda]: conda update ploomber -c conda-forge"
) from e |
Download examples
Step 1. List examples
$ ploomber examples
Step 2. Download an example
$ ploomber examples -n templates/ml-basic -o my-pipeline
Need help? https://ploomber.io/community
| examples | python | ploomber/ploomber | src/ploomber_cli/cli.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber_cli/cli.py | Apache-2.0 |
def tmp_directory_local(tmp_path):
"""
Pretty much the same as tmp_directory, but it uses pytest tmp_path,
which creates the path in a pre-determined location depending on the test.
TODO: replace the logic in tmp_directory with this one
"""
old = os.getcwd()
os.chdir(tmp_path)
yield tmp_path
os.chdir(old) |
Pretty much the same as tmp_directory, but it uses pytest tmp_path,
which creates the path in a pre-determined location depending on the test.
TODO: replace the logic in tmp_directory with this one
| tmp_directory_local | python | ploomber/ploomber | tests/conftest.py | https://github.com/ploomber/ploomber/blob/master/tests/conftest.py | Apache-2.0 |
def sqlite_client_and_tmp_dir():
"""
Creates a sqlite db with sample data and yields initialized client
along with a temporary directory location
"""
old = os.getcwd()
tmp_dir = Path(tempfile.mkdtemp())
os.chdir(str(tmp_dir))
client = SQLAlchemyClient("sqlite:///" + str(tmp_dir / "my_db.db"))
df = pd.DataFrame({"x": range(10)})
df.to_sql("data", client.engine)
yield client, tmp_dir
os.chdir(old)
client.close()
shutil.rmtree(str(tmp_dir)) |
Creates a sqlite db with sample data and yields initialized client
along with a temporary directory location
| sqlite_client_and_tmp_dir | python | ploomber/ploomber | tests/conftest.py | https://github.com/ploomber/ploomber/blob/master/tests/conftest.py | Apache-2.0 |
def pg_client_and_schema():
"""
Creates a temporary schema for the testing session, drops everything
at the end
"""
db = _load_db_credentials()
# set a new schema for this session, otherwise if two test sessions
# are run at the same time, tests might conflict with each other
# NOTE: avoid upper case characters, pandas.DataFrame.to_sql does not like
# them
schema = ("".join(random.choice(string.ascii_letters) for i in range(12))).lower()
# initialize client, set default schema
# info: https://www.postgresonline.com/article_pfriendly/279.html
client = SQLAlchemyClient(
db["uri"],
create_engine_kwargs=dict(
connect_args=dict(options=f"-c search_path={schema}")
),
)
# create schema
client.execute("CREATE SCHEMA {};".format(schema))
df = pd.DataFrame({"x": range(10)})
df.to_sql("data", client.engine)
yield client, schema
# clean up schema
client.execute("DROP SCHEMA {} CASCADE;".format(schema))
client.close() |
Creates a temporary schema for the testing session, drops everything
at the end
| pg_client_and_schema | python | ploomber/ploomber | tests/conftest.py | https://github.com/ploomber/ploomber/blob/master/tests/conftest.py | Apache-2.0 |
def no_sys_modules_cache():
"""
Removes modules from sys.modules that didn't exist before the test
"""
mods = set(sys.modules)
yield
current = set(sys.modules)
to_remove = current - mods
for a_module in to_remove:
del sys.modules[a_module] |
Removes modules from sys.modules that didn't exist before the test
| no_sys_modules_cache | python | ploomber/ploomber | tests/conftest.py | https://github.com/ploomber/ploomber/blob/master/tests/conftest.py | Apache-2.0 |
def set_terminal_output_columns(num_cols: int, monkeypatch):
"""
Sets the number of columns for terminalwriter
Useful for CI where the number of columns is inconsistent
"""
# counteract lines in sep() of terminalwriter.py that removes a col from
# the width if on windows
if sys.platform == "win32":
num_cols += 1
monkeypatch.setattr(terminalwriter, "get_terminal_width", lambda: num_cols) |
Sets the number of columns for terminalwriter
Useful for CI where the number of columns is inconsistent
| set_terminal_output_columns | python | ploomber/ploomber | tests/tests_util.py | https://github.com/ploomber/ploomber/blob/master/tests/tests_util.py | Apache-2.0 |
def test_manager_initialization(tmp_directory):
"""
There is some weird stuff in the Jupyter contents manager base class that
causes the initialization to break if we modify the __init__ method
in PloomberContentsManager. We add this to check that values are
correctly initialized
"""
dir_ = Path("some_dir")
dir_.mkdir()
dir_ = dir_.resolve()
dir_ = str(dir_)
app = serverapp.ServerApp()
app.initialize(argv=[])
app.root_dir = dir_
assert app.contents_manager.root_dir == dir_ |
There is some weird stuff in the Jupyter contents manager base class that
causes the initialization to break if we modify the __init__ method
in PloomberContentsManager. We add this to check that values are
correctly initialized
| test_manager_initialization | python | ploomber/ploomber | tests/test_jupyter.py | https://github.com/ploomber/ploomber/blob/master/tests/test_jupyter.py | Apache-2.0 |
def test_ignores_tasks_whose_source_is_not_a_file(
monkeypatch, capsys, tmp_directory, no_sys_modules_cache
):
"""
Context: jupyter extension only applies to tasks whose source is a script,
otherwise it will break, trying to get the source location. This test
checks that a SQLUpload (whose source is a data file) task is ignored
from the extension
"""
monkeypatch.setattr(sys, "argv", ["jupyter"])
spec = {
"meta": {
"extract_product": False,
"extract_upstream": False,
"product_default_class": {"SQLUpload": "SQLiteRelation"},
},
"clients": {"SQLUpload": "db.get_client", "SQLiteRelation": "db.get_client"},
"tasks": [
{
"source": "some_file.csv",
"name": "task",
"class": "SQLUpload",
"product": ["some_table", "table"],
}
],
}
with open("pipeline.yaml", "w") as f:
yaml.dump(spec, f)
Path("db.py").write_text(
"""
from ploomber.clients import SQLAlchemyClient
def get_client():
return SQLAlchemyClient('sqlite://')
"""
)
Path("file.py").touch()
app = NotebookApp()
app.initialize()
app.contents_manager.get("file.py")
out, err = capsys.readouterr()
assert "Traceback" not in err |
Context: jupyter extension only applies to tasks whose source is a script,
otherwise it will break, trying to get the source location. This test
checks that a SQLUpload (whose source is a data file) task is ignored
by the extension
| test_ignores_tasks_whose_source_is_not_a_file | python | ploomber/ploomber | tests/test_jupyter.py | https://github.com/ploomber/ploomber/blob/master/tests/test_jupyter.py | Apache-2.0 |
def test_jupyter_workflow_with_functions(
backup_spec_with_functions, no_sys_modules_cache
):
"""
Tests a typical workflow with a pipeline where some tasks are functions
"""
cm = PloomberContentsManager()
def get_names(out):
return {model["name"] for model in out["content"]}
assert get_names(cm.get("")) == {"my_tasks", "pipeline.yaml"}
assert get_names(cm.get("my_tasks")) == {"__init__.py", "clean", "raw"}
# check new notebooks appear, which are generated from the function tasks
assert get_names(cm.get("my_tasks/raw")) == {
"__init__.py",
"functions.py",
"functions.py (functions)",
}
assert get_names(cm.get("my_tasks/clean")) == {
"__init__.py",
"functions.py",
"functions.py (functions)",
"util.py",
}
# get notebooks generated from task functions
raw = cm.get("my_tasks/raw/functions.py (functions)/raw")
clean = cm.get("my_tasks/clean/functions.py (functions)/clean")
# add some new code
cell = nbformat.versions[nbformat.current_nbformat].new_code_cell("1 + 1")
raw["content"]["cells"].append(cell)
clean["content"]["cells"].append(cell)
# overwrite the original function
cm.save(raw, path="my_tasks/raw/functions.py (functions)/raw")
cm.save(clean, path="my_tasks/clean/functions.py (functions)/clean")
# make sure source code was updated
raw_source = (
backup_spec_with_functions / "my_tasks" / "raw" / "functions.py"
).read_text()
clean_source = (
backup_spec_with_functions / "my_tasks" / "clean" / "functions.py"
).read_text()
assert "1 + 1" in raw_source
assert "1 + 1" in clean_source |
Tests a typical workflow with a pipeline where some tasks are functions
| test_jupyter_workflow_with_functions | python | ploomber/ploomber | tests/test_jupyter.py | https://github.com/ploomber/ploomber/blob/master/tests/test_jupyter.py | Apache-2.0 |
def test_disable_functions_as_notebooks(backup_spec_with_functions):
"""
Tests a typical workflow with a pipeline where some tasks are functions
"""
with open("pipeline.yaml") as f:
spec = yaml.safe_load(f)
spec["meta"]["jupyter_functions_as_notebooks"] = False
Path("pipeline.yaml").write_text(yaml.dump(spec))
cm = PloomberContentsManager()
def get_names(out):
return {model["name"] for model in out["content"]}
assert get_names(cm.get("")) == {"my_tasks", "pipeline.yaml"}
assert get_names(cm.get("my_tasks")) == {"__init__.py", "clean", "raw"}
# check new notebooks appear, which are generated from the function tasks
assert get_names(cm.get("my_tasks/raw")) == {
"__init__.py",
"functions.py",
}
assert get_names(cm.get("my_tasks/clean")) == {
"__init__.py",
"functions.py",
"util.py",
} |
Tests a typical workflow with a pipeline where some tasks are functions
| test_disable_functions_as_notebooks | python | ploomber/ploomber | tests/test_jupyter.py | https://github.com/ploomber/ploomber/blob/master/tests/test_jupyter.py | Apache-2.0 |
def features(upstream, product):
"""Generate new features from existing columns"""
data = pd.read_parquet(str(upstream["get"]))
ft = data["1"] * data["2"]
df = pd.DataFrame({"feature": ft, "another": ft**2})
df.to_parquet(str(product)) | Generate new features from existing columns | features | python | ploomber/ploomber | tests/assets/simple/tasks_simple.py | https://github.com/ploomber/ploomber/blob/master/tests/assets/simple/tasks_simple.py | Apache-2.0 |
def join(upstream, product):
"""Join raw data with generated features"""
a = pd.read_parquet(str(upstream["get"]))
b = pd.read_parquet(str(upstream["features"]))
df = a.join(b)
df.to_parquet(str(product)) | Join raw data with generated features | join | python | ploomber/ploomber | tests/assets/simple/tasks_simple.py | https://github.com/ploomber/ploomber/blob/master/tests/assets/simple/tasks_simple.py | Apache-2.0 |
def test_cli_does_not_import_main_package(monkeypatch):
"""
ploomber_cli.cli should NOT import ploomber since it's a heavy package
and we want the CLI to be responsive. imports should happen inside each
command
"""
out = subprocess.check_output(
[
"python",
"-c",
'import sys; import ploomber_cli.cli; print("ploomber" in sys.modules)',
]
)
assert out.decode().strip() == "False" |
ploomber_cli.cli should NOT import ploomber since it's a heavy package
and we want the CLI to be responsive. imports should happen inside each
command
| test_cli_does_not_import_main_package | python | ploomber/ploomber | tests/cli/test_custom.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_custom.py | Apache-2.0 |
def test_task_command_does_not_force_dag_render(tmp_nbs, monkeypatch):
"""
Make sure the force flag is only used in task.render and not dag.render
because we don't want to override the status of other tasks
"""
args = ["task", "load", "--force"]
monkeypatch.setattr(sys, "argv", args)
class CustomParserWrapper(CustomParser):
def load_from_entry_point_arg(self):
dag, args = super().load_from_entry_point_arg()
dag_mock = MagicMock(wraps=dag)
type(self).dag_mock = dag_mock
return dag_mock, args
monkeypatch.setattr(task, "CustomParser", CustomParserWrapper)
task.main(catch_exception=False)
CustomParserWrapper.dag_mock.render.assert_called_once_with() |
Make sure the force flag is only used in task.render and not dag.render
because we don't want to override the status of other tasks
| test_task_command_does_not_force_dag_render | python | ploomber/ploomber | tests/cli/test_custom.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_custom.py | Apache-2.0 |
def test_dagspec_initialization_from_yaml(tmp_nbs_nested, monkeypatch):
"""
DAGSpec can be initialized with a path to a spec or a dictionary, but
they have a slightly different behavior. This checks that we initialize
with the path
"""
mock = Mock(wraps=parsers.DAGSpec)
monkeypatch.setattr(sys, "argv", ["python"])
monkeypatch.setattr(parsers, "DAGSpec", mock)
parser = CustomParser()
with parser:
pass
dag, args = parser.load_from_entry_point_arg()
mock.assert_called_once_with("pipeline.yaml") |
DAGSpec can be initialized with a path to a spec or a dictionary, but
they have a slightly different behavior. This checks that we initialize
with the path
| test_dagspec_initialization_from_yaml | python | ploomber/ploomber | tests/cli/test_customparser.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_customparser.py | Apache-2.0 |
def test_dagspec_initialization_from_yaml_and_env(tmp_nbs, monkeypatch):
"""
DAGSpec can be initialized with a path to a spec or a dictionary, but
they have a slightly different behavior. This ensures the CLI passes
the path, instead of a dictionary
"""
mock_DAGSpec = Mock(wraps=parsers.DAGSpec)
mock_default_path_to_env = Mock(wraps=parsers.default.path_to_env_from_spec)
mock_EnvDict = Mock(wraps=parsers.EnvDict)
monkeypatch.setattr(sys, "argv", ["python"])
monkeypatch.setattr(parsers, "DAGSpec", mock_DAGSpec)
monkeypatch.setattr(
parsers.default, "path_to_env_from_spec", mock_default_path_to_env
)
monkeypatch.setattr(parsers, "EnvDict", mock_EnvDict)
# ensure current timestamp does not change
mock = Mock()
mock.datetime.now().isoformat.return_value = "current-timestamp"
monkeypatch.setattr(expand, "datetime", mock)
parser = CustomParser()
with parser:
pass
dag, args = parser.load_from_entry_point_arg()
# ensure called using the path to the yaml spec
mock_DAGSpec.assert_called_once_with(
"pipeline.yaml", env=EnvDict({"sample": False}, path_to_here=".")
)
# and EnvDict initialized from env.yaml
mock_EnvDict.assert_called_once_with(
str(Path("env.yaml").resolve()), path_to_here=Path(".")
) |
DAGSpec can be initialized with a path to a spec or a dictionary, but
they have a slightly different behavior. This ensures the CLI passes
the path, instead of a dictionary
| test_dagspec_initialization_from_yaml_and_env | python | ploomber/ploomber | tests/cli/test_customparser.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_customparser.py | Apache-2.0 |
def test_ploomber_scaffold(tmp_directory, monkeypatch, args, conda, package, empty):
"""
Testing cli args are correctly routed to the function
"""
mock = Mock()
monkeypatch.setattr(cli.scaffold_project, "cli", mock)
runner = CliRunner()
result = runner.invoke(scaffold, args=args, catch_exceptions=False)
assert not result.exit_code
mock.assert_called_once_with(
project_path=None, conda=conda, package=package, empty=empty
) |
Testing cli args are correctly routed to the function
| test_ploomber_scaffold | python | ploomber/ploomber | tests/cli/test_scaffold.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_scaffold.py | Apache-2.0 |
def test_ploomber_scaffold_task_template(file_, extract_flag, tmp_directory):
"""Test scaffold when project already exists (add task templates)"""
sample_spec = {
"meta": {"extract_upstream": extract_flag, "extract_product": extract_flag}
}
task = {"source": file_}
if not extract_flag:
task["product"] = "nb.ipynb"
sample_spec["tasks"] = [task]
with open("pipeline.yaml", "w") as f:
yaml.dump(sample_spec, f)
runner = CliRunner()
result = runner.invoke(scaffold)
assert "Found spec at 'pipeline.yaml'" in result.output
assert "Created 1 new task sources." in result.output
assert result.exit_code == 0
assert Path(file_).exists() | Test scaffold when project already exists (add task templates) | test_ploomber_scaffold_task_template | python | ploomber/ploomber | tests/cli/test_scaffold.py | https://github.com/ploomber/ploomber/blob/master/tests/cli/test_scaffold.py | Apache-2.0 |
def monkeypatch_plot(monkeypatch):
"""
Monkeypatch logic for making the DAG.plot() work without calling
pygraphviz and checking calls are done with the right arguments
"""
image_out = object()
mock_Image = Mock(return_value=image_out)
mock_to_agraph = Mock()
def touch(*args, **kwargs):
Path(args[0]).touch()
# create file, then make sure it's deleted
mock_to_agraph.draw.side_effect = touch
def to_agraph(*args, **kwargs):
return mock_to_agraph
monkeypatch.setattr(dag_module, "Image", mock_Image)
monkeypatch.setattr(
dag_module.nx.nx_agraph, "to_agraph", Mock(side_effect=to_agraph)
)
yield mock_Image, mock_to_agraph, image_out |
Monkeypatch logic for making the DAG.plot() work without calling
pygraphviz and checking calls are done with the right arguments
| monkeypatch_plot | python | ploomber/ploomber | tests/dag/test_dag.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dag.py | Apache-2.0 |
def test_forced_render_does_not_call_is_outdated(monkeypatch):
"""
For products whose metadata is stored remotely, checking status is an
expensive operation. Make sure forced render does not call
Product._is_outdated
"""
dag = DAG()
t1 = PythonCallable(touch_root, File("1.txt"), dag, name=1)
t2 = PythonCallable(touch, File("2.txt"), dag, name=2)
t1 >> t2
def _is_outdated(self, outdated_by_code):
raise ValueError(f"Called _is_outdated on {self}")
monkeypatch.setattr(File, "_is_outdated", _is_outdated)
dag.render(force=True) |
For products whose metadata is stored remotely, checking status is an
expensive operation. Make sure forced render does not call
Product._is_outdated
| test_forced_render_does_not_call_is_outdated | python | ploomber/ploomber | tests/dag/test_dag.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dag.py | Apache-2.0 |
def test_dag_functions_do_not_fetch_metadata(
function_name, executor, tmp_directory, monkeypatch_plot
):
"""
these functions should not look up metadata; since the products do not
exist, the status can be determined without it
"""
product = File("1.txt")
dag = DAG(executor=executor)
PythonCallable(touch_root, product, dag, name=1)
m = Mock(wraps=product.fetch_metadata)
# to make this work with pickle
m.__reduce__ = lambda self: (MagicMock, ())
product.fetch_metadata = m
getattr(dag, function_name)()
# not called
product.fetch_metadata.assert_not_called()
if function_name == "build":
# if building, we should still see the metadata
assert product.metadata._data["stored_source_code"]
assert product.metadata._data["timestamp"] |
these functions should not look up metadata; since the products do not
exist, the status can be determined without it
| test_dag_functions_do_not_fetch_metadata | python | ploomber/ploomber | tests/dag/test_dag.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dag.py | Apache-2.0 |
def test_dag_task_status_life_cycle(executor, tmp_directory):
"""
Check dag and task status along calls to DAG.render and DAG.build.
Although DAG and Task status are automatically updated and propagated
downstream upon calls to render and build, we have to parametrize this
over executors since the object that gets updated might not be the same
one that we declared here (this happens when a task runs in a different
process), hence, it is the executor's responsibility to notify tasks
on success/fail scenarios so downstream tasks are updated correctly
"""
dag = DAG(executor=executor)
t1 = PythonCallable(touch_root, File("ok.txt"), dag, name="t1")
t2 = PythonCallable(failing_root, File("a_file.txt"), dag, name="t2")
t3 = PythonCallable(touch, File("another_file.txt"), dag, name="t3")
t4 = PythonCallable(touch, File("yet_another_file.txt"), dag, name="t4")
t5 = PythonCallable(touch_root, File("file.txt"), dag, name="t5")
t2 >> t3 >> t4
assert dag._exec_status == DAGStatus.WaitingRender
assert {TaskStatus.WaitingRender} == set([t.exec_status for t in dag.values()])
dag.render()
assert dag._exec_status == DAGStatus.WaitingExecution
assert t1.exec_status == TaskStatus.WaitingExecution
assert t2.exec_status == TaskStatus.WaitingExecution
assert t3.exec_status == TaskStatus.WaitingUpstream
assert t4.exec_status == TaskStatus.WaitingUpstream
assert t5.exec_status == TaskStatus.WaitingExecution
try:
dag.build()
except DAGBuildError:
pass
assert dag._exec_status == DAGStatus.Errored
assert t1.exec_status == TaskStatus.Executed
assert t2.exec_status == TaskStatus.Errored
assert t3.exec_status == TaskStatus.Aborted
assert t4.exec_status == TaskStatus.Aborted
assert t5.exec_status == TaskStatus.Executed
dag.render()
assert dag._exec_status == DAGStatus.WaitingExecution
assert t1.exec_status == TaskStatus.Skipped
assert t2.exec_status == TaskStatus.WaitingExecution
assert t3.exec_status == TaskStatus.WaitingUpstream
assert t4.exec_status == TaskStatus.WaitingUpstream
assert t5.exec_status == TaskStatus.Skipped
# TODO: add test when trying to Execute dag with task status
# other than WaitingExecution and WaitingUpstream
Check dag and task status along calls to DAG.render and DAG.build.
Although DAG and Task status are automatically updated and propagated
downstream upon calls to render and build, we have to parametrize this
over executors since the object that gets updated might not be the same
one that we declared here (this happens when a task runs in a different
process), hence, it is the executor's responsibility to notify tasks
on success/fail scenarios so downstream tasks are updated correctly
| test_dag_task_status_life_cycle | python | ploomber/ploomber | tests/dag/test_dag.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dag.py | Apache-2.0 |
def test_logging_handler(executor, tmp_directory):
"""
Note: this test is a bit weird, when executed in isolation it fails,
but when executing the whole file, it works. Not sure why.
Also, this only works on windows/mac when tasks are executed in the same
process. For it to work, we'd have to ensure that the logging objects
are re-configured again in the child process, see this:
https://stackoverflow.com/a/26168432/709975
"""
configurator = DAGConfigurator()
configurator.params.logging_factory = logging_factory
dag = configurator.create()
dag.name = "my_dag"
dag.executor = executor
PythonCallable(touch_root, File("file.txt"), dag)
dag.build()
log = Path("my_dag.log").read_text()
assert "Logging..." in log
assert "This should not appear..." not in log |
Note: this test is a bit weird, when executed in isolation it fails,
but when executing the whole file, it works. Not sure why.
Also, this only works on windows/mac when tasks are executed in the same
process. For it to work, we'd have to ensure that the logging objects
are re-configured again in the child process, see this:
https://stackoverflow.com/a/26168432/709975
| test_logging_handler | python | ploomber/ploomber | tests/dag/test_dagconfigurator.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_dagconfigurator.py | Apache-2.0 |
def test_render_checks_outdated_status_once(monkeypatch, tmp_directory):
"""
_check_is_outdated is an expensive operation and it should only run
once per task
"""
def _make_dag():
dag = DAG(executor=Serial(build_in_subprocess=False))
t1 = PythonCallable(touch_root, File("one.txt"), dag, name="one")
t2 = PythonCallable(touch, File("two.txt"), dag, name="two")
t1 >> t2
return dag
_make_dag().build()
dag = _make_dag()
t1 = dag["one"]
t2 = dag["two"]
monkeypatch.setattr(
t1.product, "_check_is_outdated", Mock(wraps=t1.product._check_is_outdated)
)
monkeypatch.setattr(
t2.product, "_check_is_outdated", Mock(wraps=t2.product._check_is_outdated)
)
# after building for the first time
dag.render()
t1.product._check_is_outdated.assert_called_once()
t2.product._check_is_outdated.assert_called_once() |
_check_is_outdated is an expensive operation and it should only run
once per task
| test_render_checks_outdated_status_once | python | ploomber/ploomber | tests/dag/test_render.py | https://github.com/ploomber/ploomber/blob/master/tests/dag/test_render.py | Apache-2.0 |
def color_mapping():
"""Returns a utility class which can replace keys in strings in the form
"{NAME}"
by their equivalent ASCII codes in the terminal.
Used by tests which check the actual colors output by pytest.
"""
class ColorMapping:
COLORS = {
"red": "\x1b[31m",
"green": "\x1b[32m",
"yellow": "\x1b[33m",
"bold": "\x1b[1m",
"reset": "\x1b[0m",
"kw": "\x1b[94m",
"hl-reset": "\x1b[39;49;00m",
"function": "\x1b[92m",
"number": "\x1b[94m",
"str": "\x1b[33m",
"print": "\x1b[96m",
"end-line": "\x1b[90m\x1b[39;49;00m",
}
RE_COLORS = {k: re.escape(v) for k, v in COLORS.items()}
@classmethod
def format(cls, lines: List[str]) -> List[str]:
"""Straightforward replacement of color names to their ASCII
codes."""
return [line.format(**cls.COLORS) for line in lines]
@classmethod
def format_for_fnmatch(cls, lines: List[str]) -> List[str]:
"""Replace color names for use with LineMatcher.fnmatch_lines"""
return [line.format(**cls.COLORS).replace("[", "[[]") for line in lines]
@classmethod
def format_for_rematch(cls, lines: List[str]) -> List[str]:
"""Replace color names for use with LineMatcher.re_match_lines"""
return [line.format(**cls.RE_COLORS) for line in lines]
return ColorMapping | Returns a utility class which can replace keys in strings in the form
"{NAME}"
by their equivalent ASCII codes in the terminal.
Used by tests which check the actual colors output by pytest.
| color_mapping | python | ploomber/ploomber | tests/io_mod/test_terminalwriter.py | https://github.com/ploomber/ploomber/blob/master/tests/io_mod/test_terminalwriter.py | Apache-2.0 |
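A minimal usage sketch of the fixture above, grounded in the class it returns; the test name is hypothetical and only illustrates requesting the fixture by name:
def test_failure_is_red(color_mapping):
    # format() substitutes the "{NAME}" placeholders with the escape codes in COLORS
    lines = color_mapping.format(["{red}FAILED{reset}"])
    assert lines == ["\x1b[31mFAILED\x1b[0m"]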
def test_task_with_client_and_metaproduct_isnt_outdated_rtrns_waiting_download(
operation, tmp_directory, tmp_path
):
"""
Checking MetaProduct correctly forwards WaitingDownload when calling
MetaProduct._is_outdated
"""
dag = _make_dag_with_metaproduct(with_client=True)
dag.build()
# simulate local outdated tasks
operation(".file.txt.metadata", tmp_path)
operation(".another.txt.metadata", tmp_path)
operation(".root.metadata", tmp_path)
dag = _make_dag_with_metaproduct(with_client=True).render()
assert dag["root"].product._is_outdated() == TaskStatus.WaitingDownload
assert dag["task"].product._is_outdated() == TaskStatus.WaitingDownload
assert set(v.exec_status for v in dag.values()) == {TaskStatus.WaitingDownload} |
Checking MetaProduct correctly forwards WaitingDownload when calling
MetaProduct._is_outdated
| test_task_with_client_and_metaproduct_isnt_outdated_rtrns_waiting_download | python | ploomber/ploomber | tests/products/test_file.py | https://github.com/ploomber/ploomber/blob/master/tests/products/test_file.py | Apache-2.0 |
def test_task_with_client_and_metaproduct_with_some_missing_products(
operation, tmp_directory, tmp_path
):
"""
If local MetaProduct content isn't consistent, it should execute instead of
download
"""
dag = _make_dag_with_metaproduct(with_client=True)
dag.build()
# simulate *some* local outdated tasks
operation(".file.txt.metadata", tmp_path)
operation(".root.metadata", tmp_path)
dag = _make_dag_with_metaproduct(with_client=True).render()
assert dag["root"].product._is_outdated() == TaskStatus.WaitingDownload
assert dag["task"].product._is_outdated() == TaskStatus.WaitingDownload
assert dag["root"].exec_status == TaskStatus.WaitingDownload
assert dag["task"].exec_status == TaskStatus.WaitingDownload |
If local MetaProduct content isn't consistent, it should execute instead of
download
| test_task_with_client_and_metaproduct_with_some_missing_products | python | ploomber/ploomber | tests/products/test_file.py | https://github.com/ploomber/ploomber/blob/master/tests/products/test_file.py | Apache-2.0 |
def test_task_with_client_and_metaproduct_with_some_missing_remote_products(
operation, tmp_directory, tmp_path
):
"""
If remote MetaProduct content isn't consistent, it should execute instead
of download
"""
dag = _make_dag_with_metaproduct(with_client=True)
dag.build()
# simulate *some* local outdated tasks (to force remote metadata lookup)
operation(".file.txt.metadata", tmp_path)
operation(".root.metadata", tmp_path)
# simulate corrupted remote MetaProduct metadata
operation("remote/.file.txt.metadata", tmp_path)
dag = _make_dag_with_metaproduct(with_client=True).render()
assert dag["root"].product._is_outdated() == TaskStatus.WaitingDownload
assert dag["task"].product._is_outdated() is True
assert dag["root"].exec_status == TaskStatus.WaitingDownload
assert dag["task"].exec_status == TaskStatus.WaitingUpstream |
If remote MetaProduct content isn't consistent, it should execute instead
of download
| test_task_with_client_and_metaproduct_with_some_missing_remote_products | python | ploomber/ploomber | tests/products/test_file.py | https://github.com/ploomber/ploomber/blob/master/tests/products/test_file.py | Apache-2.0 |
def test_interface(concrete_class):
"""
Look for unnecessary implemented methods/attributes in MetaProduct,
this helps us keep the API up-to-date if the Product interface changes
"""
allowed_mapping = {
"SQLRelation": {"schema", "name", "kind", "client"},
"SQLiteRelation": {"schema", "name", "kind", "client"},
"PostgresRelation": {"schema", "name", "kind", "client"},
"GenericProduct": {"client", "name"},
# these come from collections.abc.Mapping
"MetaProduct": {"get", "keys", "items", "values", "missing"},
}
allowed = allowed_mapping.get(concrete_class.__name__, {})
assert_no_extra_attributes_in_class(Product, concrete_class, allowed=allowed) |
Look for unnecessary implemented methods/attributes in MetaProduct,
this helps us keep the API up-to-date if the Product interface changes
| test_interface | python | ploomber/ploomber | tests/products/test_product.py | https://github.com/ploomber/ploomber/blob/master/tests/products/test_product.py | Apache-2.0 |
def tmp_nbs_ipynb(tmp_nbs):
"""Modifies the nbs example to have one task with ipynb format"""
# modify the spec so it has one ipynb task
with open("pipeline.yaml") as f:
spec = yaml.safe_load(f)
spec["tasks"][0]["source"] = "load.ipynb"
Path("pipeline.yaml").write_text(yaml.dump(spec))
# generate notebook in ipynb format
jupytext.write(jupytext.read("load.py"), "load.ipynb") | Modifies the nbs example to have one task with ipynb format | tmp_nbs_ipynb | python | ploomber/ploomber | tests/sources/test_notebooksource.py | https://github.com/ploomber/ploomber/blob/master/tests/sources/test_notebooksource.py | Apache-2.0 |
def test_unmodified_function(fn_name, remove_trailing_newline, backup_test_pkg):
"""
This test makes sure the file is not modified if we don't change the
notebook because whitespace is tricky
"""
fn = getattr(functions, fn_name)
path_to_file = Path(inspect.getfile(fn))
content = path_to_file.read_text()
# make sure the trailing newline in the file is not removed accidentally,
# we need it as part of the test
assert content[-1] == "\n", "expected a trailing newline character"
if remove_trailing_newline:
path_to_file.write_text(content[:-1])
functions_reloaded = importlib.reload(functions)
fn = getattr(functions_reloaded, fn_name)
fn_source_original = inspect.getsource(fn)
mod_source_original = path_to_file.read_text()
with CallableInteractiveDeveloper(
getattr(functions_reloaded, fn_name), {"upstream": None, "product": None}
) as tmp_nb:
pass
functions_edited = importlib.reload(functions)
fn_source_new = inspect.getsource(getattr(functions_edited, fn_name))
mod_source_new = path_to_file.read_text()
assert fn_source_original == fn_source_new
assert mod_source_original == mod_source_new
assert not Path(tmp_nb).exists() |
This test makes sure the file is not modified if we don't change the
notebook because whitespace is tricky
| test_unmodified_function | python | ploomber/ploomber | tests/sources/test_python_interact.py | https://github.com/ploomber/ploomber/blob/master/tests/sources/test_python_interact.py | Apache-2.0 |
def test_develop_spec_with_local_functions(
task_name, backup_spec_with_functions, no_sys_modules_cache
):
"""
Check we can develop functions defined locally, the sample project includes
relative imports, which should work when generating the temporary notebook
"""
dag = DAGSpec("pipeline.yaml").to_dag()
dag.render()
fn = dag[task_name].source.primitive
params = dag[task_name].params.to_json_serializable()
if sys.platform == "win32":
# edge case, we need this to correctly parametrize the notebook
# when running the test on windows
params["product"] = str(params["product"]).replace("\\", "\\\\")
with CallableInteractiveDeveloper(fn, params) as tmp_nb:
pm.execute_notebook(tmp_nb, tmp_nb) |
Check we can develop functions defined locally, the sample project includes
relative imports, which should work when generating the temporary notebook
| test_develop_spec_with_local_functions | python | ploomber/ploomber | tests/sources/test_python_interact.py | https://github.com/ploomber/ploomber/blob/master/tests/sources/test_python_interact.py | Apache-2.0 |
def test_interface(concrete_class):
"""
Check that Source concrete classes do not have any extra methods
that are not declared in the Source abstract class
"""
if concrete_class in {PythonCallableSource, NotebookSource}:
# FIXME: these two have a lot of extra methods
pytest.xfail()
allowed_mapping = {}
allowed = allowed_mapping.get(concrete_class.__name__, {})
assert_no_extra_attributes_in_class(Source, concrete_class, allowed=allowed) |
Check that Source concrete classes do not have any extra methods
that are not declared in the Source abstract class
| test_interface | python | ploomber/ploomber | tests/sources/test_sources.py | https://github.com/ploomber/ploomber/blob/master/tests/sources/test_sources.py | Apache-2.0 |
def test_init_from_file_resolves_source_location(tmp_directory, spec, base, cwd):
"""
DAGSpec resolves sources and products to absolute values, this process
should be independent of the current working directory and ignore the
existence of other pipeline.yaml in parent directories
"""
Path("dir").mkdir()
Path("pipeline.yaml").touch()
Path("dir", "pipeline.yaml").touch()
base = Path(base)
Path(base, "task.py").write_text(
"""
# + tags = ["parameters"]
upstream = None
# -
"""
)
Path(base, "pipeline.yaml").write_text(yaml.dump(spec))
path_to_pipeline = Path(base, "pipeline.yaml").resolve()
os.chdir(cwd)
dag = DAGSpec(path_to_pipeline).to_dag()
absolute = str(Path(tmp_directory, base).resolve())
assert all([str(dag[name].source.loc).startswith(absolute) for name in list(dag)])
assert all([str(product).startswith(absolute) for product in get_all_products(dag)]) |
DAGSpec resolves sources and products to absolute values, this process
should be independent of the current working directory and ignore the
existence of other pipeline.yaml in parent directories
| test_init_from_file_resolves_source_location | python | ploomber/ploomber | tests/spec/test_dagspec.py | https://github.com/ploomber/ploomber/blob/master/tests/spec/test_dagspec.py | Apache-2.0 |
def test_spec_with_functions(
lazy_import,
backup_spec_with_functions,
add_current_to_sys_path,
no_sys_modules_cache,
):
"""
Check we can create a pipeline where the task is a function defined in a
local file
"""
spec = DAGSpec("pipeline.yaml", lazy_import=lazy_import)
spec.to_dag().build() |
Check we can create a pipeline where the task is a function defined in a
local file
| test_spec_with_functions | python | ploomber/ploomber | tests/spec/test_dagspec.py | https://github.com/ploomber/ploomber/blob/master/tests/spec/test_dagspec.py | Apache-2.0 |
def test_spec_with_functions_fails(
lazy_import,
backup_spec_with_functions_no_sources,
add_current_to_sys_path,
no_sys_modules_cache,
):
"""
Check we can create a pipeline where the task is a function defined in a
local file but the sources do not exist. Since it is trying to load the
source scripts thanks to lazy_import being bool, it should fail (True
imports the function, while False does not but it checks that it exists)
"""
with pytest.raises(exceptions.DAGSpecInitializationError):
DAGSpec("pipeline.yaml", lazy_import=lazy_import) |
Check we can create a pipeline where the task is a function defined in a
local file but the sources do not exist. Since it is trying to load the
source scripts thanks to lazy_import being bool, it should fail (True
imports the function, while False does not but it checks that it exists)
| test_spec_with_functions_fails | python | ploomber/ploomber | tests/spec/test_dagspec.py | https://github.com/ploomber/ploomber/blob/master/tests/spec/test_dagspec.py | Apache-2.0 |
def test_spec_with_sourceless_functions(
backup_spec_with_functions_no_sources, add_current_to_sys_path, no_sys_modules_cache
):
"""
Check we can create a pipeline where the task is a function defined in a
deep hierarchical structure where the source does not exist
"""
assert DAGSpec("pipeline.yaml", lazy_import="skip") |
Check we can create a pipeline where the task is a function defined in a
deep hierarchical structure where the source does not exist
| test_spec_with_sourceless_functions | python | ploomber/ploomber | tests/spec/test_dagspec.py | https://github.com/ploomber/ploomber/blob/master/tests/spec/test_dagspec.py | Apache-2.0 |
def test_import_tasks_from_does_not_resolve_dotted_paths(tmp_nbs):
"""
Sources defined in a file used in "import_tasks_from" are resolved
if they're paths to files but dotted paths should remain the same
"""
some_tasks = [
{"source": "extra_task.py", "product": "extra.ipynb"},
{"source": "test_pkg.functions.touch_root", "product": "some_file.csv"},
]
Path("some_tasks.yaml").write_text(yaml.dump(some_tasks))
spec_d = yaml.safe_load(Path("pipeline.yaml").read_text())
spec_d["meta"]["import_tasks_from"] = "some_tasks.yaml"
spec = DAGSpec(spec_d, lazy_import=True)
assert "test_pkg.functions.touch_root" in [t["source"] for t in spec["tasks"]] |
Sources defined in a file used in "import_tasks_from" are resolved
if they're paths to files but dotted paths should remain the same
| test_import_tasks_from_does_not_resolve_dotted_paths | python | ploomber/ploomber | tests/spec/test_dagspec.py | https://github.com/ploomber/ploomber/blob/master/tests/spec/test_dagspec.py | Apache-2.0 |
def test_resolve_client(tmp_directory, tmp_imports, product_class, arg):
"""
Test tries to use task-level client, then dag-level client
"""
Path("my_testing_client.py").write_text(
"""
def get():
return 1
"""
)
task = product_class(arg, client=DottedPath("my_testing_client.get"))
assert task.client == 1 |
Test tries to use task-level client, then dag-level client
| test_resolve_client | python | ploomber/ploomber | tests/tasks/test_client.py | https://github.com/ploomber/ploomber/blob/master/tests/tasks/test_client.py | Apache-2.0 |
def test_interface(concrete_class):
"""
Look for unnecessary implemented methods/attributes in Tasks concrete
classes, this helps us keep the API up-to-date
"""
allowed_mapping = {
"Input": {"_true", "_null_update_metadata"},
"Link": {"_false"},
"PythonCallable": {"load", "_interactive_developer", "debug_mode", "develop"},
"SQLScript": {"load"},
"NotebookRunner": {"static_analysis", "debug_mode", "develop"},
"ScriptRunner": {"static_analysis", "develop"},
}
allowed = allowed_mapping.get(concrete_class.__name__, {})
assert_no_extra_attributes_in_class(Task, concrete_class, allowed=allowed) |
Look for unnecessary implemented methods/attributes in Tasks concrete
classes, this helps us keep the API up-to-date
| test_interface | python | ploomber/ploomber | tests/tasks/test_task.py | https://github.com/ploomber/ploomber/blob/master/tests/tasks/test_task.py | Apache-2.0 |
def test_task_init_source_with_placeholder_obj(Task, prod, source):
"""
Testing we can initialize a task with a Placeholder as the source argument
"""
dag = DAG()
dag.clients[Task] = Mock()
dag.clients[type(prod)] = Mock()
Task(Placeholder(source), prod, dag, name="task") |
Testing we can initialize a task with a Placeholder as the source argument
| test_task_init_source_with_placeholder_obj | python | ploomber/ploomber | tests/tasks/test_tasks.py | https://github.com/ploomber/ploomber/blob/master/tests/tasks/test_tasks.py | Apache-2.0 |
def fixture_backup(source):
"""
Similar to fixture_tmp_dir but backs up the content instead
"""
def decorator(function):
@wraps(function)
def wrapper():
old = os.getcwd()
backup = tempfile.mkdtemp()
root = _path_to_tests() / "assets" / source
shutil.copytree(str(root), str(Path(backup, source)))
os.chdir(root)
yield root
os.chdir(old)
shutil.rmtree(str(root))
shutil.copytree(str(Path(backup, source)), str(root))
shutil.rmtree(backup)
return pytest.fixture(wrapper)
return decorator |
Similar to fixture_tmp_dir but backs up the content instead
| fixture_backup | python | ploomber/ploomber | testutils/testutils.py | https://github.com/ploomber/ploomber/blob/master/testutils/testutils.py | Apache-2.0 |
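A short usage sketch of the decorator above; the asset directory name "spec-with-functions" is an assumption made up for illustration:
# The decorated (empty) function only supplies the fixture's name via @wraps;
# the wrapper chdirs into tests/assets/spec-with-functions and restores the
# backup once the test finishes.
@fixture_backup("spec-with-functions")
def backup_spec_with_functions():
    pass
def test_spec(backup_spec_with_functions):
    ...  # runs with the asset directory as cwd; changes are rolled back afterwards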
def fixture_tmp_dir(source, **kwargs):
"""
A lot of our fixtures are copying a few files into a temporary location,
making that location the current working directory and deleting after
the test is done. This decorator allows us to build such fixture
"""
# NOTE: I tried not making this a decorator and just do:
# some_fixture = factory('some/path')
# but didn't work
def decorator(function):
@wraps(function)
def wrapper():
old = os.getcwd()
tmp_dir = tempfile.mkdtemp()
tmp = Path(tmp_dir, "content")
# we have to add extra folder content/, otherwise copytree
# complains
shutil.copytree(str(source), str(tmp))
os.chdir(str(tmp))
yield tmp
# some tests create sample git repos, if we are on windows, we
# need to change permissions to be able to delete the files
_fix_all_dot_git_permissions(tmp)
os.chdir(old)
shutil.rmtree(tmp_dir)
return pytest.fixture(wrapper, **kwargs)
return decorator |
A lot of our fixtures are copying a few files into a temporary location,
making that location the current working directory and deleting after
the test is done. This decorator allows us to build such fixture
| fixture_tmp_dir | python | ploomber/ploomber | testutils/testutils.py | https://github.com/ploomber/ploomber/blob/master/testutils/testutils.py | Apache-2.0 |
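And the companion factory in use; the source path below is hypothetical, and any extra keyword arguments are forwarded to pytest.fixture:
# Copies the given directory into a temporary location, chdirs into it for the
# duration of the test, then deletes the copy.
@fixture_tmp_dir(_path_to_tests() / "assets" / "nbs", scope="function")
def tmp_nbs():
    pass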
def set_outflow_BC(self, pores, mode='add'):
r"""
Adds outflow boundary condition to the selected pores
Parameters
----------
pores : array_like
The pore indices where the condition should be applied
mode : str, optional
Controls how the boundary conditions are applied. The default value
is 'add'. For definition of various modes, see the
docstring for ``set_BC``.
Notes
-----
Outflow condition means that the gradient of the solved quantity
does not change, i.e. is 0.
"""
pores = self._parse_indices(pores)
# Calculating A[i,i] values to ensure the outflow condition
network = self.project.network
phase = self.project[self.settings['phase']]
throats = network.find_neighbor_throats(pores=pores)
C12 = network.conns[throats]
P12 = phase[self.settings['pressure']][C12]
gh = phase[self.settings['hydraulic_conductance']][throats]
Q12 = -gh * np.diff(P12, axis=1).squeeze()
Qp = np.zeros(self.Np)
np.add.at(Qp, C12[:, 0], -Q12)
np.add.at(Qp, C12[:, 1], Q12)
self.set_BC(pores=pores, bcvalues=Qp[pores], bctype='outflow',
mode=mode) |
Adds outflow boundary condition to the selected pores
Parameters
----------
pores : array_like
The pore indices where the condition should be applied
mode : str, optional
Controls how the boundary conditions are applied. The default value
is 'add'. For definition of various modes, see the
docstring for ``set_BC``.
Notes
-----
Outflow condition means that the gradient of the solved quantity
does not change, i.e. is 0.
| set_outflow_BC | python | PMEAL/OpenPNM | openpnm/algorithms/_advection_diffusion.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_advection_diffusion.py | MIT |
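A hedged end-to-end sketch of where this BC is used. The network/phase setup, the model-collection names, op.phase.Water, StokesFlow and set_value_BC are assumptions about the surrounding OpenPNM API and may differ between versions; only the set_outflow_BC call mirrors the method above:
import openpnm as op

pn = op.network.Cubic(shape=[10, 10, 1], spacing=1e-4)
pn.add_model_collection(op.models.collections.geometry.spheres_and_cylinders)
pn.regenerate_models()

water = op.phase.Water(network=pn)
water.add_model_collection(op.models.collections.physics.basic)
water.regenerate_models()

# The outflow BC needs a pressure field and hydraulic conductances, since it
# computes the rates Q12 from them (see the method body above)
sf = op.algorithms.StokesFlow(network=pn, phase=water)
sf.set_value_BC(pores=pn.pores('left'), values=2000.0)
sf.set_value_BC(pores=pn.pores('right'), values=0.0)
sf.run()
water['pore.pressure'] = sf.x

ad = op.algorithms.AdvectionDiffusion(network=pn, phase=water)
ad.set_value_BC(pores=pn.pores('left'), values=1.0)
ad.set_outflow_BC(pores=pn.pores('right'))
ad.run()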
def _apply_BCs(self):
r"""
Applies Dirichlet, Neumann, and outflow BCs in order
"""
# Apply Dirichlet and rate BCs
super()._apply_BCs()
if 'pore.bc.outflow' not in self.keys():
return
# Apply outflow BC
diag = self.A.diagonal()
ind = np.isfinite(self['pore.bc.outflow'])
diag[ind] += self['pore.bc.outflow'][ind]
self.A.setdiag(diag) |
Applies Dirichlet, Neumann, and outflow BCs in order
| _apply_BCs | python | PMEAL/OpenPNM | openpnm/algorithms/_advection_diffusion.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_advection_diffusion.py | MIT |
def iterative_props(self):
r"""
Finds and returns properties that need to be iterated while
running the algorithm.
"""
import networkx as nx
phase = self.project[self.settings.phase]
# Generate global dependency graph
dg = nx.compose_all([x.models.dependency_graph(deep=True)
for x in [phase]])
variable_props = self.settings["variable_props"].copy()
variable_props.add(self.settings["quantity"])
base = list(variable_props)
# Find all props downstream that depend on base props
dg = nx.DiGraph(nx.edge_dfs(dg, source=base))
if len(dg.nodes) == 0:
return []
iterative_props = list(nx.dag.lexicographical_topological_sort(dg))
# "variable_props" should be in the returned list but not "quantity"
if self.settings.quantity in iterative_props:
iterative_props.remove(self.settings["quantity"])
return iterative_props |
Finds and returns properties that need to be iterated while
running the algorithm.
| iterative_props | python | PMEAL/OpenPNM | openpnm/algorithms/_algorithm.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_algorithm.py | MIT |
def _update_iterative_props(self, iterative_props=None):
"""
Regenerates phase, geometries, and physics objects using the
current value of ``quantity``.
Notes
-----
The algorithm directly writes the value of 'quantity' into the
phase, which is against one of the OpenPNM rules of objects not
being able to write into each other.
"""
if iterative_props is None:
iterative_props = self.iterative_props
if not iterative_props:
return
# Fetch objects associated with the algorithm
phase = self.project[self.settings.phase]
# Update 'quantity' on phase with the most recent value
quantity = self.settings['quantity']
phase[quantity] = self.x
# Regenerate all associated objects
phase.regenerate_models(propnames=iterative_props) |
Regenerates phase, geometries, and physics objects using the
current value of ``quantity``.
Notes
-----
The algorithm directly writes the value of 'quantity' into the
phase, which is against one of the OpenPNM rules of objects not
being able to write into each other.
| _update_iterative_props | python | PMEAL/OpenPNM | openpnm/algorithms/_algorithm.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_algorithm.py | MIT |
def set_BC(self, pores=None, bctype=[], bcvalues=[], mode='add'):
r"""
The main method for setting and adjusting boundary conditions.
This method is called by other more convenient wrapper functions like
``set_value_BC``.
Parameters
----------
pores : array_like
The pores where the boundary conditions should be applied. If
``None`` is given then *all* pores are assumed. This is useful
when ``mode='remove'``.
bctype : str
Specifies the type or the name of boundary condition to apply. This
can be anything, but normal options are 'rate' and 'value'. If
an empty list is provided, then all bc types will be assumed. This
is useful for clearing all bcs if ``mode='remove'`` and ``pores=
None``.
bcvalues : int or array_like
The boundary value to apply, such as concentration or rate.
If a single value is given, it's assumed to apply to all
locations. Different values can be applied to all pores in the
form of an array of the same length as ``pores``. Note that using
``mode='add'`` and ``values=np.nan`` is equivalent to removing
bcs from the given ``pores``.
mode : str or list of str, optional
Controls how the boundary conditions are applied. Options are:
============ =====================================================
mode meaning
============ =====================================================
'add' (default) Adds the supplied boundary conditions to
the given locations. Raises an exception if values
of any type already exist in the given locations.
'overwrite' Adds supplied boundary conditions to the given
locations, including overwriting conditions of the
given type or any other type that may be present in
the given locations.
'remove' Removes boundary conditions of the specified type
from the specified locations. If ``bctype`` is not
specified then *all* types are removed. If no
locations are given then values are removed from
*all* locations.
============ =====================================================
If a list of strings is provided, then each mode in the list is
handled in order, so that ``['remove', 'add']`` will give the same
results as ``'overwrite'``.
Notes
-----
It is not possible to have multiple boundary conditions for a
specified location in one algorithm.
"""
# If a list of modes was given, handle them each in order
if not isinstance(mode, str):
for item in mode:
self.set_BC(pores=pores, bctype=bctype,
bcvalues=bcvalues, mode=item)
return
# If a list of bctypes was given, handle them each in order
if len(bctype) == 0:
bctype = self['pore.bc'].keys()
if not isinstance(bctype, str):
for item in bctype:
self.set_BC(pores=pores, bctype=item,
bcvalues=bcvalues, mode=mode)
return
# Begin method
bc_types = list(self['pore.bc'].keys())
other_types = np.setdiff1d(bc_types, bctype).tolist()
mode = self._parse_mode(
mode,
allowed=['overwrite', 'add', 'remove'],
single=True)
# Infer the value that indicates "no bc" based on array dtype
no_bc = get_no_bc(self[f'pore.bc.{bctype}'])
if pores is None:
pores = self.Ps
pores = self._parse_indices(pores)
# Deal with size of the given bcvalues
values = np.array(bcvalues)
if values.size == 1:
values = np.ones_like(pores, dtype=values.dtype)*values
# Ensure values and pores are the same size
if values.size > 1 and values.size != pores.size:
raise Exception('The number of values must match the number of locations')
# Finally adjust the BCs according to mode
if mode == 'add':
mask = np.ones_like(pores, dtype=bool) # Indices of pores to keep
for item in self['pore.bc'].keys(): # Remove pores that are taken
mask[isfinite(self[f'pore.bc.{item}'][pores])] = False
if not np.all(mask): # Raise exception if some conflicts found
msg = "Some of the given locations already have BCs, " \
+ "either use mode='remove' first or " \
+ "use mode='overwrite' instead"
raise Exception(msg)
self[f"pore.bc.{bctype}"][pores[mask]] = values[mask]
elif mode == 'overwrite':
# Put given values in specified BC, sort out conflicts below
self[f"pore.bc.{bctype}"][pores] = values
# Collect indices that are present for other BCs for removal
mask = np.ones_like(pores, dtype=bool)
for item in other_types:
self[f"pore.bc.{item}"][pores] = get_no_bc(self[f"pore.bc.{item}"])
# Make a note of any BCs values of other types
mask[isfinite(self[f'pore.bc.{item}'][pores])] = False
if not np.all(mask): # Warn that other values were overwritten
msg = 'Some of the given locations already have BCs of ' \
+ 'another type, these will be overwritten'
logger.warning(msg)
elif mode == 'remove':
self[f"pore.bc.{bctype}"][pores] = no_bc |
The main method for setting and adjusting boundary conditions.
This method is called by other more convenient wrapper functions like
``set_value_BC``.
Parameters
----------
pores : array_like
The pores where the boundary conditions should be applied. If
``None`` is given then *all* pores are assumed. This is useful
when ``mode='remove'``.
bctype : str
Specifies the type or the name of boundary condition to apply. This
can be anything, but normal options are 'rate' and 'value'. If
an empty list is provided, then all bc types will be assumed. This
is useful for clearing all bcs if ``mode='remove'`` and ``pores=
None``.
bcvalues : int or array_like
The boundary value to apply, such as concentration or rate.
If a single value is given, it's assumed to apply to all
locations. Different values can be applied to all pores in the
form of an array of the same length as ``pores``. Note that using
``mode='add'`` and ``values=np.nan`` is equivalent to removing
bcs from the given ``pores``.
mode : str or list of str, optional
Controls how the boundary conditions are applied. Options are:
============ =====================================================
mode meaning
============ =====================================================
'add' (default) Adds the supplied boundary conditions to
the given locations. Raises an exception if values
of any type already exist in the given locations.
'overwrite' Adds supplied boundary conditions to the given
locations, including overwriting conditions of the
given type or any other type that may be present in
the given locations.
'remove' Removes boundary conditions of the specified type
from the specified locations. If ``bctype`` is not
specified then *all* types are removed. If no
locations are given then values are removed from
*all* locations.
============ =====================================================
If a list of strings is provided, then each mode in the list is
handled in order, so that ``['remove', 'add']`` will give the same
results as ``'overwrite'``.
Notes
-----
It is not possible to have multiple boundary conditions for a
specified location in one algorithm.
| set_BC | python | PMEAL/OpenPNM | openpnm/algorithms/_algorithm.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_algorithm.py | MIT |
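A small sketch of the mode semantics documented above; the network/phase setup is an assumption and FickianDiffusion is only a convenient stand-in for any transport algorithm, while the set_BC calls follow the signature shown in this entry:
import openpnm as op

pn = op.network.Cubic(shape=[5, 5, 1])
air = op.phase.Air(network=pn)
alg = op.algorithms.FickianDiffusion(network=pn, phase=air)

# 'add' writes values into empty locations and raises if any BC already exists there
alg.set_BC(pores=pn.pores('left'), bctype='value', bcvalues=1.0, mode='add')
# 'overwrite' replaces BCs of any type that were present in those pores
alg.set_BC(pores=pn.pores('left'), bctype='rate', bcvalues=1e-10, mode='overwrite')
# 'remove' clears the given type; with pores=None it clears it everywhere
alg.set_BC(bctype='rate', mode='remove')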
def reset(self):
r"""
Resets the algorithm's main results so that it can be re-run
"""
self['pore.invaded'] = False
self['throat.invaded'] = False
# self['pore.residual'] = False
# self['throat.residual'] = False
self['pore.trapped'] = False
self['throat.trapped'] = False
self['pore.invasion_pressure'] = np.inf
self['throat.invasion_pressure'] = np.inf
self['pore.invasion_sequence'] = -1
self['throat.invasion_sequence'] = -1 |
Resets the algorithm's main results so that it can be re-run
| reset | python | PMEAL/OpenPNM | openpnm/algorithms/_drainage.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_drainage.py | MIT |
def run(self, pressures=25):
r"""
Runs the simulation for the pressure points
Parameters
----------
pressures : int or ndarray
The number of pressure steps to apply, or an array of specific
points
"""
if isinstance(pressures, int):
phase = self.project[self.settings.phase]
hi = 1.25*phase[self.settings.throat_entry_pressure].max()
low = 0.80*phase[self.settings.throat_entry_pressure].min()
pressures = np.logspace(np.log10(low), np.log10(hi), pressures)
pressures = np.array(pressures, ndmin=1)
msg = 'Performing drainage simulation'
for i, p in enumerate(tqdm(pressures, msg)):
self._run_special(p)
pmask = self['pore.invaded'] * (self['pore.invasion_pressure'] == np.inf)
self['pore.invasion_pressure'][pmask] = p
self['pore.invasion_sequence'][pmask] = i
tmask = self['throat.invaded'] * (self['throat.invasion_pressure'] == np.inf)
self['throat.invasion_pressure'][tmask] = p
self['throat.invasion_sequence'][tmask] = i
# If any outlets were specified, evaluate trapping
if np.any(self['pore.bc.outlet']):
self.apply_trapping() |
Runs the simulation for the pressure points
Parameters
----------
pressures : int or ndarray
The number of pressure steps to apply, or an array of specific
points
| run | python | PMEAL/OpenPNM | openpnm/algorithms/_drainage.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_drainage.py | MIT |
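A hedged sketch of a full drainage simulation. The geometry collection, op.phase.Mercury, the washburn model and set_inlet_BC on this class are assumptions about the surrounding library and may differ between versions; run(pressures=25) follows the method above:
import openpnm as op

pn = op.network.Cubic(shape=[15, 15, 15], spacing=1e-5)
pn.add_model_collection(op.models.collections.geometry.spheres_and_cylinders)
pn.regenerate_models()

hg = op.phase.Mercury(network=pn)
hg.add_model(propname='throat.entry_pressure',
             model=op.models.physics.capillary_pressure.washburn)

drn = op.algorithms.Drainage(network=pn, phase=hg)
drn.set_inlet_BC(pores=pn.pores('left'))
drn.run(pressures=25)  # 25 log-spaced pressure steps, as described above
# the invasion history is stored on the algorithm object
Pc_inv = drn['pore.invasion_pressure']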
def apply_trapping(self):
r"""
Adjusts the invasion history of pores and throats that are trapped.
Returns
-------
This function returns nothing, but the following adjustments are made
to the data on the object for trapped pores and throats:
* ``'pore/throat.trapped'`` is set to ``True``
* ``'pore/throat.invaded'`` is set to ``False``
* ``'pore/throat.invasion_pressure'`` is set to ``np.inf``
* ``'pore/throat.invasion_sequence'`` is set to ``-1``
Notes
-----
This search proceeds by the following 3 steps:
1. A site percolation is applied to *uninvaded* pores and they are set
to trapped if they belong to a cluster that is not connected to the
outlets.
2. All throats which were invaded at a pressure *higher* than either
of its two neighboring pores are set to trapped, regardless of
whether the pores themselves are trapped.
3. All throats which are connected to trapped pores are set to trapped
as these cannot be invaded since the fluid they contain cannot escape.
"""
pseq = self['pore.invasion_pressure']
tseq = self['throat.invasion_pressure']
# Firstly, find any throats who were invaded at a pressure higher than
# either of its two neighboring pores
temp = (pseq[self.network.conns].T > tseq).T
self['throat.trapped'][np.all(temp, axis=1)] = True
# Now scan through and use site percolation to find other trapped
# clusters of pores
for p in np.unique(pseq):
s, b = site_percolation(conns=self.network.conns,
occupied_sites=pseq > p)
# Identify cluster numbers connected to the outlets
clusters = np.unique(s[self['pore.bc.outlet']])
# Find ALL throats connected to any trapped site, since these
# throats must also be trapped, and update their cluster numbers
Ts = self.network.find_neighbor_throats(pores=s >= 0)
b[Ts] = np.amax(s[self.network.conns], axis=1)[Ts]
# Finally, mark pores and throats as trapped if their cluster
# numbers are NOT connected to the outlets
self['pore.trapped'] += np.isin(s, clusters, invert=True)*(s >= 0)
self['throat.trapped'] += np.isin(b, clusters, invert=True)*(b >= 0)
# Use the identified trapped pores and throats to update the other
# data on the object accordingly
# self['pore.trapped'][self['pore.residual']] = False
# self['throat.trapped'][self['throat.residual']] = False
self['pore.invaded'][self['pore.trapped']] = False
self['throat.invaded'][self['throat.trapped']] = False
self['pore.invasion_pressure'][self['pore.trapped']] = np.inf
self['throat.invasion_pressure'][self['throat.trapped']] = np.inf
self['pore.invasion_sequence'][self['pore.trapped']] = -1
self['throat.invasion_sequence'][self['throat.trapped']] = -1 |
Adjusts the invasion history of pores and throats that are trapped.
Returns
-------
This function returns nothing, but the following adjustments are made
to the data on the object for trapped pores and throats:
* ``'pore/throat.trapped'`` is set to ``True``
* ``'pore/throat.invaded'`` is set to ``False``
* ``'pore/throat.invasion_pressure'`` is set to ``np.inf``
* ``'pore/throat.invasion_sequence'`` is set to ``-1``
Notes
-----
This search proceeds by the following 3 steps:
1. A site percolation is applied to *uninvaded* pores and they are set
to trapped if they belong to a cluster that is not connected to the
outlets.
2. All throats which were invaded at a pressure *higher* than either
of its two neighboring pores are set to trapped, regardless of
whether the pores themselves are trapped.
3. All throats which are connected to trapped pores are set to trapped
as these cannot be invaded since the fluid they contain cannot escape.
| apply_trapping | python | PMEAL/OpenPNM | openpnm/algorithms/_drainage.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_drainage.py | MIT |
def set_inlet_BC(self, pores=None, mode='add'):
r"""
Specifies which pores are treated as inlets for the invading phase
Parameters
----------
pores : ndarray
The indices of the pores from which the invading fluid invasion
should start
mode : str or list of str, optional
Controls how the boundary conditions are applied. Options are:
============ =====================================================
mode meaning
============ =====================================================
'add' (default) Adds the supplied boundary conditions to
the given locations. Raises an exception if values
of any type already exist in the given locations.
'overwrite' Adds supplied boundary conditions to the given
locations, including overwriting conditions of the
given type or any other type that may be present in
the given locations.
'remove' Removes boundary conditions of the specified type
from the specified locations. If ``bctype`` is not
specified then *all* types are removed. If no
locations are given then values are removed from
*all* locations.
============ =====================================================
If a list of strings is provided, then each mode in the list is
handled in order, so that ``['remove', 'add']`` will give the same
results as ``'overwrite'``.
"""
self.set_BC(pores=pores, bcvalues=True, bctype='inlet', mode=mode)
self.reset()
self['pore.invasion_sequence'][self['pore.bc.inlet']] = 0 |
Specifies which pores are treated as inlets for the invading phase
Parameters
----------
pores : ndarray
The indices of the pores from which the invading fluid invasion
should start
mode : str or list of str, optional
Controls how the boundary conditions are applied. Options are:
============ =====================================================
mode meaning
============ =====================================================
'add' (default) Adds the supplied boundary conditions to
the given locations. Raises an exception if values
of any type already exist in the given locations.
'overwrite' Adds supplied boundary conditions to the given
locations, including overwriting conditions of the
given type or any other type that may be present in
the given locations.
'remove' Removes boundary conditions of the specified type
from the specified locations. If ``bctype`` is not
specified then *all* types are removed. If no
locations are given then values are removed from
*all* locations.
============ =====================================================
If a list of strings is provided, then each mode in the list is
handled in order, so that ``['remove', 'add']`` will give the same
results as ``'overwrite'``.
| set_inlet_BC | python | PMEAL/OpenPNM | openpnm/algorithms/_invasion_percolation.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_invasion_percolation.py | MIT |
def run(self):
r"""
Performs the algorithm for the given number of steps
"""
# Setup arrays and info
# TODO: This should be called conditionally so that it doesn't
# overwrite existing data when doing a few steps at a time
self._run_setup()
n_steps = np.inf
# Create incidence matrix for use in _run_accelerated which is jit
im = self.network.create_incidence_matrix(fmt='csr')
# Perform initial analysis on input pores
Ts = self.project.network.find_neighbor_throats(pores=self['pore.bc.inlet'])
t_start = self['throat.order'][Ts]
t_inv, p_inv, p_inv_t = \
_run_accelerated(
t_start=t_start,
t_sorted=self['throat.sorted'],
t_order=self['throat.order'],
t_inv=self['throat.invasion_sequence'],
p_inv=self['pore.invasion_sequence'],
p_inv_t=np.zeros_like(self['pore.invasion_sequence']),
conns=self.project.network['throat.conns'],
idx=im.indices,
indptr=im.indptr,
n_steps=n_steps)
# Transfer results onto algorithm object
self['throat.invasion_sequence'] = t_inv
self['pore.invasion_sequence'] = p_inv
self['throat.invasion_pressure'] = self['throat.entry_pressure']
self['pore.invasion_pressure'] = self['throat.entry_pressure'][p_inv_t]
# Set invasion pressure of inlets to 0
self['pore.invasion_pressure'][self['pore.invasion_sequence'] == 0] = 0.0
# Set invasion sequence and pressure of any residual pores/throats to 0
# self['throat.invasion_sequence'][self['throat.residual']] = 0
# self['pore.invasion_sequence'][self['pore.residual']] = 0 |
Performs the algorithm for the given number of steps
| run | python | PMEAL/OpenPNM | openpnm/algorithms/_invasion_percolation.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_invasion_percolation.py | MIT |
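A similar sketch for invasion percolation; the network/phase setup is again an assumption, while set_inlet_BC, run, set_outlet_BC and apply_trapping are the methods documented in this file:
import openpnm as op

pn = op.network.Cubic(shape=[20, 20, 1], spacing=1e-5)
pn.add_model_collection(op.models.collections.geometry.spheres_and_cylinders)
pn.regenerate_models()

water = op.phase.Water(network=pn)
water.add_model(propname='throat.entry_pressure',
                model=op.models.physics.capillary_pressure.washburn)

ip = op.algorithms.InvasionPercolation(network=pn, phase=water)
ip.set_inlet_BC(pores=pn.pores('left'))
ip.run()
# optionally mark outlets and remove trapped clusters from the invasion history
ip.set_outlet_BC(pores=pn.pores('right'))
ip.apply_trapping()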
def pc_curve(self):
r"""
Get the percolation data as the non-wetting phase saturation vs the
capillary pressure.
"""
net = self.project.network
pvols = net[self.settings['pore_volume']]
tvols = net[self.settings['throat_volume']]
tot_vol = np.sum(pvols) + np.sum(tvols)
# Normalize
pvols /= tot_vol
tvols /= tot_vol
# Remove trapped volume
pmask = self['pore.invasion_sequence'] >= 0
tmask = self['throat.invasion_sequence'] >= 0
pvols = pvols[pmask]
tvols = tvols[tmask]
pseq = self['pore.invasion_sequence'][pmask]
tseq = self['throat.invasion_sequence'][tmask]
pPc = self['pore.invasion_pressure'][pmask]
tPc = self['throat.invasion_pressure'][tmask]
vols = np.concatenate((pvols, tvols))
seqs = np.concatenate((pseq, tseq))
Pcs = np.concatenate((pPc, tPc))
data = np.rec.fromarrays([seqs, vols, Pcs],
formats=['i', 'f', 'f'],
names=['seq', 'vol', 'Pc'])
data.sort(axis=0, order='seq')
pc_curve = namedtuple('pc_curve', ('pc', 'snwp'))
data = pc_curve(data.Pc, np.cumsum(data.vol))
return data |
Get the percolation data as the non-wetting phase saturation vs the
capillary pressure.
| pc_curve | python | PMEAL/OpenPNM | openpnm/algorithms/_invasion_percolation.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_invasion_percolation.py | MIT |
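Continuing the invasion-percolation sketch above: the returned namedtuple exposes .pc and .snwp (see the method body), so it can be plotted directly; matplotlib is only an assumed convenience here:
import matplotlib.pyplot as plt

data = ip.pc_curve()
plt.semilogx(data.pc, data.snwp, 'o-')
plt.xlabel('capillary pressure [Pa]')
plt.ylabel('non-wetting phase saturation')
plt.show()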
def apply_trapping(self):
r"""
Adjusts the invasion sequence of pores and throats that are trapped.
This method uses the reverse invasion percolation procedure outlined
by Masson [1].
Returns
-------
This function does not return anything. It adjusts the
``'pore.invasion_sequence'`` and ``'throat.invasion_sequence'`` arrays
on the object by setting trapped pores/throats to ``inf``. It also puts
``True`` values into the ``'pore.trapped'`` and ``'throat.trapped'``
arrays.
Notes
-----
Outlet pores must be specified (using ``set_outlet_BC`` or putting
``True`` values in ``alg['pore.bc.outlet']``) or else an exception is
raised.
References
----------
[1] Masson, Y. https://doi.org/10.1016/j.cageo.2016.02.003
"""
outlets = np.where(self['pore.bc.outlet'])[0]
am = self.network.create_adjacency_matrix(fmt='csr')
inv_seq = self['pore.invasion_sequence']
self['pore.trapped'] = _find_trapped_pores(inv_seq, am.indices,
am.indptr, outlets)
# Update invasion sequence
self['pore.invasion_sequence'][self['pore.trapped']] = -1
# Find which throats are trapped, including throats which were invaded
# after both of it's pores were invaded (hence have a unique invasion
# sequence number).
pmask = self['pore.invasion_sequence'][self.network.conns]
tmask = np.stack((self['throat.invasion_sequence'],
self['throat.invasion_sequence'])).T
hits = ~np.any(pmask == tmask, axis=1)
self['throat.trapped'] = hits
self['throat.invasion_sequence'][hits] = -1
# Make some adjustments
Pmask = self['pore.invasion_sequence'] < 0
Tmask = self['throat.invasion_sequence'] < 0
self['pore.invasion_sequence'] = \
self['pore.invasion_sequence'].astype(float)
self['pore.invasion_sequence'][Pmask] = np.inf
self['throat.invasion_sequence'] = \
self['throat.invasion_sequence'].astype(float)
self['throat.invasion_sequence'][Tmask] = np.inf |
Adjusts the invasion sequence of pores and throats that are trapped.
This method uses the reverse invasion percolation procedure outlined
by Masson [1].
Returns
-------
This function does not return anything. It adjusts the
``'pore.invasion_sequence'`` and ``'throat.invasion_sequence'`` arrays
on the object by setting trapped pores/throats to ``inf``. It also puts
``True`` values into the ``'pore.trapped'`` and ``'throat.trapped'``
arrays.
Notes
-----
Outlet pores must be specified (using ``set_outlet_BC`` or putting
``True`` values in ``alg['pore.bc.outlet']``) or else an exception is
raised.
References
----------
[1] Masson, Y. https://doi.org/10.1016/j.cageo.2016.02.003
| apply_trapping | python | PMEAL/OpenPNM | openpnm/algorithms/_invasion_percolation.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_invasion_percolation.py | MIT |
def _run_accelerated(t_start, t_sorted, t_order, t_inv, p_inv, p_inv_t,
conns, idx, indptr, n_steps): # pragma: no cover
r"""
Numba-jitted run method for InvasionPercolation class.
Notes
-----
``idx`` and ``indptr`` are properties of the network's incidence
matrix, and are used to quickly find neighbor throats.
Numba doesn't like foreign data types (i.e. Network), and so
``find_neighbor_throats`` method cannot be called in a jitted method.
Nested wrapper is for performance reasons (reduced OpenPNM import
time) due to the local numba import
"""
# TODO: The following line is supposed to be numba's new list, but the
# heap does not work with this
# queue = List(t_start)
queue = list(t_start)
hq.heapify(queue)
count = 1
while count < (n_steps + 1):
# Find throat at the top of the queue
t = hq.heappop(queue)
# Extract actual throat number
t_next = t_sorted[t]
t_inv[t_next] = count
# If throat is duplicated
while len(queue) > 0 and queue[0] == t:
_ = hq.heappop(queue)
# Find pores connected to newly invaded throat from am in coo format
Ps = conns[t_next]
# Remove already invaded pores from Ps
Ps = Ps[p_inv[Ps] < 0]
# If either of the neighboring pores are uninvaded (-1), set it to
# invaded and add its neighboring throats to the queue
if len(Ps) > 0:
p_inv[Ps] = count
p_inv_t[Ps] = t_next
for i in Ps:
# Get neighboring throat numbers from im in csr format
Ts = idx[indptr[i]:indptr[i+1]]
# Keep only throats which are uninvaded
Ts = Ts[t_inv[Ts] < 0]
for i in Ts: # Add throat to the queue
hq.heappush(queue, t_order[i])
count += 1
if len(queue) == 0:
break
return t_inv, p_inv, p_inv_t |
Numba-jitted run method for InvasionPercolation class.
Notes
-----
``idx`` and ``indptr`` are properties of the network's incidence
matrix, and are used to quickly find neighbor throats.
Numba doesn't like foreign data types (i.e. Network), and so
``find_neighbor_throats`` method cannot be called in a jitted method.
Nested wrapper is for performance reasons (reduced OpenPNM import
time) due to the local numba import
| _run_accelerated | python | PMEAL/OpenPNM | openpnm/algorithms/_invasion_percolation.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_invasion_percolation.py | MIT |
def set_source(self, pores, propname, mode="add"):
r"""
Applies a given source term to the specified pores
Parameters
----------
pores : array_like
The pore indices where the source term should be applied.
propname : str
The property name of the source term model to be applied.
mode : str
Controls how the sources are applied. Options are:
=========== =====================================================
mode meaning
=========== =====================================================
'add' (default) Adds supplied source term to already
existing ones.
'remove' Deletes given source term from the specified
locations.
'clear' Deletes given source term from all locations (ignores
the ``pores`` argument).
=========== =====================================================
Notes
-----
Source terms cannot be applied in pores where boundary conditions
have already been set. Attempting to do so will result in an error
being raised.
"""
# If a list of modes was sent, do them each in order
if isinstance(mode, list):
for item in mode:
self.set_source(pores=pores, propname=propname, mode=item)
return
propname = self._parse_prop(propname, "pore")
# Check if any BC is already set in the same locations
locs_BC = np.zeros(self.Np, dtype=bool)
for item in self["pore.bc"].keys():
locs_BC |= np.isfinite(self[f"pore.bc.{item}"])  # accumulate across all bc types
if np.any(locs_BC[pores]):
raise Exception("BCs present in given pores, can't assign source term")
prop = "pore.source." + propname.split(".", 1)[1]
if mode == "add":
if prop not in self.keys():
self[prop] = False
self[prop][pores] = True
elif mode == "remove":
self[prop][pores] = False
elif mode == "clear":
self[prop] = False
else:
raise Exception(f"Unsupported mode {mode}") |
Applies a given source term to the specified pores
Parameters
----------
pores : array_like
The pore indices where the source term should be applied.
propname : str
The property name of the source term model to be applied.
mode : str
Controls how the sources are applied. Options are:
=========== =====================================================
mode meaning
=========== =====================================================
'add' (default) Adds supplied source term to already
existing ones.
'remove' Deletes given source term from the specified
locations.
'clear' Deletes given source term from all locations (ignores
the ``pores`` argument).
=========== =====================================================
Notes
-----
Source terms cannot be applied in pores where boundary conditions
have already been set. Attempting to do so will result in an error
being raised.
| set_source | python | PMEAL/OpenPNM | openpnm/algorithms/_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_reactive_transport.py | MIT |
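A small sketch of wiring up a source term. 'pore.reaction' is an assumed model name that must already exist on the phase and produce the linearized 'pore.reaction.S1'/'pore.reaction.S2' arrays read by _apply_sources below; the rest of the setup is likewise an assumption:
import openpnm as op

pn = op.network.Cubic(shape=[10, 10, 1])
air = op.phase.Air(network=pn)
alg = op.algorithms.FickianDiffusion(network=pn, phase=air)

# BCs and sources are mutually exclusive per pore, so apply them to different locations
alg.set_value_BC(pores=pn.pores('front'), values=1.0)
alg.set_source(pores=pn.pores('back'), propname='pore.reaction')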
def _apply_sources(self):
"""
Updates ``A`` and ``b``, applying source terms to specified pores.
Notes
-----
Phase and physics objects are also updated before applying source
terms to ensure that source terms values are associated with the
current value of 'quantity'.
"""
try:
phase = self.project[self.settings.phase]
for item in self["pore.source"].keys():
# Fetch linearized values of the source term
Ps = self["pore.source." + item]
S1, S2 = [phase[f"pore.{item}.{Si}"] for Si in ["S1", "S2"]]
# Modify A and b: diag(A) += -S1, b += S2
diag = self.A.diagonal()
diag[Ps] += -S1[Ps]
self.A.setdiag(diag)
self.b[Ps] += S2[Ps]
except KeyError:
pass |
Updates ``A`` and ``b``, applying source terms to specified pores.
Notes
-----
Phase and physics objects are also updated before applying source
terms to ensure that source terms values are associated with the
current value of 'quantity'.
| _apply_sources | python | PMEAL/OpenPNM | openpnm/algorithms/_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_reactive_transport.py | MIT |
def _run_special(self, solver, x0, verbose=None):
r"""
Repeatedly updates ``A``, ``b``, and the solution guess within
according to the applied source term then calls ``_solve`` to
solve the resulting system of linear equations.
Stops when the max-norm of the residual drops by at least
``f_rtol``:
``norm(R_n) < norm(R_0) * f_rtol``
AND
``norm(dx) < norm(x) * x_rtol``
where R_i is the residual at ith iteration, x is the solution at
current iteration, and dx is the change in the solution between two
consecutive iterations. ``f_rtol`` and ``x_rtol`` are defined in
the algorithm's settings under: ``alg.settings['f_rtol']``, and
``alg.settings['x_rtol']``, respectively.
Parameters
----------
x0 : ndarray
Initial guess of the unknown variable
"""
w = self.settings["relaxation_factor"]
maxiter = self.settings["newton_maxiter"]
f_rtol = self.settings["f_rtol"]
x_rtol = self.settings["x_rtol"]
xold = self.x
dx = self.x - xold
condition = TerminationCondition(
f_tol=np.inf, f_rtol=f_rtol, x_rtol=x_rtol, norm=norm
)
tqdm_settings = {
"total": 100,
"desc": f"{self.name} : Newton iterations",
"disable": not verbose,
"file": sys.stdout,
"leave": False,
}
with tqdm(**tqdm_settings) as pbar:
for i in range(maxiter):
res = self._get_residual()
progress = self._get_progress(res)
pbar.update(progress - pbar.n)
is_converged = bool(condition.check(f=res, x=xold, dx=dx))
if is_converged:
pbar.update(100 - pbar.n)
self.soln.is_converged = is_converged
logger.info(f"Solution converged, residual norm: {norm(res):.4e}")
return
super()._run_special(solver=solver, x0=xold, w=w)
dx = self.x - xold
xold = self.x
logger.info(f"Iteration #{i:<4d} | Residual norm: {norm(res):.4e}")
self.soln.num_iter = i + 1
self.soln.is_converged = False
logger.warning(f"{self.name} didn't converge after {maxiter} iterations") |
Repeatedly updates ``A``, ``b``, and the solution guess according
    to the applied source term, then calls ``_solve`` to solve the
    resulting system of linear equations.
    Stops when the max-norm of the residual has dropped below
    ``f_rtol`` times its initial value:
    ``norm(R_n) < norm(R_0) * f_rtol``
    AND
    ``norm(dx) < norm(x) * x_rtol``
    where R_i is the residual at the ith iteration, x is the solution
    at the current iteration, and dx is the change in the solution
    between two consecutive iterations. ``f_rtol`` and ``x_rtol`` are defined in
the algorithm's settings under: ``alg.settings['f_rtol']``, and
``alg.settings['x_rtol']``, respectively.
Parameters
----------
x0 : ndarray
Initial guess of the unknown variable
| _run_special | python | PMEAL/OpenPNM | openpnm/algorithms/_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_reactive_transport.py | MIT |
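The stopping test described above can be written out on its own; a minimal numpy sketch, using the infinity norm to stand in for the max-norm, with made-up tolerances and vectors:

import numpy as np

def is_converged(res, res0, x, dx, f_rtol=1e-6, x_rtol=1e-6):
    norm = lambda v: np.linalg.norm(v, ord=np.inf)
    # Residual has dropped enough AND the iterate has stopped moving
    return (norm(res) < norm(res0) * f_rtol) and (norm(dx) < norm(x) * x_rtol)

res0 = np.array([1.0, 2.0])        # residual at the first Newton iteration
res = np.array([1e-8, 5e-9])       # residual now
x = np.array([3.0, 4.0])           # current solution
dx = np.array([1e-7, -2e-7])       # change since the previous iteration
print(is_converged(res, res0, x, dx))  # True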
def _get_progress(self, res):
"""
Returns an approximate value for completion percent of Newton iterations.
"""
if not hasattr(self, "_f0_norm"):
self._f0_norm = norm(res)
f_rtol = self.settings.f_rtol
norm_reduction = norm(res) / self._f0_norm / f_rtol
progress = (1 - max(np.log10(norm_reduction), 0) / np.log10(1 / f_rtol)) * 100
return max(0, progress) |
Returns an approximate value for completion percent of Newton iterations.
| _get_progress | python | PMEAL/OpenPNM | openpnm/algorithms/_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_reactive_transport.py | MIT |
def _get_residual(self, x=None):
r"""
        Calculates the solution residual for the given ``x`` using the
        following formula:
``R = A * x - b``
"""
if x is None:
x = self.x
return self.A * x - self.b |
Calculates the solution residual for the given ``x`` using the
    following formula:
``R = A * x - b``
| _get_residual | python | PMEAL/OpenPNM | openpnm/algorithms/_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_reactive_transport.py | MIT |
def interpolate(self, x):
"""
Interpolates solution at point 'x'.
Parameters
----------
x : float
Point at which the solution is to be interpolated
Returns
-------
ndarray
Solution interpolated at the given point 'x'
"""
# Cache interpolant to avoid overhead
if not hasattr(self, "_interpolant"):
self._create_interpolant()
return self._interpolant(x) |
Interpolates solution at point 'x'.
Parameters
----------
x : float
Point at which the solution is to be interpolated
Returns
-------
ndarray
Solution interpolated at the given point 'x'
| interpolate | python | PMEAL/OpenPNM | openpnm/algorithms/_solution.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_solution.py | MIT |
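A plausible sketch of this caching pattern, assuming the solution snapshots are stored column-wise against time points (the actual ``_create_interpolant`` implementation is not shown here, so the class below is a toy stand-in):

import numpy as np
from scipy.interpolate import interp1d

class CachedSolution:
    def __init__(self, t, snapshots):
        self.t = np.asarray(t)                  # shape (nt,)
        self.snapshots = np.asarray(snapshots)  # shape (n, nt)

    def _create_interpolant(self):
        self._interpolant = interp1d(self.t, self.snapshots, axis=1)

    def interpolate(self, x):
        # Build the interpolant once, then reuse it on every call
        if not hasattr(self, "_interpolant"):
            self._create_interpolant()
        return self._interpolant(x)

soln = CachedSolution(t=[0.0, 1.0, 2.0],
                      snapshots=[[0.0, 1.0, 4.0],
                                 [0.0, 2.0, 8.0]])
print(soln.interpolate(1.5))  # values interpolated between the stored times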
def run(self, x0, tspan, saveat=None, integrator=None):
"""
Runs the transient algorithm and returns the solution.
Parameters
----------
x0 : ndarray or float
Array (or scalar) containing initial condition values.
tspan : array_like
Tuple (or array) containing the integration time span.
saveat : array_like or float, optional
If an array is passed, it signifies the time points at which
the solution is to be stored, and if a scalar is passed, it
refers to the interval at which the solution is to be stored.
integrator : Integrator, optional
            Integrator object which will be used to do the time stepping.
Can be instantiated using openpnm.integrators module.
Returns
-------
TransientSolution
The solution object, which is basically a numpy array with
the added functionality that it can be called to return the
solution at intermediate times (i.e., those not stored in the
solution object).
"""
logger.info('Running TransientTransport')
if np.isscalar(saveat):
saveat = np.arange(*tspan, saveat)
# FIXME: why do we forcibly add tspan[1] to saveat even if the user
# didn't want to?
if (saveat is not None) and (tspan[1] not in saveat):
saveat = np.hstack((saveat, [tspan[1]]))
integrator = ScipyRK45() if integrator is None else integrator
# Perform pre-solve validations
self._validate_settings()
self._validate_topology_health()
self._validate_linear_system()
        # Write x0 to the algorithm obj (needed by _update_iterative_props)
self['pore.ic'] = x0 = np.ones(self.Np, dtype=float) * x0
self._merge_inital_and_boundary_values()
# Build RHS (dx/dt = RHS), then integrate the system of ODEs
rhs = self._build_rhs()
# Integrate RHS using the given solver
soln = integrator.solve(rhs, x0, tspan, saveat)
# Return solution as dictionary
self.soln = SolutionContainer()
self.soln[self.settings['quantity']] = soln |
Runs the transient algorithm and returns the solution.
Parameters
----------
x0 : ndarray or float
Array (or scalar) containing initial condition values.
tspan : array_like
Tuple (or array) containing the integration time span.
saveat : array_like or float, optional
If an array is passed, it signifies the time points at which
the solution is to be stored, and if a scalar is passed, it
refers to the interval at which the solution is to be stored.
integrator : Integrator, optional
        Integrator object which will be used to do the time stepping.
Can be instantiated using openpnm.integrators module.
Returns
-------
TransientSolution
The solution object, which is basically a numpy array with
the added functionality that it can be called to return the
solution at intermediate times (i.e., those not stored in the
solution object).
| run | python | PMEAL/OpenPNM | openpnm/algorithms/_transient_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transient_reactive_transport.py | MIT |
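The normalization of ``saveat`` at the top of ``run`` can be seen in isolation; a small sketch with invented numbers:

import numpy as np

tspan = (0.0, 10.0)
saveat = 2.5  # a scalar means "store the solution every 2.5 time units"

if np.isscalar(saveat):
    saveat = np.arange(*tspan, saveat)        # -> [0.0, 2.5, 5.0, 7.5]
if (saveat is not None) and (tspan[1] not in saveat):
    saveat = np.hstack((saveat, [tspan[1]]))  # the end time is always stored
print(saveat)  # [ 0.   2.5  5.   7.5 10. ]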
def _build_rhs(self):
"""
Returns a function handle, which calculates dy/dt = rhs(y, t).
Notes
-----
``y`` is the variable that the algorithms solves for, e.g., for
``TransientFickianDiffusion``, it would be concentration.
"""
def ode_func(t, y):
# TODO: add a cache mechanism
self.x = y
self._update_A_and_b()
A = self.A.tocsc()
b = self.b
V = self.network[self.settings["pore_volume"]]
return (-A.dot(y) + b) / V # much faster than A*y
return ode_func |
Returns a function handle, which calculates dy/dt = rhs(y, t).
Notes
-----
``y`` is the variable that the algorithms solves for, e.g., for
``TransientFickianDiffusion``, it would be concentration.
| _build_rhs | python | PMEAL/OpenPNM | openpnm/algorithms/_transient_reactive_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transient_reactive_transport.py | MIT |
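An ``ode_func`` of this form plugs directly into a standard ODE integrator; a self-contained sketch with a toy 3-pore system (matrix, RHS vector, and pore volumes are invented):

import numpy as np
import scipy.sparse as sp
from scipy.integrate import solve_ivp

A = sp.csc_matrix(np.array([[ 1., -1.,  0.],
                            [-1.,  2., -1.],
                            [ 0., -1.,  1.]]))
b = np.array([1.0, 0.0, 0.0])
V = np.array([1.0, 1.0, 1.0])   # pore volumes

def ode_func(t, y):
    # dy/dt = (-A*y + b) / V, matching the RHS assembled above
    return (-A.dot(y) + b) / V

soln = solve_ivp(ode_func, t_span=(0.0, 5.0), y0=np.zeros(3), method='RK45')
print(soln.y[:, -1])  # state at the final time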
def _build_A(self):
"""
Builds the coefficient matrix based on throat conductance values.
Notes
-----
        The conductance to use is specified in the algorithm's
settings under ``alg.settings['conductance']``.
"""
gvals = self.settings['conductance']
if gvals in self.iterative_props:
self.settings.cache = False
if not self.settings['cache']:
self._pure_A = None
if self._pure_A is None:
phase = self.project[self.settings.phase]
g = phase[gvals]
am = self.network.create_adjacency_matrix(weights=g, fmt='coo')
self._pure_A = spgr.laplacian(am).astype(float)
self.A = self._pure_A.copy() |
Builds the coefficient matrix based on throat conductance values.
Notes
-----
    The conductance to use is specified in the algorithm's
settings under ``alg.settings['conductance']``.
| _build_A | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
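The Laplacian construction can be reproduced with scipy alone; a sketch using a hand-built weighted adjacency matrix for a 3-pore chain (the conductance values are invented):

import numpy as np
import scipy.sparse as sp
from scipy.sparse import csgraph

g01, g12 = 2.0, 3.0  # throat conductances for throats 0-1 and 1-2
rows = np.array([0, 1, 1, 2])
cols = np.array([1, 0, 2, 1])
am = sp.coo_matrix((np.array([g01, g01, g12, g12]), (rows, cols)), shape=(3, 3))

A = csgraph.laplacian(am).astype(float)
print(A.toarray())
# [[ 2. -2.  0.]
#  [-2.  5. -3.]
#  [ 0. -3.  3.]]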
def _build_b(self):
"""Initializes the RHS vector, b, with zeros."""
b = np.zeros(self.Np, dtype=float)
self._pure_b = b
self.b = self._pure_b.copy() | Initializes the RHS vector, b, with zeros. | _build_b | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
def b(self):
"""The right-hand-side (RHS) vector, b (in Ax = b)"""
if self._b is None:
self._build_b()
return self._b | The right-hand-side (RHS) vector, b (in Ax = b) | b | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
def _apply_BCs(self):
"""Applies specified boundary conditions by modifying A and b."""
if 'pore.bc.rate' in self.keys():
# Update b
ind = np.isfinite(self['pore.bc.rate'])
self.b[ind] = self['pore.bc.rate'][ind]
if 'pore.bc.value' in self.keys():
f = self.A.diagonal().mean()
# Update b (impose bc values)
ind = np.isfinite(self['pore.bc.value'])
self.b[ind] = self['pore.bc.value'][ind] * f
# Update b (subtract quantities from b to keep A symmetric)
x_BC = np.zeros_like(self.b)
x_BC[ind] = self['pore.bc.value'][ind]
self.b[~ind] -= (self.A * x_BC)[~ind]
# Update A
P_bc = self.to_indices(ind)
mask = np.isin(self.A.row, P_bc) | np.isin(self.A.col, P_bc)
# Remove entries from A for all BC rows/cols
self.A.data[mask] = 0
# Add diagonal entries back into A
datadiag = self.A.diagonal()
datadiag[P_bc] = np.ones_like(P_bc, dtype=float) * f
self.A.setdiag(datadiag)
self.A.eliminate_zeros() | Applies specified boundary conditions by modifying A and b. | _apply_BCs | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
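The value-BC treatment (impose the known values in ``b``, subtract their contribution from the free rows, zero the BC rows/columns, and restore a scaled diagonal) can be followed on a tiny dense system; all numbers below are invented:

import numpy as np

A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
b = np.zeros(3)

bc_value = np.array([1.0, np.nan, np.nan])  # Dirichlet value only in pore 0
ind = np.isfinite(bc_value)
f = A.diagonal().mean()                     # scaling keeps A well conditioned

b[ind] = bc_value[ind] * f                  # impose the value
x_BC = np.where(ind, bc_value, 0.0)
b[~ind] -= (A @ x_BC)[~ind]                 # keep the system consistent and symmetric

A[ind, :] = 0.0                             # zero BC rows
A[:, ind] = 0.0                             # zero BC columns
A[np.where(ind)[0], np.where(ind)[0]] = f   # scaled 1 on the diagonal

x = np.linalg.solve(A, b)
print(x)  # x[0] == 1.0, the imposed boundary value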
def run(self, solver=None, x0=None, verbose=False):
"""
Builds the A and b matrices, and calls the solver specified in the
``settings`` attribute.
This method stores the solution in the algorithm's ``soln``
attribute as a ``SolutionContainer`` object. The solution itself
is stored in the ``x`` attribute of the algorithm as a NumPy array.
Parameters
----------
x0 : ndarray
Initial guess of unknown variable
Returns
-------
None
"""
logger.info('Running Transport')
if solver is None:
solver = getattr(solvers, ws.settings.default_solver)()
# Perform pre-solve validations
self._validate_settings()
self._validate_topology_health()
self._validate_linear_system()
        # Write x0 to the algorithm (needed by _update_iterative_props)
self.x = x0 = np.zeros_like(self.b) if x0 is None else x0.copy()
self["pore.initial_guess"] = x0
self._validate_x0()
# Initialize the solution object
self.soln = SolutionContainer()
self.soln[self.settings['quantity']] = SteadyStateSolution(x0)
self.soln.is_converged = False
# Build A and b, then solve the system of equations
self._update_A_and_b()
self._run_special(solver=solver, x0=x0, verbose=verbose) |
Builds the A and b matrices, and calls the solver specified in the
``settings`` attribute.
This method stores the solution in the algorithm's ``soln``
attribute as a ``SolutionContainer`` object. The solution itself
is stored in the ``x`` attribute of the algorithm as a NumPy array.
Parameters
----------
x0 : ndarray
Initial guess of unknown variable
Returns
-------
None
| run | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
def _validate_x0(self):
"""Ensures x0 doesn't contain any nans/infs."""
x0 = self["pore.initial_guess"]
if not np.isfinite(x0).all():
raise Exception("x0 contains inf/nan values") | Ensures x0 doesn't contain any nans/infs. | _validate_x0 | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
def _validate_topology_health(self):
"""
        Ensures the network is not clustered, or if it is, that each
        cluster is connected to at least one boundary condition pore.
"""
Ps = ~np.isnan(self['pore.bc.rate']) + ~np.isnan(self['pore.bc.value'])
if not is_fully_connected(network=self.network, pores_BC=Ps):
msg = ("Your network is clustered, making Ax = b ill-conditioned")
raise Exception(msg) |
Ensures the network is not clustered, or if it is, that each
    cluster is connected to at least one boundary condition pore.
| _validate_topology_health | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |
def _validate_linear_system(self):
"""Ensures the linear system Ax = b doesn't contain any nans/infs."""
if np.isfinite(self.A.data).all() and np.isfinite(self.b).all():
return
raise Exception("A or b contains inf/nan values") | Ensures the linear system Ax = b doesn't contain any nans/infs. | _validate_linear_system | python | PMEAL/OpenPNM | openpnm/algorithms/_transport.py | https://github.com/PMEAL/OpenPNM/blob/master/openpnm/algorithms/_transport.py | MIT |