omniverse-code/kit/exts/omni.graph.core/docs/decisions/adr-template.rst | :orphan:
--- Template for problem with solution ---
##########################################
* Status: {proposed | rejected | accepted | deprecated | … | superseded by LINK_TO_OTHER_ADR} (OPTIONAL)
* Deciders: {list everyone involved in the decision} (OPTIONAL)
* Date: {YYYY-MM-DD when the decision was last updated} (OPTIONAL)
Technical Story: {description | ticket/issue URL} (OPTIONAL)
Context and Problem Statement
==============================
{Describe the context and problem statement, e.g., in free form using two to three sentences. You may want to articulate the problem in form of a question.}
Decision Drivers (OPTIONAL)
===========================
* {driver 1, e.g., a force, facing concern, …}
* {driver 2, e.g., a force, facing concern, …}
* {numbers of drivers can vary}
Considered Options
==================
* {option 1}
* {option 2}
* {option 3}
* {numbers of options can vary}
Decision Outcome
================
Chosen option: "{option 1}", because {justification. e.g., only option, which meets k.o. criterion decision driver | which resolves force {force} | … | comes out best (see below)}.
Positive Consequences (OPTIONAL)
--------------------------------
* {e.g., improvement of quality attribute satisfaction, follow-up decisions required, …}
* …
Negative Consequences (OPTIONAL)
--------------------------------
* {e.g., compromising quality attribute, follow-up decisions required, …}
* …
Pros and Cons of the Options (OPTIONAL)
=======================================
{option 1}
----------
..
{example | description | pointer to more information | …} (OPTIONAL)
* Good, because {argument a}
* Good, because {argument b}
* Bad, because {argument c}
* {numbers of pros and cons can vary}
{option 2}
----------
..
{example | description | pointer to more information | …} (OPTIONAL)
* Good, because {argument a}
* Good, because {argument b}
* Bad, because {argument c}
* {numbers of pros and cons can vary}
{option 3}
----------
..
{example | description | pointer to more information | …} (OPTIONAL)
* Good, because {argument a}
* Good, because {argument b}
* Bad, because {argument c}
* {numbers of pros and cons can vary}
Links (OPTIONAL)
================
* {Link type} {Link to ADR}
* {numbers of links can vary}
..
<!-- example: Refined by [ADR-0005](0005-example.md) -->
|
omniverse-code/kit/exts/omni.graph.core/docs/decisions/0002-python-api-exposure.rst | Deciding What To Expose As The Python API In OmniGraph
######################################################
* Status: proposed
* Deciders: kpicott, OmniGraph devs
* Date: 2022-07-07
Technical Story: `OM-46154 <https://nvidia-omniverse.atlassian.net/browse/OM-46154>`_
Context and Problem Statement
=============================
Once the Python API definition was decided, there was the further decision of exactly what should go into the public
API, what should be internal, and what should be completely deprecated.
Considered Options
==================
1. **The Wild West** - everything that was already exposed continues to be exposed in exactly the same way
2. **Complete Lockdown** - only what is strictly necessary for current functionality to work is exposed, everything
else is made inaccessible to the user
3. **Graceful Deprecation** - we define what we want exposed and make those the *__all__* exposed symbols, but we also
include all of the previously available symbols in the module itself. Soft deprecation warnings are put in for
anything we want to eventually completely hide from the user.
Decision Outcome
================
**Option 3**: the users get full compatibility without any code changes but we have drawn our line in the sand denoting
exactly what we will be supporting in the future.
The exposure of symbols will be broken into different categories depending on the exposure desired (a sketch of the resulting *__init__.py* follows this list):
1. **Full Exposure** the module imports the symbol and its name is added to the module's *__all__* list
2. **Internal Exposure** the symbol is renamed to have a leading underscore and the module imports the symbol. The name
of the symbol is not added to the module's *__all__* list
3. **Hidden Exposure** for symbols that should be internal but for which the work to rename was too involved for now,
create a new exposure list called *_HIDDEN* in the same location where *__all__* is defined and but all of the
symbols that should be made internal into that list. Eventually this list will be empty as symbols are renamed to
reflect the fact that they are only internal to OmniGraph
4. **Soft Deprecation** for symbols we no longer want to support. Their import and/or definition will be moved to a
module file with the current version number, e.g. *_1_11.py*. The deprecated symbols will be added to that module's
*__all__* list and the module will be imported into the main module but not added to the main module's *__all__* list
5. **Hard Deprecation** for symbols that were deprecated before that can simply be deleted. The file from which they were
imported is replaced by a file that is empty except for a single *raise DeprecationError* statement explaining what
action the user has to take to replace the deprecated functionality
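As a rough illustration of these categories, a module's top-level *__init__.py* could be organized as in the sketch below; every module and symbol name here is a hypothetical placeholder rather than the actual OmniGraph API:
.. code-block:: python

    # __init__.py - illustrative sketch only; all names are placeholders

    # 1. Full Exposure: imported and listed in __all__
    from ._impl.commands import cmds

    # 2. Internal Exposure: imported under a leading-underscore name, not listed in __all__
    from ._impl.registration import PythonNodeRegistration as _PythonNodeRegistration

    # 3. Hidden Exposure: imported as-is for now, tracked in _HIDDEN until it can be renamed
    from ._impl.legacy_helpers import legacy_helper

    # 4. Soft Deprecation: the versioned module is imported but not added to __all__
    from . import _1_11

    __all__ = ["cmds"]
    _HIDDEN = ["legacy_helper"]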
To reduce the size of the largest modules, submodules will be created to segment by grouped functionality. For example,
**omni.graph.autonode** will contain all of the AutoNode-specific functionality.
Lastly, in order to support this model some reorganization of files will be necessary. In particular, anything that is
not part of the public API should be moved to an *_impl* subdirectory with imports from it being adjusted to match.
Pros and Cons of the Options
============================
Option 1
--------
* **Good** - work required to define the API is minimal as it is just what we have
* **Bad** - with the existing structure it's difficult to ascertain exactly what the API is, and as a consequence it
would be harder to maintain in the future
* **Ugly** - the current surface is organic anarchy; continuing to support all of it would be a development drag
Option 2
--------
* **Good** - with everything well defined we make a strong statement as to what we are claiming to support.
* **Bad** - we would have to rely on tests to find any missing exports, and we know we don't have 100% coverage even within Kit
* **Ugly** - we are already at the stage where we don't know for sure what users are using, and the testing matrix is
not yet populated enough to give us any confidence we will find the problems, so it is almost certain that we
would end up breaking somebody's code
Option 3
--------
* **Good** - users are happy because they have no immediate work to do, and we are happy because we have defined exactly
what we are claiming to support. Because it's a smaller subset we can write a compatibility test for future changes.
* **Bad** - we will end up exposing some things we don't want to, simply because it's too difficult to remove in one step
* **Ugly** - everything is still wide open and users are free to make use of anything in our system. Despite any
warnings to the contrary `Hyrum's Law <https://abseil.io/resources/swe-book/html/ch01.html#hyrumapostrophes_law>`_
dictates that it will be used anyway.
|
omniverse-code/kit/exts/omni.graph.core/docs/decisions/0003-extension-locations.rst | OmniGraph Extension Locations In The Repo
#########################################
* Status: proposed
* Deciders: kpicott, OmniGraph devs
* Date: 2022-08-12
Technical Story: `OM-50040 <https://nvidia-omniverse.atlassian.net/browse/OM-50040>`_
Context and Problem Statement
=============================
All of the OmniGraph extensions were developed directly within Kit. As they grow, and Kit grows, the development
velocity of both the OmniGraph team and the larger Kit team is negatively impacted. Kit builds have to wait for all of
the OmniGraph extensions and tests to build, and OmniGraph has to wait for the entire Kit SDK to build.
In addition, the Kit team has expressed a desire to move to a thin model where only code necessary to run what they call
"Kit core" should be in the Kit repo, and OmniGraph is not considered part of that.
Considered Options
==================
1. **Status Quo** - we just leave all of the extensions we have inside Kit and continue to develop that way
2. **Single Downstream Replacement** - move all of the extensions downstream of `omni.graph.core` to an extensions repo,
removing the existing extensions from Kit
3. **Multi Downstream Replacement** - move each of the extensions downstream of `omni.graph.core` to their own
individual extensions repo, removing the existing extension from Kit
4. **Downstream Copy** - move all of the extensions downstream from `omni.graph.core` to their own repo, bumping their
major version, but leave the existing extensions in Kit at the previous version and have them be dormant
Decision Outcome
================
**Option 2**: Single Downstream Replacement. Although none of these solutions is perfect this one affords the best
balance of controlling our own destiny and administration overhead. There will be some changes in developer workflow
required to support it, especially in the short term when extensions are transitioned out, and in the medium term
when we have to backport across repos, but in the long run the workflow gains will outweigh the extra efforts.
Moving Extensions
-----------------
It's not practical to move every extension out in a single step so we'll have to move them in small subgroups. The
extension dependency diagram will guide us on the ordering required to move an extension out. Generally speaking an
extension can be moved as soon as nothing in Kit has a direct dependency on it. The
:ref:`extension dependency graph<omnigraph_extension_dependency_graph>` that was extracted from the full list of
dependencies will guide this. Any extension without an incoming arrow can be moved, starting with
`omni.graph.bundle.action`.
Pros and Cons of the Options
============================
Option 1
--------
* **Good** - Workflow is simple: we just build and run Kit, and there's no overhead in managing our published versions
* **Bad** - Kit build continues to be slow for all concerned
* **Ugly** - We can't independently version our extensions, Kit becomes increasingly more complex as we add more, our
tests continue to be subject to instability due to unrelated Kit problems
Option 2
--------
* **Good** - Kit builds and our builds both get way faster, we can version our extension set independently of Kit, and our
tests don't rely as much on how the Kit app behaves, just Kit core
* **Bad** - Some developers have to work in multiple repos during the transition, resulting in a clunky workflow for
testing changes. There's extra overhead in deciding which apps get which versions of our extensions.
* **Ugly** - Backport integration becomes more difficult as previous versions of extensions are not in a related
repo.
Option 3
--------
* **Good** - We gain full control over extension versioning, builds are even smaller and faster than Option 2, more
flexible set of Kit SDK versioning gives us a wider variety of cross-version testing
* **Bad** - It becomes more difficult to build tests as typically they will rely on nodes in multiple extensions
* **Ugly** - Multiple repos mean multiple copies of the build to handle, multiple packman packages to wrangle, and the
overhead reduction we gained by moving out of Kit is lost to administrivia
Option 4
--------
* **Good** - Backward compatibility is easier to maintain as there are full "frozen" copies of the extensions available
with Kit while we move forward in the separate repo.
* **Bad** - Same as Option 2, plus the Kit builds continue to be slow as they still build our frozen extensions
* **Ugly** - Random test failures will still occur in our code even though we aren't touching it, bugs that arise
from apps using specific versions of the Kit SDK have to be addressed in the "frozen" copies, and it becomes more
difficult for apps like Create to choose the correct versions of our extensions.
|
omniverse-code/kit/exts/omni.graph.core/docs/decisions/0001-python-api-definition.rst | Definition of Python API Surface In OmniGraph
#############################################
* Status: proposed
* Deciders: kpicott, OmniGraph devs
* Date: 2022-06-24
Technical Story: `OM-46154 <https://nvidia-omniverse.atlassian.net/browse/OM-46154>`_
Context and Problem Statement
=============================
Although the C++ ABI is stable and backwards compatible, all of the Python code supplied by OmniGraph is, by the nature
of the language, public by default. This is causing problems with compatibility as much of the exposed code is
implementation detail that we wish to be able to change without affecting downstream customers. After a few iterations
we have run into many cases of unintentional dependencies, which will only grow more frequent as adoption grows.
Considered Options
==================
1. **The Wild West** - we leave everything exposed and let people use what they find useful. Behind the scenes we attempt
to maintain as much compatibility as we can, with deprecation notices where we cannot.
2. **Complete Lockdown** - we ship our Python modules as pre-interpreted bytecode (*.pyc* files), and tailor our
documentation to the specific code we wish to expose.
3. **PEP8 Play Nice** - we follow the `PEP8 Standard <https://peps.python.org/pep-0008/#public-and-internal-interfaces>`_
for defining public interfaces, where nothing is truly hidden but we do have a well-defined way of identifying the
interfaces we intend to support through multiple versions.
Decision Outcome
================
Option 3: it strikes a good balance of ease of development and ease of use.
Pros and Cons of the Options
============================
Option 1
--------
* **Good** - development is simpler as we don't have to worry about how to format or organize most Python code
* **Bad** - there is extra work involved in explicitly deprecating things that change, and since everything is visible
the need for deprecation will be quite frequent.
* **Ugly** - if everything is fair game then users will start to make use of things that we either want to change or
get rid of, making evolution of the interfaces much more difficult than it needs to be
Option 2
--------
* **Good** - with everything well defined we make a strong statement as to what we are claiming to support.
* **Bad** - the code is still introspectable at the interface level even if the details are not. This makes it more
likely that a mismatch between the code and the description will cause irreconcilable user confusion.
* **Ugly** - locking everything is inherently un-Pythonic and puts up artificial barriers to our primary goal in using
Python, which is widespread adoption. The fact that the definitive documentation is outside of the code also
contributes to those barriers.
Option 3
--------
* **Good** - we get a good balance of both worlds. The retained introspection makes it easier for seasoned
programmers to understand how things are working and the API conventions make it clear where things are not meant to
be relied on in future versions.
* **Bad** - there's a bit more work to follow the guidelines and not much automatic help in doing so. Internal
developers have to be diligent in what they are exposing.
* **Ugly** - everything is still wide open and users are free to make use of anything in our system. Despite any
warnings to the contrary `Hyrum's Law <https://abseil.io/resources/swe-book/html/ch01.html#hyrumapostrophes_law>`_
dictates that it will be used anyway.
|
omniverse-code/kit/exts/omni.graph.core/docs/decisions/0000-use-markdown-any-decision-records.rst | Use Markdown Any Decision Records
#################################
Context and Problem Statement
=============================
We want to record any decisions made in this project, independent of whether they concern the architecture ("architectural decision record"), the code, or other fields.
Which format and structure should these records follow?
Considered Options
==================
* `MADR <https://adr.github.io/madr/>`_ 3.0.0 – The Markdown Any Decision Records
* Google Documents
* Confluence
Decision Outcome
================
Chosen option: "MADR 3.0.0", because
* Implicit assumptions should be made explicit.
Design documentation is important to enable people to understand the decisions later on.
See also `A rational design process: How and why to fake it <https://doi.org/10.1109/TSE.1986.6312940>`_.
* MADR allows for structured capturing of any decision.
* The MADR format is lean and fits our development style.
* The MADR structure is comprehensible and facilitates usage & maintenance.
* The MADR project is vivid.
* Google docs are not easily discoverable.
* We already spend too much time explaining past decisions.
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_usd.rst | .. _omnigraph_versioning_usd:
OmniGraph USD Versioning
########################
This type of versioning involves changes to the structure or content of the USD backing for OmniGraph. It should
happen rarely. Two examples of changes of this type are the withdrawal of support for the global implicit graph, and
the move from using plain prims to back the OmniGraph elements to using schema-based prims.
In both cases the ABI remains unchanged, only the contents of the USD backing changed.
This deprecation follows a three-step process, using the standard :ref:`omnigraph_deprecation_approach`, to give users
ample opportunity to modify any code they need to in order to support the new version.
.. hint::
Using the deprecation process ensures that each step only involves a minor version bump to the extension as
existing code will continue to work. In the final phase of removal of the original behavior a major version
bump may be required as existing scripts may not function without modification.
Step 1 - Introduce A Deprecation Setting
****************************************
For anything but the most trivial change there is always the possibility of existing code relying on the old way of
doing things, so the first step is to introduce a deprecation setting using the instructions in
:ref:`omnigraph_versioning_settings`.
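As a rough illustration of what reading such a setting can look like from Python, consider the sketch below; the setting path and the code-path functions are hypothetical placeholders, and the conventions to actually follow are those in :ref:`omnigraph_versioning_settings`:
.. code-block:: python

    import carb.settings

    # Hypothetical setting path - follow the naming conventions in the settings documentation
    USE_NEW_USD_LAYOUT = "/persistent/omnigraph/useNewUsdLayout"

    settings = carb.settings.get_settings()
    # Keep the old behaviour as the default while the deprecation period is active
    settings.set_default_bool(USE_NEW_USD_LAYOUT, False)

    if settings.get_as_bool(USE_NEW_USD_LAYOUT):
        run_new_code_path()  # placeholder for the new behaviour
    else:
        run_deprecated_code_path()  # placeholder for the deprecated behaviour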
Step 2 - Add In Automatic File Migration
****************************************
Changes to the file format require a bump of the file format version, which can be found in the C++ file
`omni.graph.core/plugins/Graph.cpp` in a line that looks like this:
.. code-block:: cpp
FileFormatVersion s_currentFileFormatVersion = { 1, 4 };
The first number is the major version, the second is the minor version, following the
:ref:`omnigraph_semantic_versioning` guidelines. A major version bump is reserved for any changes that are incapable
of having an automatic upgrade performed - e.g. adding mandatory metadata that identifies the time the file was created.
The next step is to introduce some code that will take the USD structure in the previous file formats, going as far back as
is reasonable, and modify it to conform to the desired new structure. Any USD-based modifications can be made at this
point so it is quite flexible.
One good way of doing this is to register a callback on a file format version update in the extension startup code,
remembering to unregister it when the extension shuts down.
.. literalinclude:: ../../../../../source/extensions/omni.graph/python/_impl/extension.py
:language: python
:start-after: begin-file-format-update
:end-before: end-file-format-update
The callback should be placed somewhere that indicates it is on a deprecation path. For the **useSchemaPrims**
deprecation, for example, the deprecation code was added to a special directory `omni.graph/python/_impl/v1_5_0/` that
contains code deprecated after version 1.5.0 of that extension. The callback code should check the version of the
file being read against the version that requires deprecation.
Once you've determined that the file format requires conversion, issue a deprecation warning and, if the setting is
configured to use the new path, perform the conversion.
.. code-block:: python
:emphasize-lines: 12,14
def cb_for_my_deprecation(
    old_version: Optional[og.FileFormatVersion],
    new_version: Optional[og.FileFormatVersion],
    graph: Optional[og.Graph]
):
    if old_version is not None and \
            (old_version.majorVersion > VERSION_BEFORE_MY_CHANGE.majorVersion or
             (old_version.majorVersion == VERSION_BEFORE_MY_CHANGE.majorVersion and
              old_version.minorVersion > VERSION_BEFORE_MY_CHANGE.minorVersion)):
        # No conversion needed, the file format is already up to date
        return
    carb.log_warn(f"Old format {old_version} automatically migrating to {new_version}")
    # If the setting to automatically migrate is not on then there is nothing to do here
    if not og.Settings()(og.Settings.MY_SETTING_NAME):
        return
    update_the_scene()
Step 3 - Enable The New Code Path By Default
********************************************
Once the deprecation time has been reached, modify the default value of the setting in
`omni.graph.core/plugins/OmniGraphSettings.cpp` and announce that the deprecation is happening.
Step 4 - Remove The Old Code Path
*********************************
In the rare case where full removal is required then you can remove the support code for the old code path.
- Delete the deprecation setting
- Delete all code paths that only execute when the setting is configured to use the deprecated code path
- Change the file format conversion callback to issue an error and fail
- Bump the major version of the file format to indicate older ones are no longer compatible
USD Version Effects On Python Scripting
***************************************
The automatic file migration provides a flexible upgrade path for making old files use the new way of doing things;
however, there may still be incompatibilities lurking in the Python files. Simple examples include scripts that
assume the existence of deprecated prim types or attributes, or files containing script nodes that do the same.
The only way to handle these is to ensure that you have good test coverage so that when the default value of the
setting is reversed the test will fail and can be fixed.
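A minimal sketch of such a guard test is shown below, assuming the usual omni.kit.test harness; the setting path and the test body are hypothetical placeholders:
.. code-block:: python

    import carb.settings
    import omni.kit.test


    class TestNewUsdLayout(omni.kit.test.AsyncTestCase):
        """Exercise existing scripts against the new USD layout before it becomes the default."""

        async def test_scripts_with_new_layout(self):
            settings = carb.settings.get_settings()
            # Hypothetical setting path - force the new code path regardless of the current default
            settings.set("/persistent/omnigraph/useNewUsdLayout", True)
            # Load the file or run the script that relies on the old prim structure here and assert
            # on the expected results, so any breakage shows up before the default value is flipped.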
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_extensions.rst | .. _omnigraph_versioning_extensions:
Versioning OmniGraph Extensions
###############################
The version numbers used by our extensions will follow the :ref:`omnigraph_semantic_versioning` model. Updates to the
version numbers will occur in every MR that modifies the extension. This will entail two changes. The first is an
update to the *config/extension.toml* file with the appropriate portion of the semantic version updated. The second
is an update to the *docs/CHANGELOG.md* file with a description of the changes that appear in that update.
For example, if a new node was added to the `omni.graph.nodes` extension then its *extension.toml* file would change
this section:
.. code-block:: toml
[package]
title = "OmniGraph Nodes"
version = "1.23.8"
to this, as an addition is a MINOR version change:
.. code-block:: toml
:emphasize-lines: 3
[package]
title = "OmniGraph Nodes"
version = "1.24.0"
And these lines would be added at the top of the *CHANGELOG.md*:
.. code-block:: md
## [1.24.0] - 2112-03-04
### Added
- Added the node type "Rush"
Inter-Extension Versions
************************
For now all of the extensions that live inside of Kit will always be set to use the "latest" version, so in an
`extension.toml` file they will appear like this:
.. code-block:: toml
[dependencies]
"omni.graph" = {}
"omni.graph.scriptnode" = {}
"omni.graph.tools" = {}
Extensions that live downstream of the Kit repo will be versioned explicitly. For example in the
`kit-extensions/kit-graphs` repo there is the extension *omni.graph.window.action*, whose .toml includes this, where
the versions of dependent extensions are explicit.
.. code-block:: toml
[dependencies]
"omni.graph.window.core" = {version="1.16"}
"omni.graph.ui" = {version="1.5"}
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/deprecation.rst | .. _omnigraph_deprecation:
OmniGraph Code Deprecation
==========================
This is a guide to the deprecation of Python classes, functions, and objects to support backward compatibility.
C++ deprecation is left to the ABI functions.
Before You Deprecate
--------------------
Unlike C++, Python is totally transparent to the user. At least it is in the way in which we are currently using it.
That means there is no fixed ABI that you can safely code to, knowing that all incompatible changes will be localized
to that set of functions.
In order to mitigate this, it is a best practice to provide a published Python API through explicit imports in your
main module's `__init__.py` file. Then you can make a statement that only things that appear at the top level of that
module are guaranteed to remain compatible through minor and patch version changes.
Here's a sample directory structure that allows hiding of the implementation details in a Pythonic way, using a
leading underscore to hint that the contents are not meant to be visible.
.. code-block:: text
omni.my.extension/
    python/
        __init__.py
        _impl/
            MyClass.py
            my_function.py
Then inside the `__init__.py` file you might see something like this:
.. code-block:: python
"""Interface for the omni.my.extension module.
Only components explicitly imported through this file are to be considered part of the API, guaranteed to remain
backwardly compatible through all minor and patch version changes.
Recommended import usage is as follows:
.. code-block:: python
import omni.my.extension as ome
ome.my_function()
"""
from ._impl.my_function import my_function
from ._impl.MyClass import MyClass
A good documentation approach is still being worked out. For now using docstrings in your functions, modules, and
classes will suffice.
.. warning::
By default this repository's global ignore rules will skip anything starting with an underscore, so you might have
to add a .gitignore file in the `python/` directory containing something like this:
# Although directories starting with _ are globally ignored, this one is intentional,
# to follow the Python convention of internal implementation details starting with underscore
!_impl/
When To Deprecate
-----------------
Now that an API is established the deprecation choice can be limited to times when that API must be changed.
Python code is very flexible so very large changes can be made without losing existing functionality. Examples of such
changes are:
- Adding new methods to classes
- Adding new parameters to existing class methods, if they have default values and appear last in the parameter list
- Adding new functions
- Adding new parameters to existing functions, if they have default values and appear last in the parameter list
- Adding new constant objects
Each of these changes would be considered additional functionality, requiring a bump to the minor version of the extension.
Each of these changes could also be done by wholesale deprecation of the entity in question, with a replacement that uses a
new name using the `Deprecation Process` below.
Some changes require full deprecation of the existing functionality due to potential incompatibility. Examples of such
changes are:
- Removing a parameter from a class method or function
- Deleting a class method, function, or object
- Renaming an object
- Changing an implementation
- An ABI function with a Python binding has been deprecated
Deprecation Process
-------------------
Deprecation In Place
++++++++++++++++++++
The first line of defence for easy deprecation when adding new features to existing code is to provide defaults that
replicate existing functionality. For example, let's say you have this function that adds two integers together:
.. code-block:: python
def add(value_1: int, value_2: int) -> int:
    return value_1 + value_2
New functionality is added that lets you add three numbers. Rather than creating a new function you can use the
existing one, adding a default third parameter:
.. code-block:: python
def add(value_1: int, value_2: int, value_3: int = 0) -> int:
    return value_1 + value_2 + value_3
The key feature here is that all code written against the original API will continue to function without changes.
You can be creative about how you do this as well. You can instead use flexible arguments to add an arbitrary number
of values:
.. code-block:: python
def add(value_1: int, value_2: int, *args) -> int:
    return value_1 + value_2 + sum(args)
Or you can use the typing system to generalize the function (assuming you are not using static type checkers):
.. code-block:: python
from typing import Union
def add(value_1: Union[int, float], value_2: Union[int, float]) -> Union[int, float]:
    return value_1 + value_2
Or even allow different object types to use the same pattern:
.. code-block:: python
from typing import Tuple, Union
Add_t = Union[int, Tuple[float, float]]
def add(value_1: Add_t, value_2: Add_t) -> Add_t:
    if isinstance(value_1, tuple):
        if isinstance(value_2, tuple):
            # Both values are tuples - add them element by element
            return tuple(v_1 + v_2 for v_1, v_2 in zip(value_1, value_2))
        # A single value is added to every element of the tuple
        return tuple(v_1 + value_2 for v_1 in value_1)
    if isinstance(value_2, tuple):
        # A single value is added to every element of the tuple
        return tuple(value_1 + v_2 for v_2 in value_2)
    return value_1 + value_2
.. tip::
A good deprecation strategy prevents the remaining code from becoming overly complex. If you start to see the
parameter type checking code outweigh the actual functioning code it's time to think about deprecating the
original function and introducing a new one.
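One lightweight way to do that in place, before reaching for the omni.graph.tools decorators described under Deprecation Messaging below, is the standard *warnings* module; this is a generic Python sketch and *add_many* is just a hypothetical replacement name:
.. code-block:: python

    import warnings


    def add_many(*values: int) -> int:
        """Replacement that accepts any number of values."""
        return sum(values)


    def add(value_1: int, value_2: int) -> int:
        """Deprecated: use add_many() instead."""
        warnings.warn("add() is deprecated - use add_many() instead", DeprecationWarning, stacklevel=2)
        return add_many(value_1, value_2)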
Deprecation By Renaming
+++++++++++++++++++++++
If you really do need incompatible features then you can make your new function the primary interface and relegate
the old one to the deprecated section. One easy way to do this is to create a new subdirectory that contains all of
the deprecated functionality. This makes it both easy to find and easy to eliminate once a few releases have passed
and you can no longer support it.
For example if you want to create completely new versions of your class and function you can modify the directory
structure to look like this:
.. code-block:: text
omni.my.extension/
    python/
        __init__.py
        _impl/
            MyNewClass.py
            my_new_function.py
            v_1/
                MyClass.py
                my_function.py
Then inside the `__init__.py` file you might see something like this:
.. code-block:: python
"""Interface for the omni.my.extension module.
Only components explicitly imported through this file are to be considered part of the API, guaranteed to remain
backwardly compatible through all minor and patch version changes.
Recommended import usage is as follows:
.. code-block:: python
import omni.my.extension as ome
ome.my_new_function()
"""
from ._impl.my_new_function import my_new_function
from ._impl.MyNewClass import MyNewClass
r"""Deprecated - everything below here is deprecated as of version 1.1.
_____ ______ _____ _____ ______ _____ _______ ______ _____
| __ \ | ____|| __ \ | __ \ | ____|/ ____| /\ |__ __|| ____|| __ \
| | | || |__ | |__) || |__) || |__ | | / \ | | | |__ | | | |
| | | || __| | ___/ | _ / | __| | | / /\ \ | | | __| | | | |
| |__| || |____ | | | | \ \ | |____| |____ / ____ \ | | | |____ | |__| |
|_____/ |______||_| |_| \_\|______|\_____|/_/ \_\|_| |______||_____/
"""
from ._impl.v_1.my_function import my_function
from ._impl.v_1.MyClass import MyClass
Now, as before, existing code continues to work as it is still calling the old code which it accesses with the same
imported module; however, the new versions are clearly marked as the main API.
So what happens if the user is using the deprecated versions? With just this change they remain blissfully unaware that
progress has happened. Instead we would prefer that they were notified so that they have a chance to upgrade their code
to take advantage of new features and avoid the shortcomings of the old ones. To this end some decorators can be used
to provide some messaging to the user when they are using deprecated features.
Deprecation Messaging
---------------------
Messaging can be added in deprecation situations by using one of the functions and decorators that support it.
All deprecation functions can be accessed from the top `omni.graph.tools` module level.
The :py:class:`omni.graph.tools.DeprecateMessage` class provides a simple way of logging a message that will only
show up once per session.
.. literalinclude:: ../../../../../source/extensions/omni.graph.tools/python/_impl/deprecate.py
:language: python
:start-after: begin-deprecate-message
:end-before: end-deprecate-message
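As a rough usage sketch, the call below shows the intent; the exact signature is an assumption to be confirmed against the included source above, and *old_helper*/*new_helper* are hypothetical names:
.. code-block:: python

    import omni.graph.tools as ogt


    def old_helper():
        # Assumed API - log the deprecation text once per session
        ogt.DeprecateMessage.deprecated("old_helper() is deprecated - use new_helper() instead")
        return new_helper()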
The :py:class:`omni.graph.tools.DeprecateClass` decorator provides a method to emit a deprecation message when the
deprecated class is accessed.
.. literalinclude:: ../../../../../source/extensions/omni.graph.tools/python/_impl/deprecate.py
:language: python
:start-after: begin-deprecated-class
:end-before: end-deprecated-class
The :py:class:`omni.graph.tools.RenamedClass` decorator is a slightly more sophisticated method of deprecating a
class when the deprecation is simply a name change.
.. literalinclude:: ../../../../../source/extensions/omni.graph.tools/python/_impl/deprecate.py
:language: python
:start-after: begin-renamed-class
:end-before: end-renamed-class
The :py:func:`omni.graph.tools.deprecated_function` decorator provides a method to emit a deprecation message
when the old function is called.
.. literalinclude:: ../../../../../source/extensions/omni.graph.tools/python/_impl/deprecate.py
:language: python
:start-after: begin-deprecated-function
:end-before: end-deprecated-function
The :py:func:`omni.graph.tools.DeprecatedImport` decorator provides a method to emit a deprecation message
when an entire deprecated file is imported for use. This should not be used for imports that will be included
in the API for backward compatibility, nor should these files be moved as they must continue to exist at the
same import location in order to remain compatible.
.. literalinclude:: ../../../../../source/extensions/omni.graph.tools/python/_impl/deprecate.py
:language: python
:start-after: begin-deprecated-import
:end-before: end-deprecated-import
The :py:func:`omni.graph.core.Attribute.is_deprecated` method provides a way to determine if an attribute has been
deprecated, and :py:func:`omni.graph.core.Attribute.deprecation_message` returns the associated text from
the .ogn file.
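For example, a script could check an attribute before relying on it; the node path and attribute name below are hypothetical, and the lookup helper used to find the node may differ in your setup:
.. code-block:: python

    import omni.graph.core as og

    node = og.get_node_by_path("/World/PushGraph/my_node")  # hypothetical node path
    attribute = node.get_attribute("inputs:value")  # hypothetical attribute name
    if attribute.is_deprecated():
        print(f"inputs:value is deprecated: {attribute.deprecation_message()}")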
For The Future
--------------
If deprecations become too difficult to manage a more structured approach can be implemented. This would involve
using a namespaced versioning for a module so that you can import a more precise version to maintain compatibility.
For example, the directory structure might be arranged as follows to support version 1 and version 2 of a module:
.. code-block:: text
omni.my.extension/
    python/
        __init__.py
        v1.py
        v2.py
        _impl/
            v1/
                MyClass.py
                my_function.py
            v2/
                MyClass.py
                my_function.py
With this structure the imports can be selectively added to the top level files to make explicit versioning decisions
for the available APIs.
.. code-block:: python
import omni.graph.tools as og # Always use the latest version of the interfaces, potentially breaking on upgrade
import omni.graph.tools.v1 as og # Lock yourself to version 1, requiring explicit change to upgrade
import omni.graph.tools.v2 as og # Lock yourself to version 2, requiring explicit change to upgrade
The main files might contain something like this to support that import structure:
.. code-block:: python
# __init__.py
from .v1 import *
# v1.py
from ._impl.v1.MyClass import MyClass
from ._impl.v1.my_function import my_function
# v2.py
from ._impl.v2.MyClass import MyClass
from ._impl.v2.my_function import my_function
You can see how version selection redirects you to the matching versions of the classes and functions without any
renaming necessary. The deprecation messaging can then be applied to older versions as before.
It might also be desirable to apply versioning information to class implementations so that their version can be
checked in a standard way where it is important.
.. code-block:: python
@version(1)
@deprecated("Use omni.graph.tools.v2.MyClass instead")
class MyClass:
pass
More sophisticated mechanisms could provide version conversions as well, so that you can always get a certain version
of a class if you require it, even if it was created using the old API by providing a function with a standard name:
.. code-block:: python
@version(1)
@deprecated("Use omni.graph.tools.v2.MyClass instead")
class MyClass:
@classmethod
@version_compatibility(1)
def upgrade_version(cls, old_version) -> "MyClass":
return MyClass(old_version.parameters)
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_nodes.rst | .. _omnigraph_versioning_nodes:
Node Type Versioning
####################
Although it will probably not happen too frequently, the implementation of node types will occasionally change in such a
way that makes them incompatible with existing uses. In extreme cases the best course of action might be to create an
entirely new node type with the new functionality. For less intrusive changes these are the types of modifications
for which OmniGraph supports backward compatibility.
- :ref:`omnigraph_versioning_nodes_abi`
- :ref:`omnigraph_versioning_nodes_renaming`
- :ref:`omnigraph_versioning_nodes_removing`
- :ref:`omnigraph_versioning_nodes_modifying`
- :ref:`omnigraph_versioning_nodes_collisions`
.. hint::
Most node type version changes will result in a corresponding minor version bump of the extension in which they live.
.. _omnigraph_versioning_nodes_abi:
ABI Versioning
**************
ABI versioning for node types should follow the basic Carbonite guidelines. `INodeType` is a Carbonite interface, so its
backward compatibility is assured by following the update guidelines for it described in :ref:`omnigraph_versioning_carbonite`.
.. _omnigraph_versioning_nodes_renaming:
Renaming A Node Type
********************
From time to time users may wish to change the name of their node without changing the node itself, perhaps because they
came across a better name, to fix a misspelling, or to prevent conflicts with an existing name. The functionality to do
this is currently hardcoded in the Node.cpp file but should be surfaced to the ABI. Here’s a potential addition to the
INodeType interface to make it happen:
.. code-block:: cpp
/**
* Create an alternate name for a node type, most often used to support renaming a node type.
* When the node type is deregistered the alternate name will be automatically removed as well.
*
* @param[in] alternateName Secondary name that can be used interchangeably with nodeTypeName
* @param[in] nodeTypeName Name of the node type for which an alternate is to be created
* @return True if the alternate name was successfully set up, false if the node type name could
* not be found or the alternateName was already in use (both issue a warning)
*/
bool(CARB_ABI* createNodeTypeAlias)(char const* alternateName, char const* nodeTypeName);
The alternate name(s) will be specified in the .ogn file so that the node definition can control them directly.
.. code-block:: json
{
"MySpiffyNode":
{
"$comment": "...other stuff...",
"alias": "MySpifyNode"
}
}
.. warning::
This feature is not yet implemented. Move to user-side documentation once it is.
.. _omnigraph_versioning_nodes_removing:
Removing A Node Type
********************
From time to time a user may wish to withdraw support for a node, perhaps because they have a more functional replacement,
or the function it provided is no longer relevant. Support will be added for our soft deprecation path, whereby
the node first starts issuing deprecation warnings when it is instantiated and then later will be removed entirely.
The second phase just involves deletion of the node from the extension, which still retains some measure of backward
compatibility as the user can always enable an earlier version of the extension to bring it back. The first phase will
be supported in a standard way with this ABI function on INodeType:
.. code-block:: cpp
/**
* Mark a node type as being deprecated with a message for the user on action they should take.
*
* @param[in] nodeObj Node to which this function applies
* @param[in] deprecationMsg Message to the user describing what to do before the deprecation happens
*/
void(CARB_ABI* deprecate)(NodeObj& nodeObj, char const* deprecationMsg);
/**
* Return the deprecation message for a node.
*
* @param[in] nodeObj node to which this function applies
* @return String containing the node deprecation message (nullptr if the node is not deprecated)
*/
char const*(CARB_ABI* deprecationMessage)(NodeObj const& nodeObj);
The soft deprecation is enabled through a new .ogn keyword:
.. code-block:: json
{
"MySpiffyNode":
{
"$comment": "...other stuff...",
"deprecate": "This node will be removed in the next release"
}
}
.. warning::
This feature is not yet implemented. Move to user-side documentation once it is.
.. _omnigraph_versioning_nodes_modifying:
Modifying Attributes
********************
Currently the `updateNodeVersion` method can be overridden on the node type implementation to support arbitrary changes
to attributes or internal node state, however it happens after the node initialization and therefore cannot support
things like moving connections. We've already seen several examples of desired upgrades, including the ability to have
a soft deprecation where old attributes remain in place, so this mechanism needs to be extended to be more robust.
The changes to a node type that we will have a built-in upgrade path for are:
- Renaming an attribute
- Soft deprecation of an attribute
Things that are not planned to be handled directly include:
- Removal of an attribute
- Replacement of an attribute with another attribute or attributes
.. _omnigraph_renaming_attributes:
Renaming Attributes
===================
Renaming an attribute can only be done if the attribute keeps all of the defining properties and only the name changes.
This includes the port type, data type, default value, and memory type. The metadata, such as UI name, hidden state,
etc. can be changed if required without affecting the attribute renaming.
This is what the attribute renaming would look like in the .ogn file, where an input named "x" changes its name to
"red".
.. code-block:: json
:caption: Before the renaming
{
"MyNode": {
"version": 1,
"description": "This is my node",
"inputs": {
"x": {
"type": "float",
"description": "The first value",
"uiName": "First Value"
}
}
}
}
.. code-block:: json
:caption: After the renaming
:emphasize-lines: 3,7,9,10
{
"MyNode": {
"version": 2,
"description": "This is my node",
"inputs": {
"red": {
"oldName": ["x", 1],
"type": "float",
"description": "The red value",
"uiName": "Red Value"
}
}
}
}
The **oldName** keyword takes a pair of values consisting of the old attribute name and the last node version before
it was renamed. There can be multiple pairs on the off chance that you rename the same attribute more than once.
Notice how the node type version number was incremented as well, to tell OmniGraph that this implementation is
different from the previous version.
In order to support this there must be an addition to the `IAttribute` interface in the ABI that can remember this
information for processing. This is what it might look like:
.. code-block:: cpp
/**
* Define an old name by which an attribute was known.
*
* @param[in] attributeObj Attribute to which this function applies
* @param[in] oldName Old name by which the attribute was known
* @param[in] lastVersion Last node type version in which the old name was valid
*/
void(CARB_ABI* renamedFrom)(AttributeObj& attributeObj, char const* oldName, int lastVersion);
.. warning::
This feature is not yet implemented. Move to user-side documentation once it is.
Soft Deprecation Of Attributes
==============================
In order to respect our :ref:`omnigraph_soft_deprecation`, when an attribute is to be removed it must be flagged
as deprecated in its .ogn file, along with a message telling users what they must do to prepare for the loss of the
attribute, if possible. The node type version does not yet need to be bumped as nothing has really changed for it at
this point.
.. code-block:: json
:emphasize-lines: 10,12-14
{
"MyNode": {
"version": 1,
"description": "This is my node",
"inputs": {
"x": {
"type": "float",
"description": "The first value",
"uiName": "First Value",
"deprecated": "Use 'offset' instead."
},
"offset": {
"type": "float",
"description": "This will be added to all values."
}
}
}
}
This flag is set on the Attribute using `IInternal::deprecateAttribute()` but there should be no need to call
this method outside of the node generator tool. `IAttribute::isDeprecated()` can be called to retrieve the flag and
`IAttribute::deprecationMessage()` will return the associated message.
When actual deprecation occurs then the appropriate action will take the place of the deprecation message - renaming,
removing, or replacing the attribute.
Modifying Node Behaviour
========================
In addition to changes in attributes, there may also be a change in attribute interpretation or in the computational
algorithm. A simple example might be the switch to a more numerically stable integration method, or the addition of
an extra parameter that can switch computation type.
For any change that affects how the node is computed you must bump the node type version number. A mismatch in version
number for a node being created and the current node implementation will trigger a callback to the `updateNodeVersion`
function, which your node can override if there is some cleanup you can do that will help maintain expected operation
of the node.
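For a Python node type such an override might look like the sketch below; the callback name, signature, and migration body are assumptions to be checked against the node generator documentation, and the attributes are hypothetical:
.. code-block:: python

    class OgnMyNode:
        """Illustrative node implementation with a version upgrade hook."""

        @staticmethod
        def compute(db) -> bool:
            db.outputs.result = db.inputs.value * 2.0
            return True

        @staticmethod
        def update_node_version(context, node, old_version, new_version) -> bool:
            # Called when a node authored with an older version of this node type is instantiated
            if old_version < 2:
                # Perform whatever migration the new version needs here, e.g. seeding a newly
                # added attribute or adjusting internal state, then report success
                pass
            return True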
.. _omnigraph_versioning_nodes_collisions:
Generated Code Rearrangement
****************************
As the number of nodes and the variety of use cases have grown, some weaknesses in the structure of the generated code
directories have been found. This section describes the problems to be solved without yet proposing any solutions.
Node Collisions
===============
The generated OgnXXDatabase.h files are all stored in a single directory `_build/ogn/include`, which opens the
possibility of name collisions and prevents the contents of an extension from being kept self-contained.
Global Documentation
====================
The generated OgnXX.rst files are all stored in a single directory `_build/ogn/docs`, which, while allowing creation of
an index that lists all available nodes, opens the possibility of collisions as above and does not tie the documentation
of individual nodes to the extensions that implement them.
This problem is even more apparent with external/downstream extensions which aren't included in the Kit build and which
therefore will not appear in documentation anywhere.
There is also a lack of ability for a user to jump from a node type definition or node instantiation to the
documentation for that node type, which would be helpful in understanding how to use a node type. Bits and pieces
appear throughout the UI (descriptions, attribute types, etc. as tooltips) but this is not the same as a single source
of everything related to the node type.
Python Builds
=============
The Python node types can be built at runtime, so while it's helpful to have the prebuilt OgnXXDatabase.py files
available, it's not strictly necessary. A user should be able to include a Python node type in an extension simply by
linking the directory to which it belongs to their build module directory, or even in their extension's directory if
they are using the local Python extension approach.
Getting in the way of this is, first, the lack of versioning support in the generated code, especially for local
extensions, so that if a user loads their extension using multiple versions of the Kit SDK the generated code will
only match one of them. Ideally they would like to support both and the generated code would be a cache, similar to
how packman works. The Warp team has implemented something like this already for their own code generator.
The Symlinked ogn Directory
===========================
Another problem in the Python generation is that the extension is expected to have an `/ogn/` directory in its Python
module with a `nodes/` subdirectory symlinked back to the source for its nodes. The latter is to support hot reload;
however, the method of doing that breaks the standard method in the build system. If, as above, the generated files
appeared elsewhere then there would be no need for the `/ogn/` directory and any `nodes/` subdirectory could be
linked directly by the build for hot reload support.
The ogn/tests Directory
=======================
Similarly to the above the generated tests appear in the `/ogn/tests` subdirectory and the .usda files used for the
tests appear in the `/ogn/tests/usd` subdirectory below that. These directories may also vary based on the version of
the code generator and the version of the node type. They may also be generated at runtime rather than at build time,
making the current mechanism for locating and registering the tests ineffective. Similarly, if outdated versions exist
then they should not be registered and run, so the whole test registration mechanism may need rethinking.
Supporting Multiple Node Type Versions
======================================
At the moment, although there is ABI support for creating multiple versions of the same node type at the same time,
there is no API support, so that is a necessary addition to make before the code generator can fully support them.
Setting that aside, for the code generation there is currently no support for multiple versions of the same node type.
The paths and object names are hardcoded to just the node type and/or class name without a version number attached to
it. This would also suffer from the same file collisions cited above, exacerbated by the fact that the names for
multiple versions would currently be exactly the same.
Other Possible Node Type Changes
********************************
This is a catch-all to mention any types of node modifications that won't have intrinsic support. They are listed here
to make note of the fact that they have been considered, and if they occur often may merit more direct support.
- Node type consolidation, e.g. we had a vertical stack node type and a horizontal stack node type and they were
replaced by a single stack node type with an attribute defining the direction.
- Node language conversion. A node that was implemented in Python that is changed to be implemented in C++. Ideally
both might be available to the user as the Python one is easier to debug but the C++ one is faster.
- Device targeting. A node might wish to be implemented on both the CPU and the GPU so that an intelligent scheduler
can choose where to execute the node at runtime. Or a node type implementation may just switch from one to the other
between versions for other reasons.
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_behavior.rst | .. _omnigraph_versioning_behavior:
Code Behavior Versioning
########################
Modifying code behavior is a little trickier than modifying interfaces as the change is invisible to the user
writing code. They can only see it at runtime. One example of this is the deprecation of support for the global
implicit graph. The ABI does not change to remove support for it, just the behavior of the graph construction.
This follows the standard :ref:`omnigraph_deprecation_approach` to give users ample opportunity to modify their code.
The key here is strategic identification of reliance on the old code behaviour so that the user can be properly
notified of the need to handle the deprecation.
.. hint::
Using the deprecation process ensures that each step only involves a minor version bump to the extension as
existing code will continue to work. In the final phase of removal of the original behavior a major version
bump may be required.
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_python.rst | .. _omnigraph_versioning_python:
Python Support Versioning
#########################
Python has the simultaneous strength and weakness of introspection, where users can access anything that appears in
the Python scripts, making all of the Python deliverables a potential part of the API. This is not a reasonable amount
of code to support so part of the Python versioning involves defining the API "surface" that versioning support will
be provided for.
Python Bindings Support
***********************
Python bindings can be versioned in parallel with the ABI since for the most part they all correspond to ABI functions.
There are exceptions where binding functions are written as variations of ABI functions to make them more Pythonic, or
to provide missing functionality like creation of Python class wrappers. These should be versioned in exactly the same
way as the ABI functions so in any given extension version the set of bindings will be consistent.
Defining The Python API Surface
*******************************
To support proper versioning we first have to be able to define what constitutes "a version" of a set of Python
scripts. The easiest way to do this is to use the import system to define the specific set of Python objects that
will be supported as part of a version in the way that many popular packages such as *numpy* and *pandas* do.
.. code-block:: python
import omni.graph.core as og
The officially supported Python API "surface" is everything that can be imported in this way. There also might be
some submodules that have separate import support, in the same way you can use both *import os* and *import os.path*.
The extension will provide the set of supported imports in its help information.
Hiding Implementation Definitions
=================================
Python programmers have an understanding that elements with a leading underscore or double underscore ("dunder") are
not meant to be public so this can be used to signal this fact to anyone looking through the import structure.
This is done at a high level using a module structure that looks like this:
.. code-block:: text
omni.my.extension/
└── python/
    ├── __init__.py
    ├── _impl/
    │   ├── extension.py
    │   └── my_script.py
    └── tests/
        └── test_my_extension.py
With this structure, anyone trying to import anything that's not officially supported would be faced with
something like this:
.. code-block:: python
import omni.my.extension._impl.my_script as my_script
my_script.my_script()
It's still possible to do this kind of thing but any Python programmer will realize there is something wrong with that.
The method for accessing this same function in an officially supported way would be this:
.. code-block:: python
import omni.my.extension as me
me.my_script()
Inside the *__init__.py* file the officially supported API surface is all imported from wherever it happens to be
defined so that it can be accessed in this way.
.. code-block:: python
:caption: __init__.py
"""Define the API surface for omni.my.extension
To access the API for this extension use this import:
import omni.my.extension as me
To access the testing support for this extension use this import:
import omni.my.extension.tests as met
"""
from ._impl.extension import PublicExtension # Necessary for automatic extension initialization
from ._impl.my_script import my_script
.. code-block:: python
:caption: tests/__init__.py
"""Define the API surface for omni.my.extension.tests
To access the testing support for this extension use this import:
import omni.my.extension.tests as met
"""
from ._impl.test_support import load_test_file
.. note::
For future examples the comments will be elided for clarity but the API should always be commented.
Adding Version Support
**********************
After the soft deprecation period is over the deprecated features will move out of the main imports and into a new
location that is version-specific. This will allow continued support for deprecated functions without cluttering up
the active code.
.. warning::
This is one approach to making a version-specific API layer, to be vetted.
A deprecated version should be presented to the user as a fully intact entity so that it can be used pretty much as
it was before deprecation. For example, if I have upgraded my extension from version 1, which contained a function
*old_function()*, to version 2, which deprecates that function and adds *new_function()*, then the module structure will
be altered to this:
.. code-block:: text
omni.my.extension/
└── python/
    ├── __init__.py
    ├── _impl/
    │   ├── extension.py
    │   └── my_script.py
    └── _1/
        ├── __init__.py
        └── my_script.py
Only the new function will be exposed in the main API definition:
.. code-block:: python
:caption: __init__.py
from ._impl.extension import PublicExtension
from ._impl.my_script import new_function
.. code-block:: python
:caption: _1/__init__.py
from .my_script import old_function
A-La Carte Access
=================
The most flexible method of access is a mix-and-match approach that picks up the exact version of everything you
want. This is intentionally awkward to do, as it is not a usage pattern we want to encourage; it is more of a back door
that allows users to revert to previous behaviour in very specific places.
.. code-block:: python
:caption: user_code.py
import omni.my.extension as me
import omni.my.extension._1 as me1
if some_condition:
me.new_function()
else:
me1.old_function()
Always-Latest Access
====================
The most desired type of access is to use the latest API only so that when a new version is added your code will
already be using it. So long as we keep the top level *__init__.py* updated to define the latest API this is also the
easiest to use:
.. code-block:: python
:caption: user_code.py
import omni.my.extension as me
me.new_function()
Fixed Version Access
====================
Users who wish to rely on a stable base can lock to a specific version.
This will only affect them at some point in the future when something in that version is hard-deprecated.
.. code-block:: python
:caption: user_code.py
# Note the leading underscore on the version, hinting that it should not be accessed directly
import omni.my.extension._1 as me
me.old_function()
Module Constant Gated Access
============================
We also want to support a hybrid approach where users can enable the new interface in order to check whether they
need to modify any code for compatibility, without being locked into it if the required work cannot happen all at
once.
A Pythonic approach to this problem is to modify the interface definition to have a module-level constant that can be
set to different values in order to dynamically flip between different versions using the same import statement.
.. code-block:: python
:caption: __init__.py
import os

__version = int(os.getenv("omni.my.extension", 2))
if __version == 1:
    from ._1 import *
elif __version == 2:
    from ._impl.my_script import new_function
else:
    raise ValueError(f"Environment variable omni.my.extension must be in [1, 2] - got {__version}")
.. note::
This is intentionally not something that can change at runtime as that would have unpredictable effects. Something more
sophisticated like a sequence of **unload module** --> **edit environment variable** --> **reload module** would be
required to support that.
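As a rough sketch of that sequence (illustrative only, and subject to the caveat that submodules which were already
imported stay cached, which is why something more sophisticated would be needed in practice):

.. code-block:: python

    import importlib
    import os

    import omni.my.extension as me

    # Select the deprecated version 1 API, then re-execute __init__.py to pick it up
    os.environ["omni.my.extension"] = "1"
    importlib.reload(me)
    me.old_function()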
Deprecation Of Python Objects
*****************************
In order to follow the :ref:`omnigraph_deprecation_approach` for Python objects there has to be different treatment
for different types of Python objects so as to provide the deprecation information to the user in an informative but
non-intrusive manner.
The soft deprecation stage is especially important in Python as the things being deprecated can be small and change
at a relatively high frequency.
To make it easier to deprecate Python code a number of decorators have been created which can be accessed through
this import:
.. code-block:: python
import omni.graph.tools.deprecate as ogd
help(ogd)
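As a purely illustrative sketch of how such a decorator might be applied (the decorator name *deprecated_function*
is an assumption made here; use *help(ogd)* to see what the module actually provides):

.. code-block:: python

    import omni.graph.tools.deprecate as ogd

    # Hypothetical decorator name, shown only to illustrate the usage pattern
    @ogd.deprecated_function("old_function() is deprecated - use new_function() instead")
    def old_function():
        ...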
You can see the details of how to deprecate everything in the :ref:`Python deprecation documentation<omnigraph_deprecation>`.
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/internal.rst | :orphan:
.. _omnigraph_internal_development:
Kit Internal Documentation
##########################
.. note::
This documentation is used internally and is intended for Kit developers, though it may provide some insights
to Kit SDK users as well.
.. toctree::
:maxdepth: 1
:glob:
Deprecated Code For Schema Prim<SchemaPrimSetting>
Versioning and Deprecation <versioning>
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning.rst | .. _omnigraph_internal_versioning:
OmniGraph Versioning And Deprecation
####################################
There are many ways in which the OmniGraph code can change from one release to another. This document sets out to
enumerate all of them and provide guidelines on how to handle each of them.
.. important::
The overall philosophy of versioning in OmniGraph is "First do no harm". Many internal and external customers will
be relying on many different versions of OmniGraph so it is up to us to minimize their disruption by providing
backward compatibility for as long as possible. Breaking changes should be rare, and made only when vital for future development.
.. _omnigraph_deprecation_approach:
Deprecation Approach
********************
On those rare occasions where it is necessary to deprecate and eventually remove existing functionality, the deprecation
should always follow a safe process. This is the generic set of steps to follow for any kind of deprecation. You can
see what it should look like to the user in :ref:`omnigraph_deprecation_communication`.
If there is a significant amount of code to deprecate then you should move it into a separate location for easier
removal later. External functions follow the Carbonite ABI versioning described in :ref:`omnigraph_versioning_carbonite`.
If an internal function is significantly different between the two behaviors then
introduce a new version of the function and move the old implementation to a `deprecated/` subdirectory.
.. code-block:: c++
:caption: Pre-Deprecation Version
struct MyStruct
{
void mySnazzyFunction();
};
.. code-block:: c++
:caption: Post-Deprecation Version
struct MyStruct
{
void mySnazzyFunction();
private:
// Deprecated functions
void mySnazzyFunctionV1();
};
void MyStruct::mySnazzyFunction()
{
if (OmniGraphSettings::myDeprecationFlag())
{
mySnazzyFunctionV1();
return;
}
// do the new stuff
};
Code Notices
************
It's a good idea to clearly mark all of the deprecated code paths. Here is a simple but
effective comment that can be inserted before any deprecated code.
.. code-block:: cpp
// ==============================================================================================================
// _____ ______ _____ _____ ______ _____ _______ ______ _____
// | __ | | ____|| __ \ | __ \ | ____|/ ____| /\ |__ __|| ____|| __ |
// | | | || |__ | |__) || |__) || |__ | | / \ | | | |__ | | | |
// | | | || __| | ___/ | _ / | __| | | / /\ \ | | | __| | | | |
// | |__| || |____ | | | | \ \ | |____| |____ / ____ \ | | | |____ | |__| |
// |_____/ |______||_| |_| \_\|______|\_____|/_/ \_\|_| |______||_____/
//
// Code below here is no longer recommended for use in new code.
//
// ==============================================================================================================
Specific Versioning Information
*******************************
These documents illustrate versioning and deprecation information that is specific to the modification of particular
facets of OmniGraph.
.. toctree::
:maxdepth: 1
:glob:
Carbonite ABIs <versioning_carbonite>
ONI ABIs <versioning_oni>
Existing Node Types <versioning_nodes>
Code Behavior <versioning_behavior>
Extension Numbering <versioning_extensions>
Using Settings For Deprecation<versioning_settings>
USD File Format<versioning_usd>
Python Support<versioning_python>
Python Deprecation<deprecation>
Compatibility Testing<versioning_testing>
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_testing.rst | .. _omnigraph_versioning_testing:
Testing Version Upgrades
########################
In order to properly support backward compatibility we need tests in our own code that verify that any code that has not
been fully deprecated continues to be accessible and, if possible, functions as intended.
In an ideal world we would have tests with full code coverage; when we deprecate some code we would move the
tests exercising the old functionality to the deprecated section and add new tests to exercise the new code.
Practically speaking this isn't possible, so there will be a compromise between thoroughness and test time.
These are some of the areas to consider when designing tests for versioning:
- Use the ETM (Extension Test Matrix) to verify our new extension versions against existing users of our extensions
- For Python code, create "button tests" where every import in our API surface is checked for type and existence (see the sketch after this list)
- For C++ code, create "ABI tests" whose only job is to exercise every function in the ABI
- For Python bindings, create some combination of the above two that verifies every function in the ABI
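Below is a minimal sketch of such a "button test". It uses plain *unittest* for brevity (in Kit the test would
normally derive from the *omni.kit.test* base classes) and assumes the extension publishes its API surface through
*__all__*:

.. code-block:: python

    import unittest

    import omni.graph.core as og


    class TestApiSurface(unittest.TestCase):
        """Verify that every advertised public symbol exists and is not None."""

        def test_public_symbols_exist(self):
            for name in getattr(og, "__all__", []):
                with self.subTest(symbol=name):
                    self.assertTrue(hasattr(og, name), f"Missing public symbol {name}")
                    self.assertIsNotNone(getattr(og, name))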
Creating these tests will require some test scaffolding that allows us to confirm coverage. Ideally this would take the
form of a code-coverage tool so that we don't have to write our own as that is prone to omissions.
.. note::
Performance testing is an important part of compatibility however it will be handled in a separate effort.
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_settings.rst | .. _omnigraph_versioning_settings:
OmniGraph Settings For Deprecation
##################################
The OmniGraph settings are instantiations of `carb::settings` that are specific to OmniGraph. They are used to put in
temporary settings for deprecation that follow the standard :ref:`omnigraph_deprecation_approach`.
The settings for deprecation will use the namespace `/persistent/omnigraph/deprecation/` to emphasize the fact that
these are temporary and meant to control deprecated code.
Add The Deprecation Setting
***************************
The settings are all controlled through `omni.graph.core/plugins/OmniGraphSettings.{h,cpp}` so the first thing to do
is go there and create a new setting with the deprecation name and set up its default value to be the one that will
take the original code path. You may need to add some compatibility code if two or more of the settings have illegal
combinations. Add an accessor method as well so that your code can read it to decide which path to take.
It will also be helpful to add a Python constant containing the path to the new setting in the file
`omni.graph/python/_impl/settings.py`
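A sketch of what that constant might look like (the surrounding class layout in *settings.py* is an assumption made
only for illustration; the setting path reuses the deprecation namespace described above):

.. code-block:: python
    :caption: omni.graph/python/_impl/settings.py (sketch)

    class Settings:
        # ...existing setting path constants...

        # Temporary deprecation setting - remove when the deprecation becomes permanent
        MY_NEW_THING = "/persistent/omnigraph/deprecation/myNewThing"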
Add UI Access Of The Setting
****************************
For testing and test migrations it's useful to add access to the setting through the standard settings UI. You can
do this by going to `omni.graph.ui/python/scripts/omnigraph_settings_editor.py` and adding a new settings widget that
is customized to the path of your new setting.
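The underlying idea is simply to bind a UI control to the carb setting. Here is a rough sketch using plain *omni.ui*
and *carb.settings* rather than the settings editor's own widget helpers (the setting path matches the example used
later in this document):

.. code-block:: python

    import carb.settings
    import omni.ui as ui

    SETTING_PATH = "/persistent/omnigraph/deprecation/myNewThing"

    def build_deprecation_checkbox():
        """Build a checkbox that mirrors and updates the deprecation setting.

        Intended to be called from inside an existing UI build function.
        """
        settings = carb.settings.get_settings()
        model = ui.SimpleBoolModel(bool(settings.get_as_bool(SETTING_PATH)))
        model.add_value_changed_fn(
            lambda m: settings.set_bool(SETTING_PATH, m.get_value_as_bool())
        )
        with ui.HStack(height=20):
            ui.Label("Use my new thing")
            ui.CheckBox(model)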
Modify C++ Code Paths Using The Setting
***************************************
If you've added an access method for your setting then you can switch code paths by checking that setting. This
example is how you do it if you are within the core where the settings object is accessible.
.. code-block:: cpp
#include "OmniGraphSettings.h"
void someCode()
{
if (OmniGraphSettings::myDeprecationIsActive())
{
doTheNewThing();
}
else
{
// It's important to only warn once to avoid spamming the user's console
CARB_LOG_WARNING_ONCE("The old thing is deprecated, use the new thing instead");
doTheDeprecatedThing();
}
}
If you have code that does not live in the core but changes behaviour based on the setting then you have to use the
settings ABI to access it. The most common reason for this would be changing behavior within the core Python bindings.
.. code-block:: cpp
#include <carb/settings/ISettings.h>
void someCode()
{
auto iSettings = carb::getCachedInterface<carb::settings::ISettings>();
if (! iSettings)
{
CARB_LOG_ERROR_ONCE("Unable to access the iSettings interface");
}
// If the settings interface isn't available assume the default.
// The hardcoded name is here to avoid exposing the string in the API when we know it will go away soon.
if (iSettings && iSettings->getAsBool("/persistent/omnigraph/deprecation/myNewThing"))
{
doTheNewThing();
}
else
{
// It's important to only warn once to avoid spamming the user's console
CARB_LOG_WARNING_ONCE("The old thing is deprecated, use the new thing instead");
doTheDeprecatedThing();
}
}
Modify Python Code Paths Using The Setting
******************************************
After adding the new setting to the Python *og.Settings* class you can use it to select code paths based on its value.
As the Python API is always visible there will be direct access to it and you won't have to drop down to the Carbonite
settings bindings. You can see its definition at :py:class:`omni.graph.core.Settings`
.. code-block:: python
import omni.graph.core as og
def some_code():
if og.Settings()(og.Settings.MY_SETTING_NAME):
do_the_new_thing()
else:
do_the_old_thing()
You can also use the class as a context manager to temporarily modify the setting, most useful in compatibility tests:
.. code-block:: python
import omni.graph.core as og
with og.Settings.temporary(og.Settings.MY_SETTING_NAME, False):
do_the_old_thing()
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_oni.rst | .. _omnigraph_versioning_oni:
OmniGraph Versioning For ONI Interfaces
#######################################
See the description of the :ref:`omnigraph_oni_versions`.
For this example the following sample interface will be used, shown in its initial form.
.. code-block:: cpp
OMNI_DECLARE_INTERFACE(ITestInterface);
class ITestInterface_abi
: public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("omni.graph.core.ITestInterface")>
{
protected:
/**
* Does the old thing
*
* @param[in] value The value of the old thing
*/
virtual void oldThing_abi(int value) noexcept = 0;
};
class TestInterface : public omni::Implements<ITestInterface>
{
public:
TestInterface() = default;
~TestInterface() = default;
protected:
void oldThing_abi(int value) noexcept override
{
doSomething(value);
return;
}
};
void getInterfaceImplementations(const omni::InterfaceImplementation** out, uint32_t* outCount)
{
static const char* interfacesImplemented[] = { "ITestInterface" };
static omni::InterfaceImplementation impls[] = {
{
"omni.graph.core.ITestInterface",
[]() { return static_cast<omni::IObject*>(new TestInterface); },
1,
interfacesImplemented, CARB_COUNTOF32(interfacesImplemented)
}
};
*out = impls;
*outCount = CARB_COUNTOF32(impls);
}
To call the interface code you instantiate one of them and call an interface function:
.. code-block:: cpp
auto iTestOld = omni::core::createType<ITestInterface>();
iTestOld->oldThing(1);
Adding A Function
*****************
A new function is added by creating a new interface that derives from the old interface and implements the new function.
.. code-block:: cpp
:emphasize-lines: 2,13,30,38-43
class ITestInterface2_abi
: public omni::core::Inherits<omni::graph::core::ITestInterface, OMNI_TYPE_ID("omni.graph.core.ITestInterface2")>
{
protected:
/**
* Does the new thing
*
* @param[in] value The value of the new thing
*/
virtual void newThing_abi(int value) noexcept = 0;
};
class TestInterface2 : public omni::Implements<ITestInterface2>
{
public:
TestInterface2() = default;
~TestInterface2() = default;
protected:
void newThing_abi(int value) noexcept override
{
doSomethingNew(value);
return;
}
};
void getInterfaceImplementations(const omni::InterfaceImplementation** out, uint32_t* outCount)
{
static const char* interfacesImplemented[] = { "ITestInterface" };
static const char* interfacesImplemented2[] = { "ITestInterface2" };
static omni::InterfaceImplementation impls[] = {
{
"omni.graph.core.ITestInterface",
[]() { return static_cast<omni::IObject*>(new TestInterface); },
1,
interfacesImplemented, CARB_COUNTOF32(interfacesImplemented)
},
{
"omni.graph.core.ITestInterface2",
[]() { return static_cast<omni::IObject*>(new TestInterface2); },
1,
interfacesImplemented2, CARB_COUNTOF32(interfacesImplemented2)
}
};
*out = impls;
*outCount = CARB_COUNTOF32(impls);
}
The old code is untouched by this and continues to function as usual. Any code wishing to use the new function has to
instantiate the new interface.
.. code-block:: cpp
auto iTestOld = omni::core::createType<ITestInterface>();
iTestOld->oldThing(1);
// iTestOld->newThing(2); // Would fail - this behaves as the old interface only
auto iTestNew = omni::core::createType<ITestInterface2>();
iTestNew->oldThing(1); // Still works
iTestNew->newThing(2);
Deprecating Functions
*********************
On the rare occasion where you want to deprecate a function entirely you must follow this process to
ensure maximum compatibility and notification to affected users.
Step 1: Add a Deprecation Notice
================================
The team should decide exactly when a deprecation will happen, as it is beneficial to group deprecations together to
minimize the work required on the user end to deal with the deprecations. Once that's decided the first thing to do is
to communicate the deprecation intent through the decided-upon channels.
Once that has happened the implementation of the interface function will add a warning message that the function is
deprecated and will be going away.
.. code-block:: cpp
:emphasize-lines: 8,26
OMNI_DECLARE_INTERFACE(ITestInterface);
class ITestInterface_abi
: public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("omni.graph.core.ITestInterface")>
{
protected:
/**
* This function is deprecated - use ITestInterface2::newThing() instead
*
* Does the old thing
*
* @param[in] value The value of the old thing
*/
virtual void oldThing_abi(int value) noexcept = 0;
};
class TestInterface : public omni::Implements<ITestInterface>
{
public:
TestInterface() = default;
~TestInterface() = default;
protected:
void oldThing_abi(int value) noexcept override
{
CARB_LOG_WARNING("ITestInterface::oldFunction() is deprecated - use ITestInterface::newFunction() instead");
doSomething(value);
return;
}
};
Step 2: Make The Deprecated Path An Error
=========================================
To ensure the use of the deprecated path is highly visible in this phase the warning is upgraded to an error and the
comments on the function make it more clear that it is not to be used in new code.
.. code-block:: cpp
:emphasize-lines: 8,22
OMNI_DECLARE_INTERFACE(ITestInterface);
class ITestInterface_abi
: public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("omni.graph.core.ITestInterface")>
{
protected:
/**
* This function is deprecated - use ITestInterface2::newThing() instead
*/
virtual void oldThing_abi(int value) noexcept = 0;
};
class TestInterface : public omni::Implements<ITestInterface>
{
public:
TestInterface() = default;
~TestInterface() = default;
protected:
void oldThing_abi(int value) noexcept override
{
CARB_LOG_ERROR("ITestInterface::oldFunction() is deprecated - use ITestInterface::newFunction() instead");
doSomething(value);
return;
}
};
.. note::
You can choose to make the deprecation even more emphatic if calling the function could do something bad like
create instability in the code. You can do so by using a `throw` instead of just `CARB_LOG_ERROR`.
Step 3: Hard Deprecation Of The Function
========================================
When the final step is necessary you must bump the major version number of the extension as this is a breaking change.
You must also bump the version of the interface so that in those cases where an extension has access to both the old
and new versions of the interface it will prefer the new one. The interface class remains though, even if the last
function has been taken from it.
.. important::
This type of hard deprecation is extremely disruptive as it will require extensions to be rebuilt and possibly
modified to conform to the new interface. It should only be used when there is something in the interface that
is intrinsically dangerous or difficult to continue supporting.
.. code-block:: cpp
:emphasize-lines: 5-6,14-15,25
OMNI_DECLARE_INTERFACE(ITestInterface);
class ITestInterface_abi
: public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("omni.graph.core.ITestInterface")>
{
};
class TestInterface : public omni::Implements<ITestInterface>
{
public:
TestInterface() = default;
~TestInterface() = default;
protected:
};
void getInterfaceImplementations(const omni::InterfaceImplementation** out, uint32_t* outCount)
{
static const char* interfacesImplemented[] = { "ITestInterface" };
static const char* interfacesImplemented2[] = { "ITestInterface2" };
static omni::InterfaceImplementation impls[] = {
{
"omni.graph.core.ITestInterface",
[]() { return static_cast<omni::IObject*>(new TestInterface); },
2,
interfacesImplemented, CARB_COUNTOF32(interfacesImplemented)
},
{
"omni.graph.core.ITestInterface2",
[]() { return static_cast<omni::IObject*>(new TestInterface2); },
1,
interfacesImplemented2, CARB_COUNTOF32(interfacesImplemented2)
}
};
*out = impls;
*outCount = CARB_COUNTOF32(impls);
}
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/SchemaPrimSetting.rst | Schema Prim Setting Flow
########################
The **useSchemaPrim** setting controls how the graph is backed by USD. The code was refactored to divert as much of the
old code as possible into the *deprecated/* directory so that it is isolated and won't change until such time as the
setting is made permanent and it can be deleted.
The graph is divided into three sections - code that was converted fully to use the schema approach, hybrid code that
uses the setting to switch between schema and non-schema behaviours, and deprecated code that is only used by the
non-schema code paths.
.. _omnigraph_extension_dependency_graph:
.. mermaid::
flowchart LR
subgraph Converted
ComputeGraphImpl::parseGraph
Graph::attachToStage
Graph::changePipelineStage
Graph::initFromStage
Graph::onNodeTypeRegistered
Graph::postLoadUpgradeFileFormatVersion
Graph::rangeContainsOGNodes
Graph::writeSettingsInUSD
GraphContext::addPrimToCache
GraphContext::attachToStage
GraphContext::createNeededNodesForPrim
GraphContext::createNewNode
GraphContext::createNewSubgraph
GraphContext::initializeFabric
GraphContext::updateChangeProcessing
end
subgraph Uses Setting
Graph::couldPrimBelongToGraph
Graph::createGraphAsNode
Graph::findFileFormatVersionInRange
Graph::initContextAndFabric
Graph::isGlobalGraphPrim
Graph::isOmniGraphPrim
Graph::isPrimAnOmniGraphMember
Graph::reloadFromStage
Graph::requiresFileFormatCallback
Graph::setNodeInstance
Graph::setPathToGraph
Graph::writeFileFormatVersion
Graph::writeSettingsToUSD
Graph::writeTokenSetting
GraphContext::_::getPrimsNodeTypeAttribute
GraphContext::_addGatherPrimToFlatCacheIndex
GraphContext::_addPrimToFlatCacheIndex
GraphContext::checkForDynamicAttributes
GraphContext::createNewNodeWithUSD
GraphContext::createNewSubgraph
GraphContext::createNodeFromPrim
GraphContext::createNodesAndSubgraphs
GraphContext::initializeNode
GraphContext::getUpstreamPrims
GraphContext::reloadGraphSettings
Node::isPrimAnOmniGraphNode
Node::getPrimsNodeTypeAttribute
Node::Node
OmniGraphUsdNoticeListener::processNewAndDeletedGraphs
PluginInterface::attach
end
subgraph Deprecated
ComputeGraphImpl::parseGraphNoSchema
Graph::attachToStageNoSchema
Graph::couldPrimBelongToGraphNoSchema
Graph::createGraphAsNodeNoSchema
Graph::findFileFormatVersionInRangeNoSchema
Graph::initContextAndFlatCache
Graph::initContextAndFlatCacheNoSchema
Graph::initFromStageNoSchema
Graph::isGlobalGraphPrimNoSchema
Graph::reloadFromStageNoSchema
Graph::writeFileFormatVersionNoSchema
Graph::writeSettingsToUSDNoSchema
GraphContext::_addPrimToFlatCacheIndexNoSchema
GraphContext::attachToStageNoSchema
GraphContext::checkForDynamicAttributesNoSchema
GraphContext::createNewSubgraphNoSchema
GraphContext::createNodesAndSubgraphsNoSchema
GraphContext::initializeFlatcacheNoSchema
GraphContext::reloadGraphSettingsNoSchema
NoSchemaSupport::writeEvaluatorTypeToSettings
NoSchemaSupport::writeGraphBackingTypeToSettings
NoSchemaSupport::writeGraphPipelineStageToSettings
NoSchemaSupport::writeEvaluationModeToSettings
NoSchemaSupport::writeTokenSetting
PluginInterface::attachNoSchema
end
ComputeGraphImpl::parseGraph --> ComputeGraphImpl::parseGraphNoSchema
ComputeGraphImpl::parseGraph --> Graph::attachToStage
ComputeGraphImpl::parseGraph --> Graph::findFileFormatVersionInRange
ComputeGraphImpl::parseGraph --> Graph::initFromStage
ComputeGraphImpl::parseGraph --> Graph::setNodeInstance
ComputeGraphImpl::parseGraph --> GraphContext::initializeFabric
ComputeGraphImpl::parseGraph --> Node::Node
GraphContext::addPrimToCache --> GraphContext::_addPrimToFlatCacheIndex
GraphContext::attachToStage --> Graph::writeSettingsInUSD
GraphContext::attachToStage --> GraphContext::createNodesAndSubgraphs
GraphContext::createNeededNodesForPrim --> GraphContext::createNodeFromPrim
GraphContext::createNewNode --> Graph::setNodeInstance
GraphContext::createNewNodeWithUSD --> GraphContext::_addPrimToFlatCacheIndex
GraphContext::createNodeFromPrim --> Graph::setNodeInstance
GraphContext::createNodeFromPrim --> GraphContext::initializeNode
GraphContext::createNodesAndSubgraphs --> Graph::writeSettingsInUSD
GraphContext::createNodesAndSubgraphs --> GraphContext::createNeededNodesForPrim
GraphContext::createNodesAndSubgraphs --> GraphContext::createNodesAndSubgraphs
GraphContext::initializeFabric --> GraphContext::_addGatherPrimToFlatCacheIndex
GraphContext::initializeFabric --> GraphContext::_addGatherPrimToFlatCacheIndex
GraphContext::initializeFabric --> GraphContext::_addPrimToFlatCacheIndex
GraphContext::initializeFabric --> GraphContext::_addPrimToFlatCacheIndex
GraphContext::initializeNode --> GraphContext::_::getPrimsNodeTypeAttribute
GraphContext::initializeNode --> GraphContext::checkForDynamicAttributes
GraphContext::initializeNode --> Node::getPrimsNodeTypeAttribute
GraphContext::updateChangeProcessing --> GraphContext::_::getPrimsNodeTypeAttribute
GraphContext::updateChangeProcessing --> GraphContext::_addPrimToFlatCacheIndex
GraphContext::updateChangeProcessing --> GraphContext::createNodesAndSubgraphs
GraphContext::updateChangeProcessing --> Node::getPrimsNodeTypeAttribute
GraphContext::_addPrimToFlatCacheIndex --> GraphContext::_addPrimToFlatCacheIndexNoSchema
GraphContext::attachToStage --> GraphContext::attachToStageNoSchema
GraphContext::checkForDynamicAttributes --> GraphContext::checkForDynamicAttributesNoSchema
GraphContext::createNewSubgraph --> GraphContext::createNewSubgraphNoSchema
GraphContext::createNodesAndSubgraphs --> GraphContext::createNodesAndSubgraphsNoSchema
GraphContext::initializeFabric --> GraphContext::initializeFlatcacheNoSchema
GraphContext::reloadGraphSettings --> GraphContext::reloadGraphSettingsNoSchema
Graph::attachToStage --> Graph::attachToStageNoSchema
Graph::attachToStage --> GraphContext::attachToStage
Graph::changePipelineStage --> Graph::setNodeInstance
Graph::couldPrimBelongToGraph --> Graph::couldPrimBelongToGraphNoSchema
Graph::couldPrimBelongToGraph --> Graph::isGlobalGraphPrim
Graph::couldPrimBelongToGraph --> Graph::isPrimAnOmniGraphMember
Graph::createGraphAsNode --> Graph::createGraphAsNodeNoSchema
Graph::createGraphAsNode --> Graph::setNodeInstance
Graph::createGraphAsNode --> GraphContext::initializeFabric
Graph::findFileFormatVersionInRange --> Graph::findFileFormatVersionInRangeNoSchema
Graph::initContextAndFabric --> GraphContext::initializeFabric
Graph::initContextAndFlatCache --> Graph::initContextAndFlatCacheNoSchema
Graph::initFromStage --> Graph::attachToStage
Graph::initFromStage --> Graph::initContextAndFabric
Graph::initFromStage --> Graph::initFromStageNoSchema
Graph::isGlobalGraphPrim --> Graph::isGlobalGraphPrimNoSchema
Graph::postLoadUpgradeFileFormatVersion --> Graph::writeFileFormatVersion
Graph::reloadFromStage --> Graph::attachToStage
Graph::reloadFromStage --> Graph::rangeContainsOGNodes
Graph::reloadFromStage --> Graph::reloadFromStageNoSchema
Graph::reloadFromStage --> GraphContext::initializeFabric
Graph::writeFileFormatVersion --> Graph::writeFileFormatVersionNoSchema
Graph::writeSettingsToUSD --> Graph::writeFileFormatVersionNoSchema
Graph::writeSettingsToUSD --> Graph::writeSettingsToUSDNoSchema
Graph::writeSettingsToUSDNoSchema --> NoSchemaSupport::writeEvaluatorTypeToSettings
Graph::writeSettingsToUSDNoSchema --> NoSchemaSupport::writeGraphBackingTypeToSettings
Graph::writeSettingsToUSDNoSchema --> NoSchemaSupport::writeGraphPipelineStageToSettings
NoSchemaSupport::writeEvaluatorTypeToSettings --> NoSchemaSupport::writeTokenSetting
NoSchemaSupport::writeGraphBackingTypeToSettings --> NoSchemaSupport::writeTokenSetting
NoSchemaSupport::writeGraphPipelineStageToSettings --> NoSchemaSupport::writeTokenSetting
NoSchemaSupport::writeEvaluationModeToSettings --> NoSchemaSupport::writeTokenSetting
Node::Node --> Node::isPrimAnOmniGraphNode
PluginInterface::attach --> ComputeGraphImpl::parseGraph
PluginInterface::attach --> Graph::findFileFormatVersionInRange
PluginInterface::attach --> Graph::initContextAndFabric
PluginInterface::attach --> Graph::rangeContainsOGNodes
PluginInterface::attach --> PluginInterface::attachNoSchema
OmniGraphUsdNoticeListener::processNewAndDeletedGraphs --> ComputeGraphImpl::parseGraph
OmniGraphUsdNoticeListener::processNewAndDeletedGraphs --> Graph::isGlobalGraphPrim
|
omniverse-code/kit/exts/omni.graph.core/docs/internal/versioning_carbonite.rst | .. _omnigraph_versioning_carbonite:
OmniGraph Versioning For Carbonite Interfaces
#############################################
See the description of the :ref:`omnigraph_carbonite_versions`.
For this example the following sample interface will be used, shown in its initial form. Note the structure integrity
check at the end, designed to catch situations where a new function is unintentionally added to the middle of the interface.
.. code-block:: cpp
class ITestInterface
{
CARB_PLUGIN_INTERFACE("omni::graph::core::ITestInterface", 1, 0);
/**
* Does the old thing
*
* @param[in] value The value of the old thing
*/
void (CARB_ABI* oldFunction)(int value);
};
// Update this every time a new ABI function is added, to ensure one isn't accidentally added in the middle
STRUCT_INTEGRITY_CHECK(ITestInterface, oldFunction, 1)
void oldFunction_impl(int value)
{
doSomething(value);
return;
}
Adding A Function
*****************
A new function is added. The version number is bumped and the structural integrity check is updated.
.. code-block:: cpp
:emphasize-lines: 3,12-17,20,28-32
class ITestInterface
{
CARB_PLUGIN_INTERFACE("omni::graph::core::ITestInterface", 1, 1);
/**
* Does the old thing
*
* @param[in] value The value of the old thing
*/
void (CARB_ABI* oldFunction)(int value);
/**
* Does the new thing
*
* @param[in] value The value of the new thing
*/
void (CARB_ABI* newFunction)(int value);
};
// Update this every time a new ABI function is added, to ensure one isn't accidentally added in the middle
STRUCT_INTEGRITY_CHECK(ITestInterface, newFunction, 2)
void oldFunction_impl(int value)
{
doSomething(value);
return;
}
void newFunction_impl(int value)
{
doSomethingNew(value);
return;
}
Deprecating Functions
*********************
On the rare occasion where you want to deprecate a function entirely you must follow this three-step process to
ensure maximum compatibility and notification to affected users.
Step 1: Add a Deprecation Notice
================================
The team should decide exactly when a deprecation will happen, as it is beneficial to group deprecations together to
minimize the work required on the user end to deal with the deprecations. Once that's decided the first thing to do is
to communicate the deprecation intent through the decided-upon channels.
Once that has happened the implementation of the interface function will add a warning message that the function is
deprecated and will be going away.
.. code-block:: cpp
:emphasize-lines: 6,26
class ITestInterface
{
CARB_PLUGIN_INTERFACE("omni::graph::core::ITestInterface", 1, 1);
/**
* This function is deprecated - use newFunction() instead
*
* Does the old thing
*
* @param[in] value The value of the old thing
*/
void (CARB_ABI* oldFunction)(int value);
/**
* Does the new thing
*
* @param[in] value The value of the new thing
*/
void (CARB_ABI* newFunction)(int value);
};
// Update this every time a new ABI function is added, to ensure one isn't accidentally added in the middle
STRUCT_INTEGRITY_CHECK(ITestInterface, newFunction, 2)
void oldFunction_impl(int value)
{
CARB_LOG_WARNING("ITestInterface::oldFunction() is deprecated - use ITestInterface::newFunction() instead");
doSomething(value);
return;
}
void newFunction_impl(int value)
{
doSomethingNew(value);
return;
}
Step 2: Make The Deprecated Path An Error
=========================================
To ensure the use of the deprecated path is highly visible in this phase the warning is upgraded to an error and the
comments on the function make it more clear that it is not to be used in new code.
.. code-block:: cpp
:emphasize-lines: 6-7,22
class ITestInterface
{
CARB_PLUGIN_INTERFACE("omni::graph::core::ITestInterface", 1, 1);
/**
* This function is deprecated - use newFunction() instead
*/
void (CARB_ABI* oldFunction)(int value);
/**
* Does the new thing
*
* @param[in] value The value of the new thing
*/
void (CARB_ABI* newFunction)(int value);
};
// Update this every time a new ABI function is added, to ensure one isn't accidentally added in the middle
STRUCT_INTEGRITY_CHECK(ITestInterface, newFunction, 2)
void oldFunction_impl(int value)
{
CARB_LOG_ERROR("ITestInterface::oldFunction() is deprecated - use ITestInterface::newFunction() instead");
doSomething(value);
return;
}
void newFunction_impl(int value)
{
doSomethingNew(value);
return;
}
Step 3: Hard Deprecation Of The Function
========================================
To ensure ABI integrity, functions may not be removed from the interface; however, they can be renamed and all
information about what they once were can be removed. At this point calls to them will no longer function; they will
only issue an error. The function signatures remain the same to avoid type errors. The implementation of the deprecated
function can be moved to a common graveyard of deprecated functions.
.. code-block:: cpp
:emphasize-lines: 5,23-27
class ITestInterface
{
CARB_PLUGIN_INTERFACE("omni::graph::core::ITestInterface", 1, 1);
/* DEPRECATED */ void (CARB_ABI* deprecated_1)(int);
/**
* Does the new thing
*
* @param[in] value The value of the new thing
*/
void (CARB_ABI* newFunction)(int value);
};
// Update this every time a new ABI function is added, to ensure one isn't accidentally added in the middle
STRUCT_INTEGRITY_CHECK(ITestInterface, newFunction, 2)
void newFunction_impl(int value)
{
doSomethingNew(value);
return;
}
void oldFunction_impl(int)
{
CARB_LOG_ERROR("ITestInterface::oldFunction() is deprecated - use ITestInterface::newFunction() instead");
return;
}
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/PACKAGE-LICENSES/omni.kit.widget.text_editor-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.widget.text_editor/config/extension.toml | [package]
title = "Text Editor"
category = "Internal"
description = "Bindings to ImGuiColorTextEdit"
version = "1.0.2"
changelog = "docs/CHANGELOG.md"
icon = "data/icon.png"
preview_image = "data/preview.png"
[dependencies]
"omni.ui" = {}
[[native.library]]
path = "bin/${lib_prefix}omni.kit.widget.text_editor${lib_ext}"
[[python.module]]
name = "omni.kit.widget.text_editor"
[[test]]
dependencies = [
"omni.kit.renderer.capture",
"omni.kit.ui_test",
]
args = [
"--/app/window/dpiScaleOverride=1.0",
"--/app/window/scaleToMonitor=false",
"--no-window"
]
stdoutFailPatterns.exclude = [
"*omniclient: Initialization failed*",
]
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/omni/kit/widget/text_editor/__init__.py | ## Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
"""
omni.kit.widget.text_editor
---------------------------
"""
__all__ = ["TextEditor"]
from ._text_editor import *
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/omni/kit/widget/text_editor/tests/__init__.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
from .test_text_editor import TestTextEditor
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/omni/kit/widget/text_editor/tests/test_text_editor.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
__all__ = ["TestTextEditor"]
import omni.kit.test
from omni.ui.tests.test_base import OmniUiTest
from pathlib import Path
import omni.kit.app
from .._text_editor import TextEditor
EXTENSION_PATH = Path(omni.kit.app.get_app().get_extension_manager().get_extension_path_by_module(__name__))
GOLDEN_PATH = EXTENSION_PATH.joinpath("data/golden")
STYLE = {"Field": {"background_color": 0xFF24211F, "border_radius": 2}}
class TestTextEditor(OmniUiTest):
async def test_general(self):
"""Testing general look of TextEditor"""
window = await self.create_test_window()
lines = ["The quick brown fox jumps over the lazy dog."] * 20
with window.frame:
TextEditor(text_lines=lines)
for i in range(2):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test(golden_img_dir=GOLDEN_PATH, golden_img_name=f"test_general.png")
async def test_syntax(self):
"""Testing languages of TextEditor"""
import inspect
window = await self.create_test_window()
with window.frame:
TextEditor(text_lines=inspect.getsource(self.test_syntax).splitlines(), syntax=TextEditor.Syntax.PYTHON)
for i in range(2):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test(golden_img_dir=GOLDEN_PATH, golden_img_name=f"test_syntax.png")
async def test_text_changed_flag(self):
"""Testing TextEditor text edited callback"""
from omni.kit import ui_test
window = await self.create_test_window(block_devices=False)
window.focus()
text = "Lorem ipsum"
text_new = "dolor sit amet"
with window.frame:
text_editor = TextEditor(text=text)
await ui_test.wait_n_updates(2)
self.text_changed = False
def on_text_changed(text_changed):
self.text_changed = text_changed
text_editor.set_edited_fn(on_text_changed)
self.assertFalse(self.text_changed)
await ui_test.emulate_mouse_move_and_click(ui_test.Vec2(100, 100))
await ui_test.wait_n_updates(2)
await ui_test.emulate_char_press("A")
await ui_test.wait_n_updates(2)
self.assertTrue(self.text_changed)
await ui_test.emulate_key_combo("BACKSPACE")
await ui_test.wait_n_updates(2)
self.assertFalse(self.text_changed)
text_editor.text = text_new
await ui_test.wait_n_updates(2)
# Only user input will trigger the callback
self.assertFalse(self.text_changed)
await ui_test.emulate_char_press("A")
await ui_test.wait_n_updates(2)
self.assertTrue(self.text_changed)
text_editor.set_edited_fn(None)
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/docs/CHANGELOG.md | # Changelog
## [1.0.2] - 2022-07-28
### Added
- Callback passing a bool that user has edited the text
## [1.0.1] - 2022-06-29
### Added
- Syntax highlighting
## [1.0.0] - 2022-04-30
### Added
- Initial commit
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/docs/README.md | # omni.kit.widget.text_editor
## Overview
It is a binding of ImGuiColorTextEdit in omni.ui, a syntax-highlighting text editor.
It approximates the look and feel of a typical code editor.

## Fonts
The TextEditor widget supports setting the font through its style:
```
import omni.ui as ui
from omni.kit.widget.text_editor import TextEditor
font = "c:/windows/fonts/consola.ttf"
my_window = ui.Window("Example", width=600, height=300)
with my_window.frame:
TextEditor(
text="The quick brown fox",
style={"font": font})
```
## Syntax Highlighting
The TextEditor widget supports syntax highlighting for many languages.
These are the supported languages:
```
TextEditor.Syntax.NONE
TextEditor.Syntax.PYTHON
TextEditor.Syntax.CPLUSPLUS
TextEditor.Syntax.HLSL
TextEditor.Syntax.GLSL
TextEditor.Syntax.C
TextEditor.Syntax.SQL
TextEditor.Syntax.ANGELSCRIPT
TextEditor.Syntax.LUA
```
```
import omni.ui as ui
from omni.kit.widget.text_editor import TextEditor
font = "c:/windows/fonts/consola.ttf"
my_window = ui.Window("Example", width=600, height=300)
with my_window.frame:
TextEditor(
text=open(__file__).read(),
style={"font": font},
syntax=TextEditor.Syntax.PYTHON)
```
|
omniverse-code/kit/exts/omni.kit.widget.text_editor/docs/index.rst | omni.kit.widget.text_editor
###########################
.. toctree::
:maxdepth: 1
CHANGELOG |
omniverse-code/kit/exts/omni.timeline/omni/timeline/_timeline.pyi | from __future__ import annotations
import omni.timeline._timeline
import typing
import carb.events._events
__all__ = [
"ITimeline",
"Timeline",
"TimelineEventType",
"acquire_timeline_interface",
"release_timeline_interface"
]
class ITimeline():
def clear_tentative_time(self) -> None:
"""
Clear tentative time of animation in seconds.
Clear/Invalidate the tentative time
"""
def destroy_timeline(self, name: str) -> bool:
"""
Destroys the timeline with the given name if nothing references it. Does not release the default timeline.
Args:
name of the timeline.
Returns:
True if a timeline was deleted, False otherwise. The latter happens when the timeline does not exist, it is in use, or it is the default timeline.
"""
def forward_one_frame(self) -> None:
"""
Forwards the timeline by one frame.
"""
def get_current_tick(self) -> int:
"""
Gets the current tick index, starting from zero. Always returns zero when ticks per frame is one.
Returns:
The current tick index.
"""
def get_current_time(self) -> float:
"""
Gets current time of animation in seconds.
Returns:
Current time of animation in seconds.
"""
def get_end_time(self) -> float:
"""
Gets the end time of animation in seconds.
Returns:
End time of animation in seconds.
"""
def get_fast_mode(self) -> bool:
"""
Checks if fast mode is on or off.
Returns:
true if fast mode is on.
"""
def get_start_time(self) -> float:
"""
Gets the start time of animation in seconds.
Returns:
Start time of animation in seconds.
"""
def get_target_framerate(self) -> float:
"""
Gets the target frame rate, which affects the derived FPS of the runloop in play mode.
Exact runloop FPS is usually not the same as this value, as it is always a multiple of get_time_codes_per_seconds.
Returns:
The target frame rate.
"""
def get_tentative_time(self) -> float:
"""
Gets tentative time of animation in seconds.
Returns:
Tentative time of animation if it is valid, otherwise return current time
"""
def get_ticks_per_frame(self) -> int:
"""
Gets the tick count per frame, i.e. how many times update event is ticked per frame.
Returns:
The tick per frame count.
"""
def get_ticks_per_second(self) -> float:
"""
Gets the tick count per seconds, i.e. how many times update event is ticked per second.
Returns:
The tick per second count.
"""
def get_time_codes_per_seconds(self) -> float:
"""
Gets timeCodePerSecond metadata from currently opened stage.
This is equivalent to calling GetTimeCodesPerSecond on UsdStage.
Returns:
timeCodePerSecond for current UsdStage.
"""
def get_timeline(self, name: str = '') -> Timeline:
"""
Returns the timeline with the given name or creates a new if it does not exist.
Args:
name: The name of the timeline.
Returns:
Timeline object.
"""
def get_timeline_event_stream(self) -> carb.events._events.IEventStream:
"""
Gets TimelineEventStream, emitting TimelineEventType.
Returns:
TimelineEventStream.
"""
def is_auto_updating(self) -> bool:
"""
Checks if timeline is auto updating.
Returns:
True if timeline is auto updating. False otherwise.
"""
def is_looping(self) -> bool:
"""
Checks if animation is looping.
Returns:
True if animation is looping. False otherwise.
"""
def is_playing(self) -> bool:
"""
Checks if animation is playing.
Returns:
True if animation is playing. False otherwise.
"""
def is_prerolling(self) -> bool:
"""
Checks if timeline is prerolling.
Returns:
True if timeline is prerolling. False otherwise.
"""
def is_stopped(self) -> bool:
"""
Checks if animation is stopped, as opposed to paused.
Returns:
True if animation is stopped. False otherwise.
"""
def pause(self) -> None:
"""
Pauses animation.
"""
def play(self, start_timecode: float = 0, end_timecode: float = 0, looping: bool = True) -> None:
"""
Plays animation with the current timeCodePerSecond. If the session start and end timecodes are not set, it will play
from the global start time to the end time in the stage.
Args:
start_timecode: start timecode of session play, won't change the global StartTime.
end_timecode: end timecode of session play, won't change the global EndTime.
looping: true to enable session play looping, false to disable, won't change the global Looping.
"""
def rewind_one_frame(self) -> None:
"""
Rewinds the timeline by one frame.
"""
def set_auto_update(self, auto_update: bool) -> None:
"""
Turns on/off auto update.
Args:
auto_update: True to enable auto update, False to disable.
"""
def set_current_time(self, time_in_seconds: float) -> None:
"""
Sets current time of animation in seconds.
Args:
time_in_seconds Current time of animation in seconds.
"""
def set_end_time(self, end_time: float) -> None:
"""
Sets the end time of animation in seconds. This will write into current opened stage.
Args:
end_time: End time of animation in seconds.
"""
def set_fast_mode(self, fast_mode: bool) -> None:
"""
Turns fast mode on or off.
Args:
fast_mode true to turn on fast mode, false to turn it off.
"""
def set_looping(self, looping: bool) -> None:
"""
Sets animation looping mode.
Args:
looping: True to enable looping, False to disable.
"""
def set_prerolling(self, preroll: bool) -> None:
"""
Turns on/off preroll status.
Args:
preroll: True to enable preroll, False to disable.
"""
def set_start_time(self, start_time: float) -> None:
"""
Sets the begin time of animation in seconds. This will write into current opened stage.
Args:
start_time: Begin time of animation in seconds.
"""
def set_target_framerate(self, target_framerate: float) -> None:
"""
Sets the target frame rate, which affects the derived FPS of the runloop in play mode.
Exact runloop FPS is usually not the same as this value, as it is always a multiple of get_time_codes_per_seconds.
Args:
target_framerate The target frame rate.
"""
def set_tentative_time(self, time_in_seconds: float) -> None:
"""
Sets tentative time of animation in seconds.
Args:
time_in_seconds Tentative time of animation in seconds.
"""
def set_ticks_per_frame(self, ticks_per_frame: int) -> None:
"""
Sets the tick count per frame, i.e. how many times update event is ticked per frame.
Args:
ticks_per_frame: The tick per frame count.
"""
def set_time_codes_per_second(self, time_codes_per_second: float) -> None:
"""
Sets timeCodePerSecond metadata to currently opened stage.
This is equivalent to calling SetTimeCodesPerSecond on UsdStage.
Args:
time_codes_per_second: TimeCodePerSecond to set into current stage.
"""
def stop(self) -> None:
"""
Stops animation.
"""
pass
class Timeline():
def clear_tentative_time(self) -> None:
"""
Clear tentative time of animation in seconds.
Clear/Invalidate the tentative time
"""
def clear_zoom(self) -> None:
"""
Clears the zoom state, i.e. sets the zoom range to [get_start_time(), get_end_time()].
"""
def commit(self) -> None:
"""
Applies all pending state changes and invokes all callbacks.
This method is not thread-safe, it should be called only from the main thread.
"""
def commit_silently(self) -> None:
"""
Applies all pending state changes but does not invoke any callbacks.
This method is thread-safe.
"""
def forward_one_frame(self) -> None:
"""
Forwards the timeline by one frame.
"""
def get_current_tick(self) -> int:
"""
Gets the current tick index, starting from zero. Always returns zero when ticks per frame is one.
Returns:
The current tick index.
"""
def get_current_time(self) -> float:
"""
Gets current time of animation in seconds.
Returns:
Current time of animation in seconds.
"""
def get_director(self) -> Timeline:
"""
Returns the current director Timeline.
Returns:
The director timeline object or None if none is set.
"""
def get_end_time(self) -> float:
"""
Gets the end time of animation in seconds.
Returns:
End time of animation in seconds.
"""
def get_fast_mode(self) -> bool:
"""
Checks if fast mode is on or off. Deprecated, same as get_play_every_frame.
Returns:
true if fast mode is on.
"""
def get_play_every_frame(self) -> bool:
"""
Checks if the timeline sends updates every frame. Same as get_fast_mode.
Returns:
true if the timeline does not skip frames.
"""
def get_start_time(self) -> float:
"""
Gets the start time of animation in seconds.
Returns:
Start time of animation in seconds.
"""
def get_target_framerate(self) -> float:
"""
Gets the target frame rate, which affects the derived FPS of the runloop in play mode.
Exact runloop FPS is usually not the same as this value, as it is always a multiple of get_time_codes_per_seconds.
Returns:
The target frame rate.
"""
def get_tentative_time(self) -> float:
"""
Gets tentative time of animation in seconds.
Returns:
Tentative time of animation if it is valid, otherwise return current time
"""
def get_ticks_per_frame(self) -> int:
"""
Gets the tick count per frame, i.e. how many times update event is ticked per frame.
Returns:
The tick per frame count.
"""
def get_ticks_per_second(self) -> float:
"""
Gets the tick count per seconds, i.e. how many times update event is ticked per second.
Returns:
The tick per second count.
"""
def get_time_codes_per_seconds(self) -> float:
"""
Gets timeCodePerSecond metadata from currently opened stage.
This is equivalent to calling GetTimeCodesPerSecond on UsdStage.
Returns:
timeCodePerSecond for current UsdStage.
"""
def get_timeline_event_stream(self) -> carb.events._events.IEventStream:
"""
Gets TimelineEventStream, emitting TimelineEventType.
Returns:
TimelineEventStream.
"""
def get_zoom_end_time(self) -> float:
"""
Gets the end time of zoomed animation in seconds.
Returns:
End time of zoomed animation in seconds. When no zoom is set, this function returns get_end_time().
"""
def get_zoom_start_time(self) -> float:
"""
Gets the start time of zoomed animation in seconds.
Returns:
Start time of zoomed animation in seconds. When no zoom is set, this function returns get_start_time().
"""
def is_auto_updating(self) -> bool:
"""
Checks if timeline is auto updating.
Returns:
True if timeline is auto updating. False otherwise.
"""
def is_looping(self) -> bool:
"""
Checks if animation is looping.
Returns:
True if animation is looping. False otherwise.
"""
def is_playing(self) -> bool:
"""
Checks if animation is playing.
Returns:
True if animation is playing. False otherwise.
"""
def is_prerolling(self) -> bool:
"""
Checks if timeline is prerolling.
Returns:
True if timeline is prerolling. False otherwise.
"""
def is_stopped(self) -> bool:
"""
Checks if animation is stopped, as opposed to paused.
Returns:
True if animation is stopped. False otherwise.
"""
def is_zoomed(self) -> bool:
"""
Returns whether a zoom is set, i.e. whether the zoom range is not the entire
[getStartTime(), getEndTime()] interval.
Returns:
True if get_start_time() < get_zoom_start_time() or get_zoom_end_time() < get_end_time() (note that "<=" always holds).
False otherwise.
"""
def pause(self) -> None:
"""
Pauses animation.
"""
def play(self, start_timecode: float = 0, end_timecode: float = 0, looping: bool = True) -> None:
"""
Plays animation with the current timeCodePerSecond. If the session start and end timecodes are not set, it will play
from the global start time to the end time in the stage.
Args:
start_timecode: start timecode of session play, won't change the global StartTime.
end_timecode: end timecode of session play, won't change the global EndTime.
looping: true to enable session play looping, false to disable, won't change the global Looping.
"""
def rewind_one_frame(self) -> None:
"""
Rewinds the timeline by one frame.
"""
def set_auto_update(self, auto_update: bool) -> None:
"""
Turns on/off auto update.
Args:
auto_update: True to enable auto update, False to disable.
"""
def set_current_time(self, time_in_seconds: float) -> None:
"""
Sets current time of animation in seconds.
Args:
time_in_seconds Current time of animation in seconds.
"""
def set_director(self, timeline: Timeline) -> None:
"""
Sets a director Timeline.
When a director is set, the timeline mimics its behavior and any
state changing call from all other sources are ignored.
Args:
timeline: The timeline object to be set as the director.
Pass None to clear the current director.
"""
def set_end_time(self, end_time: float) -> None:
"""
Sets the end time of animation in seconds. This will write into current opened stage.
Args:
end_time: End time of animation in seconds.
"""
def set_fast_mode(self, fast_mode: bool) -> None:
"""
Turns fast mode on or off. Deprecated, same as set_play_every_frame.
Args:
fast_mode true to turn on fast mode, false to turn it off.
"""
def set_looping(self, looping: bool) -> None:
"""
Sets animation looping mode.
Args:
looping: True to enable looping, False to disable.
"""
def set_play_every_frame(self, play_every_frame: bool) -> None:
"""
Turns frame skipping off (true) or on (false). Same as set_fast_mode.
Args:
play_every_frame true to turn frame skipping off.
"""
def set_prerolling(self, preroll: bool) -> None:
"""
Turns on/off preroll status.
Args:
preroll: True to enable preroll, False to disable.
"""
def set_start_time(self, start_time: float) -> None:
"""
Sets the begin time of animation in seconds. This will write into current opened stage.
Args:
start_time: Begin time of animation in seconds.
"""
def set_target_framerate(self, target_framerate: float) -> None:
"""
Sets the target frame rate, which affects the derived FPS of the runloop in play mode.
Exact runloop FPS is usually not the same as this value, as it is always a multiple of get_time_codes_per_seconds.
Args:
target_framerate The target frame rate.
"""
def set_tentative_time(self, time_in_seconds: float) -> None:
"""
Sets tentative time of animation in seconds.
Args:
time_in_seconds Tentative time of animation in seconds.
"""
def set_ticks_per_frame(self, ticks_per_frame: int) -> None:
"""
Sets the tick count per frame, i.e. how many times update event is ticked per frame.
Args:
ticks_per_frame: The tick per frame count.
"""
def set_time_codes_per_second(self, time_codes_per_second: float) -> None:
"""
Sets timeCodePerSecond metadata to currently opened stage.
This is equivalent to calling SetTimeCodesPerSecond on UsdStage.
Args:
time_codes_per_second: TimeCodePerSecond to set into current stage.
"""
def set_zoom_range(self, start_time: float, end_time: float) -> None:
"""
Sets the zoom range, i.e. the playback interval.
Values are truncated to the [get_start_time(), get_end_time()] interval, which is also the default range.
A minimum of one frame long range is enforced.
Args:
start_time: Start time of zoom in seconds. Must be less or equal than end_time.
end_time: End time of zoom in seconds. Must be greater or equal than start_time.
"""
def stop(self) -> None:
"""
Stops animation.
"""
def time_code_to_time(self, arg0: float) -> float:
"""
Converts time codes to seconds, w.r.t. the current timeCodesPerSecond setting of the timeline.
Returns:
The converted time in seconds.
"""
def time_to_time_code(self, arg0: float) -> float:
"""
Converts time in seconds to time codes, w.r.t. the current timeCodesPerSecond setting of the timeline.
Returns:
The converted time code.
"""
pass
class TimelineEventType():
"""
Timeline event types to be used by TimelineEventStream.
Members:
PLAY
PAUSE
STOP
CURRENT_TIME_CHANGED
CURRENT_TIME_TICKED_PERMANENT
CURRENT_TIME_TICKED
LOOP_MODE_CHANGED
START_TIME_CHANGED
END_TIME_CHANGED
TIME_CODE_PER_SECOND_CHANGED
AUTO_UPDATE_CHANGED
PREROLLING_CHANGED
TENTATIVE_TIME_CHANGED
TICKS_PER_FRAME_CHANGED
FAST_MODE_CHANGED
PLAY_EVERY_FRAME_CHANGED
TARGET_FRAMERATE_CHANGED
DIRECTOR_CHANGED
ZOOM_CHANGED
"""
def __eq__(self, other: object) -> bool: ...
def __getstate__(self) -> int: ...
def __hash__(self) -> int: ...
def __index__(self) -> int: ...
def __init__(self, value: int) -> None: ...
def __int__(self) -> int: ...
def __ne__(self, other: object) -> bool: ...
def __repr__(self) -> str: ...
def __setstate__(self, state: int) -> None: ...
@property
def name(self) -> str:
"""
:type: str
"""
@property
def value(self) -> int:
"""
:type: int
"""
AUTO_UPDATE_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.AUTO_UPDATE_CHANGED: 10>
CURRENT_TIME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.CURRENT_TIME_CHANGED: 3>
CURRENT_TIME_TICKED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.CURRENT_TIME_TICKED: 5>
CURRENT_TIME_TICKED_PERMANENT: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.CURRENT_TIME_TICKED_PERMANENT: 4>
DIRECTOR_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.DIRECTOR_CHANGED: 16>
END_TIME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.END_TIME_CHANGED: 8>
FAST_MODE_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.FAST_MODE_CHANGED: 14>
LOOP_MODE_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.LOOP_MODE_CHANGED: 6>
PAUSE: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.PAUSE: 1>
PLAY: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.PLAY: 0>
PLAY_EVERY_FRAME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.FAST_MODE_CHANGED: 14>
PREROLLING_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.PREROLLING_CHANGED: 11>
START_TIME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.START_TIME_CHANGED: 7>
STOP: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.STOP: 2>
TARGET_FRAMERATE_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.TARGET_FRAMERATE_CHANGED: 15>
TENTATIVE_TIME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.TENTATIVE_TIME_CHANGED: 12>
TICKS_PER_FRAME_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.TICKS_PER_FRAME_CHANGED: 13>
TIME_CODE_PER_SECOND_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.TIME_CODE_PER_SECOND_CHANGED: 9>
ZOOM_CHANGED: omni.timeline._timeline.TimelineEventType # value = <TimelineEventType.ZOOM_CHANGED: 17>
__members__: dict # value = {'PLAY': <TimelineEventType.PLAY: 0>, 'PAUSE': <TimelineEventType.PAUSE: 1>, 'STOP': <TimelineEventType.STOP: 2>, 'CURRENT_TIME_CHANGED': <TimelineEventType.CURRENT_TIME_CHANGED: 3>, 'CURRENT_TIME_TICKED_PERMANENT': <TimelineEventType.CURRENT_TIME_TICKED_PERMANENT: 4>, 'CURRENT_TIME_TICKED': <TimelineEventType.CURRENT_TIME_TICKED: 5>, 'LOOP_MODE_CHANGED': <TimelineEventType.LOOP_MODE_CHANGED: 6>, 'START_TIME_CHANGED': <TimelineEventType.START_TIME_CHANGED: 7>, 'END_TIME_CHANGED': <TimelineEventType.END_TIME_CHANGED: 8>, 'TIME_CODE_PER_SECOND_CHANGED': <TimelineEventType.TIME_CODE_PER_SECOND_CHANGED: 9>, 'AUTO_UPDATE_CHANGED': <TimelineEventType.AUTO_UPDATE_CHANGED: 10>, 'PREROLLING_CHANGED': <TimelineEventType.PREROLLING_CHANGED: 11>, 'TENTATIVE_TIME_CHANGED': <TimelineEventType.TENTATIVE_TIME_CHANGED: 12>, 'TICKS_PER_FRAME_CHANGED': <TimelineEventType.TICKS_PER_FRAME_CHANGED: 13>, 'FAST_MODE_CHANGED': <TimelineEventType.FAST_MODE_CHANGED: 14>, 'PLAY_EVERY_FRAME_CHANGED': <TimelineEventType.FAST_MODE_CHANGED: 14>, 'TARGET_FRAMERATE_CHANGED': <TimelineEventType.TARGET_FRAMERATE_CHANGED: 15>, 'DIRECTOR_CHANGED': <TimelineEventType.DIRECTOR_CHANGED: 16>, 'ZOOM_CHANGED': <TimelineEventType.ZOOM_CHANGED: 17>}
pass
def acquire_timeline_interface(plugin_name: str = None, library_path: str = None) -> ITimeline:
pass
def release_timeline_interface(arg0: ITimeline) -> None:
pass
|
omniverse-code/kit/exts/omni.timeline/omni/timeline/__init__.py | from ._timeline import *
def get_timeline_interface(timeline_name: str='') -> Timeline:
"""Returns the timeline with the given name via cached :class:`omni.timeline.ITimeline` interface"""
if not hasattr(get_timeline_interface, "timeline"):
get_timeline_interface.timeline = acquire_timeline_interface()
return get_timeline_interface.timeline.get_timeline(timeline_name)
def destroy_timeline(timeline_name: str):
"""Destroys a timeline object with the given name, if it is not the default timeline and it is not in use."""
if not hasattr(get_timeline_interface, "timeline"):
get_timeline_interface.timeline = acquire_timeline_interface()
return get_timeline_interface.timeline.destroy_timeline(timeline_name)
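# Minimal usage sketch (comment only, not executed on import); it assumes a running Kit
# application so that the default timeline exists:
#
#     import omni.timeline
#
#     timeline = omni.timeline.get_timeline_interface()
#     timeline.set_start_time(0.0)
#     timeline.set_end_time(10.0)
#     timeline.play()
#     timeline.commit()  # state changes and the PLAY event are applied on commit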
|
omniverse-code/kit/exts/omni.timeline/omni/timeline/tests/tests.py | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import carb.events
import omni.kit.app
import omni.kit.test
import omni.timeline
import carb.settings
import queue
import asyncio
from time import sleep
USE_FIXED_TIMESTEP_PATH = "/app/player/useFixedTimeStepping"
RUNLOOP_RATE_LIMIT_PATH = "/app/runLoops/main/rateLimitFrequency"
COMPENSATE_PLAY_DELAY_PATH = "/app/player/CompensatePlayDelayInSecs"
class TestTimeline(omni.kit.test.AsyncTestCase):
async def setUp(self):
self._app = omni.kit.app.get_app()
self._timeline = omni.timeline.get_timeline_interface()
self._timeline_sub = self._timeline.get_timeline_event_stream().create_subscription_to_pop(
self._on_timeline_event
)
self._buffered_evts = queue.Queue()
self._settings = carb.settings.acquire_settings_interface()
async def tearDown(self):
self._timeline = None
self._timeline_sub = None
async def test_timeline_api(self):
## Check initial states
self.assertFalse(self._timeline.is_playing())
self.assertTrue(self._timeline.is_stopped())
self.assertEqual(0.0, self._timeline.get_start_time())
self.assertEqual(0.0, self._timeline.get_end_time())
self.assertEqual(0.0, self._timeline.get_current_time())
self.assertEqual(0.0, self._timeline.get_time_codes_per_seconds())
self.assertTrue(self._timeline.is_looping())
self.assertTrue(self._timeline.is_auto_updating())
self.assertFalse(self._timeline.is_prerolling())
self.assertFalse(self._timeline.is_zoomed())
## Change start time
start_time = -1.0
self._timeline.set_start_time(start_time)
self._assert_no_change_then_commit(self._timeline.get_start_time(), 0)
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, "startTime", start_time)
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(start_time, self._timeline.get_start_time())
self.assertEqual(0.0, self._timeline.get_current_time())
## Change end time
end_time = 2.0
self._timeline.set_end_time(end_time)
self._assert_no_change_then_commit(self._timeline.get_end_time(), 0)
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, "endTime", end_time)
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(end_time, self._timeline.get_end_time())
self.assertEqual(0.0, self._timeline.get_current_time())
## Change start time to current_time < start_time < end_time
# OM-75796: changing start time does not move current time when the timeline is not playing
old_start_time = start_time
start_time = 1.0
self._timeline.set_start_time(start_time)
self._assert_no_change_then_commit(self._timeline.get_start_time(), old_start_time)
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, "startTime", start_time)
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(start_time, self._timeline.get_start_time())
self.assertEqual(0.0, self._timeline.get_current_time())
## Change timecode per second
time_code_per_sec = 24.0
self._timeline.set_time_codes_per_second(time_code_per_sec)
self._assert_no_change_then_commit(self._timeline.get_time_codes_per_seconds(), 0)
self._verify_evt(
omni.timeline.TimelineEventType.TIME_CODE_PER_SECOND_CHANGED, "timeCodesPerSecond", time_code_per_sec
)
self.assertEqual(time_code_per_sec, self._timeline.get_time_codes_per_seconds())
## Do not allow endtime <= starttime
frame_time = 1.0 / time_code_per_sec
new_start_time = end_time
self._timeline.set_start_time(new_start_time)
self._assert_no_change_then_commit(self._timeline.get_start_time(), start_time)
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, "startTime", new_start_time)
self._verify_evt(
omni.timeline.TimelineEventType.END_TIME_CHANGED,
"endTime",
new_start_time + frame_time,
exact=False
)
self.assertTrue(self._buffered_evts.empty())
new_start_time = new_start_time + 10.0 * frame_time
self._timeline.set_start_time(new_start_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, "startTime", new_start_time)
self._verify_evt(
omni.timeline.TimelineEventType.END_TIME_CHANGED,
"endTime",
new_start_time + frame_time,
exact=False
)
self.assertTrue(self._buffered_evts.empty())
new_end_time = self._timeline.get_start_time()
self._timeline.set_end_time(new_end_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, "endTime", new_end_time)
self._verify_evt(
omni.timeline.TimelineEventType.START_TIME_CHANGED,
"startTime",
new_end_time - frame_time,
exact=False
)
self.assertTrue(self._buffered_evts.empty())
new_end_time = new_end_time - 10.0 * frame_time
self._timeline.set_end_time(new_end_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, "endTime", new_end_time)
self._verify_evt(
omni.timeline.TimelineEventType.START_TIME_CHANGED,
"startTime",
new_end_time - frame_time,
exact=False
)
self.assertTrue(self._buffered_evts.empty())
# Revert
start_time = 1.0
self._timeline.set_start_time(start_time)
self._timeline.set_end_time(end_time)
self._timeline.set_current_time(start_time)
self._timeline.commit()
self._clear_evt_queue() # Don't care
## Time conversion
self.assertAlmostEqual(self._timeline.time_to_time_code(-1), -24, places=5)
self.assertAlmostEqual(self._timeline.time_to_time_code(0), 0, places=5)
self.assertAlmostEqual(self._timeline.time_to_time_code(0.5), 12, places=5)
self.assertAlmostEqual(self._timeline.time_to_time_code(2), 48, places=5)
self.assertAlmostEqual(self._timeline.time_code_to_time(-24), -1, places=5)
self.assertAlmostEqual(self._timeline.time_code_to_time(0), 0, places=5)
self.assertAlmostEqual(self._timeline.time_code_to_time(12), 0.5, places=5)
self.assertAlmostEqual(self._timeline.time_code_to_time(48), 2, places=5)
self.assertTrue(self._buffered_evts.empty())
## Set current time
old_current_time = self._timeline.get_current_time()
new_current_time = start_time
self._timeline.set_current_time(new_current_time)
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", new_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "currentTime", new_current_time)
self.assertEqual(new_current_time, self._timeline.get_current_time())
# dt is smaller than 1 frame
old_current_time = self._timeline.get_current_time()
expected_dt = 0.5 * frame_time
new_current_time = start_time + expected_dt
self._timeline.set_current_time(new_current_time)
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "dt", expected_dt, False)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", expected_dt, False)
self.assertEqual(new_current_time, self._timeline.get_current_time())
# Edge case: dt is exactly one frame
old_current_time = self._timeline.get_current_time()
expected_dt = 1.0 * frame_time
new_current_time = new_current_time + expected_dt
self._timeline.set_current_time(new_current_time)
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "dt", expected_dt, False)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", expected_dt, False)
self.assertEqual(new_current_time, self._timeline.get_current_time())
# If dt would be too large to simulate, it is set to zero
old_current_time = self._timeline.get_current_time()
expected_dt = 0
new_current_time = new_current_time + 1.5 * frame_time
self._timeline.set_current_time(new_current_time)
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "dt", expected_dt, False)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", expected_dt, False)
self.assertEqual(new_current_time, self._timeline.get_current_time())
# If dt would be negative, it is set to zero
old_current_time = self._timeline.get_current_time()
expected_dt = 0
new_current_time = start_time
self._timeline.set_current_time(new_current_time)
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "dt", expected_dt, False)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", expected_dt, False)
self.assertEqual(new_current_time, self._timeline.get_current_time())
## Forward one frame
old_current_time = self._timeline.get_current_time()
self._timeline.forward_one_frame()
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
# Forward one frame triggers a play and pause if the timeline was not playing
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
"currentTime", start_time + 1.0 / time_code_per_sec
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT,
"currentTime", start_time + 1.0 / time_code_per_sec
)
self._verify_evt(omni.timeline.TimelineEventType.PAUSE)
self.assertAlmostEqual(start_time + 1.0 / time_code_per_sec, self._timeline.get_current_time(), places=5)
## Rewind one frame
old_current_time = self._timeline.get_current_time()
self._timeline.rewind_one_frame()
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", start_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "currentTime", start_time)
self.assertAlmostEqual(start_time, self._timeline.get_current_time(), places=5)
new_current_time = self._timeline.get_current_time()
## Set target framerate.
DEFAULT_TARGET_FRAMERATE = 60
self.assertEqual(self._timeline.get_target_framerate(), DEFAULT_TARGET_FRAMERATE)
target_vs_set_fps = [
[2 * time_code_per_sec - 1, 2 * time_code_per_sec],
[3 * time_code_per_sec + 15, 4 * time_code_per_sec],
[5 * time_code_per_sec - 7, 5 * time_code_per_sec],
[time_code_per_sec, time_code_per_sec] # this comes last to avoid frame skipping in Play tests
]
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue() # don't care
for fps in target_vs_set_fps:
target_fps = fps[0]
desired_runloop_fps = fps[1]
old_fps = self._timeline.get_target_framerate()
self._timeline.set_target_framerate(target_fps)
self._assert_no_change_then_commit(self._timeline.get_target_framerate(), old_fps)
self._verify_evt(
omni.timeline.TimelineEventType.TARGET_FRAMERATE_CHANGED, "targetFrameRate", target_fps
)
self.assertEqual(target_fps, self._timeline.get_target_framerate())
self.assertEqual(self._settings.get(RUNLOOP_RATE_LIMIT_PATH), desired_runloop_fps)
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue() # don't care
## Play
self._timeline.play()
self._assert_no_change_then_commit(self._timeline.is_playing(), False)
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self.assertTrue(self._settings.get(USE_FIXED_TIMESTEP_PATH))
self.assertTrue(self._timeline.is_playing())
self.assertFalse(self._timeline.is_stopped())
await self._app.next_update_async()
dt = 1.0 / time_code_per_sec # timeline uses fixed dt by default
self.assertAlmostEqual(new_current_time + dt, self._timeline.get_current_time(), places=5)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", new_current_time + dt, exact=False
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", dt, exact=False
)
# varying dt
self._timeline.stop()
self._settings.set(USE_FIXED_TIMESTEP_PATH, False)
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue()
new_current_time = self._timeline.get_current_time()
dt = await self._app.next_update_async()
self.assertAlmostEqual(new_current_time + dt, self._timeline.get_current_time(), places=5)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", new_current_time + dt, exact=False
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", dt, exact=False
)
self._settings.set(USE_FIXED_TIMESTEP_PATH, True)
## Pause
self._timeline.pause()
self._assert_no_change_then_commit(self._timeline.is_playing(), True)
self._verify_evt(omni.timeline.TimelineEventType.PAUSE)
self.assertFalse(self._timeline.is_playing())
self.assertFalse(self._timeline.is_stopped())
# current time should not change
await self._app.next_update_async()
self.assertAlmostEqual(new_current_time + dt, self._timeline.get_current_time(), places=5)
# permanent update event is still ticking
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
## Stop
self._timeline.stop()
self._assert_no_change_then_commit(self._timeline.is_stopped(), False)
self._verify_evt(omni.timeline.TimelineEventType.STOP)
self.assertFalse(self._timeline.is_playing())
self.assertTrue(self._timeline.is_stopped())
self.assertEqual(start_time, self._timeline.get_current_time())
## Loop
# self._timeline.set_looping(True) # it was already set
self._timeline.play()
self._timeline.commit()
dt = 1.0 / time_code_per_sec # timeline uses fixed dt by default
elapsed_time = 0.0
while elapsed_time < end_time * 1.5:
await self._app.next_update_async()
elapsed_time += dt
self._timeline.pause()
self._timeline.commit()
self._clear_evt_queue() # don't care
# time is looped
looped_time = elapsed_time % (end_time - start_time) + start_time
self.assertAlmostEqual(looped_time, self._timeline.get_current_time(), places=5)
## Non-loop
self._timeline.set_looping(False)
self._assert_no_change_then_commit(self._timeline.is_looping(), True)
self._verify_evt(omni.timeline.TimelineEventType.LOOP_MODE_CHANGED, "looping", False)
self._timeline.stop()
self._timeline.play()
self._timeline.commit()
elapsed_time = 0.0
while elapsed_time < end_time * 1.5:
dt = await self._app.next_update_async()
elapsed_time += dt
# timeline paused when reached the end
self.assertFalse(self._timeline.is_playing())
self.assertFalse(self._timeline.is_stopped())
self.assertAlmostEqual(end_time, self._timeline.get_current_time(), places=5)
## Change end time that should change current time because current time was > end time
self._clear_evt_queue()
end_time = 1.5
current_time = self._timeline.get_current_time()
self._timeline.set_end_time(end_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, "endTime", end_time)
# OM-75796: changing the end time does not move current time when timeline is not playing
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(end_time, self._timeline.get_end_time())
self.assertEqual(current_time, self._timeline.get_current_time())
self._timeline.stop()
# OM-75796: changing the end time moves current time to start when timeline is playing
self._timeline.play()
self._timeline.set_current_time(end_time) # something past the new end time
self._timeline.commit()
self._clear_evt_queue() # don't care
end_time = 1.25
self._timeline.set_end_time(end_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, "endTime", end_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", start_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "currentTime", start_time)
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(end_time, self._timeline.get_end_time())
self.assertEqual(start_time, self._timeline.get_current_time())
# OM-75796: changing the start time moves current time to start when timeline is playing
end_time = 100
self._timeline.set_end_time(end_time)
self._timeline.commit()
self._clear_evt_queue() # don't care
start_time = 2.0
self._timeline.set_start_time(start_time)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, "startTime", start_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "currentTime", start_time)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "currentTime", start_time)
self.assertTrue(self._buffered_evts.empty())
self.assertEqual(start_time, self._timeline.get_start_time())
self.assertEqual(start_time, self._timeline.get_current_time())
self._timeline.stop()
self._timeline.commit()
## Auto update
self._clear_evt_queue()
self._timeline.set_auto_update(False)
self._assert_no_change_then_commit(self._timeline.is_auto_updating(), True)
self._verify_evt(omni.timeline.TimelineEventType.AUTO_UPDATE_CHANGED, "autoUpdate", False)
self.assertFalse(self._timeline.is_auto_updating())
self._timeline.set_looping(True)
self._timeline.play()
for i in range(5):
await self._app.next_update_async()
self.assertEqual(start_time, self._timeline.get_current_time())
## Prerolling
self._clear_evt_queue()
self._timeline.set_prerolling(True)
self._assert_no_change_then_commit(self._timeline.is_prerolling(), False)
self.assertTrue(self._timeline.is_prerolling())
self._verify_evt(omni.timeline.TimelineEventType.PREROLLING_CHANGED, "prerolling", True)
## Change TimeCodesPerSeconds and verify if CurrentTime is properly refitted.
time_codes_per_second = 24
timecode = 50
self._timeline.set_time_codes_per_second(time_codes_per_second)
self._timeline.set_start_time(0)
self._timeline.set_end_time(100)
self._timeline.set_current_time(timecode)
self._timeline.set_time_codes_per_second(100)
self._timeline.commit()
# the current time in seconds stays the same after changing timeCodesPerSecond
self.assertEqual(timecode, self._timeline.get_current_time())
## Play in range
self._timeline.stop()
self._timeline.set_auto_update(True)
start_time_seconds = 0
range_start_timecode = 20
range_end_timecode = 30
end_time_seconds = 40
self._timeline.set_start_time(start_time_seconds)
self._timeline.set_end_time(end_time_seconds)
self._timeline.commit()
self._clear_evt_queue()
self._timeline.play(range_start_timecode, range_end_timecode, False)
self._assert_no_change_then_commit(True, True)
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self.assertTrue(self._timeline.is_playing())
self.assertFalse(self._timeline.is_stopped())
await asyncio.sleep(5)
self.assertEqual(start_time_seconds, self._timeline.get_start_time())
self.assertEqual(end_time_seconds, self._timeline.get_end_time())
self.assertEqual(range_end_timecode, self._timeline.get_current_time() * self._timeline.get_time_codes_per_seconds())
## tentative time
current_time = 2.0
tentative_time = 2.5
self._timeline.set_current_time(current_time)
self._timeline.commit()
self._clear_evt_queue()
self._timeline.set_tentative_time(tentative_time)
self._assert_no_change_then_commit(self._timeline.get_tentative_time(), current_time)
self._verify_evt(omni.timeline.TimelineEventType.TENTATIVE_TIME_CHANGED)
self.assertEqual(tentative_time, self._timeline.get_tentative_time())
self.assertEqual(current_time, self._timeline.get_current_time())
self._timeline.clear_tentative_time()
self._assert_no_change_then_commit(self._timeline.get_tentative_time(), tentative_time)
self.assertEqual(self._timeline.get_tentative_time(), self._timeline.get_current_time())
## tentative time and events
tentative_time = 3.5
self._timeline.set_tentative_time(tentative_time)
self._timeline.commit()
self._clear_evt_queue()
self._timeline.forward_one_frame()
self._assert_no_change_then_commit(True, True)
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self._verify_evt(omni.timeline.TimelineEventType.PAUSE)
self.assertTrue(self._buffered_evts.empty()) # tentative time was used, no other event
self._timeline.set_tentative_time(tentative_time)
self._timeline.commit()
self._clear_evt_queue()
self._timeline.rewind_one_frame()
self._assert_no_change_then_commit(True, True)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self.assertTrue(self._buffered_evts.empty()) # tentative time was used, no other event
## Forward one frame while the timeline is playing: only time is ticked, no play/pause
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue() # don't care
old_current_time = self._timeline.get_current_time()
tcps_current = self._timeline.get_time_codes_per_seconds()
self._timeline.forward_one_frame()
self._assert_no_change_then_commit(self._timeline.get_current_time(), old_current_time)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
"currentTime", old_current_time + 1.0 / tcps_current
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT,
"currentTime", old_current_time + 1.0 / tcps_current
)
self.assertTrue(self._buffered_evts.empty())
self.assertAlmostEqual(old_current_time + 1.0 / tcps_current, self._timeline.get_current_time(), places=5)
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue() # don't care
## substepping
subsample_rate = 3
old_subsample_rate = self._timeline.get_ticks_per_frame()
self._timeline.set_ticks_per_frame(subsample_rate)
self._assert_no_change_then_commit(self._timeline.get_ticks_per_frame(), old_subsample_rate)
self._verify_evt(
omni.timeline.TimelineEventType.TICKS_PER_FRAME_CHANGED, "ticksPerFrame", subsample_rate, exact=True
)
self.assertEqual(self._timeline.get_ticks_per_frame(), subsample_rate)
self.assertEqual(self._timeline.get_ticks_per_second(), subsample_rate * self._timeline.get_time_codes_per_seconds())
self._timeline.play()
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
await self._app.next_update_async()
for i in range(subsample_rate):
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "tick", i, exact=True
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "tick", i, exact=True
)
tick_dt = 1.0 / (self._timeline.get_time_codes_per_seconds() * subsample_rate)
await self._app.next_update_async()
for i in range(subsample_rate):
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, "dt", tick_dt, exact=False
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, "dt", tick_dt, exact=False
)
start_time = self._timeline.get_current_time()
await self._app.next_update_async()
for i in range(subsample_rate):
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
"currentTime",
start_time + (i + 1) * tick_dt,
exact=False
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT,
"currentTime",
start_time + (i + 1) * tick_dt,
exact=False
)
self._timeline.stop()
self._timeline.commit()
## substepping with event handling and event firing in the kit runloop
self._tick = True
def pause(e: carb.events.IEvent):
if e.type == int(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED):
if self._tick:
self._timeline.pause()
else:
self._timeline.play()
self._tick = not self._tick
pause_sub = self._timeline.get_timeline_event_stream().create_subscription_to_pop(pause, order=1)
start_time = self._timeline.get_current_time()
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue()
# callbacks call pause and play but we still finish the frame
await self._app.post_update_async()
for i in range(subsample_rate):
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
"currentTime",
start_time + (i + 1) * tick_dt,
exact=False
)
self._verify_evt(
omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT,
"currentTime",
start_time + (i + 1) * tick_dt,
exact=False
)
# No events until the next frame
self.assertTrue(self._buffered_evts.empty())
# wait until the next update; events are then re-played
await self._app.next_update_async()
self._tick = True
for i in range(subsample_rate):
if self._tick:
self._verify_evt(omni.timeline.TimelineEventType.PAUSE)
else:
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self._tick = not self._tick
self.assertEqual(self._timeline.is_playing(), self._tick)
pause_sub = None
# stop for the next test
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue()
## frame skipping
self.assertFalse(self._timeline.get_play_every_frame()) # off by default
self.assertAlmostEqual(self._settings.get(COMPENSATE_PLAY_DELAY_PATH), 0.0) # No compensation by default
self._timeline.set_time_codes_per_second(time_codes_per_second)
self._timeline.set_ticks_per_frame(1)
self._timeline.set_target_framerate(3.0 * time_code_per_sec)
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue()
await self._app.next_update_async()
# events are fired on the first call
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
# 2 frames are skipped
await self._app.next_update_async()
self.assertTrue(self._buffered_evts.empty())
await self._app.next_update_async()
self.assertTrue(self._buffered_evts.empty())
# firing events again
await self._app.next_update_async()
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue() # don't care
# fast mode, target frame rate is still 3 * time_code_per_sec, but frames should not be skipped
self._timeline.set_play_every_frame(True)
self._assert_no_change_then_commit(self._timeline.get_play_every_frame(), False)
self._verify_evt(
omni.timeline.TimelineEventType.PLAY_EVERY_FRAME_CHANGED, "playEveryFrame", True, exact=True
)
self._timeline.play()
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
await self._app.next_update_async()
for i in range(5):
await self._app.next_update_async()
# events are fired for every call
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self._timeline.set_play_every_frame(False)
self._timeline.commit()
self._clear_evt_queue() # don't care
## test multiple pipelines
self._timeline.stop()
self._timeline.commit()
default_timeline_current_time = self._timeline.get_current_time()
new_timeline_name = "new_timeline"
self._new_timeline = omni.timeline.get_timeline_interface(new_timeline_name)
self._new_timeline.set_current_time(default_timeline_current_time + 1.0)
self._new_timeline.set_end_time(default_timeline_current_time + 10.0)
self._new_timeline.commit()
new_timeline_current_time = self._new_timeline.get_current_time()
self.assertNotEqual(new_timeline_current_time, self._timeline.get_current_time())
self._new_timeline.play()
self._new_timeline.commit()
self.assertFalse(self._timeline.is_playing())
self.assertTrue(self._new_timeline.is_playing())
for i in range(10):
await self._app.next_update_async()
self.assertNotEqual(new_timeline_current_time, self._new_timeline.get_current_time())
self.assertEqual(default_timeline_current_time, self._timeline.get_current_time())
# OM-76359: test that cleanup works
self._new_timeline = None
self._dummy_timeline = omni.timeline.get_timeline_interface('dummy')
self._new_timeline = omni.timeline.get_timeline_interface(new_timeline_name)
self.assertAlmostEqual(0.0, self._new_timeline.get_current_time(), places=5)
omni.timeline.destroy_timeline(new_timeline_name)
omni.timeline.destroy_timeline('dummy')
## commit and its silent version
self._clear_evt_queue()
self._timeline.set_target_framerate(17)
self._timeline.set_time_codes_per_second(1)
self._timeline.set_ticks_per_frame(19)
self._timeline.play()
self._timeline.stop()
self._timeline.set_current_time(15)
self._timeline.rewind_one_frame()
self._timeline.commit_silently()
self.assertTrue(self._buffered_evts.empty())
self.assertAlmostEqual(self._timeline.get_target_framerate(), 17)
self.assertAlmostEqual(self._timeline.get_time_codes_per_seconds(), 1)
self.assertAlmostEqual(self._timeline.get_ticks_per_frame(), 19)
self.assertFalse(self._timeline.is_playing())
self.assertAlmostEqual(self._timeline.get_current_time(), 14)
# Revert
self._timeline.set_ticks_per_frame(1)
await self._app.next_update_async()
self._clear_evt_queue()
self._timeline.set_start_time(1)
self._timeline.set_end_time(10)
self._timeline.set_current_time(5)
self._timeline.play()
self._timeline.forward_one_frame()
self._timeline.rewind_one_frame()
self.assertAlmostEqual(self._timeline.get_current_time(), 14)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED, 'startTime', 1)
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED, 'endTime', 10)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, 'currentTime', 5)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, 'currentTime', 5)
self._verify_evt(omni.timeline.TimelineEventType.PLAY)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, 'currentTime', 6)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, 'currentTime', 6)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED, 'currentTime', 5)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT, 'currentTime', 5)
self.assertTrue(self._buffered_evts.empty())
self.assertAlmostEqual(self._timeline.get_current_time(), 5)
self.assertTrue(self._timeline.is_playing())
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue()
# calling manually to make sure they come after the API test
# independently of the test env
await self._test_runloop_integration()
await self._test_director()
async def _test_runloop_integration(self):
""" Test how the run loop behaves when it is controlled by the timeline
and test proper behavior when runloop slows down
"""
# make sure other tests do not interfere
TIMELINE_FPS = 25.0
RUNLOOP_FPS = TIMELINE_FPS * 4.0 # 100, 1/4 frame skipping
self._timeline.stop()
self._timeline.set_current_time(0.0)
self._timeline.set_start_time(0.0)
self._timeline.set_end_time(1000.0)
self._timeline.set_target_framerate(RUNLOOP_FPS)
self._timeline.set_time_codes_per_second(TIMELINE_FPS)
self._timeline.set_play_every_frame(False)
self._timeline.set_ticks_per_frame(1)
self._timeline.play()
self._timeline.commit()
# Run loop is expected to run with 100 FPS, nothing should slow it down.
# Timeline relies on this so we test this here.
self.assertEqual(self._settings.get(RUNLOOP_RATE_LIMIT_PATH), RUNLOOP_FPS)
FPS_TOLERANCE_PERCENT = 5
runloop_dt_min = (1.0 / RUNLOOP_FPS) * (1 - FPS_TOLERANCE_PERCENT * 0.01)
runloop_dt_max = (1.0 / RUNLOOP_FPS) * (1 + FPS_TOLERANCE_PERCENT * 0.01)
update_sub = self._app.get_update_event_stream().create_subscription_to_pop(
self._save_runloop_dt,
name="[TimelineTest save dt]"
)
for i in range(3):
await self._app.next_update_async() # warm up, "consume" old FPS
for i in range(5):
await self._app.next_update_async()
self.assertTrue(self._runloop_dt >= runloop_dt_min,
"Run loop dt is too far from expected: {} vs {}"
.format(self._runloop_dt, (1.0 / RUNLOOP_FPS)))
self.assertTrue(self._runloop_dt <= runloop_dt_max,
"Run loop dt is too far from expected: {} vs {}"
.format(self._runloop_dt, (1.0 / RUNLOOP_FPS)))
# Timeline wants to keep up with real time if run loop gets too slow (no frame skipping)
self._settings.set(COMPENSATE_PLAY_DELAY_PATH, 1000.0) # set something high, e.g. 1000s
pre_update_sub = self._app.get_pre_update_event_stream().create_subscription_to_pop(
self._sleep,
name="[TimelineTest sleep]"
)
self._timeline.stop()
self._timeline.play()
self._timeline.commit()
self._clear_evt_queue()
await self._app.next_update_async()
for i in range(5):
await self._app.next_update_async()
self.assertTrue(self._runloop_dt > 0.99) # sleep
# run loop is slow, no frame skipping even though fast mode is off
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED)
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self._timeline.stop()
self._timeline.commit()
self._clear_evt_queue()
async def _test_director(self):
self._timeline.stop()
self._timeline.set_current_time(0)
self._timeline.set_end_time(10)
self._timeline.set_start_time(0)
self._timeline.set_time_codes_per_second(24)
self._timeline.set_looping(False)
self._timeline.set_prerolling(True)
self._timeline.set_auto_update(False)
self._timeline.set_play_every_frame(True)
self._timeline.set_ticks_per_frame(2)
self._timeline.set_target_framerate(24)
self._timeline.commit()
self._clear_evt_queue()
self.assertIsNone(self._timeline.get_director())
self._director_timeline = omni.timeline.get_timeline_interface('director')
# Make sure they have the same parameters
self._director_timeline.stop()
self._director_timeline.set_current_time(self._timeline.get_current_time())
self._director_timeline.set_end_time(self._timeline.get_end_time())
self._director_timeline.set_start_time(self._timeline.get_start_time())
self._director_timeline.set_time_codes_per_second(self._timeline.get_time_codes_per_seconds())
self._director_timeline.set_looping(self._timeline.is_looping())
self._director_timeline.set_prerolling(self._timeline.is_prerolling())
self._director_timeline.set_auto_update(self._timeline.is_auto_updating())
self._director_timeline.set_play_every_frame(self._timeline.get_play_every_frame())
self._director_timeline.set_ticks_per_frame(self._timeline.get_ticks_per_frame())
self._director_timeline.set_target_framerate(self._timeline.get_target_framerate())
self._director_timeline.commit()
self._timeline.set_director(self._director_timeline)
self._timeline.commit()
self._verify_evt(omni.timeline.TimelineEventType.DIRECTOR_CHANGED, 'directorName', 'director')
self._director_timeline.play()
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertTrue(self._timeline.is_playing())
self._director_timeline.pause()
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertFalse(self._timeline.is_playing())
self._director_timeline.stop()
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertTrue(self._timeline.is_stopped())
self._director_timeline.set_current_time(2)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_current_time(), 2, places=4)
self._director_timeline.set_end_time(5)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_end_time(), 5, places=4)
self._director_timeline.set_start_time(1)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_start_time(), 1, places=4)
self._director_timeline.set_time_codes_per_second(30)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_time_codes_per_seconds(), 30, places=4)
self._director_timeline.set_looping(True)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertTrue(self._timeline.is_looping())
self._director_timeline.set_prerolling(False)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertFalse(self._timeline.is_prerolling())
self._director_timeline.set_auto_update(True)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertTrue(self._timeline.is_auto_updating())
self._director_timeline.set_play_every_frame(False)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertFalse(self._timeline.get_play_every_frame())
self._director_timeline.set_ticks_per_frame(1)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertEqual(self._timeline.get_ticks_per_frame(), 1)
self._director_timeline.set_target_framerate(30)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_target_framerate(), 30, places=4)
self.assertFalse(self._timeline.is_zoomed())
zoom_start = self._timeline.get_start_time() + 1
zoom_end = self._timeline.get_end_time() - 1
self._director_timeline.set_zoom_range(zoom_start, zoom_end)
self._director_timeline.commit()
self._timeline.commit()
await self._app.next_update_async()
self.assertAlmostEqual(self._timeline.get_zoom_start_time(), zoom_start)
self.assertAlmostEqual(self._timeline.get_zoom_end_time(), zoom_end)
self.assertTrue(self._timeline.is_zoomed())
self._clear_evt_queue() # don't care
# Make sure we still get the permanent tick from the timeline
# It might be delayed by one frame
await self._app.next_update_async()
await self._app.next_update_async()
self._verify_evt(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)
self._clear_evt_queue() # don't care
self._timeline.set_director(None)
self._timeline.commit()
await self._app.next_update_async()
self._verify_evt_exists(omni.timeline.TimelineEventType.DIRECTOR_CHANGED, 'hasDirector', False)
omni.timeline.destroy_timeline('director')
async def test_zoom(self):
timeline_name = 'zoom_test'
# create a new timeline so we don't interfere with other tests.
timeline = omni.timeline.get_timeline_interface(timeline_name)
timeline.set_time_codes_per_second(30)
start_time = 0
end_time = 10
timeline.set_start_time(start_time)
timeline.set_end_time(end_time)
# initial state: no zoom
self.assertAlmostEqual(timeline.get_start_time(), timeline.get_zoom_start_time())
self.assertAlmostEqual(timeline.get_end_time(), timeline.get_zoom_end_time())
self.assertFalse(timeline.is_zoomed())
# setting start and end time keeps the non-zoomed state
start_time = 1 # smaller interval than the current
end_time = 9
timeline.set_start_time(start_time)
self.assertAlmostEqual(timeline.get_start_time(), timeline.get_zoom_start_time())
self.assertFalse(timeline.is_zoomed())
timeline.commit()
timeline.set_end_time(end_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_end_time(), timeline.get_zoom_end_time())
self.assertFalse(timeline.is_zoomed())
start_time = 0 # larger interval
end_time = 10
timeline.set_start_time(start_time)
self.assertAlmostEqual(timeline.get_start_time(), timeline.get_zoom_start_time())
self.assertFalse(timeline.is_zoomed())
timeline.commit()
timeline.set_end_time(end_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_end_time(), timeline.get_zoom_end_time())
self.assertFalse(timeline.is_zoomed())
# changes are not immediate
timeline.set_zoom_range(start_time + 1, end_time - 1)
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time)
self.assertAlmostEqual(timeline.get_zoom_end_time(), end_time)
self.assertFalse(timeline.is_zoomed())
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time + 1)
self.assertAlmostEqual(timeline.get_zoom_end_time(), end_time - 1)
self.assertTrue(timeline.is_zoomed())
# clear
timeline.clear_zoom()
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time + 1)
self.assertAlmostEqual(timeline.get_zoom_end_time(), end_time - 1)
self.assertTrue(timeline.is_zoomed())
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time)
self.assertAlmostEqual(timeline.get_zoom_end_time(), end_time)
self.assertFalse(timeline.is_zoomed())
# set zoom ranges inside the timeline's range
timeline_sub = timeline.get_timeline_event_stream().create_subscription_to_pop(
self._on_timeline_event
)
self._test_zoom_change(timeline, start_time + 1, end_time - 1, True, start_time + 1, end_time - 1,
"startTime", start_time + 1, False)
self._test_zoom_change(timeline, start_time + 1, end_time - 2, True, start_time + 1, end_time - 2,
"endTime", end_time - 2, False)
self._test_zoom_change(timeline, start_time + 2, end_time - 1, True, start_time + 2, end_time - 1,
"cleared", False)
# invalid input
zoom_start, zoom_end = timeline.get_zoom_start_time(), timeline.get_zoom_end_time()
timeline.set_zoom_range(start_time + 3, start_time + 1) # end < start
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), zoom_start)
self.assertAlmostEqual(timeline.get_zoom_end_time(), zoom_end)
# setting the same values should fire no events
timeline.set_zoom_range(timeline.get_zoom_start_time(), timeline.get_zoom_end_time())
timeline.commit()
self.assertTrue(self._buffered_evts.empty())
# set zoom ranges fully or partially outside the timeline's range, should be clipped
self._test_zoom_change(timeline, start_time + 1, end_time + 1, True, start_time + 1, end_time,
"endTime", end_time, False)
self._test_zoom_change(timeline, start_time - 1, end_time - 1, True, start_time, end_time - 1,
"startTime", start_time, False)
self._test_zoom_change(timeline, start_time - 1, end_time + 1, False, start_time, end_time,
"cleared", True)
# passing an empty interval should set a 1 frame long zoom range
self.assertGreater(timeline.get_time_codes_per_seconds(), 0)
dt = 1.0 / timeline.get_time_codes_per_seconds()
self._test_zoom_change(timeline, end_time, end_time, True, end_time - dt, end_time,
"startTime", end_time - dt)
self._test_zoom_change(timeline, start_time + 1, start_time + 1, True, start_time + 1, start_time + 1 + dt,
"endTime", start_time + 1 + dt)
# changing start/end time should not affect the zoom when setting a larger range
old_zoom_start = timeline.get_zoom_start_time()
old_zoom_end = timeline.get_zoom_end_time()
start_time = -1
timeline.set_start_time(start_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), old_zoom_start)
self.assertAlmostEqual(timeline.get_zoom_end_time(), old_zoom_end)
self.assertTrue(timeline.is_zoomed())
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED)
self.assertTrue(self._buffered_evts.empty())
end_time = 9
timeline.set_end_time(end_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), old_zoom_start)
self.assertAlmostEqual(timeline.get_zoom_end_time(), old_zoom_end)
self.assertTrue(timeline.is_zoomed())
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED)
self.assertTrue(self._buffered_evts.empty())
# zoom range should shrink with start and end time
timeline.set_zoom_range(start_time + 1, end_time - 1) # preparations for this test
timeline.commit()
old_zoom_start = timeline.get_zoom_start_time()
old_zoom_end = timeline.get_zoom_end_time()
self.assertTrue(timeline.is_zoomed())
self._clear_evt_queue() # don't care
start_time = timeline.get_zoom_start_time() + 1
timeline.set_start_time(start_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time)
self.assertAlmostEqual(timeline.get_zoom_end_time(), old_zoom_end)
self._verify_evt(omni.timeline.TimelineEventType.START_TIME_CHANGED)
self._verify_evt(omni.timeline.TimelineEventType.ZOOM_CHANGED, "startTime", start_time, False)
end_time = timeline.get_zoom_end_time() - 1
timeline.set_end_time(end_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), start_time)
self.assertAlmostEqual(timeline.get_zoom_end_time(), end_time)
self._verify_evt(omni.timeline.TimelineEventType.END_TIME_CHANGED)
self._verify_evt(omni.timeline.TimelineEventType.ZOOM_CHANGED, "endTime", end_time, False)
# playback is affected by zoom
timeline_sub = None # don't care anymore
zoom_start = start_time + 1
zoom_end = end_time - 1
timeline.set_zoom_range(zoom_start, zoom_end)
timeline.commit()
self.assertTrue(timeline.is_zoomed())
timeline.play()
timeline.commit()
self.assertAlmostEqual(timeline.get_current_time(), zoom_start)
timeline.rewind_one_frame()
timeline.commit()
self.assertAlmostEqual(timeline.get_current_time(), zoom_end)
timeline.forward_one_frame()
timeline.commit()
self.assertAlmostEqual(timeline.get_current_time(), zoom_start)
timeline.set_current_time(zoom_start + 1)
timeline.stop()
timeline.commit()
self.assertAlmostEqual(timeline.get_current_time(), zoom_start)
omni.timeline.destroy_timeline(timeline_name)
def _on_timeline_event(self, e: carb.events.IEvent):
self._buffered_evts.put(e)
def _verify_evt(
self, type: omni.timeline.TimelineEventType, payload_key: str = None, payload_val=None, exact=False
):
try:
evt = self._buffered_evts.get_nowait()
if evt:
self.assertEqual(evt.type, int(type))
if payload_key is not None and payload_val is not None:
if exact:
self.assertEqual(evt.payload[payload_key], payload_val)
else:
self.assertAlmostEqual(evt.payload[payload_key], payload_val, places=4)
except queue.Empty:
self.assertTrue(False, "Expect event in queue but queue is empty")
# verifies that an event of the given type exists in the queue and that its payload matches
def _verify_evt_exists(
self, type: omni.timeline.TimelineEventType, payload_key: str = None, payload_val=None, exact=False
):
found = False
while not self._buffered_evts.empty():
evt = self._buffered_evts.get_nowait()
if evt and evt.type == int(type):
found = True
if payload_key is not None and payload_val is not None:
if exact:
self.assertEqual(evt.payload[payload_key], payload_val)
else:
self.assertAlmostEqual(evt.payload[payload_key], payload_val, places=4)
self.assertTrue(found, f"Event {type} was not found in the queue.")
def _clear_evt_queue(self):
while not self._buffered_evts.empty():
# clear the buffer
self._buffered_evts.get_nowait()
def _sleep(self, _):
sleep(1.0)
def _save_runloop_dt(self, e: carb.events.IEvent):
self._runloop_dt = e.payload['dt']
def _assert_no_change_then_commit(self, old_value, new_value):
self.assertEqual(old_value, new_value)
self.assertTrue(self._buffered_evts.empty())
self._timeline.commit()
def _test_zoom_change(
self,
timeline,
start_time,
end_time,
expected_is_zoomed,
expected_start_time,
expected_end_time,
payload_key,
payload_value,
exact=True
):
timeline.set_zoom_range(start_time, end_time)
timeline.commit()
self.assertAlmostEqual(timeline.get_zoom_start_time(), expected_start_time)
self.assertAlmostEqual(timeline.get_zoom_end_time(), expected_end_time)
self.assertEqual(expected_is_zoomed, timeline.is_zoomed())
self._verify_evt(omni.timeline.TimelineEventType.ZOOM_CHANGED, payload_key, payload_value, exact)
self.assertTrue(self._buffered_evts.empty())
|
omniverse-code/kit/exts/omni.timeline/omni/timeline/tests/test_thread_safety.py | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import carb.events
import omni.kit.app
import omni.kit.test
import omni.timeline
import carb.settings
import random
from threading import get_ident, Lock, Thread
from time import sleep
from typing import List
class TestTimelineThreadSafety(omni.kit.test.AsyncTestCase):
async def setUp(self):
self._app = omni.kit.app.get_app()
self._timeline = omni.timeline.get_timeline_interface()
self._timeline.stop() # make sure other tests do not interfere
await self._app.next_update_async()
self._timeline.set_end_time(100)
self._timeline.set_start_time(0)
self._timeline.set_current_time(0)
self._timeline.set_time_codes_per_second(30)
self._timeline.clear_zoom()
await self._app.next_update_async()
self._buffered_evts = []
self._timeline_sub = self._timeline.get_timeline_event_stream().create_subscription_to_pop(
self._on_timeline_event
)
self._setter_to_event_map = {
'set_auto_update': omni.timeline.TimelineEventType.AUTO_UPDATE_CHANGED,
'set_prerolling': omni.timeline.TimelineEventType.PREROLLING_CHANGED,
'set_looping': omni.timeline.TimelineEventType.LOOP_MODE_CHANGED,
'set_fast_mode': omni.timeline.TimelineEventType.FAST_MODE_CHANGED,
'set_target_framerate': omni.timeline.TimelineEventType.TARGET_FRAMERATE_CHANGED,
'set_current_time': omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
'set_start_time': omni.timeline.TimelineEventType.START_TIME_CHANGED,
'set_end_time': omni.timeline.TimelineEventType.END_TIME_CHANGED,
'set_time_codes_per_second': omni.timeline.TimelineEventType.TIME_CODE_PER_SECOND_CHANGED,
'set_ticks_per_frame': omni.timeline.TimelineEventType.TICKS_PER_FRAME_CHANGED,
'set_tentative_time': omni.timeline.TimelineEventType.TENTATIVE_TIME_CHANGED,
'play': omni.timeline.TimelineEventType.PLAY,
'pause': omni.timeline.TimelineEventType.PAUSE,
'stop': omni.timeline.TimelineEventType.STOP,
'rewind_one_frame': omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
'forward_one_frame': omni.timeline.TimelineEventType.CURRENT_TIME_TICKED,
}
async def tearDown(self):
self._timeline = None
self._timeline_sub = None
async def test_setters(self):
self._main_thread_id = get_ident()
all_setters = ['set_auto_update', 'set_prerolling', 'set_looping', 'set_fast_mode', 'set_target_framerate',
'set_current_time', 'set_start_time', 'set_end_time', 'set_time_codes_per_second', 'set_ticks_per_frame',
'set_tentative_time']
all_getters = ['is_auto_updating', 'is_prerolling', 'is_looping', 'get_fast_mode', 'get_target_framerate',
'get_current_time', 'get_start_time', 'get_end_time', 'get_time_codes_per_seconds', 'get_ticks_per_frame',
'get_tentative_time']
all_values_to_set = [[True, False], [True, False], [True, False], [True, False], [24, 30, 60, 100],
[0, 10, 12, 20], [0, 10], [20, 100], [24, 30, 60], [1, 2, 4],
[0, 10, 12, 20]]
self.assertEqual(len(all_getters), len(all_setters))
self.assertEqual(len(all_getters), len(all_values_to_set))
# Run for every attribute individually
for i in range(len(all_setters)):
print(f'Thread safety test for timeline method {all_setters[i]}')
# Trying all values
await self.do_multithreaded_test([[all_setters[i]]], [[all_getters[i]]], [[all_values_to_set[i]]], 200, 100)
# Setting a single value, no other values should appear. Making sure the initial value is what we'll set.
getattr(self._timeline, all_setters[i])(all_values_to_set[i][0])
await self._app.next_update_async()
await self.do_multithreaded_test([[all_setters[i]]], [[all_getters[i]]], [[[all_values_to_set[i][0]]]], 50, 50)
# Run for all attributes
print('Thread safety test for all timeline methods')
await self.do_multithreaded_test([all_setters], [all_getters], [all_values_to_set], 100, 100)
async def test_time_control(self):
self._main_thread_id = get_ident()
all_methods = ['play', 'pause', 'stop', 'rewind_one_frame', 'forward_one_frame']
await self.do_multithreaded_test([all_methods], [None], [None], 50, 50)
async def do_multithreaded_test(
self,
setters: List[List[str]],
getters: List[List[str]],
values_to_set: List[list],
thread_count_per_type: int = 50,
thread_runs: int = 50):
MIN_SLEEP = 0.01
SLEEP_RANGE = 0.05
self.assertEqual(len(setters), len(getters))
for i, setter_list in enumerate(setters):
if values_to_set[i] is not None:
self.assertEqual(len(setter_list), len(values_to_set[i]))
lock = Lock()
running_threads = 0
def do(runs: int, thread_id: int, setters: List[str], getters: List[str], values_to_set: list):
# Use the enclosing scope's counter so the main loop below can observe when all threads
# have finished; passing it as an int parameter would only update a thread-local copy.
nonlocal running_threads
with lock:
running_threads = running_threads + 1
if getters is not None:
self.assertEqual(len(setters), len(getters))
if values_to_set is not None:
self.assertEqual(len(setters), len(values_to_set))
timeline = omni.timeline.get_timeline_interface()
rnd = random.Random()
rnd.seed(thread_id)
for run in range(runs):
i_attr = rnd.randint(0, len(setters) - 1)
if values_to_set is not None: # setter is a setter method that accepts a value
values = values_to_set[i_attr]
i_value = rnd.randint(0, len(values) - 1)
getattr(timeline, setters[i_attr])(values[i_value])
# We might want to see this when running tests, commented out for now
# print(f'Thread {thread_id} has called {setters[i_attr]}({values[i_value]})')
else: # "setter" is a method with no parameter (e.g. play())
getattr(timeline, setters[i_attr])()
sleep(MIN_SLEEP + rnd.random() * SLEEP_RANGE)
if getters is not None and values_to_set is not None:
current_value = getattr(timeline, getters[i_attr])()
self.assertTrue(current_value in values, f'Invalid value in thread {thread_id}: {current_value} is not in {values}')
            with lock:
                running_threads[0] -= 1
thread_id = 0
threads = []
for thread_type_idx, setter in enumerate(setters):
for i in range(thread_count_per_type):
threads.append(
Thread(
target = do,
args = (
thread_runs,
thread_id,
setters[thread_type_idx],
getters[thread_type_idx],
values_to_set[thread_type_idx],
running_threads
)
)
)
threads[-1].start()
thread_id = thread_id + 1
self._buffered_evts = []
threads_running = True
while threads_running:
await self._app.next_update_async()
with lock:
                threads_running = running_threads[0] > 0
for thread in threads:
thread.join()
# an extra update to trigger last callbacks
await self._app.next_update_async()
# validate that we received only the expected events
all_setters = []
for setter_list in setters:
for setter in setter_list:
all_setters.append(setter)
allowed_events = [int(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED_PERMANENT)]
for setter in all_setters:
allowed_events.append(int(self._setter_to_event_map[setter]))
for evt in self._buffered_evts:
self.assertTrue(evt in allowed_events, f'Error: event {evt} is not in allowed events {allowed_events} for setters {all_setters}')
def _on_timeline_event(self, e: carb.events.IEvent):
# callbacks are on the main thread
self.assertEqual(get_ident(), self._main_thread_id)
# save event type
self._buffered_evts.append(e.type)
|
omniverse-code/kit/exts/omni.timeline/omni/timeline/tests/__init__.py | from .tests import *
from .test_thread_safety import * |
omniverse-code/kit/exts/omni.kit.widget.graph/PACKAGE-LICENSES/omni.kit.widget.graph-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.widget.graph/config/extension.toml | [package]
title = "Omni::UI Graph Widget"
category = "Internal"
description = "Omniverse Kit Graph Widget"
version = "1.5.6-104_2"
authors = ["NVIDIA"]
repository = ""
keywords = ["graph", "widget"]
changelog = "docs/CHANGELOG.md"
preview_image = "data/preview.png"
icon = "data/icon.png"
[dependencies]
"omni.ui" = {}
"omni.usd.libs" = {}
[[python.module]]
name = "omni.kit.widget.graph"
[[python.module]]
name = "omni.kit.widget.graph.tests"
[settings]
exts."omni.kit.widget.graph".raster_nodes = false
[[test]]
args = [
"--/app/window/dpiScaleOverride=1.0",
"--/app/window/scaleToMonitor=false",
]
dependencies = [
"omni.kit.renderer.capture",
]
stdoutFailPatterns.exclude = [
"*omniclient: Initialization failed*",
]
unreliable = true # OM-48945
[documentation]
pages = [
"docs/overview.md",
"docs/CHANGELOG.md",
]
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node_delegate.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNodeDelegate"]
from .graph_model import GraphModel
from .graph_node_delegate_closed import GraphNodeDelegateClosed
from .graph_node_delegate_full import GraphNodeDelegateFull
from .graph_node_delegate_router import GraphNodeDelegateRouter
from pathlib import Path
import omni.ui as ui
CURRENT_PATH = Path(__file__).parent
ICON_PATH = CURRENT_PATH.parent.parent.parent.parent.joinpath("icons")
# The main colors
LABEL_COLOR = 0xFFB4B4B4
BACKGROUND_COLOR = 0xFF34302A
BORDER_DEFAULT = 0xFFDBA656
BORDER_SELECTED = 0xFFFFFFFF
CONNECTION = 0xFF80C280
NODE_BACKGROUND = 0xFF675853
NODE_BACKGROUND_SELECTED = 0xFF7F6C66
class GraphNodeDelegate(GraphNodeDelegateRouter):
"""
The delegate with the Omniverse design that has both full and collapsed states.
"""
def __init__(self):
super().__init__()
def is_closed(model, node):
expansion_state = model[node].expansion_state
return expansion_state == GraphModel.ExpansionState.CLOSED
# Initial setup is the delegates for full and closed states.
self.add_route(GraphNodeDelegateFull())
self.add_route(GraphNodeDelegateClosed(), expression=is_closed)
@staticmethod
def get_style():
"""Return style that can be used with this delegate"""
style = {
"Graph": {"background_color": BACKGROUND_COLOR},
"Graph.Connection": {"color": CONNECTION, "background_color": CONNECTION, "border_width": 2.0},
# Node
"Graph.Node.Background": {"background_color": NODE_BACKGROUND},
"Graph.Node.Background:selected": {"background_color": NODE_BACKGROUND_SELECTED},
"Graph.Node.Border": {"background_color": BORDER_DEFAULT},
"Graph.Node.Border:selected": {"background_color": BORDER_SELECTED},
"Graph.Node.Resize": {"background_color": BORDER_DEFAULT},
"Graph.Node.Resize:selected": {"background_color": BORDER_SELECTED},
"Graph.Node.Description": {"color": LABEL_COLOR},
"Graph.Node.Description.Edit": {"color": LABEL_COLOR, "background_color": NODE_BACKGROUND},
# Header Input
"Graph.Node.Input": {
"background_color": BORDER_DEFAULT,
"border_color": BORDER_DEFAULT,
"border_width": 3.0,
},
# Header
"Graph.Node.Header.Label": {"color": 0xFFB4B4B4, "margin_height": 5.0, "font_size": 14.0},
"Graph.Node.Header.Label::Degenerated": {"font_size": 22.0},
"Graph.Node.Header.Collapse": {
"background_color": 0x0,
"padding": 0,
"image_url": f"{ICON_PATH}/hamburger-open.svg",
},
"Graph.Node.Header.Collapse::Minimized": {"image_url": f"{ICON_PATH}/hamburger-minimized.svg"},
"Graph.Node.Header.Collapse::Closed": {"image_url": f"{ICON_PATH}/hamburger-closed.svg"},
# Header Output
"Graph.Node.Output": {
"background_color": BACKGROUND_COLOR,
"border_color": BORDER_DEFAULT,
"border_width": 3.0,
},
# Port Group
"Graph.Node.Port.Group": {"color": 0xFFB4B4B4},
"Graph.Node.Port.Group::Plus": {"image_url": f"{ICON_PATH}/Plus.svg"},
"Graph.Node.Port.Group::Minus": {"image_url": f"{ICON_PATH}/Minus.svg"},
# Port Input
"Graph.Node.Port.Input": {
"background_color": BORDER_DEFAULT,
"border_color": BORDER_DEFAULT,
"border_width": 3.0,
},
"Graph.Node.Port.Input:selected": {"border_color": BORDER_SELECTED},
"Graph.Node.Port.Input.CustomColor": {"background_color": 0x0},
# Port
"Graph.Node.Port.Branch": {
"color": 0xFFB4B4B4,
"border_width": 0.75,
},
"Graph.Node.Port.Label": {
"color": 0xFFB4B4B4,
"margin_width": 5.0,
"margin_height": 3.0,
"font_size": 14.0,
},
"Graph.Node.Port.Label::output": {"alignment": ui.Alignment.RIGHT},
# Port Output
"Graph.Node.Port.Output": {
"background_color": BACKGROUND_COLOR,
"border_color": BORDER_DEFAULT,
"border_width": 3.0,
},
"Graph.Node.Port.Output.CustomColor": {"background_color": 0x0, "border_color": 0x0, "border_width": 3.0},
# Footer
"Graph.Node.Footer": {
"background_color": BACKGROUND_COLOR,
"border_color": BORDER_DEFAULT,
"border_width": 3.0,
"border_radius": 8.0,
},
"Graph.Node.Footer:selected": {"border_color": BORDER_SELECTED},
"Graph.Node.Footer.Image": {"image_url": f"{ICON_PATH}/0101.svg", "color": BORDER_DEFAULT},
"Graph.Selecion.Rect": {
"background_color": ui.color(1.0, 1.0, 1.0, 0.1),
"border_color": ui.color(1.0, 1.0, 1.0, 0.5),
"border_width": 1,
},
}
return style
@staticmethod
def specialized_color_style(name, color, icon, icon_tint_color=None):
"""
Return part of the style that has everything to color special node type.
Args:
name: Node type
color: Node color
icon: Filename to the icon the node type should display
icon_tint_color: Icon tint color
"""
style = {
# Node
f"Graph.Node.Border::{name}": {"background_color": color},
# Header Input
f"Graph.Node.Input::{name}": {"background_color": color, "border_color": color},
# Header Output
f"Graph.Node.Output::{name}": {"border_color": color},
# Port Input
f"Graph.Node.Port.Input::{name}": {"background_color": color, "border_color": color},
f"Graph.Node.Port.Input::{name}:selected": {"background_color": color, "BORDER_SELECTED": color},
# Port Output
f"Graph.Node.Port.Output::{name}": {"border_color": color},
# Footer
f"Graph.Node.Footer::{name}": {"border_color": color},
f"Graph.Node.Footer.Image::{name}": {"image_url": icon, "color": icon_tint_color or color},
}
return style
@staticmethod
def specialized_port_style(name, color):
"""
Return part of the style that has everything to color the customizable part of the port.
Args:
            name: Port type
color: Port color
"""
style = {
f"Graph.Node.Port.Input.CustomColor::{name}": {"background_color": color},
f"Graph.Node.Port.Output.CustomColor::{name}": {"background_color": color, "border_color": color},
f"Graph.Connection::{name}": {"color": color, "background_color": color},
}
return style
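# A minimal usage sketch (illustrative only; the node type name and colors below
# are invented for the example, and the packed 0xAABBGGRR color convention is
# assumed to match the constants used in get_style above):
#
#     style = GraphNodeDelegate.get_style()
#     style.update(GraphNodeDelegate.specialized_color_style("MyNodeType", 0xFF5CA45C, f"{ICON_PATH}/0101.svg"))
#     style.update(GraphNodeDelegate.specialized_port_style("float", 0xFF80C280))
#     # ...then hand `style` to whatever widget hosts this delegate.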
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/isolation_graph_model.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["IsolationGraphModel"]
from .graph_model import GraphModel
from typing import Any
from typing import Dict
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union
from .graph_model_batch_position_helper import GraphModelBatchPositionHelper
def _get_ports_recursive(model, root):
"""Recursively get all the ports"""
ports = model[root].ports
if ports:
recursive = []
for port in ports:
recursive.append(port)
subports = _get_ports_recursive(model, port)
if subports:
recursive += subports
return recursive
return ports
class IsolationGraphModel:
class MagicWrapperMeta(type):
"""
        Python always looks up magic methods in the class (and parent classes)
        __dict__, so __getattr__ is never consulted for them. Since we want to
        forward them anyway, we use this trick with a proxy property.
        It makes the class look like the source object.
See https://stackoverflow.com/questions/9057669 for details.
"""
def __init__(cls, name, bases, dct):
ignore = "class mro new init setattr getattribute dict"
def make_proxy(name):
def proxy(self, *args):
return getattr(self.source, name)
return proxy
type.__init__(cls, name, bases, dct)
ignore = set("__%s__" % n for n in ignore.split())
for name in dir(cls):
if name.startswith("__"):
if name not in ignore and name not in dct:
setattr(cls, name, property(make_proxy(name)))
class EmptyPort:
"""Is used by the model for an empty port"""
def __init__(self, parent: Union["IsolationGraphModel.InputNode", "IsolationGraphModel.OutputNode"]):
self.parent = parent
@staticmethod
def get_type_name() -> str:
"""The string type that goes the source model abd view"""
return "EmptyPort"
class InputNode(metaclass=MagicWrapperMeta):
"""
        Used by the model for the input node. This node represents the input
        ports of the compound node and is placed in the subnetwork of the
        compound.
"""
def __init__(self, model: GraphModel, source):
self._model = model
self._inputs = []
self.source = source
# TODO: circular reference
self.empty_port = IsolationGraphModel.EmptyPort(self)
def __getattr__(self, attr):
return getattr(self.source, attr)
def __hash__(self):
# Hash should be different from source and from OutputNode
return hash((self.source, "InputNode"))
@staticmethod
def get_type_name() -> str:
"""The string type that goes the source model and view"""
return "InputNode"
@property
def ports(self) -> Optional[List[Any]]:
"""The list of ports of this node. It only has input ports from the compound node."""
ports = _get_ports_recursive(self._model, self.source)
if ports is not None:
inputs = [(port, self._model[port].inputs) for port in ports]
self._inputs = [(p, i) for p, i in inputs if i is not None]
return [p for p, _ in self._inputs] + [self.empty_port]
class OutputNode(metaclass=MagicWrapperMeta):
"""
        Used by the model for the output node. This node represents the output
        ports of the compound node and is placed in the subnetwork of the
        compound.
"""
def __init__(self, model: GraphModel, source):
self.source = source
self._model = model
self._outputs = []
# TODO: circular reference
self.empty_port = IsolationGraphModel.EmptyPort(self)
def __getattr__(self, attr):
return getattr(self.source, attr)
def __hash__(self):
# Hash should be different from source and from InputNode
return hash((self.source, "OutputNode"))
@staticmethod
def get_type_name() -> str:
"""The string type that goes the source model and view"""
return "OutputNode"
@property
def ports(self) -> Optional[List[Any]]:
"""The list of ports of this node. It only has output ports from the compound node."""
ports = self._model[self.source].ports
if ports is not None:
outputs = [(port, self._model[port].outputs) for port in ports]
self._outputs = [(p, o) for p, o in outputs if o is not None]
return [p for p, _ in self._outputs] + [self.empty_port]
class _IsolationItemProxy(GraphModel._ItemProxy):
"""
The proxy class that redirects the model calls from view to the
source model or to the isolation model.
"""
def __init__(self, model: GraphModel, item: Any, isolation_model: "IsolationGraphModel"):
super().__init__(model, item)
object.__setattr__(self, "_isolation_model", isolation_model)
def __setattr__(self, attr, value):
"""
Called when an attribute assignment is attempted. This is called
instead of the normal mechanism (i.e. store the value in the
instance dictionary).
"""
if hasattr(type(self), attr):
proxy_property = getattr(type(self), attr)
proxy_property.fset(self, value)
else:
super().__setattr__(attr, value)
def is_input_node(
self, item: Union["IsolationGraphModel.InputNode", "IsolationGraphModel.OutputNode"] = None
) -> bool:
"""True if the current redirection is related to the input node"""
if item is None:
item = self._item
return isinstance(item, IsolationGraphModel.InputNode)
def is_output_node(
self, item: Union["IsolationGraphModel.InputNode", "IsolationGraphModel.OutputNode"] = None
) -> bool:
"""True if the current redirection is related to the output node"""
if item is None:
item = self._item
return isinstance(item, IsolationGraphModel.OutputNode)
def is_empty_port(self, item: "IsolationGraphModel.EmptyPort" = None) -> bool:
"""True if the current redirection is related to the empty port"""
if item is None:
item = self._item
return isinstance(item, IsolationGraphModel.EmptyPort)
# The methods of the model
@property
def name(self) -> str:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node():
return self._model[self._item.source].name + " (input)"
elif self.is_output_node():
return self._model[self._item.source].name + " (output)"
elif self.is_empty_port():
return type(self._item).get_type_name()
else:
return self._model[self._item].name
@name.setter
def name(self, value: str):
            # Usually it's redirected automatically, but since we overrode the property getter, we need to override the setter
if self.is_input_node() or self.is_output_node():
self._model[self._item.source].name = value
else:
self._model[self._item].name = value
@property
def type(self) -> str:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node() or self.is_output_node() or self.is_empty_port():
return type(self._item).get_type_name()
else:
return self._model[self._item].type
@property
def ports(self) -> Optional[List[Any]]:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node() or self.is_output_node():
ports = self._item.ports
elif self.is_empty_port():
# TODO: Sub-ports
return
else:
ports = self._model[self._item].ports
# Save it for the future, so we know which ports belong to the current sub-network.
# TODO: read from cache when second call
for port in ports or []:
self._isolation_model._ports[port] = self._item
self._isolation_model._nodes[self._item] = ports
return ports
@ports.setter
def ports(self, value: List[Any]):
if self.is_input_node():
# Pretend like we set the ports of the compound node of the source model.
filtered = list(self._isolation_model._root_outputs.keys())
for port in value:
if self.is_empty_port(port):
continue
filtered.append(port)
self._model[self._item.source].ports = filtered
elif self.is_output_node():
# Pretend like we set the ports of the compound node of the source model.
filtered = []
for port in value:
if self.is_empty_port(port):
continue
filtered.append(port)
filtered += list(self._isolation_model._root_inputs.keys())
self._model[self._item.source].ports = filtered
else:
# Just redirect the call
self._model[self._item].ports = value
@property
def inputs(self) -> Optional[List[Any]]:
if self.is_empty_port():
if self.is_output_node(self._item.parent):
# Show dot on the left side
return []
else:
return
elif self._item in self._isolation_model._root_ports:
# This is the port of the root compound node
inputs = self._isolation_model._root_outputs.get(self._item, None)
if inputs is None:
return
else:
inputs = self._model[self._item].inputs
if not inputs:
return inputs
# Filter out the connections that go outside of this isolated model
return [
i
for i in inputs
if i in self._isolation_model._ports
or i in self._isolation_model._root_inputs
or i in self._isolation_model._root_outputs
]
@inputs.setter
def inputs(self, value: List[Any]):
if self.is_empty_port():
# The request to connect something to the Empty port of InputNode or OutputNode
target = self._item.parent.source
elif self.is_input_node() or self.is_output_node():
# The request to connect something to the InputNode or OutputNode itself
target = self._item.source
else:
target = self._item
if value is not None:
filtered = []
for p in value:
if isinstance(p, IsolationGraphModel.EmptyPort):
# If it's an EmptyPort, we need to connect it to the
# source directly. When the model receives the request
# to connect to the node itself, it will create a port
# for it.
source = p.parent.source
filtered.append(source)
else:
filtered.append(p)
value = filtered
self._model[target].inputs = value
@property
def outputs(self) -> Optional[List[Any]]:
if self.is_empty_port():
if self.is_input_node(self._item.parent):
# Show dot on the right side
return []
else:
return
elif self._item in self._isolation_model._root_ports:
outputs = self._isolation_model._root_inputs.get(self._item)
if outputs is None:
return
else:
outputs = self._model[self._item].outputs
if not outputs:
return outputs
# Filter out the connections that go outside of this isolated model
return [
o
for o in outputs
if o in self._isolation_model._ports
or o in self._isolation_model._root_inputs
or o in self._isolation_model._root_outputs
]
@property
def position(self) -> Optional[Tuple[float]]:
if self.is_input_node():
if self._isolation_model._root_inputs:
# TODO: It's not good to keep the position on the first available port. We need to group them.
port_to_keep_position = list(self._isolation_model._root_inputs.keys())[0]
position = self._model[port_to_keep_position].position
if position is not None:
return position
elif self._isolation_model._input_position:
return self._isolation_model._input_position
else:
return self._isolation_model._input_position
elif self.is_output_node():
if self._isolation_model._root_outputs:
# TODO: It's not good to keep the position on the first available port. We need to group them.
port_to_keep_position = list(self._isolation_model._root_outputs.keys())[0]
position = self._model[port_to_keep_position].position
if position is not None:
return position
elif self._isolation_model._output_position:
return self._isolation_model._output_position
else:
return self._isolation_model._output_position
else:
return self._model[self._item].position
@position.setter
def position(self, value: Optional[Tuple[float]]):
if self.is_input_node():
if self._isolation_model._root_inputs:
# TODO: get the proper port_to_keep_position
port_to_keep_position = list(self._isolation_model._root_inputs.keys())[0]
self._model[port_to_keep_position].position = value
else:
self._isolation_model._input_position = value
elif self.is_output_node():
if self._isolation_model._root_outputs:
# TODO: get the proper port_to_keep_position
port_to_keep_position = list(self._isolation_model._root_outputs.keys())[0]
self._model[port_to_keep_position].position = value
else:
self._isolation_model._output_position = value
else:
self._model[self._item].position = value
@property
def size(self) -> Optional[Tuple[float]]:
if self.is_input_node() or self.is_output_node():
# Can't set/get size of input/output node
return
else:
return self._model[self._item].size
@size.setter
def size(self, value: Optional[Tuple[float]]):
if self.is_input_node() or self.is_output_node():
# Can't set/get size of input/output node
pass
else:
self._model[self._item].size = value
@property
def display_color(self) -> Optional[Tuple[float]]:
if self.is_input_node() or self.is_output_node():
# Can't set/get display_color of input/output node
return
else:
return self._model[self._item].display_color
@display_color.setter
def display_color(self, value: Optional[Tuple[float]]):
if self.is_input_node() or self.is_output_node():
# Can't set display color of input/output
pass
else:
self._model[self._item].display_color = value
@property
def stacking_order(self) -> int:
if self.is_input_node() or self.is_output_node():
return 1
else:
return self._model[self._item].stacking_order
@property
def preview(self) -> Any:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node() or self.is_output_node():
return self._model[self._item.source].preview
else:
return self._model[self._item].preview
@property
def icon(self) -> Any:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node() or self.is_output_node():
return self._model[self._item.source].icon
else:
return self._model[self._item].icon
@property
def preview_state(self) -> GraphModel.PreviewState:
"""Redirects call to the source model if it's the model of the source model"""
if self.is_input_node() or self.is_output_node():
return self._model[self._item.source].preview_state
else:
return self._model[self._item].preview_state
@preview_state.setter
def preview_state(self, value: GraphModel.PreviewState):
if self.is_input_node() or self.is_output_node():
self._model[self._item.source].preview_state = value
else:
self._model[self._item].preview_state = value
############################################################################
def __getattr__(self, attr):
"""Pretend it's self._model"""
return getattr(self._model, attr)
def __getitem__(self, item):
"""Called to implement evaluation of self[key]"""
# Return a proxy that redirects its properties back to the model.
# return self._model[item]
return self._IsolationItemProxy(self._model, item, self)
def __init__(self, model: GraphModel, root):
self._model: Optional[GraphModel] = model
# It's important to set the proxy to set the position through this model
if self._model and isinstance(self._model, GraphModelBatchPositionHelper):
self._model.batch_proxy = self
self._root = root
self.clear_caches()
# Redirect events from the original model to here
self.__on_item_changed = GraphModel._Event()
self.__on_selection_changed = GraphModel._Event()
self.__on_node_changed = GraphModel._Event()
self.__item_changed_subscription = self._model.subscribe_item_changed(self._item_changed)
self.__selection_changed_subscription = self._model.subscribe_selection_changed(self._selection_changed)
self.__node_changed_subscription = self._model.subscribe_node_changed(self._rebuild_node)
# We create input and output nodes for the attributes of the parent compound
# node. When the parent doesn't have attributes, we need to keep the
# position locally.
self._input_position = None
self._output_position = None
# Virtual nodes
if self._root is not None and self._root_inputs:
self._input_nodes = [IsolationGraphModel.InputNode(self._model, self._root)]
else:
self._input_nodes = []
if self._root is not None and self._root_outputs:
self._output_nodes = [IsolationGraphModel.OutputNode(self._model, self._root)]
else:
self._output_nodes = []
def destroy(self):
self._model = None
self._root = None
self._ports = {}
self._nodes = {}
self._root_inputs = {}
self._root_outputs = {}
self.__on_item_changed = GraphModel._Event()
self.__on_selection_changed = GraphModel._Event()
self.__on_node_changed = GraphModel._Event()
self.__item_changed_subscription = None
self.__selection_changed_subscription = None
self._input_nodes = []
self._output_nodes = []
def clear_caches(self):
# Port to node
# TODO: WeakKeyDictionary
self._ports: Dict[Any, Any] = {}
# Nodes to ports
# TODO: WeakKeyDictionary
self._nodes: Dict[Any, Any] = {}
# Cache ports and inputs
self._root_ports: List[Any] = _get_ports_recursive(self._model, self._root) if (self._root is not None) else []
# Port to input/output
self._root_inputs: Dict[Any, Any] = {}
self._root_outputs: Dict[Any, Any] = {}
for p in self._root_ports or []:
inputs = self._model[p].inputs
if inputs is not None:
self._root_inputs[p] = inputs
outputs = self._model[p].outputs
if outputs is not None:
self._root_outputs[p] = outputs
def add_input_or_output(self, position: Tuple[float], is_input: bool = True):
if is_input:
if self._input_nodes:
# TODO: Remove when we can handle many nodes
if self._root_inputs:
# Set the position of the input node if the position is not set
port_position = list(self._root_inputs.keys())[0]
if not self[port_position].position:
self[port_position].position = position
return
node = IsolationGraphModel.InputNode(self._model, self._root)
self._input_nodes.append(node)
else:
if self._output_nodes:
# TODO: Remove when we can handle many nodes
if self._root_outputs:
# Set the position of the output node if the position is not set
port_position = list(self._root_outputs.keys())[0]
if not self[port_position].position:
self[port_position].position = position
return
node = IsolationGraphModel.OutputNode(self._model, self._root)
self._output_nodes.append(node)
self[node].position = position
# TODO: Root is changed
self._item_changed(None)
def _item_changed(self, item=None):
"""Call the event object that has the list of functions"""
if item is None:
self.clear_caches()
if item == self._root:
# Root item is changed. Rebuild all.
self.clear_caches()
item = None
if item in self._root_inputs.keys():
self.__on_item_changed(self._input_nodes[0])
elif item in self._root_outputs.keys():
self.__on_item_changed(self._output_nodes[0])
else:
# TODO: Filter unnecessary calls
self.__on_item_changed(item)
def subscribe_item_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_item_changed, fn)
def _selection_changed(self):
"""Call the event object that has the list of functions"""
# TODO: Filter unnecessary calls
self.__on_selection_changed()
def _rebuild_node(self, item=None, full=False):
"""Call the event object that has the list of functions"""
self.__on_node_changed(item, full=full)
def subscribe_selection_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_selection_changed, fn)
def subscribe_node_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_node_changed, fn)
@property
def nodes(self, item: Any = None):
"""It's only called to get the nodes from the top level"""
nodes = self._model[self._root].nodes
if nodes is not None:
# Inject the input and the output nodes
nodes += self._input_nodes + self._output_nodes
return nodes
def can_connect(self, source: Any, target: Any):
"""Return if it's possible to connect source to target"""
if isinstance(source, IsolationGraphModel.EmptyPort) or isinstance(target, IsolationGraphModel.EmptyPort):
return True
return self._model.can_connect(source, target)
def position_begin_edit(self, item: Any):
if isinstance(item, IsolationGraphModel.InputNode):
if self._root_inputs:
# The position of the input node
port_to_keep_position = list(self._root_inputs.keys())[0]
self._model.position_begin_edit(port_to_keep_position)
elif isinstance(item, IsolationGraphModel.OutputNode):
if self._root_outputs:
# The position of the output node
port_to_keep_position = list(self._root_outputs.keys())[0]
self._model.position_begin_edit(port_to_keep_position)
else:
# Position of the regular node
self._model.position_begin_edit(item)
def position_end_edit(self, item: Any):
if isinstance(item, IsolationGraphModel.InputNode):
if self._root_inputs:
# The position of the input node
port_to_keep_position = list(self._root_inputs.keys())[0]
self._model.position_end_edit(port_to_keep_position)
elif isinstance(item, IsolationGraphModel.OutputNode):
if self._root_outputs:
# The position of the output node
port_to_keep_position = list(self._root_outputs.keys())[0]
self._model.position_end_edit(port_to_keep_position)
else:
# Position of the regular node
self._model.position_end_edit(item)
@property
def selection(self) -> Optional[List[Any]]:
# Redirect to the model. We need it to override the setter
if self._model:
return self._model.selection
@selection.setter
def selection(self, value: Optional[List[Any]]):
if not self._model:
return
        # Filter out EmptyPort items (they don't exist in the source model)
filtered = []
for v in value:
if not isinstance(v, IsolationGraphModel.EmptyPort):
filtered.append(v)
self._model.selection = filtered
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/backdrop_delegate.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["BackdropDelegate"]
from .abstract_graph_node_delegate import GraphNodeDescription
from .graph_node_delegate_full import GraphNodeDelegateFull
from .graph_node_delegate_full import LINE_VISIBLE_MIN
from .graph_node_delegate_full import TEXT_VISIBLE_MIN
import omni.ui as ui
class BackdropDelegate(GraphNodeDelegateFull):
"""
    The delegate with the Omniverse design for backdrop nodes.
"""
def node_header(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the top of the node"""
node = node_desc.node
def set_color(model, node, item_model):
sub_models = item_model.get_item_children()
rgb = (
item_model.get_item_value_model(sub_models[0]).as_float,
item_model.get_item_value_model(sub_models[1]).as_float,
item_model.get_item_value_model(sub_models[2]).as_float,
)
model[node].display_color = rgb
with ui.ZStack(skip_draw_when_clipped=True):
with ui.VStack():
self._common_node_header_top(model, node)
with ui.VStack():
ui.Spacer(height=23)
color_widget = ui.ColorWidget(width=20, height=20, style={"margin": 1})
sub_models = color_widget.model.get_item_children()
# This constant is BORDER_DEFAULT = 0xFFDBA656
border_color = model[node].display_color or (0.337, 0.651, 0.859)
color_widget.model.get_item_value_model(sub_models[0]).as_float = border_color[0]
color_widget.model.get_item_value_model(sub_models[1]).as_float = border_color[1]
color_widget.model.get_item_value_model(sub_models[2]).as_float = border_color[2]
color_widget.model.add_end_edit_fn(lambda m, i: set_color(model, node, m))
def node_background(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the node background"""
node = node_desc.node
# Constants from GraphNodeDelegateFull.node_background
MARGIN_WIDTH = 7.5
MARGIN_TOP = 20.0
MARGIN_BOTTOM = 25.0
BORDER_THICKNESS = 3.0
BORDER_RADIUS = 3.5
HEADER_HEIGHT = 25.0
MIN_WIDTH = 180.0
TRIANGLE = 20.0
MIN_HEIGHT = 50.0
def on_size_changed(placer, model, node):
offset_x = placer.offset_x
offset_y = placer.offset_y
model[node].size = (offset_x.value, offset_y.value)
if offset_x.value < MIN_WIDTH:
placer.offset_x = MIN_WIDTH
if offset_y.value < MIN_HEIGHT:
placer.offset_y = MIN_HEIGHT
def set_description(field: ui.StringField, model: "GraphModel", node: any, description: str):
"""Called to set the description on the model and remove the field with the description"""
model[node].description = description
field.visible = False
def on_description(model: "GraphModel", node: any, frame: ui.Frame, description: str):
"""Called to create a field on the node to edit description"""
with frame:
field = ui.StringField(multiline=True, style_type_name_override="Graph.Node.Description.Edit")
if description:
field.model.as_string = description
field.model.add_end_edit_fn(lambda m: set_description(field, model, node, m.as_string))
field.focus_keyboard()
width, height = model[node].size or (MIN_WIDTH, MIN_HEIGHT)
with ui.ZStack(width=0, height=0):
super().node_background(model, node_desc)
            # Properly aligned description
with ui.VStack():
ui.Spacer(height=HEADER_HEIGHT + MARGIN_TOP)
with ui.HStack():
ui.Spacer(width=MARGIN_WIDTH + BORDER_THICKNESS)
with ui.ZStack():
description = model[node].description
if description:
ui.Label(
description,
alignment=ui.Alignment.LEFT_TOP,
style_type_name_override="Graph.Node.Description",
)
# Frame with description field
frame = ui.Frame()
frame.set_mouse_double_clicked_fn(
lambda x, y, b, m, f=frame, d=description: on_description(model, node, f, d)
)
ui.Spacer(width=MARGIN_WIDTH + BORDER_THICKNESS)
ui.Spacer(height=MARGIN_BOTTOM + BORDER_THICKNESS)
# The triangle that resizes the node
with ui.VStack():
with ui.HStack():
placer = ui.Placer(draggable=True)
placer.offset_x = width
placer.offset_y = height
placer.set_offset_x_changed_fn(lambda _, p=placer, m=model, n=node: on_size_changed(p, m, n))
placer.set_offset_y_changed_fn(lambda _, p=placer, m=model, n=node: on_size_changed(p, m, n))
with placer:
triangle = ui.Triangle(
width=TRIANGLE,
height=TRIANGLE,
alignment=ui.Alignment.RIGHT_TOP,
style_type_name_override="Graph.Node.Resize",
)
triangle.set_mouse_pressed_fn(lambda x, y, b, m: b == 0 and model.size_begin_edit(node))
triangle.set_mouse_released_fn(lambda x, y, b, m: b == 0 and model.size_end_edit(node))
ui.Spacer(width=MARGIN_WIDTH + 0.5 * BORDER_THICKNESS)
ui.Spacer(height=MARGIN_BOTTOM + BORDER_THICKNESS)
def node_footer(self, model, node_desc: GraphNodeDescription):
pass
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node_delegate_full.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["color_to_hex", "GraphNodeDelegateFull"]
from .abstract_graph_node_delegate import AbstractGraphNodeDelegate
from .abstract_graph_node_delegate import GraphConnectionDescription
from .abstract_graph_node_delegate import GraphNodeDescription
from .abstract_graph_node_delegate import GraphNodeLayout
from .abstract_graph_node_delegate import GraphPortDescription
from .graph_model import GraphModel
from functools import partial
import colorsys
import math
import omni.ui as ui
# Zoom level when the text disappears and the replacement line appears
TEXT_VISIBLE_MIN = 0.6
# Zoom level when the line disappears
LINE_VISIBLE_MIN = 0.15
CONNECTION_CURVE = 60
def color_to_hex(color: tuple) -> int:
"""Convert float rgb to int"""
def to_int(f: float) -> int:
return int(255 * max(0.0, min(1.0, f)))
red = to_int(color[0])
green = to_int(color[1])
blue = to_int(color[2])
alpha = to_int(color[3]) if len(color) > 3 else 255
return (alpha << 8 * 3) + (blue << 8 * 2) + (green << 8 * 1) + red
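# Worked example (illustrative): color_to_hex((1.0, 0.0, 0.0)) == 0xFF0000FF and
# color_to_hex((0.2, 0.4, 0.6, 1.0)) == 0xFF996633, i.e. the result is packed as
# 0xAABBGGRR, matching the style constants used in this extension.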
class GraphNodeDelegateFull(AbstractGraphNodeDelegate):
"""
The delegate with the Omniverse design.
"""
def __init__(self, scale_factor=1.0):
self._scale_factor = scale_factor
def __scale(self, value):
"""Return the value multiplied by global scale multiplier"""
return value * self._scale_factor
def __build_rectangle(self, radius, width, height, draw_top, style_name, name, style_override=None):
"""
Build rectangle of the specific design.
Args:
radius: The radius of round corers. The corners are top-left and
bottom-right.
width: The width of triangle to cut. The top-right and
bottom-left trialgles are cut.
height: The height of triangle to cut. The top-right and
bottom-left trialgles are cut.
draw_top: When false the top corners are straight.
style_name: style_type_name_override of each widget
name: name of each widget
style_override: the style to apply to the top level node
"""
stack = ui.VStack()
if style_override:
stack.set_style(style_override)
with stack:
# Top of the rectangle
if draw_top:
with ui.HStack(height=0):
with ui.VStack(width=0):
ui.Circle(
radius=radius,
width=0,
height=0,
size_policy=ui.CircleSizePolicy.FIXED,
alignment=ui.Alignment.RIGHT_BOTTOM,
style_type_name_override=style_name,
name=name,
)
ui.Rectangle(style_type_name_override=style_name, name=name)
ui.Rectangle(style_type_name_override=style_name, name=name)
ui.Triangle(
width=width,
height=height,
alignment=ui.Alignment.LEFT_TOP,
style_type_name_override=style_name,
name=name,
)
# Middle of the rectangle
ui.Rectangle(style_type_name_override=style_name, name=name)
# Bottom of the rectangle
with ui.HStack(height=0):
ui.Triangle(
width=width,
height=height,
alignment=ui.Alignment.RIGHT_BOTTOM,
style_type_name_override=style_name,
name=name,
)
ui.Rectangle(style_type_name_override=style_name, name=name)
with ui.VStack(width=0):
ui.Rectangle(style_type_name_override=style_name, name=name)
ui.Circle(
radius=radius,
width=0,
height=0,
size_policy=ui.CircleSizePolicy.FIXED,
alignment=ui.Alignment.LEFT_TOP,
style_type_name_override=style_name,
name=name,
)
# TODO: build_node_footer_input/build_node_footer_output
def node_background(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the node background"""
node = node_desc.node
# Constants
MARGIN_WIDTH = 7.5
MARGIN_TOP = 20.0
MARGIN_BOTTOM = 25.0
BORDER_THICKNESS = 3.0
BORDER_RADIUS = 3.5
HEADER_HEIGHT = 25.0
MIN_WIDTH = 180.0
TRIANGLE = 20.0
# Computed values
left_right_offset = MARGIN_WIDTH - BORDER_THICKNESS * 0.5
outer_radius = BORDER_RADIUS + BORDER_THICKNESS * 0.5
inner_radius = BORDER_RADIUS - BORDER_THICKNESS * 0.5
# The size of the triangle
outer_triangle = TRIANGLE
# The size of the triangle to make the thickness of the diagonal line
        # the same as the horizontal and vertical thickness. Works only with
        # 45-degree diagonals
inner_triangle = outer_triangle - BORDER_THICKNESS / (
math.sqrt(2) * math.tan(math.radians((180.0 - 45.0) / 2.0))
)
style_name = str(model[node].type)
# Draw a rectangle and a top line
with ui.HStack():
# Left offset
ui.Spacer(width=self.__scale(left_right_offset))
with ui.VStack():
ui.Spacer(height=self.__scale(MARGIN_TOP))
# The node body
with ui.ZStack():
# This trick makes min width
ui.Spacer(width=self.__scale(MIN_WIDTH))
# The color override for the border
border_color = model[node].display_color
if border_color:
style = {"Graph.Node.Border": {"background_color": color_to_hex(border_color)}}
else:
style = None
# Build outer rectangle
self.__build_rectangle(
self.__scale(outer_radius),
self.__scale(outer_triangle),
self.__scale(outer_triangle),
True,
"Graph.Node.Border",
style_name,
style,
)
# Build inner rectangle
with ui.VStack():
ui.Spacer(height=self.__scale(HEADER_HEIGHT))
with ui.HStack():
ui.Spacer(width=self.__scale(BORDER_THICKNESS))
if border_color:
# 140% lightness from the border color
# 50% saturation from the border color
L_MULT = 1.4
S_MULT = 0.5
hls = colorsys.rgb_to_hls(border_color[0], border_color[1], border_color[2])
rgb = colorsys.hls_to_rgb(hls[0], min(1.0, (hls[1] * L_MULT)), hls[2] * S_MULT)
if len(border_color) > 3: # alpha
rgb = rgb + (border_color[3],)
style = {"background_color": color_to_hex(rgb)}
else:
style = None
self.__build_rectangle(
self.__scale(inner_radius),
self.__scale(inner_triangle),
self.__scale(inner_triangle),
False,
"Graph.Node.Background",
style_name,
style,
)
ui.Spacer(width=self.__scale(BORDER_THICKNESS))
ui.Spacer(height=self.__scale(BORDER_THICKNESS))
ui.Spacer(height=self.__scale(MARGIN_BOTTOM))
# Right offset
ui.Spacer(width=self.__scale(left_right_offset))
def node_header_input(self, model, node_desc: GraphNodeDescription):
"""Called to create the left part of the header that will be used as input when the node is collapsed"""
ui.Spacer(width=self.__scale(8))
def node_header_output(self, model, node_desc: GraphNodeDescription):
"""Called to create the right part of the header that will be used as output when the node is collapsed"""
ui.Spacer(width=self.__scale(8))
def _common_node_header_top(self, model, node):
"""Node header part that is used in both full and closed states"""
def switch_expansion(model, node):
current = model[node].expansion_state
model[node].expansion_state = GraphModel.ExpansionState((current.value + 1) % 3)
# Draw the node name and a bit of space
with ui.ZStack(width=0):
ui.Label(model[node].name, style_type_name_override="Graph.Node.Header.Label", visible_min=TEXT_VISIBLE_MIN)
with ui.Placer(stable_size=True, visible_min=LINE_VISIBLE_MIN, visible_max=TEXT_VISIBLE_MIN, offset_y=-8):
ui.Label(model[node].name, name="Degenerated", style_type_name_override="Graph.Node.Header.Label")
with ui.HStack():
# Collapse button
collapse = ui.ImageWithProvider(
width=self.__scale(18), height=self.__scale(18), style_type_name_override="Graph.Node.Header.Collapse"
)
expansion_state = model[node].expansion_state
if expansion_state == GraphModel.ExpansionState.CLOSED:
collapse.name = "Closed"
elif expansion_state == GraphModel.ExpansionState.MINIMIZED:
collapse.name = "Minimized"
else:
collapse.name = "Open"
collapse.set_mouse_pressed_fn(lambda x, y, b, m, model=model, node=node: switch_expansion(model, node))
def node_header(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the top of the node"""
with ui.VStack(skip_draw_when_clipped=True):
self._common_node_header_top(model, node_desc.node)
ui.Spacer(height=self.__scale(8))
def node_footer(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the bottom of the node"""
node = node_desc.node
style_name = str(model[node].type)
# Draw the circle and the image on the top of it
with ui.VStack(visible_min=LINE_VISIBLE_MIN, skip_draw_when_clipped=True):
ui.Spacer(height=self.__scale(10))
with ui.HStack(height=0):
ui.Spacer()
with ui.ZStack(width=self.__scale(50), height=self.__scale(50)):
ui.Rectangle(
style_type_name_override="Graph.Node.Footer", name=style_name, visible_min=TEXT_VISIBLE_MIN
)
ui.ImageWithProvider(
style_type_name_override="Graph.Node.Footer.Image",
name=style_name,
visible_min=TEXT_VISIBLE_MIN,
)
# Scale up the icon when zooming out
with ui.Placer(
stable_size=True,
visible_min=LINE_VISIBLE_MIN,
visible_max=TEXT_VISIBLE_MIN,
offset_x=self.__scale(-15),
offset_y=self.__scale(-30),
):
with ui.ZStack(width=0, height=0):
ui.Rectangle(style_type_name_override="Graph.Node.Footer", name=style_name)
ui.ImageWithProvider(
style_type_name_override="Graph.Node.Footer.Image",
name=style_name,
width=self.__scale(80),
height=self.__scale(80),
)
ui.Spacer()
ui.Spacer(height=self.__scale(2))
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the left part of the port that will be used as input"""
node = node_desc.node
port = port_desc.port
connected_source = port_desc.connected_source
connected_target = port_desc.connected_target
if port is None:
ui.Spacer(width=self.__scale(7.5))
return
outputs = model[port].outputs
is_output = outputs is not None
if is_output:
ui.Spacer(width=self.__scale(7.5))
return
style_type_name = "Graph.Node.Port.Input"
customcolor_style_type_name = "Graph.Node.Port.Input.CustomColor"
inputs = model[port].inputs
node_type = str(model[node].type)
port_type = str(model[port].type)
with ui.HStack(skip_draw_when_clipped=True):
ui.Spacer(width=self.__scale(6))
with ui.ZStack(width=self.__scale(8)):
if inputs is not None:
# Half-circle that shows the port is able to have input connection
# Background of the node color
ui.Circle(
radius=self.__scale(6),
name=node_type,
style_type_name_override=style_type_name,
size_policy=ui.CircleSizePolicy.FIXED,
style={"border_color": 0x0, "border_width": 0},
alignment=ui.Alignment.LEFT_CENTER,
arc=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
# Port has unique color
ui.Circle(
radius=self.__scale(6),
name=port_type,
style_type_name_override=customcolor_style_type_name,
size_policy=ui.CircleSizePolicy.FIXED,
alignment=ui.Alignment.LEFT_CENTER,
arc=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
# Border of the node color
ui.Circle(
radius=self.__scale(6),
name=node_type,
style_type_name_override=style_type_name,
size_policy=ui.CircleSizePolicy.FIXED,
style={"background_color": 0x0},
alignment=ui.Alignment.LEFT_CENTER,
arc=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_source:
# Circle that shows that the port is a source for the connection
ui.Circle(
radius=self.__scale(7),
name=port_type,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.LEFT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_target:
# Circle that shows that the port is a target for the connection
ui.Circle(
radius=self.__scale(5),
name=port_type,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.LEFT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the right part of the port that will be used as output"""
node = node_desc.node
port = port_desc.port
connected_source = port_desc.connected_source
connected_target = port_desc.connected_target
if port is None:
ui.Spacer(width=self.__scale(7.5))
return
style_type_name = "Graph.Node.Port.Output"
customcolor_style_type_name = "Graph.Node.Port.Output.CustomColor"
outputs = model[port].outputs
inputs = model[port].inputs
node_type = str(model[node].type)
port_type = str(model[port].type)
with ui.ZStack(width=self.__scale(16), skip_draw_when_clipped=True):
if outputs is not None:
# Circle that shows the port is able to be an output connection
# Background of the node color
ui.Circle(
name=node_type,
style_type_name_override=style_type_name,
alignment=ui.Alignment.CENTER,
radius=self.__scale(6),
size_policy=ui.CircleSizePolicy.FIXED,
visible_min=TEXT_VISIBLE_MIN,
)
# Port has unique color
ui.Circle(
name=port_type,
style_type_name_override=customcolor_style_type_name,
style={"background_color": 0x0},
alignment=ui.Alignment.CENTER,
radius=self.__scale(6),
size_policy=ui.CircleSizePolicy.FIXED,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_source:
# Circle that shows that the port is a source for the connection
ui.Circle(
radius=self.__scale(7),
name=port_type,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_target or (outputs == [] and inputs):
# Circle that shows that the port is a target for the connection
ui.Circle(
radius=self.__scale(5),
name=port_type,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the middle part of the port"""
def set_expansion_state(model, port, state: GraphModel.ExpansionState, *args):
model[port].expansion_state = state
port = port_desc.port
level = port_desc.level
if port is None:
return
sub_ports = model[port].ports
is_group = sub_ports is not None
inputs = model[port].inputs
outputs = model[port].outputs
is_input = inputs is not None
is_output = outputs is not None
style_name = "input" if is_input and not is_output else "output"
with ui.HStack(skip_draw_when_clipped=True):
if level > 0:
style_type_name_override = "Graph.Node.Port.Branch"
if port_desc.relative_position == port_desc.parent_child_count - 1:
ui.Line(
width=4,
height=ui.Percent(50),
style_type_name_override=style_type_name_override,
alignment=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
else:
ui.Line(
width=4,
style_type_name_override=style_type_name_override,
alignment=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
ui.Line(
width=8,
style_type_name_override=style_type_name_override,
visible_min=TEXT_VISIBLE_MIN,
)
if is_group:
# +/- button
state = model[port].expansion_state
if state == GraphModel.ExpansionState.CLOSED:
image_name = "Plus"
next_state = GraphModel.ExpansionState.OPEN
else:
image_name = "Minus"
next_state = GraphModel.ExpansionState.CLOSED
ui.ImageWithProvider(
width=10,
style_type_name_override="Graph.Node.Port.Group",
name=image_name,
visible_min=TEXT_VISIBLE_MIN,
mouse_pressed_fn=partial(set_expansion_state, model, port, next_state),
)
with ui.ZStack():
ui.Label(
model[port].name,
style_type_name_override="Graph.Node.Port.Label",
name=style_name,
visible_min=TEXT_VISIBLE_MIN,
)
ui.Line(
style_type_name_override="Graph.Node.Port.Label",
visible_min=LINE_VISIBLE_MIN,
visible_max=TEXT_VISIBLE_MIN,
)
def connection(self, model, source: GraphConnectionDescription, target: GraphConnectionDescription):
"""Called to create the connection between ports"""
port_type = str(model[source.port].type)
if target.is_tangent_reversed != source.is_tangent_reversed:
# It's the same node connection. Set tangent in pixels.
            start_tangent_width = ui.Pixel(20)
            end_tangent_width = ui.Pixel(20)
else:
# If the connection is reversed, we need to mirror tangents
            source_reversed_tangent = -1.0 if target.is_tangent_reversed else 1.0
            target_reversed_tangent = -1.0 if source.is_tangent_reversed else 1.0
            start_tangent_width = ui.Percent(-CONNECTION_CURVE * source_reversed_tangent)
            end_tangent_width = ui.Percent(CONNECTION_CURVE * target_reversed_tangent)
ui.FreeBezierCurve(
target.widget,
source.widget,
start_tangent_width=start_tangent_width,
end_tangent_width=end_tangent_width,
name=port_type,
style_type_name_override="Graph.Connection",
)
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_layout.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["SugiyamaLayout"]
from collections import defaultdict
from bisect import bisect
class SugiyamaLayout:
"""
Compute the coordinates for drawing directed graphs following the method
developed by Sugiyama.
This method consists of four phases:
1. Cycle Removal
2. Layer Assignment
3. Crossing Reduction
4. Coordinate Assignment
As input it takes the list of edges in the following format:
[(vertex1, vertex2), (vertex3, vertex4), ... ]
Once the object is created, it's possible to get the node layer number
immediately.
To get the node positions, it's necessary to set each node's size and
call `update_positions`.
    Closely follows the following papers:
[1] "An Efficient Implementation of Sugiyama's Algorithm for Layered Graph Drawing"
Eiglsperger Siebenhaller Kaufmann
[2] "Sugiyama Algorithm"
Nikolov
[3] "Graph Drawing Algorithms in Information Visualization"
Frishman
"""
class Node:
"""Temporary node that caches all the intermediate compute data"""
def __init__(self, id):
self.id = id
# Upstream and downstream nodes
self.upstream = []
self.downstream = []
# For iteration
self.is_currently_iterating = False
self.highest_iteration_index = 0
self.lowest_iteration_index = 0
# Layer and position
self.layer = None
self.is_dummy = False
# Barycenter is the position in the layer in [0..1] interval
self.barycenter = None
self.index_in_layer = None
# The final geometry
self.width = None
self.height = None
self.final_position = None
self.root = None
self.horizontal_aligned_to = None
self.vertical_aligned_to = None
self.offset = None
self.max_y = None
self.bound = [0.0, 0.0, 0.0, 0.0]
def add_upstream(self, node):
"""Add the upstream node. It will add current node to downstream as well."""
if node.id not in self.upstream:
self.upstream.append(node.id)
if self.id not in node.downstream:
node.downstream.append(self.id)
def __repr__(self):
result = f"<Node {self.id}: up"
for n in self.upstream:
result += f" {n}"
result += "; down"
for n in self.downstream:
result += f" {n}"
result += f"; layer_id{self.layer}>"
return result
def __init__(self, edges=[], vertical_distance=10.0, horizontal_distance=10.0):
# Minimal space between items
self.vertical_distance = vertical_distance
self.horizontal_distance = horizontal_distance
# Current alignment direction. It will call property setter.
self._alignment_direction = 0
        # Counter for creating dummy nodes. Dummy nodes have negative IDs.
self._dummy_counter = -1
self._edges = set(edges)
# All the nodes
self._nodes = {}
# Connected graphs
self._graphs = []
        # Dummy nodes created for long edges, keyed by the original edge
        self._dummies = {}
        # All the layers: a dict mapping layer id to the list of vertex ids
self._layers = defaultdict(list)
# The action
self._split_to_graphs()
self._layout()
@property
def _alignment_direction(self):
return self.__alignment_direction
@property
def _alignment_direction_horizontal(self):
return self.__alignment_direction_horizontal
@property
def _alignment_direction_vertical(self):
return self.__alignment_direction_vertical
@_alignment_direction.setter
def _alignment_direction(self, alignment_direction):
"""
Alignment policy:
_alignment_direction=0 -> vertical=1, horizontal=-1
_alignment_direction=1 -> vertical=-1, horizontal=-1
_alignment_direction=2 -> vertical=1, horizontal=1
_alignment_direction=3 -> vertical=-1, horizontal=1
"""
self.__alignment_direction = alignment_direction
self.__alignment_direction_vertical, self.__alignment_direction_horizontal = {
0: (1, -1),
1: (-1, -1),
2: (1, 1),
3: (-1, 1),
}[alignment_direction]
@_alignment_direction_horizontal.setter
def _alignment_direction_horizontal(self, _alignment_direction_horizontal):
_alignment_direction = (_alignment_direction_horizontal + 1) + (1 - self.__alignment_direction_vertical) // 2
self._alignment_direction = _alignment_direction
@_alignment_direction_vertical.setter
def _alignment_direction_vertical(self, _alignment_direction_vertical):
_alignment_direction = (self.__alignment_direction_horizontal + 1) + (1 - _alignment_direction_vertical) // 2
self._alignment_direction = _alignment_direction
def _get_roots(self, graph):
        # Nodes that don't have anything downstream
current_roots = []
for edge in graph:
vertex = edge[0]
if not self._nodes[vertex].downstream:
current_roots.append(self._nodes[vertex])
return current_roots
def __get_connected_dummy(self, node):
dummy_id = node.dummy_nodes.get(node.layer - 1, None)
return [dummy_id] if dummy_id is not None else []
def __is_between_dummies(self, node):
return any([x.is_dummy for x in self.__get_connected_dummy(node)])
def __iterate_node_edges(self, node, counter, visited, reversed_edges):
counter[0] += 1
node.highest_iteration_index = counter[0]
node.lowest_iteration_index = counter[0]
visited.append(node)
node.is_currently_iterating = True
for vertex in node.upstream:
upstream = self._nodes[vertex]
if upstream.highest_iteration_index == 0:
self.__iterate_node_edges(upstream, counter, visited, reversed_edges)
node.lowest_iteration_index = min(node.lowest_iteration_index, upstream.lowest_iteration_index)
elif upstream.is_currently_iterating:
# It's a loop. We need to invert this connection.
reversed_edges.append((node.id, upstream.id))
if upstream in visited:
node.lowest_iteration_index = min(node.lowest_iteration_index, upstream.highest_iteration_index)
if node.lowest_iteration_index == node.highest_iteration_index:
backtracing = [visited.pop()]
while backtracing[-1] != node:
backtracing.append(visited.pop())
node.is_currently_iterating = False
def _get_reversed_edges(self, graph, roots):
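# Depth-first traversal of the graph: any edge that points back to a node that
# is still on the DFS stack is a back edge and gets reported so the caller can
# reverse it, which makes the graph acyclic (step 1 of the Sugiyama layout).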
counter = [0]
visited = []
reversed_edges = []
for vertex, node in self._nodes.items():
node.highest_iteration_index = 0
# Start from roots
for root in roots:
self.__iterate_node_edges(root, counter, visited, reversed_edges)
# Iterate rest
for edge in graph:
for vertex in edge:
node = self._nodes[vertex]
if node.highest_iteration_index == 0:
self.__iterate_node_edges(node, counter, visited, reversed_edges)
return reversed_edges
def _invert_edge(self, edge):
"""Invert the flow direction of the given edge"""
v1 = edge[0]
v2 = edge[1]
node1 = self._nodes[v1]
node1.upstream.remove(v2)
node2 = self._nodes[v2]
node2.downstream.remove(v1)
node2.add_upstream(node1)
def _set_layer(self, node):
"""Set the layer id of the node"""
current_layer = node.layer
# Layers start from 1
new_layer = max([self._nodes[x].layer for x in node.downstream] + [0]) + 1
if current_layer == new_layer:
# Nothing changed
return
if current_layer is not None:
self._layers[current_layer].remove(node.id)
node.layer = new_layer
self._layers[node.layer].append(node.id)
def _iterate_layer(self, roots):
scaned_edges = {}
# Roots are in the first layer
current_layer = roots
while len(current_layer) > 0:
next_layer = []
for node in current_layer:
self._set_layer(node)
# Mark out-edges as scanned.
for vertex in node.upstream:
edge = (node.id, vertex)
scaned_edges[edge] = True
# Check if out-vertices are rank-able.
for x in node.upstream:
upstream = self._nodes[x]
if not (
False in [scaned_edges.get((vertex, upstream.id), False) for vertex in upstream.downstream]
):
if x not in next_layer:
next_layer.append(self._nodes[x])
current_layer = next_layer
def _create_dummy(self, layer, dummy_nodes):
"""Create a dummy node and put it to the layer."""
# Setup a new node
dummy_node = self._nodes[self._dummy_counter] = self.Node(self._dummy_counter)
self._dummy_counter -= 1
dummy_node.is_dummy = True
dummy_node.layer = layer
dummy_node.width = 0.0
dummy_node.height = 0.0
self._layers[layer].append(dummy_node.id)
dummy_node.dummy_nodes = dummy_nodes
dummy_nodes[layer] = dummy_node
return dummy_node
def _create_dummies(self, edge):
"""Create and all dummy nodes for the given edge."""
v1, v2 = edge
layer1, layer2 = self._nodes[v1].layer, self._nodes[v2].layer
if layer1 > layer2:
v1, v2 = v2, v1
layer1, layer2 = layer2, layer1
if (layer2 - layer1) > 1:
# "dummy vertices" are stored in the dict, keyed by their layer.
dummy_nodes = self._dummies[edge] = {}
node1 = self._nodes[v1]
node2 = self._nodes[v2]
dummy_nodes[layer1] = node1
dummy_nodes[layer2] = node2
# Disconnect the nodes
node1.upstream.remove(v2)
node2.downstream.remove(v1)
# Insert dummies between nodes
prev_dummy = node1
for layer_id in range(layer1 + 1, layer2):
dummy = self._create_dummy(layer_id, dummy_nodes)
prev_dummy.add_upstream(dummy)
prev_dummy = dummy
prev_dummy.add_upstream(node2)
def _get_cross_count(self, layer_id):
"""Number of crosses in the layer"""
neighbor_indices = []
layer = self._layers[layer_id]
for vertex in layer:
node = self._nodes[vertex]
neighbor_indices.extend(
sorted([self._nodes[neighbor].index_in_layer for neighbor in self._get_neighbors(node)])
)
s = []
count = 0
for i, neighbor_index in enumerate(neighbor_indices):
j = bisect(s, neighbor_index)
if j < i:
count += i - j
s.insert(j, neighbor_index)
return count
def _get_next_layer_id(self, layer_id):
"""The layer that is the next according to the current slignment direction"""
layer_id += 1 if self._alignment_direction_horizontal == -1 else -1
return layer_id
def _get_prev_layer_id(self, layer_id):
"""The layer that is the previous according to the current slignment direction"""
layer_id += 1 if self._alignment_direction_horizontal == 1 else -1
return layer_id
def _get_mean_value_position(self, node):
"""
Compute the position of the node from its neighbors in the previous
layer: the mean value of the neighbors' barycenters.
"""
layer_id = node.layer
prev_layer = self._get_prev_layer_id(layer_id)
if prev_layer not in self._layers:
return node.barycenter
bars = [self._nodes[vertex].barycenter for vertex in self._get_neighbors(node)]
return node.barycenter if len(bars) == 0 else float(sum(bars)) / len(bars)
def _reduce_crossings(self, layer_id):
"""
Reorder the nodes in the layer to reduce the number of crossings in the layer.
Return the new number of crossings.
"""
layer = self._layers[layer_id]
num_nodes = len(layer)
total_crossings = 0
for i, j in zip(range(num_nodes - 1), range(1, num_nodes)):
vertex_i = layer[i]
vertex_j = layer[j]
barycenters_neighbors_i = [
self._nodes[vertex].barycenter for vertex in self._get_neighbors(self._nodes[vertex_i])
]
barycenters_neighbors_j = [
self._nodes[vertex].barycenter for vertex in self._get_neighbors(self._nodes[vertex_j])
]
crossings_ij = crossings_ji = 0
for neighbor_j in barycenters_neighbors_j:
crossings = len([neighbor_i for neighbor_i in barycenters_neighbors_i if neighbor_i > neighbor_j])
# Crossings we have now
crossings_ij += crossings
# Crossings we would have if we swapped the vertices
crossings_ji += len(barycenters_neighbors_i) - crossings
if crossings_ji < crossings_ij:
# Swap vertices
layer[i] = vertex_j
layer[j] = vertex_i
total_crossings += crossings_ji
else:
total_crossings += crossings_ij
return total_crossings
def _reorder(self, layer_id):
"""
Reorder the nodes within the layer to reduce the number of crossings in the layer.
Return the number of crossings.
"""
# TODO: Use code from _reduce_crossings to find the initial number of
# crossings.
c = self._get_cross_count(layer_id)
layer = self._layers[layer_id]
barycenter_height = 1.0 / (len(layer) - 1) if len(layer) > 1 else 1.0
if c > 0:
for vertex in layer:
node = self._nodes[vertex]
node.barycenter = self._get_mean_value_position(node)
# Reorder layers according to barycenter.
layer.sort(key=lambda x: self._nodes[x].barycenter)
c = self._reduce_crossings(layer_id)
# Reassign new positions since the layer was reordered
for i, vertex in enumerate(layer):
self._nodes[vertex].index_in_layer = i
self._nodes[vertex].barycenter = i * barycenter_height
return c
def _ordering_pass(self, direction=-1):
"""
Ordering of the vertices such that the number of edge crossings is reduced.
"""
crossings = 0
self._alignment_direction_horizontal = direction
for layer_id in sorted(self._layers.keys())[::-direction]:
crossings += self._reorder(layer_id)
return crossings
def _get_neighbors(self, node):
"""
Neighbors are the nodes in the adjacent left/right layers. Node.upstream/downstream
have connections from all the layers; this returns only the neighbors in the
adjacent layer selected by the current alignment direction.
"""
alignment_direction_horizontal = self._alignment_direction_horizontal
# TODO: It's called very often. We need to cache it.
node.neighbors_at_direction = {-1: list(node.downstream), 1: list(node.upstream)}
if node.is_dummy:
return node.neighbors_at_direction[alignment_direction_horizontal]
for direction in (-1, 1):
tr = node.layer + direction
for i, x in enumerate(node.neighbors_at_direction[direction]):
if self._nodes[x].layer == tr:
continue
# TODO: check if we need this code. upstream/downstream has dummies.
edge = (node.id, x)
if edge not in self._dummies:
edge = (x, node.id)
dum = self._dummies[edge][tr]
node.neighbors_at_direction[direction][i] = dum.id
return node.neighbors_at_direction[alignment_direction_horizontal]
def _get_median_index(self, node):
"""
Find the position of the node according to the neighbor positions in the neighbor layer.
"""
neighbors = self._get_neighbors(node)
index_in_layer = [self._nodes[x].index_in_layer for x in neighbors]
neighbors_size = len(index_in_layer)
if neighbors_size == 0:
return []
index_in_layer.sort()
index_in_layer = index_in_layer[:: self._alignment_direction_vertical]
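# With an odd number of neighbors there is a single median index; with an even
# number both middle values are returned.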
i, j = divmod(neighbors_size - 1, 2)
return [index_in_layer[i]] if j == 0 else [index_in_layer[i], index_in_layer[i + j]]
def _horizontal_alignment(self):
"""
Horizontal alignment according to current alignment direction.
"""
alignment_direction_vertical, alignment_direction_horizontal = (
self._alignment_direction_vertical,
self._alignment_direction_horizontal,
)
for layer_id in sorted(self._layers.keys())[::-alignment_direction_horizontal]:
prev_layer_id = self._get_prev_layer_id(layer_id)
if prev_layer_id not in self._layers:
continue
current_vertex_index = None
layer = self._layers[layer_id]
prev_layer = self._layers[prev_layer_id]
for vertex in layer[::alignment_direction_vertical]:
node = self._nodes[vertex]
for median_vertex_index in self._get_median_index(node):
median_vertex = prev_layer[median_vertex_index]
if self._nodes[vertex].horizontal_aligned_to is vertex:
if alignment_direction_horizontal == 1:
edge = (vertex, median_vertex)
else:
edge = (median_vertex, vertex)
if edge in self._dummy_intersections:
continue
if (current_vertex_index is None) or (
alignment_direction_vertical * current_vertex_index
< alignment_direction_vertical * median_vertex_index
):
# Align median
self._nodes[median_vertex].horizontal_aligned_to = vertex
# Align the given one
median_root = self._nodes[median_vertex].root
self._nodes[vertex].horizontal_aligned_to = median_root
self._nodes[vertex].root = median_root
current_vertex_index = median_vertex_index
def _align_subnetwork(self, vertex):
"""
Recursively place the block of vertices aligned with the given vertex, according to the current alignment direction.
"""
if self._nodes[vertex].max_y is not None:
return
self._nodes[vertex].max_y = 0.0
vertex_to_align = vertex
while True:
prev_index_in_layer = self._nodes[vertex_to_align].index_in_layer - self._alignment_direction_vertical
layer_id = self._nodes[vertex_to_align].layer
if 0 <= prev_index_in_layer < len(self._layers[layer_id]):
prev_vertex_id = self._layers[layer_id][prev_index_in_layer]
vertical_distance = (
self.vertical_distance
+ self._nodes[prev_vertex_id].height * 0.5
+ self._nodes[vertex_to_align].height * 0.5
)
# Recursively place subnetwork
root_vertex_id = self._nodes[prev_vertex_id].root
self._align_subnetwork(root_vertex_id)
# Adjust node position
if self._nodes[vertex].vertical_aligned_to is vertex:
self._nodes[vertex].vertical_aligned_to = self._nodes[root_vertex_id].vertical_aligned_to
if self._nodes[vertex].vertical_aligned_to == self._nodes[root_vertex_id].vertical_aligned_to:
self._nodes[vertex].max_y = max(
self._nodes[vertex].max_y, self._nodes[root_vertex_id].max_y + vertical_distance
)
else:
aligned_vertex = self._nodes[root_vertex_id].vertical_aligned_to
offset = self._nodes[vertex].max_y - self._nodes[root_vertex_id].max_y + vertical_distance
if self._nodes[aligned_vertex].offset is None:
self._nodes[aligned_vertex].offset = offset
else:
self._nodes[aligned_vertex].offset = min(self._nodes[aligned_vertex].offset, offset)
vertex_to_align = self._nodes[vertex_to_align].horizontal_aligned_to
if vertex_to_align is vertex:
# Already aligned
break
def _vertical_alignment(self):
"""
Vertical alignment according to current alignment direction.
"""
_alignment_direction_vertical, _alignment_direction_horizontal = (
self._alignment_direction_vertical,
self._alignment_direction_horizontal,
)
layer_ids = sorted(self._layers.keys())[::-_alignment_direction_horizontal]
for layer_id in layer_ids:
for vertex in self._layers[layer_id][::_alignment_direction_vertical]:
if self._nodes[vertex].root is vertex:
self._align_subnetwork(vertex)
# Mirror nodes when the alignment is bottom.
if _alignment_direction_vertical == -1:
for layer_id in layer_ids:
for vertex in self._layers[layer_id]:
y = self._nodes[vertex].max_y
if y:
self._nodes[vertex].max_y = -y
# Assign bound
bound = None
for layer_id in layer_ids:
for vertex in self._layers[layer_id][::_alignment_direction_vertical]:
self._nodes[vertex].bound[self._alignment_direction] = self._nodes[self._nodes[vertex].root].max_y
aligned_vertex = self._nodes[self._nodes[vertex].root].vertical_aligned_to
offset = self._nodes[aligned_vertex].offset
if offset is not None:
self._nodes[vertex].bound[self._alignment_direction] += _alignment_direction_vertical * offset
if bound is None:
bound = self._nodes[vertex].bound[self._alignment_direction]
else:
bound = min(bound, self._nodes[vertex].bound[self._alignment_direction])
# Reset the per-pass alignment attributes so the next alignment direction starts from a clean state
for layer_id, layer in self._layers.items():
for vertex in layer:
self._nodes[vertex].root = vertex
self._nodes[vertex].horizontal_aligned_to = vertex
self._nodes[vertex].vertical_aligned_to = vertex
self._nodes[vertex].offset = None
self._nodes[vertex].max_y = None
def _split_to_graphs(self):
"""Split edges to multiple connected graphs. Result is self._graphs."""
# All vertices
vertices = set()
# Dict where key is vertex, value is all the connected vertices
dependencies = defaultdict(set)
for v1, v2 in self._edges:
vertices.add(v1)
vertices.add(v2)
dependencies[v1].add(v2)
dependencies[v2].add(v1)
if v1 in self._nodes:
n1 = self._nodes[v1]
else:
n1 = self.Node(v1)
self._nodes[v1] = n1
if v2 in self._nodes:
n2 = self._nodes[v2]
else:
n2 = self.Node(v2)
self._nodes[v2] = n2
n1.add_upstream(n2)
while vertices:
# Start with any vertex
current_vertices = [vertices.pop()]
# For each vertex, remove all connected from `vertices` and add it to current graph
for vertex in current_vertices:
for dependent in dependencies.get(vertex, []):
if dependent in vertices:
vertices.remove(dependent)
current_vertices.append(dependent)
# Here `current_vertices` has all the vertices that are connected in a single graph
current_edges = set()
for vertex in current_vertices:
for edge in self._edges:
if edge[0] in current_vertices or edge[1] in current_vertices:
current_edges.add(edge)
self._graphs.append(current_edges)
def _find_dummy_intersections(self):
"""
Detect crossings in dummy nodes.
"""
self._dummy_intersections = []
for layer_id, layer_vertices in self._layers.items():
prev_layer_id = layer_id - 1
if prev_layer_id not in self._layers:
continue
first_vertex = 0
vertices_count = len(layer_vertices) - 1
prev_layer_vertices_count = len(self._layers[prev_layer_id]) - 1
vertex_range_from = 0
for i, current_vertex_id in enumerate(layer_vertices):
node = self._nodes[current_vertex_id]
if not node.is_dummy:
continue
if i == vertices_count or self.__is_between_dummies(node):
connected_dummy_index = prev_layer_vertices_count
if self.__is_between_dummies(node):
connected_dummy_index = self.__get_connected_dummy(node)[-1].index_in_layer
for vertex_in_this_layer in layer_vertices[vertex_range_from : i + 1]:
for neighbor in self._get_neighbors(self._nodes[vertex_in_this_layer]):
neighbor_index = self._nodes[neighbor].index_in_layer
if neighbor_index < first_vertex or neighbor_index > connected_dummy_index:
# Intersection found
self._dummy_intersections.append((neighbor, vertex_in_this_layer))
vertex_range_from = i + 1
first_vertex = connected_dummy_index
def _layout(self):
"""Perform first three steps of Sugiyama layouting"""
for graph in self._graphs:
# 1. Cycle Removal
roots = self._get_roots(graph)
reversed_edges = self._get_reversed_edges(graph, roots)
# Make the graph acyclic by reversing appropriate edges.
for edge in reversed_edges:
self._invert_edge(edge)
# Get roots one more time
roots += [node for node in self._get_roots(graph) if node not in roots]
# 2. Layer Assignment
self._iterate_layer(roots)
# TODO: Optimize layers. Root nodes could potentially go to other
# layers if it could minimize crossings
# Add dummies
for edge in graph:
self._create_dummies(edge)
# Pre-setup of some indices
for _, layer in self._layers.items():
# Barycenter is the position in the layer in [0..1] interval
barycenter_height = 1.0 / (len(layer) - 1) if len(layer) > 1 else 1.0
for i, vertex in enumerate(layer):
node = self._nodes[vertex]
node.index_in_layer = i
node.barycenter = i * barycenter_height
# 3. Crossing Reduction, 2 passes in both direction for optimal ordering
for i in range(2):
# Ordering pass for layers from 0 to end
crossings = self._ordering_pass()
if crossings > 0:
# Second pass in reverse direction
self._ordering_pass(direction=1)
self._ordering_pass()
# 3a. Crossing Reduction in dummy nodes
self._find_dummy_intersections()
def update_positions(self):
"""4. Coordinate Assignment"""
# Initialize node attributes.
# TODO: Put it to init
for _, layer in self._layers.items():
for vertex in layer:
node = self._nodes[vertex]
node.root = vertex
node.horizontal_aligned_to = vertex
node.vertical_aligned_to = vertex
node.offset = None
node.max_y = None
node.bound = [0.0, 0.0, 0.0, 0.0]
for _alignment_direction in range(4):
self._alignment_direction = _alignment_direction
self._horizontal_alignment()
self._vertical_alignment()
# Final coordinate assigment of all nodes:
x_counter = 0
for _, layer in self._layers.items():
if not layer:
continue
max_width = max([self._nodes[vertex].width / 2.0 for vertex in layer])
y_counter = None
for vertex in layer:
y = sorted(self._nodes[vertex].bound)
# Average of computed bound
average_y = (y[1] + y[2]) / 2.0
# Prevent node intersections
if y_counter is None:
y_counter = average_y
y_counter = max(y_counter, average_y)
# Final xy-coordinates. Negative X because we go from right to left.
self._nodes[vertex].final_position = (-x_counter - max_width, y_counter)
y_counter += self._nodes[vertex].height + self.vertical_distance
x_counter += 2 * max_width + self.horizontal_distance
def set_size(self, vertex, width, height):
"""Set the size of the node"""
node = self._nodes.get(vertex, None)
if not node:
# The node is not in the list because it is not connected to anything.
# Since the graph layers start from 1, we put those non-graph
# vertices in layer 0 and they will stay in a separate column.
layer_id = 0
node = self.Node(vertex)
node.layer = layer_id
node.index_in_layer = len(self._layers[layer_id])
self._nodes[node.id] = node
self._layers[layer_id].append(node.id)
node.width = width
node.height = height
def get_position(self, vertex):
"""The position of the node"""
node = self._nodes.get(vertex, None)
if node:
return node.final_position
def get_layer(self, vertex):
"""The layer id of the node"""
node = self._nodes.get(vertex, None)
if node:
return node.layer
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node_index.py | # Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNodeIndex"]
from .graph_model import GraphModel
from collections import defaultdict
from typing import Any, Set, Tuple
from typing import Dict
from typing import List
from typing import Optional
from typing import Union
import itertools
import weakref
class CachePort:
"""
The structure that caches the ports and their properties so we don't
access the model many times.
"""
def __init__(
self,
port: Any,
state: Optional[GraphModel.ExpansionState],
child_count: int,
level: int,
relative_position: int,
parent_child_count: int,
parent_cached_port: Optional[weakref.ProxyType] = None,
):
# Port from the model
self.port: Any = port
# Expansion state if it's a group
self.state: Optional[GraphModel.ExpansionState] = state
# Number of children
self.child_count: int = child_count
# The level in the tree structure
self.level: int = level
# Position number in the parent list
self.relative_position = relative_position
# Number of children of the parent
self.parent_child_count = parent_child_count
# The visibility after collapsing
self.visibile: bool = True
# Inputs from the model
self.inputs: Optional[List[Any]] = None
# Outputs from the model
self.outputs: Optional[List[Any]] = None
self.parent_cached_node: Optional[weakref.ProxyType] = None
self.parent_cached_port: Optional[weakref.ProxyType] = parent_cached_port
def __repr__(self):
return f"<CachePort {self.port}>"
def __hash__(self):
if self.inputs:
inputs_hash = hash(tuple(hash(i) for i in self.inputs))
else:
inputs_hash = 0
if self.outputs:
outputs_hash = hash(tuple(hash(i) for i in self.outputs))
else:
outputs_hash = 0
return hash(
(
CachePort,
self.port,
self.state and self.state.value,
self.visibile,
inputs_hash,
outputs_hash,
)
)
def __eq__(self, other):
return hash(self) == hash(other)
class CacheNode:
"""The structure to keep in the cache and don't access the model many times"""
def __init__(
self,
node: Any,
state: Optional[GraphModel.ExpansionState],
cached_ports: List[CachePort],
stacking_order: int,
additional_hash=0,
):
# Node from the model
self.node: Any = node
# Expansion state if it's a group
self.state: Optional[GraphModel.ExpansionState] = state
# Ports from the model
self.cached_ports: List[CachePort] = cached_ports
# Level represents the distance of the current node from the root
self.level = None
self.stacking_order: int = stacking_order
# The nodes dependent from the current node. If the nodes connected like this A.out -> B.in,
# then A.dependent = [B]
self.dependent = []
# Hash
self._additional_hash = additional_hash
self.inputs = []
self.outputs = []
# Add connection to the parent node
selfproxy = weakref.proxy(self)
for port in self.cached_ports:
port.parent_cached_node = selfproxy
def __repr__(self):
return f"<CacheNode {self.node}>"
def __hash__(self):
cached_ports_hash = hash(tuple(hash(p) for p in self.cached_ports))
return hash((CacheNode, self.node, self.state and self.state.value, cached_ports_hash, self._additional_hash))
def __eq__(self, other):
return hash(self) == hash(other)
class CacheConnection:
"""The structure to keep connections in the cache and don't access the model many times"""
def __init__(
self,
source_cached_port: CachePort,
target_cached_port: CachePort,
):
self.source_port: Any = source_cached_port.port
self.target_port: Any = target_cached_port.port
self.__hash = hash(
(
CacheConnection,
source_cached_port,
target_cached_port,
source_cached_port.parent_cached_node.__hash__(),
target_cached_port.parent_cached_node.__hash__(),
)
)
@property
def pair(self):
return (self.source_port, self.target_port)
def __repr__(self):
return f"<CacheConnection {self.pair}>"
def __hash__(self):
return self.__hash
def __eq__(self, other):
return hash(self) == hash(other)
class GraphNodeDiff:
"""
The object that keeps the difference: the lists of nodes and
connections to add and delete.
"""
def __init__(
self,
nodes_to_add: List[CacheNode] = None,
nodes_to_del: List[CacheNode] = None,
connections_to_add: List[CacheConnection] = None,
connections_to_del: List[CacheConnection] = None,
):
self.nodes_to_add = nodes_to_add
self.nodes_to_del = nodes_to_del
self.connections_to_add = connections_to_add
self.connections_to_del = connections_to_del
@property
def valid(self):
return (
self.nodes_to_add is not None
and self.nodes_to_del is not None
and self.connections_to_add is not None
and self.connections_to_del is not None
)
def __repr__(self):
return (
"<GraphNodeDiff\n"
f" Add: {self.nodes_to_add} {self.connections_to_add}\n"
f" Del: {self.nodes_to_del} {self.connections_to_del}\n"
">"
)
class GraphNodeIndex:
""" Hirarchy index of the model. Provides fast access to the nodes and connections."""
def __init__(self, model: Optional[GraphModel], port_grouping: bool):
# List of all the nodes from the model
self.cached_nodes: List[Optional[CacheNode]] = []
# List of all the connections from the model
self.cached_connections: List[Optional[CacheConnection]] = []
# Dictionary that has a node from the model as a key and the
# index of this node in self.cached_nodes.
self.node_to_id: Dict[Any, int] = {}
# Dictionary that has a port from the model as a key and the
# index of the parent node in self.cached_nodes.
self.port_to_id: Dict[Any, int] = {}
# Dictionary that has a port from the model as a key and the
# index of the port in cached_node.cached_ports.
port_to_port_id: Dict[Any, int] = {}
# Connection hash to ID in self.cached_connections.
self.connection_to_id: Dict[Tuple[Any, Any], int] = {}
# Set with all the ports from the model if this port is output. We need
# it to detect the flow direction.
self.ports_used_as_output = set()
# All the connections.
self.source_to_target = defaultdict(set)
# Dict[child port: parent port]
self.port_child_to_parent: Dict[Any, Any] = {}
if not model:
return
# Preparing the cache and indices.
for node in model.nodes or []:
# Caching ports
# TODO: Nested group support
root_ports = model[node].ports or []
root_port_count = len(root_ports)
cached_ports: List[CachePort] = []
for id, port in enumerate(root_ports):
if port_grouping:
sub_ports = model[port].ports
is_group = sub_ports is not None
else:
sub_ports = None
is_group = False
state = model[port].expansion_state
cached_port = CachePort(port, state, len(sub_ports) if is_group else 0, 0, id, root_port_count)
cached_ports.append(cached_port)
if is_group:
# TODO: Nested group support
cached_port_proxy = weakref.proxy(cached_port)
sub_port_count = len(sub_ports)
cached_sub_ports = [
CachePort(p, None, 0, 1, id, sub_port_count, cached_port_proxy)
for id, p in enumerate(sub_ports)
]
cached_ports += cached_sub_ports
# The ID of the current cached node in the cache
cached_node_id = len(self.cached_nodes)
# The global index
self.node_to_id[node] = cached_node_id
state = model[node].expansion_state
# Allows drawing backdrops in the background
stacking_order = model[node].stacking_order
# Hash the name, description and color, so the node is regenerated when any of them changes.
additional_hash = hash((model[node].name, model[node].description, model[node].display_color))
# The global cache
self.cached_nodes.append(CacheNode(node, state, cached_ports, stacking_order, additional_hash))
# Cache connections
for cached_node_id, cached_node in enumerate(self.cached_nodes):
cached_ports = cached_node.cached_ports
# True if the node has any output port
node_has_outputs = False
# Parent port to put the connections to
cached_port_parent: Optional[CachePort] = None
# How many ports already hidden
hidden_port_counter = 0
# How many ports we need to hide
hidden_port_number = 0
# for port, port_state, port_child_count in zip(ports, port_states, port_child_counts):
for cached_port_id, cached_port in enumerate(cached_ports):
port = cached_port.port
# print("Cached ports", port, cached_port_id)
port_inputs = model[port].inputs
port_outputs = model[port].outputs
# Copy to detach from the model
port_inputs = port_inputs[:] if port_inputs is not None else None
port_outputs = port_outputs[:] if port_outputs is not None else None
cached_port.visibile = not cached_port_parent
if cached_port_parent:
# Put inputs/outputs to the parent
if port_inputs is not None:
cached_port_parent.inputs = cached_port_parent.inputs or []
cached_port_parent.inputs += port_inputs
if port_outputs is not None:
cached_port_parent.outputs = cached_port_parent.outputs or []
cached_port_parent.outputs += port_outputs
else:
# Put inputs/outputs to the port
cached_port.inputs = port_inputs
cached_port.outputs = port_outputs
if cached_port_parent:
connection_port = cached_port_parent.port
# Dict[child: parent]
self.port_child_to_parent[cached_port.port] = cached_port_parent.port
else:
connection_port = cached_port.port
if port_inputs:
for source in port_inputs:
# |--------| |-----------------|
# | source | ---> | connection_port |
# |--------| |-----------------|
# Ex:
# source is Usd.Prim(</World/Looks/OmniGlass/Shader>).GetAttribute('outputs:out')
# connection_port is Usd.Prim(</World/Looks/OmniGlass>).GetAttribute('outputs:mdl:displacement')
self.source_to_target[source].add(connection_port)
if port_outputs:
for source in port_outputs:
self.source_to_target[source].add(connection_port)
if port_outputs is not None:
node_has_outputs = True
self.port_to_id[port] = cached_node_id
port_to_port_id[port] = cached_port_id
# Check if the port is hidden and hide it. It happens
# when the port is collapsed.
if cached_port_parent:
hidden_port_counter += 1
hidden_port_number += cached_port.child_count
if hidden_port_counter == hidden_port_number:
# We have hidden enough ports
cached_port_parent = None
continue
if cached_port.child_count > 0 and cached_port.state == GraphModel.ExpansionState.CLOSED:
# The next one should be hidden
cached_port_parent = cached_port
hidden_port_counter = 0
hidden_port_number = cached_port.child_count
for cached_port in cached_ports:
# If the node doesn't have any output (in OmniGraph inputs and outputs are equal), we consider that all
# the input attributes are used as outputs.
if not node_has_outputs or cached_port.outputs is not None:
self.ports_used_as_output.add(cached_port.port)
# chain.from_iterable(['ABC', 'DEF']) --> A B C D E F
cached_node.inputs = list(itertools.chain.from_iterable([p.inputs for p in cached_ports if p.inputs]))
cached_node.outputs = list(itertools.chain.from_iterable([p.outputs for p in cached_ports if p.outputs]))
# Replace input/output of cached ports to point to the parents of the collapsed ports
for cached_node in self.cached_nodes:
for cached_port in cached_node.cached_ports:
if cached_port.inputs:
inputs = []
for i in cached_port.inputs:
parent = self.port_child_to_parent.get(i, None)
if parent:
inputs.append(parent)
else:
inputs.append(i)
cached_port.inputs = inputs
if cached_port.outputs:
outputs = []
for i in cached_port.outputs:
parent = self.port_child_to_parent.get(i, None)
if parent:
outputs.append(parent)
else:
outputs.append(i)
cached_port.outputs = outputs
# Remove the collapsed ports from source_to_target and move the removed content to the port parents.
source_to_target_filtered = defaultdict(set)
for source, target in self.source_to_target.items():
if source in self.port_child_to_parent:
source_to_target_filtered[self.port_child_to_parent[source]].update(target)
else:
source_to_target_filtered[source].update(target)
self.source_to_target = source_to_target_filtered
# Save connections
for source, targets in self.source_to_target.items():
source_node_id = self.port_to_id[source]
source_port_id = port_to_port_id[source]
source_cached_port = self.cached_nodes[source_node_id].cached_ports[source_port_id]
for target in targets:
target_node_id = self.port_to_id[target]
target_port_id = port_to_port_id[target]
target_cached_port = self.cached_nodes[target_node_id].cached_ports[target_port_id]
connection = CacheConnection(source_cached_port, target_cached_port)
self.connection_to_id[connection.pair] = len(self.cached_connections)
self.cached_connections.append(connection)
def get_diff(self, other: "GraphNodeIndex"):
"""
Generate the difference object: the list of nodes and connections to add
and delete.
"""
# Nodes diff
self_nodes = set(self.cached_nodes[i] for _, i in self.node_to_id.items())
other_nodes = set(other.cached_nodes[i] for _, i in other.node_to_id.items())
nodes_to_add = list(other_nodes - self_nodes)
nodes_to_del = list(self_nodes - other_nodes)
if len(self.node_to_id) == len(nodes_to_del):
# If we need to remove all nodes, it's better to create a new graph node index
return GraphNodeDiff()
max_level = max(self.cached_nodes[i].stacking_order for _, i in self.node_to_id.items())
for node in nodes_to_add:
if node.stacking_order is not None and node.stacking_order < max_level:
# We can't put the node under others. Only to top.
return GraphNodeDiff()
# Connections diff
self_connections = set(self.cached_connections[i] for _, i in self.connection_to_id.items())
other_connections = set(other.cached_connections[i] for _, i in other.connection_to_id.items())
connections_to_add = list(other_connections - self_connections)
connections_to_del = list(self_connections - other_connections)
return GraphNodeDiff(nodes_to_add, nodes_to_del, connections_to_add, connections_to_del)
def mutate(self, diff: Union["GraphNodeIndex", GraphNodeDiff]) -> Tuple[Set]:
"""Apply the difference to the cache index."""
if isinstance(diff, GraphNodeIndex):
diff = self.get_diff(diff)
node_add = set()
node_del = set()
connection_add = set()
connection_del = set()
# Remove nodes from index
for cached_node in diff.nodes_to_del:
node = cached_node.node
node_id = self.node_to_id[node]
self.cached_nodes[node_id] = None
for cached_port in cached_node.cached_ports:
port = cached_port.port
self.port_to_id.pop(port)
if port in self.ports_used_as_output:
self.ports_used_as_output.remove(port)
self.port_child_to_parent.pop(port, None)
node_del.add(node)
self.node_to_id.pop(node)
# Remove connections from index
for cached_connection in diff.connections_to_del:
connection_id = self.connection_to_id[cached_connection.pair]
self.cached_connections[connection_id] = None
source_port = cached_connection.source_port
target_port = cached_connection.target_port
targets = self.source_to_target[source_port]
targets.remove(target_port)
if len(targets) == 0:
self.source_to_target.pop(source_port)
connection_del.add(cached_connection)
self.connection_to_id.pop(cached_connection.pair)
# Add nodes to index
for cached_node in diff.nodes_to_add:
node = cached_node.node
node_id = len(self.cached_nodes)
self.cached_nodes.append(cached_node)
self.node_to_id[node] = node_id
node_add.add(node)
cached_ports = cached_node.cached_ports
# True if the node has any output port
node_has_outputs = False
for cached_port in cached_ports:
port = cached_port.port
self.port_to_id[port] = node_id
if cached_port.outputs is not None:
node_has_outputs = True
cached_port_parent = cached_port.parent_cached_port
if cached_port_parent:
self.port_child_to_parent[cached_port.port] = cached_port_parent.port
for cached_port in cached_ports:
# If the node doesn't have any output (in OmniGraph inputs and
# outputs are equal) we consider that all the input attributes
# are used as outputs.
if not node_has_outputs or cached_port.outputs is not None:
self.ports_used_as_output.add(cached_port.port)
# Add connections to index
for cached_connection in diff.connections_to_add:
connection_id = len(self.cached_connections)
self.cached_connections.append(cached_connection)
self.connection_to_id[cached_connection.pair] = connection_id
connection_add.add(cached_connection)
source_port = cached_connection.source_port
target_port = cached_connection.target_port
self.source_to_target[source_port].add(target_port)
return node_add, node_del, connection_add, connection_del
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_model.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphModel"]
from enum import Enum
from enum import IntFlag
from enum import auto
from typing import Any
from typing import Optional
from typing import Union
class GraphModel:
"""
The base class for the Graph model.
The model is the central component of the graph widget. It is the
application's dynamic data structure, independent of the user interface,
and it directly manages the data. It follows closely model–view pattern.
It defines the standard interface to be able to interoperate with the
components of the model-view architecture. It is not supposed to be
instantiated directly. Instead, the user should subclass it to create a
new model.
The model manages two kinds of data elements. Node and port are the
atomic data elements of the model. Both node and port can have any number
of sub-ports and any number of input and output connections.
There is no specific Python type for the elements of the model. Since
Python has dynamic types, the model can return any object as a node or a
port. When the widget needs to get a property of the node, it provides
the given node back to the model.
Example:
.. code:: python
class UsdShadeModel(GraphModel):
@property
def nodes(self, prim=None):
# Return Usd.Prim in a list
return [stage.GetPrimAtPath(selection)]
@property
def name(self, item=None):
# item here is Usd.Prim because UsdShadeModel.nodes returns
# Usd.Prim
return item.GetPath().name
# Accessing nodes and properties example
model = UsdShadeModel()
# UsdShadeModel decides the type of nodes. It's a list with Usd.Prim
nodes = model.nodes
for node in nodes:
# The node is accessed through evaluation of self[key]. It will
# return the proxy object that redirects its properties back to
# model. So the following line will call UsdShadeModel.name(node).
name = model[node].name
print(f"The model has node {name}")
"""
class _ItemProxy:
"""
The proxy that allows accessing the nodes and ports of the model
through evaluation of self[key]. This proxy object redirects its
properties to the model that has properties like this:
class Model:
@property
def name(self, item):
pass
@name.setter
def name(self, value, item):
pass
"""
def __init__(self, model, item):
"""Save model and item to be able to redirect the properties"""
# Since we already have __setattr__ reimplemented, the following
# code is the way to bypass it
super().__setattr__("_model", model)
super().__setattr__("_item", item)
def __getattr__(self, attr):
"""
Called when the default attribute access fails with an
AttributeError.
"""
model_property = getattr(type(self._model), attr)
return model_property.fget(self._model, self._item)
def __setattr__(self, attr, value):
"""
Called when an attribute assignment is attempted. This is called
instead of the normal mechanism (i.e. store the value in the
instance dictionary).
"""
model_property = getattr(type(self._model), attr)
model_property.fset(self._model, value, self._item)
# TODO: Maybe it's better to automatically call model._item_changed here?
# No, it's not better because in some cases node is changing and
# the widget doesn't change. For example when the user changes the
# position.
class _Event(set):
"""
A list of callable objects. Calling an instance of this will cause a
call to each item in the list in ascending order by index.
"""
def __call__(self, *args, **kwargs):
"""Called when the instance is “called” as a function"""
# Call all the saved functions
for f in list(self):
f(*args, **kwargs)
def __repr__(self):
"""
Called by the repr() built-in function to compute the “official”
string representation of an object.
"""
return f"Event({set.__repr__(self)})"
class ExpansionState(Enum):
OPEN = 0
MINIMIZED = 1
CLOSED = 2
class PreviewState(IntFlag):
NONE = 0
OPEN = auto()
CACHED = auto()
LARGE = auto()
class _EventSubscription:
"""
Event subscription.
_Event has callback while this object exists.
"""
def __init__(self, event, fn):
"""
Save the function, the event, and add the function to the event.
"""
self._fn = fn
self._event = event
event.add(self._fn)
def __del__(self):
"""Called by GC."""
self._event.remove(self._fn)
DISPLAY_NAME = None
def __init__(self):
super().__init__()
# TODO: begin_edit/end_edit
self.__on_item_changed = self._Event()
self.__on_selection_changed = self._Event()
self.__on_node_changed = self._Event()
def __getitem__(self, item):
"""Called to implement evaluation of self[key]"""
# Return a proxy that redirects its properties back to the model.
return self._ItemProxy(self, item)
def _item_changed(self, item=None):
"""Call the event object that has the list of functions"""
self.__on_item_changed(item)
def _rebuild_node(self, item=None, full=False):
"""Call the event object that has the list of functions"""
self.__on_node_changed(item, full=full)
def subscribe_item_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_item_changed, fn)
def _selection_changed(self):
"""Call the event object that has the list of functions"""
self.__on_selection_changed()
def subscribe_selection_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_selection_changed, fn)
def subscribe_node_changed(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_node_changed, fn)
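# Typical subscription usage (illustrative; _on_item_changed is a hypothetical
# callback owned by the caller):
#
#     self._sub = model.subscribe_item_changed(self._on_item_changed)
#     ...
#     self._sub = None  # dropping the subscription object unsubscribes automatically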
@staticmethod
def has_nodes(obj):
"""Returns true if the model can currently build the graph network using the provided object"""
pass
# The list of properties of node and port.
@property
def name(self, item) -> str:
"""The name of the item how it should be displayed in the view"""
pass
@name.setter
def name(self, value: str, item=None):
"""Request to rename item"""
pass
@property
def type(self, item):
pass
@property
def inputs(self, item):
pass
@inputs.setter
def inputs(self, value, item=None):
pass
@property
def outputs(self, item):
pass
@outputs.setter
def outputs(self, value, item=None):
pass
@property
def nodes(self, item=None):
pass
@property
def ports(self, item=None):
pass
@ports.setter
def ports(self, value, item=None):
pass
@property
def expansion_state(self, item=None) -> ExpansionState:
return self.ExpansionState.OPEN
@expansion_state.setter
def expansion_state(self, value: ExpansionState, item=None):
pass
def can_connect(self, source, target):
"""Return if it's possible to connect source to target"""
return True
@property
def position(self, item=None):
"""Returns the position of the node"""
pass
@position.setter
def position(self, value, item=None):
"""The node position setter"""
pass
def position_begin_edit(self, item):
"""Called when the user started dragging the node"""
pass
def position_end_edit(self, item):
"""Called when the user finished dragging the node and released the mouse"""
pass
@property
def size(self, item):
"""The node size. Is used for nodes like Backdrop."""
pass
@size.setter
def size(self, value, item=None):
"""The node position setter"""
pass
def size_begin_edit(self, item):
"""Called when the user started resizing the node"""
pass
def size_end_edit(self, item):
"""Called when the user finished resizing the node and released the mouse"""
pass
@property
def description(self, item):
"""The text label that is displayed on the backdrop in the node graph."""
pass
@description.setter
def description(self, value, item=None):
"""The node description setter"""
pass
@property
def display_color(self, item):
"""The node color."""
pass
@display_color.setter
def display_color(self, value, item=None):
"""The node color setter"""
pass
@property
def stacking_order(self, item):
"""
This value is a hint when an application cares about the visibility
of a node and whether each node overlaps another.
"""
return 0
@property
def selection(self):
pass
@selection.setter
def selection(self, value: list):
pass
def special_select_widget(self, node, node_widget):
# Returning None means node_widget will get used
return None
def destroy(self):
pass
@property
def icon(self, item) -> Optional[Union[str, Any]]:
"""Return icon of the image"""
pass
@icon.setter
def icon(self, value: Optional[Union[str, Any]], item=None):
"""Set the icon of the image"""
pass
@property
def preview(self, item) -> Optional[Union[str, Any]]:
"""Return the preview of the image"""
pass
@preview.setter
def preview(self, value: Optional[Union[str, Any]], item=None):
"""Set the preview of the image"""
pass
@property
def preview_state(self, item) -> PreviewState:
"""Return the state of the preview of the node"""
return GraphModel.PreviewState.NONE
@preview_state.setter
def preview_state(self, value: PreviewState, item=None):
"""Set the state of the preview of the node"""
pass
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/__init__.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
# This extension both provides generic Omni UI Graph interface and implements couple of graph models, including one for
# omni.graph. We need to split it eventually into two extensions so that the UI graph can be used without omni.graph.
# For now just make it optional by handling import failure as a non-error.
from .abstract_batch_position_getter import AbstractBatchPositionGetter
from .abstract_graph_node_delegate import AbstractGraphNodeDelegate
from .abstract_graph_node_delegate import GraphConnectionDescription
from .abstract_graph_node_delegate import GraphNodeDescription
from .abstract_graph_node_delegate import GraphNodeLayout
from .abstract_graph_node_delegate import GraphPortDescription
from .backdrop_delegate import BackdropDelegate
from .backdrop_getter import BackdropGetter
from .compound_node_delegate import CompoundInputOutputNodeDelegate
from .compound_node_delegate import CompoundNodeDelegate
from .graph_model import GraphModel
from .graph_model_batch_position_helper import GraphModelBatchPositionHelper
from .graph_node_delegate import GraphNodeDelegate
from .graph_node_delegate_router import GraphNodeDelegateRouter
from .graph_view import GraphView
from .isolation_graph_model import IsolationGraphModel
from .selection_getter import SelectionGetter
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/compound_node_delegate.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["CompoundNodeDelegate", "CompoundInputOutputNodeDelegate"]
from .abstract_graph_node_delegate import GraphNodeDescription
from .abstract_graph_node_delegate import GraphPortDescription
from .graph_node_delegate_full import GraphNodeDelegateFull
from typing import Any
from typing import List
import omni.ui as ui
class CompoundNodeDelegate(GraphNodeDelegateFull):
"""
The delegate for the compound nodes.
"""
pass
class CompoundInputOutputNodeDelegate(GraphNodeDelegateFull):
"""
The delegate for the input/output nodes of the compound.
"""
def __init__(self):
super().__init__()
self.__port_context_menu = None
def destroy(self):
self.__port_context_menu = None
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
node = node_desc.node
port = port_desc.port
frame = ui.Frame()
with frame:
super().port_input(model, node_desc, port_desc)
frame.set_mouse_pressed_fn(lambda x, y, b, _, m=model, n=node, p=port: b == 1 and self.__on_menu(m, n, p))
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
node = node_desc.node
port = port_desc.port
frame = ui.Frame()
with frame:
super().port_output(model, node_desc, port_desc)
frame.set_mouse_pressed_fn(lambda x, y, b, _, m=model, n=node, p=port: b == 1 and self.__on_menu(m, n, p))
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
port = port_desc.port
stack = ui.ZStack()
with stack:
super().port(model, node_desc, port_desc)
# Draw input on top of the original port
field = ui.StringField(visible=False)
value_model = field.model
value_model.as_string = model[port].name
def begin_edit():
field.visible = True
def end_edit(value_model):
# Hide
field.visible = False
# Rename
model[port].name = value_model.as_string
# Activate input with double click
stack.set_mouse_double_clicked_fn(lambda x, y, b, m: begin_edit())
value_model.add_end_edit_fn(end_edit)
def __on_menu(self, model: "GraphModel", node: Any, port: Any):
def disconnect(model: "GraphModel", port: Any):
"""Disconnect everything from the given port"""
model[port].inputs = []
def remove(model: "GraphModel", node: Any, port: Any):
"""Remove the given port from the node"""
ports = model[node].ports
ports.remove(port)
model[node].ports = ports
def move_up(model: "GraphModel", node: Any, port: Any):
"""Move the given port one position up"""
ports: List = model[node].ports
index = ports.index(port)
if index > 0:
ports.insert(index - 1, ports.pop(index))
model[node].ports = ports
def move_down(model: "GraphModel", node: Any, port: Any):
"""Move the given port one position down"""
ports: List = model[node].ports
index = ports.index(port)
if index < len(ports) - 1:
ports.insert(index + 1, ports.pop(index))
model[node].ports = ports
def move_top(model: "GraphModel", node: Any, port: Any):
"""Move the given port to the top of the node"""
ports: List = model[node].ports
index = ports.index(port)
if index > 0:
ports.insert(0, ports.pop(index))
model[node].ports = ports
def move_bottom(model: "GraphModel", node: Any, port: Any):
"""Move the given port to the bottom of the node"""
ports: List = model[node].ports
index = ports.index(port)
if index < len(ports) - 1:
ports.insert(len(ports) - 1, ports.pop(index))
model[node].ports = ports
self.__port_context_menu = ui.Menu("CompoundInputOutputNodeDelegate Port Menu")
with self.__port_context_menu:
if model[port].inputs:
ui.MenuItem("Disconnect", triggered_fn=lambda m=model, p=port: disconnect(m, p))
ui.MenuItem("Remove", triggered_fn=lambda m=model, n=node, p=port: remove(m, n, p))
ui.MenuItem("Move Up", triggered_fn=lambda m=model, n=node, p=port: move_up(m, n, p))
ui.MenuItem("Move Down", triggered_fn=lambda m=model, n=node, p=port: move_down(m, n, p))
ui.MenuItem("Move Top", triggered_fn=lambda m=model, n=node, p=port: move_top(m, n, p))
ui.MenuItem("Move Bottom", triggered_fn=lambda m=model, n=node, p=port: move_bottom(m, n, p))
self.__port_context_menu.show()
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node_delegate_closed.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNodeDelegateClosed"]
from .abstract_graph_node_delegate import GraphNodeDescription
from .graph_node_delegate_full import GraphNodeDelegateFull
from .graph_node_delegate_full import LINE_VISIBLE_MIN
from .graph_node_delegate_full import TEXT_VISIBLE_MIN
import omni.ui as ui
class GraphNodeDelegateClosed(GraphNodeDelegateFull):
"""
The delegate with the Omniverse design for the nodes of the closed state.
"""
def __init__(self, scale_factor=1.0):
super().__init__(scale_factor)
def node_header_input(self, model, node_desc: GraphNodeDescription):
"""Called to create the left part of the header that will be used as input when the node is collapsed"""
node = node_desc.node
connected_source = node_desc.connected_source
connected_target = node_desc.connected_target
with ui.ZStack(width=8, skip_draw_when_clipped=True):
ui.Circle(
radius=6,
name=str(model[node].type),
style_type_name_override="Graph.Node.Input",
size_policy=ui.CircleSizePolicy.FIXED,
alignment=ui.Alignment.RIGHT_CENTER,
arc=ui.Alignment.RIGHT,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_source:
# Circle that shows that the port is a source for the connection
ui.Circle(
radius=7,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.RIGHT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_target:
# Circle that shows that the port is a target for the connection
ui.Circle(
radius=5,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.RIGHT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
def node_header_output(self, model, node_desc: GraphNodeDescription):
"""Called to create the right part of the header that will be used as output when the node is collapsed"""
node = node_desc.node
connected_source = node_desc.connected_source
connected_target = node_desc.connected_target
with ui.ZStack(width=8, skip_draw_when_clipped=True):
ui.Circle(
radius=6,
name=str(model[node].type),
style_type_name_override="Graph.Node.Output",
size_policy=ui.CircleSizePolicy.FIXED,
alignment=ui.Alignment.LEFT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_source:
# Circle that shows that the port is a source for the connection
ui.Circle(
radius=7,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.LEFT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
if connected_target:
# Circle that shows that the port is a target for the connection
ui.Circle(
radius=5,
size_policy=ui.CircleSizePolicy.FIXED,
style_type_name_override="Graph.Connection",
alignment=ui.Alignment.LEFT_CENTER,
visible_min=TEXT_VISIBLE_MIN,
)
def node_header(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the top of the node"""
node = node_desc.node
with ui.VStack(skip_draw_when_clipped=True):
self._common_node_header_top(model, node)
style_name = str(model[node].type)
# Draw the circle and the image on the top of it
with ui.HStack(height=0):
with ui.VStack():
ui.Label(
"Inputs",
alignment=ui.Alignment.CENTER,
style_type_name_override="Graph.Node.Port.Label",
visible_min=TEXT_VISIBLE_MIN,
)
ui.Spacer(height=15)
with ui.ZStack(width=50, height=50):
ui.Rectangle(
style_type_name_override="Graph.Node.Footer", name=style_name, visible_min=TEXT_VISIBLE_MIN
)
ui.ImageWithProvider(
style_type_name_override="Graph.Node.Footer.Image",
name=style_name,
visible_min=TEXT_VISIBLE_MIN,
)
# Scale up the icon when zooming out
with ui.Placer(
stable_size=True,
visible_min=LINE_VISIBLE_MIN,
visible_max=TEXT_VISIBLE_MIN,
offset_x=-15,
offset_y=-20,
):
with ui.ZStack(width=0, height=0):
ui.Rectangle(
style_type_name_override="Graph.Node.Footer", name=style_name, width=80, height=80
)
ui.ImageWithProvider(
style_type_name_override="Graph.Node.Footer.Image", name=style_name, width=80, height=80
)
with ui.VStack():
ui.Label(
"Outputs",
alignment=ui.Alignment.CENTER,
style_type_name_override="Graph.Node.Port.Label",
visible_min=TEXT_VISIBLE_MIN,
)
ui.Spacer(height=15)
ui.Spacer(height=18)
def node_footer(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the bottom of the node"""
# Don't draw footer
pass
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_view.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = [
"GraphView",
]
from typing import Any
from typing import Dict
from typing import List
from typing import Optional
import asyncio
import carb
import carb.settings
import functools
import traceback
import omni.kit.app
import omni.ui as ui
from .abstract_graph_node_delegate import AbstractGraphNodeDelegate
from .abstract_graph_node_delegate import GraphConnectionDescription
from .graph_layout import SugiyamaLayout
from .graph_model import GraphModel
from .graph_node import GraphNode
from .graph_node_index import GraphNodeDiff, GraphNodeIndex
CONNECTION_CURVE = 60
FLOATING_DELTA = 0.00001
def handle_exception(func):
"""
Decorator to print exception in async functions
TODO: The alternative way would be better, but we want to use traceback.format_exc for better error message.
result = await asyncio.gather(*[func(*args)], return_exceptions=True)
"""
@functools.wraps(func)
async def wrapper(*args, **kwargs):
try:
return await func(*args, **kwargs)
except asyncio.CancelledError:
# We always cancel the task. It's not a problem.
pass
except Exception as e:
carb.log_error(f"Exception when async '{func}'")
carb.log_error(f"{e}")
carb.log_error(f"{traceback.format_exc()}")
return wrapper
class GraphView:
"""
The visualisation layer of omni.kit.widget.graph. It behaves like a
regular widget and displays nodes and their connections.
"""
DEFAULT_DISTANCE_BETWEEN_NODES = 50.0
class _Proxy:
"""
A service proxy object to keep the given object in a central
location. It's used to hold the delegate; this proxy object is passed to
the nodes, so when the delegate is changed, it's automatically
updated in all the nodes.
TODO: Add `set_delegate`. Otherwise it's useless.
"""
def __init__(self, reference=None):
self.set_object(reference)
def __getattr__(self, attr):
"""
Called when the default attribute access fails with an
AttributeError.
"""
return getattr(self._ref, attr)
def __setattr__(self, attr, value):
"""
Called when an attribute assignment is attempted. This is called
instead of the normal mechanism (i.e. store the value in the
instance dictionary).
"""
setattr(self._ref, attr, value)
def set_object(self, reference):
"""Replace the object this proxy holds with the new one"""
super().__setattr__("_ref", reference)
class _Event(set):
"""
A set of callable objects. Calling an instance of this will call
every item in the set (iteration order is unspecified).
"""
def __call__(self, *args, **kwargs):
"""Called when the instance is “called” as a function"""
# Call all the saved functions
for f in self:
f(*args, **kwargs)
def __repr__(self):
"""
Called by the repr() built-in function to compute the “official”
string representation of an object.
"""
return f"Event({set.__repr__(self)})"
class _EventSubscription:
"""
Event subscription.
_Event has callback while this object exists.
"""
def __init__(self, event, fn):
"""
Save the function, the event, and add the function to the event.
"""
self._fn = fn
self._event = event
event.add(self._fn)
def __del__(self):
"""Called by GC."""
self._event.remove(self._fn)
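# A short sketch of how _Event and _EventSubscription are meant to be used
# together (comment-only illustration; the callback name `on_layout_done`
# is hypothetical):
#
#     event = GraphView._Event()
#     def on_layout_done():
#         print("layout finished")
#     subscription = GraphView._EventSubscription(event, on_layout_done)
#     event()            # invokes on_layout_done
#     del subscription   # __del__ removes the callback from the event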
def __init__(self, **kwargs):
"""
### Keyword Arguments:
`model : GraphModel`
Model to display the node graph
`delegate : AbstractGraphNodeDelegate`
Delegate to draw the node
`virtual_ports : bool`
True when the model should use reversed outputs for a better look
of the graph.
`port_grouping : bool`
True when the widget should use sub-ports for port grouping.
All other kwargs are passed to CanvasFrame which is the root widget.
"""
# Selected nodes
self.__selection = []
self.__build_task = None
self._node_widgets = {}
self._connection_widgets = {}
self._node_placers = {}
self._delegate = self._Proxy()
# Regenerate the cache index when true
self._force_regenerate = True
self.__reference_style = kwargs.get("style", None)
self._model = None
self.set_model(kwargs.pop("model", None))
self.set_delegate(kwargs.pop("delegate", None))
self.__virtual_ports = kwargs.pop("virtual_ports", False)
self.__port_grouping = kwargs.pop("port_grouping", False)
self.__rectangle_selection = kwargs.pop("rectangle_selection", False)
self.__connections_on_top = kwargs.pop("connections_on_top", False)
self.__allow_same_side_connections = kwargs.pop("allow_same_side_connections", False)
self.__always_force_regenerate = kwargs.pop("always_force_regenerate", True)
# The variable raster_nodes is an experimental feature that can improve
# performance by rasterizing all the nodes when it is set to true.
# However, it is important to note that this feature may not always
# behave as expected and may cause issues with the editor. It is
# recommended to use this feature with caution and thoroughly test any
# changes before deploying them.
settings = carb.settings.get_settings()
self.__raster_nodes = kwargs.pop("raster_nodes", None)
if self.__raster_nodes is None:
self.__raster_nodes = settings.get("/exts/omni.kit.widget.graph/raster_nodes")
if self.__raster_nodes:
kwargs["compatibility"] = False
self._graph_node_index = GraphNodeIndex(None, self.__port_grouping)
if "style_type_name_override" not in kwargs:
kwargs["style_type_name_override"] = "Graph"
# The nodes for filtering upstream.
self._filtering_nodes = None
# Nodes & Connections to force draw
self._force_draw_nodes = []
with ui.ZStack():
# Capturing selection
if self.__rectangle_selection:
ui.Spacer(
mouse_pressed_fn=self.__rectangle_selection_begin,
mouse_released_fn=self.__rectangle_selection_end,
mouse_moved_fn=self.__rectangle_selection_moved,
)
# The graph canvas
self.__root_frame = ui.CanvasFrame(**kwargs)
self.__root_stack = None
# Drawing selection
self.__selection_layer = ui.Frame(separate_window=True, visible=False)
with self.__selection_layer:
with ui.ZStack():
self.__selection_placer_start = ui.Placer(draggable=True)
self.__selection_placer_end = ui.Placer(draggable=True)
with self.__selection_placer_start:
rect_corner_start = ui.Spacer(width=1, height=1)
with self.__selection_placer_end:
rect_corner_end = ui.Spacer(width=1, height=1)
# The rectangle that is drawn when selection with the rectangle
ui.FreeRectangle(rect_corner_start, rect_corner_end, style_type_name_override="Graph.Selecion.Rect")
# Build the network
self.__on_item_changed(None)
# ZStack with connections to be able to add connections to the layout
self.__connections_stack = None
# The frame that follows mouse cursor when the user creates a new connection
self.__user_drag_connection_frame = None
# The frame with the temporary connection that is sticky to the port.
# We need it to demonstrate that the connection is valid.
self.__user_drag_connection_accepted_frame = None
# The source port when the user creates a new connection
self.__making_connection_source_node = None
self.__making_connection_source = None
self.__making_connection_source_widget = None
# The target port when the user creates a new connection
self.__making_connection_target_node = None
self.__making_connection_target = None
# Dict that has a port as a key and two frames with port widgets
self.__port_to_frames = {}
# Nodes that will be dragged once the mouse is moved
self.__nodes_to_drag = []
# Nodes that are currently dragged
self.__nodes_dragging = []
self.__can_connect = False
self.__position_changed = False
self.__on_post_delayed_build_layout = self._Event()
self.__on_pre_delayed_build_layout = self._Event()
# Selection caches
# Position when the user started selection
self.__rectangle_selection_start_position = None
# Positions of the nodes for selection caches
self.__positions_cache = None
# Selection when the user started rectangle
self.__selection_cache = []
# Flag for async method to deselect all. See
# `_clear_selection_next_frame_async` for details.
self.__need_clear_selection = False
def __getattr__(self, attr):
"""Pretend it's self.__root_frame"""
return getattr(self.__root_frame, attr)
def _post_delayed_build_layout(self):
"""Call the event object that has the list of functions"""
self.__on_post_delayed_build_layout()
def subscribe_post_delayed_build_layout(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_post_delayed_build_layout, fn)
def _pre_delayed_build_layout(self):
"""Call the event object that has the list of functions"""
self.__on_pre_delayed_build_layout()
def subscribe_pre_delayed_build_layout(self, fn):
"""
Return the object that will automatically unsubscribe when destroyed.
"""
return self._EventSubscription(self.__on_pre_delayed_build_layout, fn)
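# Sketch of subscribing to the rebuild events (illustrative only; `graph_view`
# and `on_rebuilt` are hypothetical names). The returned subscription object
# must be kept alive, otherwise the callback is removed when it is collected:
#
#     def on_rebuilt():
#         graph_view.focus_on_nodes()
#     self._layout_sub = graph_view.subscribe_post_delayed_build_layout(on_rebuilt)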
def layout_all(self):
"""Reset positions of all the nodes in the model"""
# Turn force_regenerate on whenever calling layout_all, so it doesn't always
# need to be called right before the layout_all call.
self._force_regenerate = True
for node in self._model.nodes:
self._model[node].position = None
self.__on_item_changed(None)
def set_expansion(self, state: GraphModel.ExpansionState):
"""
Open, close or minimize all the nodes in the model.
"""
for node in self._model.nodes:
self._model[node].expansion_state = state
@property
def raster_nodes(self):
# Read only
return self.__raster_nodes
@property
def model(self):
return self._model
@model.setter
def model(self, model):
self._force_regenerate = True
self.set_model(model)
@property
def virtual_ports(self):
"""
Typically source connections go from the left side of the node to the
right side of the node. But sometimes it looks messy when there are circular
connections. When `virtual_ports` is true and the connection is
circular, the view draws it from the right side to the left side.
Example:
A.out -------------> B.surface
A.color [REVERSED] <- B.color
"""
return self.__virtual_ports
@virtual_ports.setter
def virtual_ports(self, value):
self.__virtual_ports = bool(value)
self.__on_item_changed(None)
def set_model(self, model: GraphModel):
"""Replace the model of the widget. It will refresh all the content."""
self._filtering_nodes = None
if self._model:
self._model.destroy()
self._model = model
# Re-subscribe
if self._model:
self._model_subscription = self._model.subscribe_item_changed(self.__on_item_changed)
self._selection_subscription = self._model.subscribe_selection_changed(self.__on_selection_changed)
self._node_subscription = self._model.subscribe_node_changed(self.__rebuild_node)
else:
self._model_subscription = None
self._selection_subscription = None
self._node_subscription = None
self.__on_item_changed(None)
self.__on_selection_changed()
def set_delegate(self, delegate: AbstractGraphNodeDelegate):
"""Replace the delegate of the widget"""
self._force_regenerate = True
self._delegate.set_object(delegate)
self.__on_item_changed(None)
def filter_upstream(self, nodes: list):
"""Remove nodes that are not upstream of the given nodes"""
self._filtering_nodes = nodes
self.__on_item_changed(None)
def get_bbox_of_nodes(self, nodes: list):
"""Get the bounding box of nodes"""
if not nodes:
nodes = self._model.nodes
if not nodes:
return
min_pos_x = None
min_pos_x_node = None
min_pos_y = None
min_pos_y_node = None
max_pos_x = None
max_pos_x_node = None
max_pos_y = None
max_pos_y_node = None
for node in nodes:
if node not in self._node_widgets:
continue
pos = self._model[node].position
if pos is None:
continue
if min_pos_x is None:
min_pos_x = pos[0]
min_pos_x_node = node
min_pos_y = pos[1]
min_pos_y_node = node
max_pos_x = pos[0]
max_pos_x_node = node
max_pos_y = pos[1]
max_pos_y_node = node
continue
if pos[0] < min_pos_x:
min_pos_x = pos[0]
min_pos_x_node = node
previous_max_max_x = max_pos_x + self._node_widgets[max_pos_x_node].computed_width
if pos[0] + self._node_widgets[node].computed_width < previous_max_max_x:
pass
else:
max_pos_x = pos[0]
max_pos_x_node = node
if pos[1] < min_pos_y:
min_pos_y = pos[1]
min_pos_y_node = node
previous_max_max_y = max_pos_y + self._node_widgets[max_pos_y_node].computed_height
if pos[1] + self._node_widgets[node].computed_height < previous_max_max_y:
pass
else:
max_pos_y = pos[1]
max_pos_y_node = node
if max_pos_x_node in self._node_widgets:
computed_width = (max_pos_x + self._node_widgets[max_pos_x_node].computed_width) - min_pos_x
computed_height = (max_pos_y + self._node_widgets[max_pos_y_node].computed_height) - min_pos_y
else:
min_pos_x = 0
min_pos_y = 0
computed_height = 500
computed_width = 500
return computed_width, computed_height, min_pos_x, min_pos_y
def focus_on_nodes(self, nodes: Optional[List[Any]] = None):
"""Focus the view on nodes"""
if not nodes:
nodes = self._model.nodes
if not nodes:
return
computed_width, computed_height, min_pos_x, min_pos_y = self.get_bbox_of_nodes(nodes)
# TODO: Looks like it's a bug of ui.CanvasFrame.computed_width. But it works for now.
if self.__raster_nodes:
canvas_computed_width = self.__root_frame.computed_width
canvas_computed_height = self.__root_frame.computed_height
else:
canvas_computed_width = self.__root_frame.computed_width * self.zoom
canvas_computed_height = self.__root_frame.computed_height * self.zoom
zoom = min([canvas_computed_width / computed_width, canvas_computed_height / computed_height])
# We need to clamp the zoom with zoom_min and zoom_max here since we need to update the zoom of graphView and
# use the clamped zoom to compute self.pan_x and self.pan_y
# this FLOATING_DELTA is needed to avoid floating issue
zoom = max(min(zoom, self.zoom_max - FLOATING_DELTA), self.zoom_min + FLOATING_DELTA)
self.zoom = zoom
self.pan_x = -min_pos_x * zoom + canvas_computed_width / 2 - computed_width * zoom / 2
self.pan_y = -min_pos_y * zoom + canvas_computed_height / 2 - computed_height * zoom / 2
async def restore_smooth_async():
"""Set smooth_zoom to True"""
await omni.kit.app.get_app().next_update_async()
self.__root_frame.smooth_zoom = True
if self.smooth_zoom:
# Temporarily disable smooth_zoom
self.__root_frame.smooth_zoom = False
asyncio.ensure_future(restore_smooth_async())
@property
def selection(self):
"""Return selected nodes"""
return self.__selection
@selection.setter
def selection(self, value):
"""Set selection"""
if self._model:
self._model.selection = value
def __rebuild_node(self, item, full=False):
"""Rebuild the delegate of the node, this could be used to just update the look of one node"""
if item:
if full:
self._force_draw_nodes.append(item)
# Still need to call __delayed_build_layout where this is called.
return
node = self._node_widgets[item]
if node:
# Doesn't rebuild ports or connections
node.rebuild_layout()
def __on_item_changed(self, item):
"""Called by the model when something is changed"""
if item is not None:
# Only one item is changed. Not necessary to rebuild the whole network.
placer = self._node_placers.get(item, None)
if placer:
position = self._model[item].position
if position:
placer.offset_x = position[0]
placer.offset_y = position[1]
placer.invalidate_raster()
# TODO: Rebuild the node
return
# The whole graph is changed. Rebuild
if self.__build_task:
self.__build_task.cancel()
# Build the layout in the next frame because it's possible that
# multiple items are changed, and in this case we still need to execute
# __delayed_build_layout only once.
self.__build_task = asyncio.ensure_future(self.__delayed_build_layout())
def __on_selection_changed(self):
"""Called by the model when selection is changed"""
if not self._model:
return
# Deselect existing selection
for node in self.__selection:
widget = self._node_widgets.get(node, None)
if not widget:
continue
widget.selected = False
# Keep only the nodes we have a widget for. We need this to skip
# nodes that are selected in the model but filtered out here
self.__selection = []
for model_selection_node in self._model.selection or []:
for widget_node in self._node_widgets:
if model_selection_node == widget_node:
self.__selection.append(widget_node)
# Select new selection
for node in self.__selection:
widget = self._node_widgets.get(node, None)
if not widget:
continue
widget.selected = True
def __cache_positions(self):
"""Creates cache for all the positions of all the nodes"""
if self.__positions_cache is not None:
return
class PositionCache:
def __init__(self, node: Any, pos: List[float], size: List[float]):
self.node = node
self.__x0 = pos[0]
self.__y0 = pos[1]
self.__x1 = pos[0] + size[0]
self.__y1 = pos[1] + size[1]
def __repr__(self):
return f"<PositionCache {self.node} {self.__x0} {self.__y0} {self.__x1} {self.__y1}"
def is_in_rect(self, x_min: float, y_min: float, x_max: float, y_max: float):
return not (x_max < self.__x0 or x_min > self.__x1 or y_max < self.__y0 or y_min > self.__y1)
self.__positions_cache = []
for node in self._model.nodes or []:
# NOTE: If we ever have selection objects that aren't positioned in the top left corner of the
# node (like the header), we will need to get the select_widget's position instead, here.
pos = self._model[node].position
if pos:
widget = self._node_widgets.get(node, None)
if not widget:
continue
select_widget = self._model.special_select_widget(node, widget) or widget
size = [select_widget.computed_width, select_widget.computed_height]
self.__positions_cache.append(PositionCache(node, pos, size))
self.__selection_cache = self.model.selection or []
async def _clear_selection_next_frame_async(self, model):
"""
Called by background layer to clear selection. The idea is that if no
node cancels it, then the user clicked in the background and we need
to deselect all.
"""
await omni.kit.app.get_app().next_update_async()
if self.__need_clear_selection:
model.selection = []
def _on_rectangle_select(self, model, start, end, modifier=0):
"""Called when the user is performing the rectangle selection"""
# This function can be triggered when we have multiple nodes selected and move one of them too quickly.
# In this case, we don't want to update the selection.
if start == end:
return
# Don't clear selection frame after
self.__need_clear_selection = False
self.__cache_positions()
x_min = min(start[0], end[0])
x_max = max(start[0], end[0])
y_min = min(start[1], end[1])
y_max = max(start[1], end[1])
selection = []
for position_cache in self.__positions_cache:
if position_cache.is_in_rect(x_min, y_min, x_max, y_max):
selection.append(position_cache.node)
if modifier & carb.input.KEYBOARD_MODIFIER_FLAG_SHIFT:
# Add to the current selection
selection = self.__selection_cache + [s for s in selection if s not in self.__selection_cache]
elif modifier & carb.input.KEYBOARD_MODIFIER_FLAG_CONTROL:
# Subtract from the current selection
selection = [s for s in self.__selection_cache if s not in selection]
model.selection = selection
def _on_node_selected(self, model, node, modifier=0, pressed=True):
"""Called when the user is picking to select nodes"""
# Stop rectangle selection
self.__rectangle_selection_start_position = None
# Don't clear selection frame after
self.__need_clear_selection = False
selection = model.selection
if selection is None:
return
# Copy
selection = selection[:]
if modifier & carb.input.KEYBOARD_MODIFIER_FLAG_SHIFT:
if not pressed:
return
# Add
if node not in selection:
selection.append(node)
elif modifier & carb.input.KEYBOARD_MODIFIER_FLAG_CONTROL:
if not pressed:
return
# Add / Remove
if node in selection:
selection.remove(node)
else:
selection.append(node)
else:
if pressed and node in selection:
return
# Set
selection = [node]
model.selection = selection
@handle_exception
async def __delayed_build_layout(self):
"""
Rebuild all the nodes and connections in the next update cycle. It's
delayed because it's possible that it's called multiple times per
frame, and we need to create all the widgets only once.
The challenge here is USD's ability to make input-to-input and output-to-output connections. In USD it's
perfectly fine to have the following network:
A.out --> B.out
A.in <-- B.in
So it doesn't matter if the port is input or output. To draw such connections properly and to know which side
of the node it should connect, we need to find out the direction of the flow in this node network. This is the
overview of the algorithm to trace the nodes and compute the direction of the flow.
STEP 1. Create the cache of the model and indices to be able to access the cache fast.
STEP 2. Scan all the nodes and find roots. A root is a node that doesn't have an output connected to another
node.
STEP 3. Trace connections of roots and assign them level. Level is the distance of the node from the root.
The root has level 0. The nodes connected to the root have level 1. The next connected nodes have level 2, etc.
We assume that level is the flow direction. The flow goes from nodes to root.
STEP 4. It's easy to check if the connection goes to the direction opposite to flow. Such connections have
virtual ports.
"""
self._pre_delayed_build_layout()
await omni.kit.app.get_app().next_update_async()
# STEP 1
# Small explanation on input/output, source/target. Input and output are
# just USD tags. We inherit this terminology to be USD compliant.
# Input/output is not the direction of the data flow. To express the
# data flow direction we use source/target. In USD it's possible to
# have input-to-input and output-to-output connections.
graph_node_index = GraphNodeIndex(self._model, self.__port_grouping)
if self.__always_force_regenerate or self._force_regenerate:
self._force_regenerate = False
regenerate_all = True
else:
# Diff
diff = self._graph_node_index.get_diff(graph_node_index)
regenerate_all = not diff.valid
if regenerate_all:
self._graph_node_index = graph_node_index
# Clean up the previous nodes
for _, node_widget in self._node_widgets.items():
node_widget.destroy()
for _, connection_widget in self._connection_widgets.items():
connection_widget.destroy()
self._node_widgets = {}
self._connection_widgets = {}
self.__port_to_frames = {}
else:
if self._force_draw_nodes:
self.__add_force_redraws_to_diff(diff, graph_node_index)
nodes_add, nodes_del, connections_add, connections_del = self._graph_node_index.mutate(diff)
# Utility to access the index
def get_node_level_from_port(port, port_to_id, all_cached_nodes):
if port is None:
return 0
node_id = port_to_id.get(port, None)
if node_id is None:
return 0
return all_cached_nodes[node_id].level
# List of all the nodes from the model. Members are CacheNode objects
all_cached_nodes = self._graph_node_index.cached_nodes
all_cached_connections = self._graph_node_index.cached_connections
# Dictionary that has a port from the model as a key and the index of the parent node in all_cached_nodes.
port_to_id = self._graph_node_index.port_to_id
# Dictionary that has a port from the model as a key and a flag if this port is output. We need it to detect
# the flow direction.
ports_used_as_output = self._graph_node_index.ports_used_as_output
# All the connections. Used to determine if we need to draw the output circle.
source_to_target = self._graph_node_index.source_to_target
# Dict[child port: parent port]
port_child_to_parent: Dict[Any, Any] = self._graph_node_index.port_child_to_parent
if not self._model:
return
# STEP 2
# True if the node in all_cached_nodes is a root node. The node A is a root node if other nodes are not
# connected to outputs of A.
is_root_node = [True] * len(all_cached_nodes)
for cache_node in all_cached_nodes:
if not cache_node:
continue
if not regenerate_all and cache_node.node not in nodes_add:
continue
for source in cache_node.inputs + cache_node.outputs:
if source not in ports_used_as_output:
continue
# The source node is not root because it has output connected to this node
source_node_id = port_to_id.get(source, None)
if source_node_id is None:
continue
is_root_node[source_node_id] = False
if self._filtering_nodes:
root_nodes = [cached_node for cached_node in all_cached_nodes if cached_node.node in self._filtering_nodes]
else:
root_nodes = [cached_node for cached_node, is_root in zip(all_cached_nodes, is_root_node) if is_root]
if not root_nodes and all_cached_nodes:
# We have a circularly connected network. In this case it doesn't matter which node to pick, so we assume that
# the first node of the model will be the root node.
root_nodes = [all_cached_nodes[0]]
# STEP 3
# Create the list of edges. We need to put them into layout.
edges = []
edges_sorted = []
for i, cache_node in enumerate(all_cached_nodes):
if not cache_node:
continue
if not regenerate_all and cache_node.node not in nodes_add:
continue
for source in cache_node.inputs + cache_node.outputs:
source_node_id = port_to_id.get(source, None)
if source_node_id is None:
# try to resolve id of parent instead
parent_source = port_child_to_parent.get(source, None)
if parent_source is not None:
source_node_id = port_to_id.get(parent_source, None)
if source_node_id is None:
continue
v1 = source_node_id
v2 = i
edge_sorted = (v1, v2) if v1 < v2 else (v2, v1)
if v1 != v2 and edge_sorted not in edges_sorted:
edges.append((v2, v1))
edges_sorted.append(edge_sorted)
if regenerate_all:
self.layout = SugiyamaLayout(edges=edges, vertical_distance=20.0, horizontal_distance=200.0)
for i, cache_node in enumerate(all_cached_nodes):
if not cache_node:
continue
if not regenerate_all and cache_node.node not in nodes_add:
continue
level = self.layout.get_layer(i)
# If level is None, it means the node is not connected to anything. We assume its level is 0.
cache_node.level = level or 0
# STEP 4. Generate widgets.
node_and_level = []
if self.__root_stack is None or regenerate_all:
with self.__root_frame:
self.__root_stack = ui.ZStack()
if regenerate_all:
self.__connections_stack = None
if not regenerate_all:
# Delete nodes
for node in nodes_del:
self._node_widgets[node].visible = False
self._node_widgets[node].destroy()
# Delete connections
for connection in connections_del:
connection_widget = self._connection_widgets[connection]
connection_widget.visible = False
connection_widget.clear()
connection_widget.destroy()
# `if 1:` is used to keep the indentation. Otherwise the indentation would change,
# which is bad for code review.
if 1:
with self.__root_stack:
# Save the connections. It's the list of tuples (target_port, source_port)
for node_id, cached_node in sorted(
enumerate(n for n in all_cached_nodes if n), key=lambda a: a[1].stacking_order
):
if not regenerate_all and cached_node.node not in nodes_add:
continue
stacking_order = cached_node.stacking_order
# Create ZStack for connections between -1 and 0 stacking
# order. It allows putting the connections under the nodes and
# over the backdrops.
if not self.__connections_on_top and not self.__connections_stack and stacking_order >= 0:
with ui.Frame(separate_window=True):
self.__connections_stack = ui.ZStack()
expansion_state = self._model[cached_node.node].expansion_state
# Filtered ports
ports = []
# Two bools per port:
# (True if it's target, True if it's source)
# So the connection can have four states: input/output and
# source/target. They can be mixed every possible way.
port_input = []
# Two bools per port
# (True if it's target, True if it's source)
port_output = []
port_levels: List[int] = []
port_position: List[int] = []
parent_child_count: List[int] = []
node_input = (False, False)
node_output = (False, False)
# Save connections and check if the node has virtual input/output
for cached_port in cached_node.cached_ports:
if not cached_port.visibile:
continue
target_port = cached_port.port
# Or it happens when the node is minimized
# TODO: compute it when computing port_visibility
if (
expansion_state == GraphModel.ExpansionState.MINIMIZED
or expansion_state == GraphModel.ExpansionState.CLOSED
):
if not cached_port.inputs and not cached_port.outputs:
if target_port not in source_to_target:
# Filter out ports with no connections
continue
# True if the port has input (left side) connection
# from this port to another
input_is_source_connection = False
# True if the port has input (left side) connection
# from another port to this port
input_is_target_connection = False
# True if the port has output (right side) connection
# from this port to another
output_is_source_connection = False
# True if the port has output (right side) connection
# from another port to this port
output_is_target_connection = False
if cached_port.inputs:
for source_port in cached_port.inputs:
if source_port not in port_to_id:
carb.log_warn(
f"[Graph UI] The port {target_port} can't be connected to the port "
f"{source_port} because it doesn't exist in the model"
)
continue
if (
self.virtual_ports
and get_node_level_from_port(source_port, port_to_id, all_cached_nodes)
< cached_node.level
):
# The input port is connected from the downflow node
#
# Example:
# A.out -------------------------> B.surface
# A.color [checking this option] <- B.color
output_is_target_connection = True
else:
# Example:
# A.out -> [checking this option] B.in
input_is_target_connection = True
if cached_port.outputs:
for source_port in cached_port.outputs:
if source_port not in port_to_id:
carb.log_warn(
f"[Graph UI] The port {target_port} can't be connected to the port "
f"{source_port} because it doesn't exist in the model"
)
continue
if (
self.virtual_ports
and get_node_level_from_port(source_port, port_to_id, all_cached_nodes)
< cached_node.level
):
output_is_target_connection = True
else:
input_is_target_connection = True
# The output port has input connection. A.out -> [checking this option] B.out
if target_port in source_to_target:
for t in source_to_target[target_port]:
compare_level = get_node_level_from_port(t, port_to_id, all_cached_nodes)
if (
self.virtual_ports
and compare_level is not None
and cached_node.level < compare_level
):
# Example:
# A.out -------------------------> B.surface
# A.color <- [checking this option] B.color
input_is_source_connection = True
else:
# A.out [checking this option] -> B.in
output_is_source_connection = True
node_input = (
node_input[0] or input_is_source_connection,
node_input[1] or input_is_target_connection,
)
node_output = (
node_output[0] or output_is_source_connection,
node_output[1] or output_is_target_connection,
)
if expansion_state == GraphModel.ExpansionState.CLOSED:
continue
# Ports
ports.append(target_port)
port_input.append((input_is_source_connection, input_is_target_connection))
port_output.append((output_is_source_connection, output_is_target_connection))
port_levels.append(cached_port.level)
port_position.append(cached_port.relative_position)
parent_child_count.append(cached_port.parent_child_count)
# Rasterize the nodes. This is an experimental feature that
# can improve performance.
raster_policy = ui.RasterPolicy.AUTO if self.__raster_nodes else ui.RasterPolicy.NEVER
# A side effect of having the level is that we can put the node in the correct position.
placer = ui.Placer(
draggable=True, frames_to_start_drag=2, name="node_graph", raster_policy=raster_policy
)
# Set position if possible
position = self._model[cached_node.node].position
if position:
placer.offset_x = position[0]
placer.offset_y = position[1]
def save_position(placer, model, node):
if self.__nodes_to_drag:
for drag_node in self.__nodes_to_drag:
model.position_begin_edit(drag_node)
self.__nodes_dragging[:] = self.__nodes_to_drag[:]
self.__nodes_to_drag[:] = []
# Without this, unselected backdrops can still be dragged by their placer
if not self.__nodes_dragging:
if model[node].position:
placer.offset_x.value, placer.offset_y.value = model[node].position
return
node_position = (placer.offset_x.value, placer.offset_y.value)
model[node].position = node_position
self.__position_changed = True
placer.set_offset_x_changed_fn(
lambda _, p=placer, m=self._model, n=cached_node.node: save_position(p, m, n)
)
placer.set_offset_y_changed_fn(
lambda _, p=placer, m=self._model, n=cached_node.node: save_position(p, m, n)
)
with placer:
# The node widget
node_widget = GraphNode(
self._model,
cached_node.node,
node_input,
node_output,
list(zip(ports, port_input, port_output, port_levels, port_position, parent_child_count)),
self._delegate,
)
def position_begin_edit(model, node, modifier):
self.__nodes_to_drag[:] = [node]
self.__nodes_dragging[:] = []
self.__position_changed = False
def position_end_edit(model, node, modifier):
self.__nodes_to_drag[:] = []
for drag_node in self.__nodes_dragging:
model.position_end_edit(drag_node)
self.__nodes_dragging[:] = []
return self.__position_changed
select_widget = self._model.special_select_widget(cached_node.node, node_widget) or node_widget
# Select node
select_widget.set_mouse_pressed_fn(
lambda x, y, b, modifier, model=self._model, node=cached_node.node: b == 0
and (
position_begin_edit(model, node, modifier)
or self._on_node_selected(model, node, modifier, True)
)
)
select_widget.set_mouse_released_fn(
lambda x, y, b, modifier, model=self._model, node=cached_node.node: b == 0
and (
position_end_edit(model, node, modifier)
or self._on_node_selected(model, node, modifier, False)
)
)
# Save the node so it's not removed
self._node_widgets[cached_node.node] = node_widget
self._node_placers[cached_node.node] = placer
node_and_level.append((cached_node.node, node_widget, placer, node_id))
# Save the ports for fast access
if expansion_state == GraphModel.ExpansionState.CLOSED:
for cached_port in cached_node.cached_ports:
self.__port_to_frames[cached_port.port] = (
node_widget.header_input_frame,
node_widget.header_output_frame,
)
else:
self.__port_to_frames.update(node_widget.ports)
for port, input_output in node_widget.ports.items():
input_port_widget, output_port_widget = input_output
if input_port_widget:
# Callback when mouse enters/leaves the port widget
input_port_widget.set_mouse_hovered_fn(
lambda h, n=cached_node.node, p=port, w=input_port_widget: self.__on_port_hovered(
n, p, w, h
)
)
input_port_widget.set_mouse_pressed_fn(
lambda x, y, b, m, n=cached_node.node, p=port: self._on_port_context_menu(n, p)
if b == 1
else None
)
if self.__allow_same_side_connections and output_port_widget:
# Callback when mouse enters/leaves the port widget
output_port_widget.set_mouse_hovered_fn(
lambda h, n=cached_node.node, p=port, w=output_port_widget: self.__on_port_hovered(
n, p, w, h
)
)
for port, input_output in node_widget.user_drag.items():
_, drag_widget = input_output
port_widget = self.__port_to_frames.get(port, (None, None))[1]
# Callback when user starts doing a new connection
drag_widget.set_mouse_pressed_fn(
lambda *_, f=port_widget, t=drag_widget, n=cached_node.node, p=port: self.__on_start_connection(
n, p, f, t
)
)
# Callback when user finishes doing a new connection
drag_widget.set_mouse_released_fn(lambda *_: self._on_new_connection())
# The connections ZStack is not created if all the nodes are under connections
if not self.__connections_stack:
with ui.Frame(separate_window=True):
self.__connections_stack = ui.ZStack()
# Buffer to keep the lines already built. We use it for filtering.
built_lines = set()
# Build all the connections
for connection in all_cached_connections:
if not connection:
continue
if not regenerate_all and connection not in connections_add:
continue
target = connection.target_port
source = connection.source_port
target_node_id = port_to_id.get(target, None)
target_cached_node = all_cached_nodes[target_node_id]
target_level = target_cached_node.level
source_node_id = port_to_id.get(source, None)
source_cached_node = all_cached_nodes[source_node_id]
source_level = source_cached_node.level
# Check if the direction of this connection is the same as flow
if self.virtual_ports:
is_reversed_connection = source_level < target_level
else:
is_reversed_connection = False
target_node = target_cached_node.node
source_node = source_cached_node.node
# 0 means the frame from input (left side). 1 is output (right side).
target_frames = self.__port_to_frames.get(target, None)
source_frames = self.__port_to_frames.get(source, None)
if self.__allow_same_side_connections and target_node == source_node:
is_reversed_target = True
is_reversed_source = False
f1 = target_frames[1]
f2 = source_frames[1]
else:
is_reversed_target = is_reversed_connection
is_reversed_source = is_reversed_connection
f1 = target_frames[1 if is_reversed_connection else 0]
f2 = source_frames[0 if is_reversed_connection else 1]
# Filter out lines already built.
line1 = (f1, f2)
line2 = (f2, f1)
if line1 in built_lines or line2 in built_lines:
continue
built_lines.add(line1)
# Objects passed to the delegate
source_connection_description = GraphConnectionDescription(
source_node, source, f2, source_level, is_reversed_source
)
target_connection_description = GraphConnectionDescription(
target_node, target, f1, target_level, is_reversed_target
)
with self.__connections_stack:
connection_frame = ui.Frame()
self._connection_widgets[connection] = connection_frame
with connection_frame:
self._delegate.connection(
self._model, source_connection_description, target_connection_description
)
with self.__connections_stack:
self.__user_drag_connection_frame = ui.Frame()
self.__user_drag_connection_accepted_frame = ui.Frame()
self.__making_connection_source_node = None
self.__making_connection_source = None
self.__making_connection_source_widget = None
# Init selection state
self.__on_selection_changed()
# Arrange node position in the next frame because now the node is not
# created and it's not possible to get its size.
if not node_and_level:
self._post_delayed_build_layout()
return
await self.__arrange_nodes(node_and_level)
self._post_delayed_build_layout()
def __add_force_redraws_to_diff(self, diff: GraphNodeDiff, graph_node_index: GraphNodeIndex):
"""
Add the force_redraw_nodes, if any, by adding them to both the nodes_to_add and nodes_to_del sides.
Also do the same with any connections to the node(s). By deleting and then adding, it forces a redraw.
"""
cached_add_connections = set()
cached_del_connections = set()
for force_node in self._force_draw_nodes:
# Add node to nodes_to_add
if force_node in graph_node_index.node_to_id:
force_add_cache_node = graph_node_index.cached_nodes[
graph_node_index.node_to_id[force_node]
]
if force_add_cache_node not in diff.nodes_to_add:
diff.nodes_to_add.append(force_add_cache_node)
# Add any inputs or outputs
for c_port in force_add_cache_node.cached_ports:
# Check for outputs
source = c_port.port
targets = graph_node_index.source_to_target.get(c_port.port, [])
for t in targets:
if (source, t) in graph_node_index.connection_to_id:
cached_add_connection = graph_node_index.cached_connections[
graph_node_index.connection_to_id[(source, t)]
]
cached_add_connections.add(cached_add_connection)
if (source, t) in self._graph_node_index.connection_to_id:
cached_del_connection = self._graph_node_index.cached_connections[
self._graph_node_index.connection_to_id[(source, t)]
]
cached_del_connections.add(cached_del_connection)
# Check for inputs
if not targets:
target = source
if c_port.inputs:
for inp in c_port.inputs:
source = inp
if (source, target) in graph_node_index.connection_to_id:
cached_add_connection = graph_node_index.cached_connections[
graph_node_index.connection_to_id[(source, target)]
]
cached_add_connections.add(cached_add_connection)
if (source, target) in self._graph_node_index.connection_to_id:
cached_del_connection = self._graph_node_index.cached_connections[
self._graph_node_index.connection_to_id[(source, target)]
]
cached_del_connections.add(cached_del_connection)
# Add node to nodes_to_del
if force_node in self._graph_node_index.node_to_id:
force_del_cache_node = self._graph_node_index.cached_nodes[
self._graph_node_index.node_to_id[force_node]
]
if force_del_cache_node not in diff.nodes_to_del:
diff.nodes_to_del.append(force_del_cache_node)
# Add all connections to the diff
for c_connection in cached_add_connections:
if c_connection not in diff.connections_to_add:
diff.connections_to_add.append(c_connection)
for c_connection in cached_del_connections:
if c_connection not in diff.connections_to_del:
diff.connections_to_del.append(c_connection)
self._force_draw_nodes = []
async def __arrange_nodes(self, node_and_level):
"""
Set the position of the nodes. If the node doesn't have a
predefined position (TODO), the position is assigned automatically.
It's async because it waits until the node is created to get its size.
"""
# Wait until the node appears on the screen.
while node_and_level[0][1].computed_width == 0.0:
await omni.kit.app.get_app().next_update_async()
# Set size
for _, node_widget, _, layout_node_id in node_and_level:
self.layout.set_size(layout_node_id, node_widget.computed_width, node_widget.computed_height)
# Recompute positions
self.layout.update_positions()
# Set positions in graph view
for node, node_widget, placer, layout_node_id in node_and_level:
model_position = self._model[node].position
self._model[node].size = (node_widget.computed_width, node_widget.computed_height)
if not model_position:
# The model doesn't have the position. Set it.
model_position = self.layout.get_position(layout_node_id)
if model_position:
self._model[node].position = model_position
placer.offset_x = model_position[0]
placer.offset_y = model_position[1]
def __on_start_connection(self, node, port, from_widget, to_widget):
"""Called when the user starts creating connection"""
# Keep the source port
self.__making_connection_source_node = node
self.__making_connection_source = port
self.__making_connection_source_widget = from_widget
self.__user_drag_connection_frame.visible = True
self.__user_drag_connection_accepted_frame.visible = False
with self.__user_drag_connection_frame:
if from_widget and to_widget:
# Temporary connection line follows the mouse
# Objects passed to the delegate
source_connection_description = GraphConnectionDescription(node, port, from_widget, 0, False)
target_connection_description = GraphConnectionDescription(None, None, to_widget, 0, False)
self._delegate.connection(self._model, source_connection_description, target_connection_description)
else:
# Dummy widget will destroy the temporary connection line if it exists
ui.Spacer()
def __on_port_hovered(self, node, port, widget, hovered):
"""Called when the mouse pointer enters the area of the port"""
if not self.__making_connection_source:
# We are not in connection mode, return
if self.__making_connection_target:
self.__making_connection_target = None
if self.__making_connection_target_node:
self.__making_connection_target_node = None
return
# We are here because the user is trying to connect ports
if hovered:
# The user entered the port widget
self.__making_connection_target_node = node
self.__making_connection_target = port
self.__can_connect = self._model.can_connect(self.__making_connection_source, port)
if self.__allow_same_side_connections and node == self.__making_connection_source_node:
is_reversed_source = False
is_reversed_target = True
else:
is_reversed_source = False
is_reversed_target = False
# Replace connection with the sticky one
if self.__can_connect:
# Show sticky connection and hide the one that follows the mouse
self.__user_drag_connection_frame.visible = False
self.__user_drag_connection_accepted_frame.visible = True
with self.__user_drag_connection_accepted_frame:
# Temporary connection line stuck to the port
# Objects passed to the delegate
source_connection_description = GraphConnectionDescription(
self.__making_connection_source_node,
self.__making_connection_source,
self.__making_connection_source_widget,
0,
is_reversed_source,
)
target_connection_description = GraphConnectionDescription(
self.__making_connection_target_node,
self.__making_connection_target,
widget,
0,
is_reversed_target,
)
self._delegate.connection(self._model, source_connection_description, target_connection_description)
elif self.__making_connection_target == port:
# Check that mouse left the currently hovered port because
# sometimes __on_port_hovered gets called like this:
# __on_port_hovered("A", 1)
# __on_port_hovered("B", 1)
# __on_port_hovered("A", 0)
self.__making_connection_target_node = None
self.__making_connection_target = None
# Hide sticky connection and show the one that follows the mouse
self.__user_drag_connection_frame.visible = True
self.__user_drag_connection_accepted_frame.visible = False
def __disconnect_inputs(self, port):
"""Remove connections from port"""
self._model[port].inputs = []
def __canvas_space(self, x: float, y: float):
"""Convert mouse to canvas space"""
if self.__raster_nodes:
return self.__root_frame.screen_to_canvas_x(x), self.__root_frame.screen_to_canvas_y(y)
offset_x = x - self.__root_frame.screen_position_x * self.zoom
offset_y = y - self.__root_frame.screen_position_y * self.zoom
offset_x = (offset_x - self.pan_x) / self.zoom
offset_y = (offset_y - self.pan_y) / self.zoom
return offset_x, offset_y
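# In other words, the non-raster branch above applies the inverse of the view
# transform: canvas = (screen - frame_screen_position * zoom - pan) / zoom,
# which undoes screen = canvas * zoom + pan + frame_screen_position * zoom.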
def __rectangle_selection_begin(self, x: float, y: float, button: int, modifier: int):
"""Mouse pressed callback"""
if not self._model or button != 0:
return
if (
not modifier & carb.input.KEYBOARD_MODIFIER_FLAG_SHIFT
and not modifier & carb.input.KEYBOARD_MODIFIER_FLAG_CONTROL
):
self.__need_clear_selection = True
asyncio.ensure_future(self._clear_selection_next_frame_async(self._model))
# If there is a modifier, but the modifier is not shift or control, we skip drawing the selection
# so that users are free to use other modifiers for other actions
if modifier:
return
self.__rectangle_selection_start_position = self.__canvas_space(x, y)
# Mouse position in widget space
x_w = x - self.__selection_layer.screen_position_x
y_w = y - self.__selection_layer.screen_position_y
self.__selection_placer_start.offset_x = x_w
self.__selection_placer_start.offset_y = y_w
self.__selection_placer_end.offset_x = x_w
self.__selection_placer_end.offset_y = y_w
def __rectangle_selection_end(self, x: float, y: float, button: int, modifier: int):
"""Mouse released callback"""
if self.__rectangle_selection_start_position is None:
return
# Clean up selection caches
self.__selection_placer_start.offset_x = 0
self.__selection_placer_start.offset_y = 0
self.__selection_placer_end.offset_x = 0
self.__selection_placer_end.offset_y = 0
self.__selection_layer.visible = False
self.__rectangle_selection_start_position = None
self.__positions_cache = None
self.__selection_cache = []
def __rectangle_selection_moved(self, x: float, y: float, modifier: int, pressed: bool):
"""Mouse moved callback"""
if self.__rectangle_selection_start_position is None:
# No rectangle selection
return
self._on_rectangle_select(
self._model, self.__rectangle_selection_start_position, self.__canvas_space(x, y), modifier
)
# Mouse position in widget space
x_w = x - self.__selection_layer.screen_position_x
y_w = y - self.__selection_layer.screen_position_y
self.__selection_layer.visible = True
self.__selection_placer_end.offset_x = x_w
self.__selection_placer_end.offset_y = y_w
def _on_new_connection(self):
"""Called when the user finished creating connection. This method sends the connection to the model."""
if (
not self.__making_connection_source
or not self.__making_connection_target
or not self.__making_connection_target_node
):
self.__making_connection_source = None
self.__making_connection_target_node = None
self.__making_connection_target = None
if (
self.__making_connection_source_node
or self.__making_connection_source
or self.__making_connection_source_widget
):
# We are here because the user cancelled the connection (most
# likely RMB is pressed). We need to clear the current connection.
self.__on_start_connection(None, None, None, None)
return
if self.__can_connect:
# Set the connection in the model.
self._model[self.__making_connection_target].inputs = [self.__making_connection_source]
self.__on_start_connection(None, None, None, None)
def _on_port_context_menu(self, node, port):
"""Open context menu for specific port"""
self.__port_context_menu = ui.Menu("Port Context Menu", visible=False)
with self.__port_context_menu:
if self._model[port].inputs:
self.__port_context_menu.visible = True
ui.MenuItem("Disconnect", triggered_fn=lambda p=port: self.__disconnect_inputs(p))
self.__port_context_menu.show()
def _on_set_zoom_key_shortcut(self, mouse_button, key):
"""allow user to set the key shortcut for the graphView zoom"""
self.__root_frame.set_zoom_key_shortcut(mouse_button, key)
def destroy(self):
"""
Called by extension before destroying this object. It doesn't happen automatically.
Without this hot reloading doesn't work.
"""
self.set_model(None)
self.__root_frame = None
self._delegate = None
# Destroy each node
for _, node_widget in self._node_widgets.items():
node_widget.destroy()
for _, connection_widget in self._connection_widgets.items():
connection_widget.destroy()
self._node_widgets = {}
self._connection_widgets = {}
self._node_placers = {}
self.__port_context_menu = None
self.__connections_stack = None
self.__port_to_frames = {}
self.__user_drag_connection_frame = None
self.__user_drag_connection_accepted_frame = None
self.__making_connection_source_node = None
self.__making_connection_source = None
self.__making_connection_source_widget = None
# Destroy the graph index
self._graph_node_index = GraphNodeIndex(None, self.__port_grouping)
@property
def zoom(self):
"""The zoom level of the scene"""
return self.__root_frame.zoom
@zoom.setter
def zoom(self, value):
"""The zoom level of the scene"""
self.__root_frame.zoom = value
@property
def zoom_min(self):
"""The minminum zoom level of the scene"""
return self.__root_frame.zoom_min
@zoom_min.setter
def zoom_min(self, value):
"""The minminum zoom level of the scene"""
self.__root_frame.zoom_min = value
@property
def zoom_max(self):
"""The maximum zoom level of the scene"""
return self.__root_frame.zoom_max
@zoom_max.setter
def zoom_max(self, value):
"""The maximum zoom level of the scene"""
self.__root_frame.zoom_max = value
@property
def pan_x(self):
"""The horizontal offset of the scene"""
return self.__root_frame.pan_x
@pan_x.setter
def pan_x(self, value):
"""The horizontal offset of the scene"""
self.__root_frame.pan_x = value
@property
def pan_y(self):
"""The vertical offset of the scene"""
return self.__root_frame.pan_y
@pan_y.setter
def pan_y(self, value):
"""The vertical offset of the scene"""
self.__root_frame.pan_y = value
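# A minimal, hypothetical sketch of wiring the widget together. `MyGraphModel`
# and `MyDelegate` stand in for user subclasses of GraphModel and
# AbstractGraphNodeDelegate; this block is illustrative and not part of the module.
#
#     window = ui.Window("Graph", width=800, height=600)
#     with window.frame:
#         view = GraphView(
#             model=MyGraphModel(),
#             delegate=MyDelegate(),
#             rectangle_selection=True,
#         )
#     view.layout_all()
#     view.focus_on_nodes()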
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNode"]
from .abstract_graph_node_delegate import GraphNodeDescription
from .abstract_graph_node_delegate import GraphNodeLayout
from .abstract_graph_node_delegate import GraphPortDescription
from .graph_model import GraphModel
import omni.ui as ui
class GraphNode:
"""
Represents the Widget for the single node. Uses the model and the
delegate to fill up its layout.
"""
def __init__(self, model: GraphModel, item, has_input_connection, has_output_connection, ports: list, delegate):
"""
Save the model, item and delegate to reuse when drawing the widget
"""
# Port to the tuple of two widgets that represent the port
self._port_to_frames = {}
# Port to the tuple of two widgets that are used to draw a line to show the connection is in progress
self._port_to_user_drag = {}
self._port_to_user_drag_placer = {}
# center port widget (i.e. name labels and string edit fields)
self._port_center_widgets = {}
self._header_input_widget = None
self._header_output_widget = None
self._has_input_connection = has_input_connection
self._has_output_connection = has_output_connection
self._header_widget = None
self._footer_widget = None
self._node_bg_widget = None
self.__root_frame = ui.Frame(width=0, height=0, skip_draw_when_clipped=True)
# TODO: WeakRef for the model?
self.__model = model
self.__delegate = delegate
self.__item = item
self.__ports = ports
# Build everything
self.__build_layout()
def __getattr__(self, attr):
"""Pretend it's self.__root_frame"""
return getattr(self.__root_frame, attr)
def rebuild_layout(self):
node_desc = GraphNodeDescription(self.__item, self._has_input_connection[0], self._has_input_connection[1])
def __build_node_bg():
self.__delegate.node_background(self.__model, node_desc)
def __build_footer():
self.__delegate.node_footer(self.__model, node_desc)
def __build_header():
self.__delegate.node_header(self.__model, node_desc)
if not self._header_widget.has_build_fn():
self._header_widget.set_build_fn(__build_header)
self._header_widget.rebuild()
if not self._node_bg_widget.has_build_fn():
self._node_bg_widget.set_build_fn(__build_node_bg)
self._node_bg_widget.rebuild()
if not self._footer_widget.has_build_fn():
self._footer_widget.set_build_fn(__build_footer)
self._footer_widget.rebuild()
def __build_layout(self):
"""
Build the Node layout with the delegate. See GraphNodeDelegate for info.
When it's called, it will replace all the sub-widgets.
"""
self._port_to_frames.clear()
self._port_to_user_drag.clear()
self._port_to_user_drag_placer.clear()
self._port_center_widgets.clear()
# Sorting ports for column layout. If the port has inputs only, it goes
# to input_ports. If the port has both inputs and outputs, it goes to
# mixed_ports.
mixed_ports = []
input_ports = []
output_ports = []
node_desc = GraphNodeDescription(self.__item, self._has_input_connection[0], self._has_input_connection[1])
node_layout = self.__delegate.get_node_layout(self.__model, node_desc)
for port_entry in self.__ports:
if node_layout == GraphNodeLayout.LIST or node_layout == GraphNodeLayout.HEAP:
mixed_ports.append(port_entry)
elif node_layout == GraphNodeLayout.COLUMNS:
port, port_input_flags, port_output_flags, level, position, parent_child_count = port_entry
inputs = self.__model[port].inputs
outputs = self.__model[port].outputs
if inputs is None and outputs is None:
input_ports.append(port_entry)
elif inputs is not None and outputs is None:
input_ports.append(port_entry)
elif inputs is None and outputs is not None:
output_ports.append(port_entry)
else:
mixed_ports.append(port_entry)
with self.__root_frame:
# See GraphNodeDelegate for the detailed information on the layout
with ui.ZStack(height=0):
self._node_bg_widget = ui.Frame()
with self._node_bg_widget:
self.__delegate.node_background(self.__model, node_desc)
if node_layout == GraphNodeLayout.HEAP:
stack = ui.ZStack(height=0)
else:
stack = ui.VStack(height=0)
with stack:
# Header
with ui.HStack():
self._header_input_widget = ui.Frame(width=0)
with self._header_input_widget:
port_widget = self.__delegate.node_header_input(self.__model, node_desc)
# The way to set the center of connection. If the
# port function doesn't return anything, the
# connection is centered to the frame. If it
# returns a widget, the connection is centered to
# the widget returned.
if port_widget:
self._header_input_widget = port_widget
self._header_widget = ui.Frame()
with self._header_widget:
self.__delegate.node_header(self.__model, node_desc)
self._header_output_widget = ui.Frame(width=0)
with self._header_output_widget:
port_widget = self.__delegate.node_header_output(self.__model, node_desc)
# Set the center of connection.
if port_widget:
self._header_output_widget = port_widget
# One port per line
for port, port_input_flags, port_output_flags, port_level, pos, siblings in mixed_ports:
self.__build_port(
node_desc,
port,
port_input_flags,
port_output_flags,
port_level,
pos,
siblings,
node_layout == GraphNodeLayout.HEAP,
)
# Two ports per line
with ui.HStack():
# width=0 lets each column only take up the amount of space it needs
with ui.VStack(height=0, width=0):
for port, port_input_flags, port_output_flags, port_level, pos, siblings in input_ports:
self.__build_port(
node_desc, port, port_input_flags, None, port_level, pos, siblings, False
)
ui.Spacer() # spreads the 2 sides apart from each other
with ui.VStack(height=0, width=0):
for port, port_input_flags, port_output_flags, port_level, pos, siblings in output_ports:
self.__build_port(
node_desc, port, None, port_output_flags, port_level, pos, siblings, False
)
# Footer
self._footer_widget = ui.Frame()
with self._footer_widget:
self.__delegate.node_footer(self.__model, node_desc)
def __build_port(
self,
node_desc: GraphNodeDescription,
port,
port_input_flags,
port_output_flags,
level: int,
relative_position: int,
parent_child_count: int,
is_heap: bool,
):
"""The layout for one single port line."""
# port_input_flags is a tuple of 2 bools. The first item is the flag
# that is True if the port is a source in connection to the input (left
# side). The second item is the flag that is True when this port is a
# target in the connection to the input. port_output_flags is the same
# for output (right side). If port_input_flags or port_output_flags is
# None, it doesn't produce left or right part.
if is_heap:
width = ui.Fraction(1)
stack = ui.ZStack()
else:
width = ui.Pixel(0)
stack = ui.HStack()
with stack:
if port_input_flags is None:
input_port_widget = None
else:
input_port_widget = ui.Frame(width=width)
with input_port_widget:
port_desc = GraphPortDescription(
port, level, relative_position, parent_child_count, port_input_flags[0], port_input_flags[1]
)
port_widget = self.__delegate.port_input(self.__model, node_desc, port_desc)
# Set the center of connection.
if port_widget:
input_port_widget = port_widget
port_desc = GraphPortDescription(port, level, relative_position, parent_child_count)
port_center_widget = self.__delegate.port(self.__model, node_desc, port_desc)
self._port_center_widgets[port_desc.port] = port_center_widget
def fit_to_mouse(placer: ui.Placer, x: float, y: float):
"""
If the port is big we need to make the placer small enough
and move it to the mouse cursor to make the connection follow
the mouse
"""
# The new size
SIZE = 3.0
placer.width = ui.Pixel(SIZE)
placer.height = ui.Pixel(SIZE)
# Move to the mouse position
placer.offset_x = x - placer.screen_position_x - SIZE / 2
placer.offset_y = y - placer.screen_position_y - SIZE / 2
def reset_offset(placer: ui.Placer):
# Put it back to the port
placer.offset_x = 0
placer.offset_y = 0
# Restore the size
placer.width = ui.Fraction(1)
placer.height = ui.Fraction(1)
def set_draggable(placer: ui.Placer, frame: ui.Frame, draggable: bool):
placer.draggable = draggable
if port_output_flags is None:
output_port_widget = None
else:
with ui.ZStack(width=width):
# The placer and the widget that follows the
# mouse cursor when the user creates a
# connection
output_userconnection_placer = ui.Placer(stable_size=True, draggable=True)
output_userconnection_placer.set_mouse_pressed_fn(
lambda x, y, *_, p=output_userconnection_placer: fit_to_mose(p, x, y)
)
output_userconnection_placer.set_mouse_released_fn(
lambda *_, p=output_userconnection_placer: reset_offset(p)
)
if port_center_widget:
port_center_widget.set_mouse_pressed_fn(
lambda *_, p=output_userconnection_placer, w=port_center_widget: set_draggable(p, w, False)
)
port_center_widget.set_mouse_released_fn(
lambda *_, p=output_userconnection_placer, w=port_center_widget: set_draggable(p, w, True)
)
self._port_to_user_drag_placer[port] = output_userconnection_placer
with output_userconnection_placer:
# This widget follows the cursor when the
# user creates a connection. Line is
# stuck to this widget.
rect = ui.Spacer()
self._port_to_user_drag[port] = (None, rect)
# The port widget
output_port_widget = ui.Frame()
with output_port_widget:
port_desc = GraphPortDescription(
port,
level,
relative_position,
parent_child_count,
port_output_flags[0],
port_output_flags[1],
)
port_widget = self.__delegate.port_output(self.__model, node_desc, port_desc)
# Set the center of connection.
if port_widget:
output_port_widget = port_widget
# Save input and output frame for each port
self._port_to_frames[port] = (input_port_widget, output_port_widget)
@property
def ports(self):
return self._port_to_frames
@property
def port_center_widgets(self):
"""
Return the dict for port center widgets (which contains the port name label and edit fields).
Dictionary key is the port, value is ui.Widget or None if the delegate does not return a widget
"""
return self._port_center_widgets
@property
def user_drag(self):
"""
Return the dict with the port as the key and widget that follows the
mouse cursor when the user creates a connection.
"""
return self._port_to_user_drag
@property
def user_drag_placer(self):
"""
Return the dict with the port as the key and placer that follows the
mouse cursor when the user creates a connection.
"""
return self._port_to_user_drag_placer
@property
def header_input_frame(self):
"""
Return the Frame that holds the inputs on the left side of the header bar.
"""
return self._header_input_widget
@property
def header_output_frame(self):
"""
Return the Frame that holds the outputs on the right side of the header bar.
"""
return self._header_output_widget
@property
def header_frame(self):
"""
Return the Frame that holds the entire header bar.
"""
return self._header_widget
def destroy(self):
"""
Called by extension before destroying this object. It doesn't happen automatically.
Without this hot reloading doesn't work.
"""
self._port_to_frames = None
self._port_to_user_drag = None
self._port_to_user_drag_placer = None
self._port_center_widgets = None
if self.__root_frame:
self.__root_frame.destroy()
self.__root_frame = None
self.__model = None
self.__delegate = None
self._header_widget = None
self._footer_widget = None
self._node_bg_widget = None
@property
def skip_draw_clipped(self):
"""Get skip_draw_when_clipped property of the root frame"""
return self.__root_frame.skip_draw_when_clipped
@skip_draw_clipped.setter
def skip_draw_clipped(self, value):
"""Set skip_draw_when_clipped property of the root frame"""
self.__root_frame.skip_draw_when_clipped = value
@property
def selected(self):
"""Return the widget selected style state"""
if self.__root_frame:
return self.__root_frame.selected
@selected.setter
def selected(self, value):
"""Set the widget selected style state"""
if self.__root_frame:
self.__root_frame.selected = value
@property
def visible(self) -> bool:
return self.__root_frame.visible
@visible.setter
def visible(self, value: bool):
self.__root_frame.visible = value
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/abstract_batch_position_getter.py | # Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["AbstractBatchPositionGetter"]
from typing import Any, List, Optional
import weakref
import abc
from .graph_model import GraphModel
class AbstractBatchPositionGetter(abc.ABC):
"""Helper to get the nodes to move at the same time"""
def __init__(self, model: GraphModel):
self.model = model
@property
def model(self):
return self.__model()
@model.setter
def model(self, value):
self.__model = weakref.ref(value)
@abc.abstractmethod
def __call__(self, drive_item: Any) -> Optional[List[Any]]:
pass
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/backdrop_getter.py | # Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["BackdropGetter"]
from functools import lru_cache
from typing import Any, Callable, List, Optional
import carb.input
from .abstract_batch_position_getter import AbstractBatchPositionGetter
from .graph_model import GraphModel
@lru_cache()
def __get_input() -> carb.input.IInput:
return carb.input.acquire_input_interface()
def _is_alt_down() -> bool:
input = __get_input()
return (
input.get_keyboard_value(None, carb.input.KeyboardInput.LEFT_ALT)
+ input.get_keyboard_value(None, carb.input.KeyboardInput.RIGHT_ALT)
> 0
)
class BackdropGetter(AbstractBatchPositionGetter):
"""Helper to get the nodes that placed in the given backdrop"""
def __init__(self, model: GraphModel, is_backdrop_fn: Callable[[Any], bool], graph_widget = None):
super().__init__(model)
# Callback to determine if the node is a backdrop
self.__is_backdrop_fn = is_backdrop_fn
self.__graph_widget = graph_widget
def __call__(self, drive_item: Any) -> Optional[List[Any]]:
# Get siblings and return
MIN_OFFSET = 25 # Same value used in _create_backdrop_for_selected_nodes()
# If user presses ALT while moving the backdrop, don't carry nodes with it.
if _is_alt_down():
return []
model = self.model
if model and self.__is_backdrop_fn(drive_item):
position = model[drive_item].position
size = model[drive_item].size
if position and size:
result = []
for node in model.nodes:
node_position = model[node].position
if not node_position:
continue
# Add the node size offset to the extent
if self.__graph_widget:
widget = self.__graph_widget._graph_view._node_widgets[node]
s = (widget.computed_width, widget.computed_height)
else:
s = (0, 0) # For backwards compatibility
# Check if the node is inside the backdrop
if (
node_position[0] < position[0]
or node_position[1] < position[1]
or node_position[0] + s[0] > position[0] + size[0] + MIN_OFFSET
or node_position[1] + s[1] > position[1] + size[1] + MIN_OFFSET
):
continue
result.append(node)
return result
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_model_batch_position_helper.py | # Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphModelBatchPositionHelper"]
from .abstract_batch_position_getter import AbstractBatchPositionGetter
from typing import Any, Callable, Dict, List, Tuple, Union
import weakref
class GraphModelBatchPositionHelper:
"""
    The model that manages the batch position of items. It can be used to manage
    the position of multi-selections, backdrops and upstreams.
"""
def __init__(self):
super().__init__()
        # Indicates that batch_set_position has been called and has not
        # finished yet. We need it because batch_set_position can be called
        # recursively
self.__position_set_in_process = False
        # The same for batch_position_begin_edit
self.__position_begin_edit_in_process = False
        # The same for batch_position_end_edit
self.__position_end_edit_in_process = False
# The node that drives position of many nodes. It's the node the user
# actually drags.
self.__current_drive_node = None
# Other nodes in the selection
self.__driven_nodes: List = []
self.__position_difference: Dict[Any, Union[List, Tuple]] = {}
# Functions to get the nodes that are moving
self.__get_moving_items_fns = {}
# Proxy is the isolation model. We need it to set the position of
# input/output nodes.
self.__batch_proxy = None
self.batch_proxy = self
@property
def batch_proxy(self):
"""
Proxy is the isolation model. We need it because it converts calls to
input/output nodes.
"""
return self.__batch_proxy()
@batch_proxy.setter
def batch_proxy(self, value):
# Weak ref so we don't have circular references
self.__batch_proxy = weakref.ref(value)
for _, fn in self.__get_moving_items_fns.items():
if isinstance(fn, AbstractBatchPositionGetter):
fn.model = value
def add_get_moving_items_fn(self, fn: Callable[[Any], List[Any]]):
"""
Add the function that is called in batch_position_begin_edit to
determine which items to move
"""
def get_new_id(batch_position_fns: Dict[int, Callable]):
if not batch_position_fns:
return 0
return max(batch_position_fns.keys()) + 1
class Subscription:
"""The object that will remove the callback when destroyed"""
def __init__(self, parent, id):
self.__parent = weakref.ref(parent)
self.__id = id
def __del__(self):
parent = self.__parent()
if parent:
                    parent._remove_get_moving_items_fn(self.__id)
id = get_new_id(self.__get_moving_items_fns)
self.__get_moving_items_fns[id] = fn
def _remove_get_moving_items_fn(self, id: int):
self.__get_moving_items_fns.pop(id, None)
def batch_set_position(self, position: List[float], item: Any = None):
"""
Should be called in the position setter to make sure the position
setter is called for all the selected nodes
"""
if self.__position_set_in_process:
return
if self.__current_drive_node != item:
# Somehow it's possible that batch_set_position is called for the
# wrong node. We need to filter such cases. It happens rarely, but
# when it happens, it leads to a crash because the node applies the
# diff to itself infinitely.
# TODO: investigate OM-58441 more
return
self.__position_set_in_process = True
try:
current_position = position
if not current_position:
# Can't do anything without knowing position
return
for node in self.__driven_nodes:
diff = self.__position_difference.get(node, None)
if diff:
self.batch_proxy[node].position = (
diff[0] + current_position[0],
diff[1] + current_position[1],
)
# TODO: except
finally:
self.__position_set_in_process = False
def batch_position_begin_edit(self, item: Any):
"""
Should be called from position_begin_edit to make sure
position_begin_edit is called for all the selected nodes
"""
if self.__position_begin_edit_in_process:
return
self.__position_begin_edit_in_process = True
try:
# The current node is the drive node.
self.__current_drive_node = item
# Get driven nodes from the callbacks
driven_nodes = []
for _, fn in self.__get_moving_items_fns.items():
driven_nodes += fn(self.__current_drive_node) or []
# Just in case filter out drive node
self.__driven_nodes = [i for i in driven_nodes if i != self.__current_drive_node]
current_position = self.batch_proxy[self.__current_drive_node].position
if not current_position:
# Can't do anything without knowing position
return
for node in self.__driven_nodes:
node_position = self.batch_proxy[node].position
if node_position:
self.__position_difference[node] = (
node_position[0] - current_position[0],
node_position[1] - current_position[1],
)
self.batch_proxy.position_begin_edit(node)
# TODO: except
finally:
self.__position_begin_edit_in_process = False
def batch_position_end_edit(self, item: Any):
"""
        Should be called from position_end_edit to make sure
        position_end_edit is called for all the selected nodes
"""
if self.__position_end_edit_in_process:
return
self.__position_end_edit_in_process = True
try:
for node in self.__driven_nodes:
self.batch_proxy.position_end_edit(node)
# Clean up
self.__driven_nodes = []
self.__position_difference = {}
# TODO: except
finally:
self.__position_end_edit_in_process = False
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/graph_node_delegate_router.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNodeDelegateRoutingError", "GraphNodeDelegateRouter", "RoutingCondition"]
from .abstract_graph_node_delegate import AbstractGraphNodeDelegate
from .abstract_graph_node_delegate import GraphConnectionDescription
from .abstract_graph_node_delegate import GraphNodeDescription
from .abstract_graph_node_delegate import GraphPortDescription
from collections import namedtuple
RoutingCondition = namedtuple("RoutingCondition", ["type", "expression", "delegate"])
class GraphNodeDelegateRoutingError(Exception):
"""
This exception is used when it's not possible to do routing. Can only
happen if there is no default route in the table.
"""
pass
class GraphNodeDelegateRouter(AbstractGraphNodeDelegate):
"""
    The delegate that keeps multiple delegates and picks one of them depending
    on the routing conditions.
    It's possible to add routing conditions with `add_route`, and a condition
    can be a type or a lambda expression.
    The most recently added route takes precedence over earlier ones. A route
    added without conditions is the default.
    We use type routing to give a specific kind of node its own look, and we
    can use a lambda expression to give a particular node state its own look
    (e.g. expanded/collapsed).
    It's possible to use type and lambda routing at the same time.
    Usage examples:
        delegate.add_route(TextureDelegate(), type="Texture2d")
        delegate.add_route(CollapsedDelegate(), expression=is_collapsed)
"""
def __init__(self):
super().__init__()
self.__routing_table = []
def add_route(self, delegate: AbstractGraphNodeDelegate, type=None, expression=None):
"""Add delegate to the routing tablle"""
if not delegate:
return
if type is None and expression is None:
# It's a default delegate, we don't need what we had before
self.__routing_table.clear()
self.__routing_table.append(RoutingCondition(type, expression, delegate))
def __route(self, model, node):
"""Return the delegate for the given node"""
for c in reversed(self.__routing_table):
if (c.type is None or model[node].type == c.type) and (c.expression is None or c.expression(model, node)):
return c.delegate
raise GraphNodeDelegateRoutingError("Can't route.")
def get_node_layout(self, model, node_desc: GraphNodeDescription):
"""Called to determine the node layout"""
return self.__route(model, node_desc.node).get_node_layout(model, node_desc)
def node_background(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the node background"""
return self.__route(model, node_desc.node).node_background(model, node_desc)
def node_header_input(self, model, node_desc: GraphNodeDescription):
"""Called to create the left part of the header that will be used as input when the node is collapsed"""
return self.__route(model, node_desc.node).node_header_input(model, node_desc)
def node_header_output(self, model, node_desc: GraphNodeDescription):
"""Called to create the right part of the header that will be used as output when the node is collapsed"""
return self.__route(model, node_desc.node).node_header_output(model, node_desc)
def node_header(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the top of the node"""
return self.__route(model, node_desc.node).node_header(model, node_desc)
def node_footer(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the bottom of the node"""
return self.__route(model, node_desc.node).node_footer(model, node_desc)
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the left part of the port that will be used as input"""
return self.__route(model, node_desc.node).port_input(model, node_desc, port_desc)
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the right part of the port that will be used as output"""
return self.__route(model, node_desc.node).port_output(model, node_desc, port_desc)
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the middle part of the port"""
return self.__route(model, node_desc.node).port(model, node_desc, port_desc)
def connection(self, model, source: GraphConnectionDescription, target: GraphConnectionDescription):
"""Called to create the connection between ports"""
return self.__route(model, source.node).connection(model, source, target)
def destroy(self):
while self.__routing_table:
routing_condition = self.__routing_table.pop()
routing_condition.delegate.destroy()
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/selection_getter.py | # Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["SelectionGetter"]
from .abstract_batch_position_getter import AbstractBatchPositionGetter
from typing import Any
import weakref
from .graph_model import GraphModel
class SelectionGetter(AbstractBatchPositionGetter):
"""
Helper to get the selection of the given model. It's supposed to be
    used with GraphModelBatchPositionHelper.
"""
def __init__(self, model: GraphModel):
super().__init__(model)
def __call__(self, drive_item: Any):
model = self.model
if model:
return model.selection
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/abstract_graph_node_delegate.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
__all__ = ["GraphNodeLayout", "GraphNodeDescription", "GraphPortDescription", "GraphConnectionDescription", "AbstractGraphNodeDelegate"]
from enum import Enum, auto
class GraphNodeLayout(Enum):
LIST = auto()
COLUMNS = auto()
HEAP = auto()
class GraphNodeDescription:
"""The object that holds the main attributes of the node"""
def __init__(self, node, connected_source=None, connected_target=None):
self.node = node
self.connected_source = connected_source
self.connected_target = connected_target
class GraphPortDescription:
"""The object that holds the main attributes of the port"""
def __init__(
self, port, level, relative_position, parent_child_count, connected_source=None, connected_target=None
):
self.port = port
self.level = level
self.relative_position = relative_position
self.parent_child_count = parent_child_count
self.connected_source = connected_source
self.connected_target = connected_target
class GraphConnectionDescription:
"""The object that holds the main attributes of the connection"""
def __init__(self, node, port, widget, level, is_tangent_reversed=False):
self.node = node
self.port = port
self.widget = widget
self.level = level
self.is_tangent_reversed = is_tangent_reversed
class AbstractGraphNodeDelegate:
"""
The delegate generates widgets that together form the node using the
model. The following figure shows the LIST layout of the node. For every
zone, there is a method that is called to build this zone.
::
+-------------------------+
| node_background |
| +---+-------------+---+ |
| |[A]| node_header |[B]| |
| +---+-------------+---+ |
| |[C]| port |[D]| |
| +---+-------------+---+ |
| |[D]| port |[D]| |
| +---+-------------+---+ |
| |[E]| node_footer |[F]| |
| +---+-------------+---+ |
+-------------------------+
    The COLUMNS layout allows input and output ports to be placed on the same line:
::
+-------------------------+
| node_background |
| +---+-------------+---+ |
| |[A]| node_header |[B]| |
| +---+------+------+---+ |
| |[C]| port | port |[D]| |
| | | |------+---| |
| |---+------| port |[D]| |
| |[D]| port | | | |
| +---+------+------+---+ |
| |[E]| node_footer |[F]| |
| +---+-------------+---+ |
+-------------------------+
::
[A] node_header_input
[B] node_header_output
[C] port_input
[D] port_output
[E] node_footer_input (TODO)
[F] node_footer_output (TODO)
"""
def destroy(self):
pass
def get_node_layout(self, model, node_desc: GraphNodeDescription):
"""Called to determine the node layout"""
return GraphNodeLayout.LIST
def node_background(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the node background"""
pass
def node_header_input(self, model, node_desc: GraphNodeDescription):
"""Called to create the left part of the header that will be used as input when the node is collapsed"""
pass
def node_header_output(self, model, node_desc: GraphNodeDescription):
"""Called to create the right part of the header that will be used as output when the node is collapsed"""
pass
def node_header(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the top of the node"""
pass
def node_footer(self, model, node_desc: GraphNodeDescription):
"""Called to create widgets of the bottom of the node"""
pass
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the left part of the port that will be used as input"""
pass
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the right part of the port that will be used as output"""
pass
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
"""Called to create the middle part of the port"""
pass
def connection(self, model, source_desc: GraphConnectionDescription, target_desc: GraphConnectionDescription):
"""Called to create the connection between ports"""
pass
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/tests/test_delegate.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
from omni.kit.widget.graph.graph_node_delegate import GraphNodeDelegate
from omni.kit.widget.graph.graph_view import GraphView
from omni.ui.tests.test_base import OmniUiTest
from .test_graph import Model
import omni.kit.app
class TestDelegate(OmniUiTest):
async def test_general(self):
"""Testing general properties of the GraphView default delegate"""
window = await self.create_test_window(768, 512)
style = GraphNodeDelegate.get_style()
# Don't use the image because the image scaling looks different in editor and kit-mini
style["Graph.Node.Footer.Image"]["image_url"] = ""
style["Graph.Node.Header.Collapse"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Minimized"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Closed"]["image_url"] = ""
with window.frame:
delegate = GraphNodeDelegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, pan_x=600, pan_y=170)
        # Wait several frames to draw and auto-layout.
for i in range(3):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
async def test_zoom(self):
"""Testing zooming of the GraphView default delegate"""
window = await self.create_test_window(384, 256)
style = GraphNodeDelegate.get_style()
# Don't use the image because the image scaling looks different in editor and kit-mini
style["Graph.Node.Footer.Image"]["image_url"] = ""
style["Graph.Node.Header.Collapse"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Minimized"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Closed"]["image_url"] = ""
with window.frame:
delegate = GraphNodeDelegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, zoom=0.5, pan_x=300, pan_y=85)
        # Wait several frames to draw and auto-layout.
for i in range(3):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
async def test_focus(self):
"""Testing focusing of the GraphView default delegate"""
window = await self.create_test_window(768, 512)
style = GraphNodeDelegate.get_style()
# Don't use the image because the image scaling looks different in editor and kit-mini
style["Graph.Node.Footer.Image"]["image_url"] = ""
style["Graph.Node.Header.Collapse"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Minimized"]["image_url"] = ""
style["Graph.Node.Header.Collapse::Closed"]["image_url"] = ""
with window.frame:
delegate = GraphNodeDelegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style)
        # Wait several frames to draw and auto-layout.
for i in range(3):
await omni.kit.app.get_app().next_update_async()
graph_view.focus_on_nodes()
for i in range(2):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/tests/test_graph_with_subports.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
__all__ = ["TestGraphWithSubports"]
import omni.kit.test
from omni.kit.widget.graph import GraphNodeDescription
from omni.kit.widget.graph import GraphPortDescription
from omni.kit.widget.graph.abstract_graph_node_delegate import AbstractGraphNodeDelegate
from omni.kit.widget.graph.graph_model import GraphModel
from omni.kit.widget.graph.graph_view import GraphView
from omni.ui.tests.test_base import OmniUiTest
import omni.kit.app
import omni.ui as ui
from functools import partial
inputs = ["First.color1", "First.color1.tex", "First.color1.shader", "First.color2", "Third.size", "Fourth.color1", "Fourth.color1.tex", "Fourth.color1.shader",]
outputs = ["First.result", "Second.out", "Third.outcome", "Third.outcome.out1", "Third.outcome.out2"]
connections = {"First.color1.tex": ["Second.out"], "First.color2": ["Third.outcome.out2"], "Fourth.color1.tex": ["Third.outcome.out1"]}
style = {
"Graph": {"background_color": 0xFF000000},
"Graph.Connection": {"color": 0xFFDBA656, "background_color": 0xFFDBA656, "border_width": 2.0},
}
BORDER_WIDTH = 16
class Model(GraphModel):
def __init__(self):
super().__init__()
@property
def nodes(self, item=None):
if item:
return
return sorted(set([n.split(".")[0] for n in inputs + outputs]))
@property
def name(self, item=None):
return item.split(".")[-1]
@property
def ports(self, item=None):
item_len = len(item.split("."))
if item_len == 1 or item_len == 2:
result = [n for n in inputs + outputs if n.startswith(item) and len(n.split(".")) == item_len + 1]
return result
return []
@property
def expansion_state(self, item=None):
"""the expansion state of a node or a port"""
item_len = len(item.split("."))
if item_len == 2:
return GraphModel.ExpansionState.CLOSED
return GraphModel.ExpansionState.OPEN
@property
def inputs(self, item):
if item in inputs:
return connections.get(item, [])
return None
@property
def outputs(self, item):
if item in outputs:
return []
return None
class ExpandModel(Model):
def __init__(self):
super().__init__()
@property
def expansion_state(self, item=None):
"""the expansion state of a node or a port"""
return GraphModel.ExpansionState.OPEN
class Delegate(AbstractGraphNodeDelegate):
def node_background(self, model, node_desc: GraphNodeDescription):
ui.Rectangle(style={"background_color": 0xFF015C3A})
def node_header(self, model, node_desc: GraphNodeDescription):
ui.Label(model[node_desc.node].name, style={"font_size": 12, "margin": 6, "color": 0xFFDBA656})
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
        # Output-only ports don't get a visible input circle
if model[port_desc.port].inputs is None:
color = 0x0
elif port_desc.connected_target:
color = 0xFFDBA6FF
else:
color = 0xFFDBA656
ui.Circle(width=10, style={"background_color": color})
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
        # Input-only ports don't get a visible output circle
if model[port_desc.port].outputs is None:
color = 0x0
elif port_desc.connected_source:
color = 0xFFDBA6FF
else:
color = 0xFFDBA656
ui.Circle(width=10, style={"background_color": color})
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
def set_expansion_state(model, port, state: GraphModel.ExpansionState, *args):
model[port].expansion_state = state
port = port_desc.port
level = port_desc.level
sub_ports = model[port].ports
is_group = len(sub_ports) > 0
def draw_collapse_button():
if is_group:
state = model[port].expansion_state
if state == GraphModel.ExpansionState.CLOSED:
button_name = "+"
next_state = GraphModel.ExpansionState.OPEN
else:
button_name = "-"
next_state = GraphModel.ExpansionState.CLOSED
ui.Label(button_name,
width=10,
style={"font_size": 10, "margin": 2},
mouse_pressed_fn=partial(set_expansion_state, model, port, next_state))
else:
ui.Spacer(width=10)
def draw_branch():
if level > 0:
ui.Line(width=8, style={"color": 0xFFFFFFFF})
alignment = ui.Alignment.LEFT if model[port].inputs is not None else ui.Alignment.RIGHT
with ui.HStack():
if alignment == ui.Alignment.RIGHT:
ui.Spacer(width=40)
else:
draw_collapse_button()
draw_branch()
ui.Label(model[port].name, style={"font_size": 10, "margin": 2}, alignment=alignment)
if alignment == ui.Alignment.RIGHT:
draw_branch()
draw_collapse_button()
else:
ui.Spacer(width=40)
def connection(self, model, source, target):
"""Called to create the connection between ports"""
# If the connection is reversed, we need to mirror tangents
ui.FreeBezierCurve(target.widget, source.widget, style={"color": 0xFFFFFFFF})
class TestGraphWithSubports(OmniUiTest):
"""Testing GraphView"""
async def test_general(self):
"""Testing general properties of GraphView with port_grouping is True"""
window = await self.create_test_window(512, 256)
with window.frame:
delegate = Delegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, pan_x=400, pan_y=100, port_grouping=True)
        # Wait several frames to draw and auto-layout.
for i in range(3):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
async def test_expansion(self):
"""Testing subport expansion"""
window = await self.create_test_window(512, 256)
with window.frame:
delegate = Delegate()
model = ExpandModel()
graph_view = GraphView(model=model, delegate=delegate, style=style, pan_x=400, pan_y=100, port_grouping=True)
        # Wait several frames to draw and auto-layout.
for i in range(3):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test() |
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/tests/test_graph.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
__all__ = ["TestGraph"]
import omni.kit.test
from omni.kit.widget.graph import GraphNodeDescription
from omni.kit.widget.graph import GraphNodeLayout
from omni.kit.widget.graph import GraphPortDescription
from omni.kit.widget.graph.abstract_graph_node_delegate import AbstractGraphNodeDelegate
from omni.kit.widget.graph.graph_model import GraphModel
from omni.kit.widget.graph.graph_view import GraphView
from omni.ui import color as cl
from omni.ui.tests.test_base import OmniUiTest
import omni.kit.app
import omni.ui as ui
inputs = ["First.color1", "First.color2", "Third.size"]
outputs = ["First.result", "Second.out", "Third.outcome"]
connections = {"First.color1": ["Second.out"], "First.color2": ["Third.outcome"]}
style = {
"Graph": {"background_color": 0xFF000000},
"Graph.Connection": {"color": 0xFFDBA656, "background_color": 0xFFDBA656, "border_width": 2.0},
}
BORDER_WIDTH = 16
class Model(GraphModel):
def __init__(self):
super().__init__()
self.__positions = {}
@property
def nodes(self, item=None):
if item:
return
return sorted(set([n.split(".")[0] for n in inputs + outputs]))
@property
def name(self, item=None):
return item.split(".")[-1]
@property
def ports(self, item=None):
if item in inputs:
return
if item in outputs:
return
return [n for n in inputs + outputs if n.startswith(item)]
@property
def inputs(self, item):
if item in inputs:
return connections.get(item, [])
@property
def outputs(self, item):
if item in outputs:
return []
@property
def position(self, item=None):
"""Returns the position of the node"""
return self.__positions.get(item, None)
@position.setter
def position(self, value, item=None):
"""The node position setter"""
self.__positions[item] = value
class Delegate(AbstractGraphNodeDelegate):
def node_background(self, model, node_desc: GraphNodeDescription):
ui.Rectangle(style={"background_color": 0xFF015C3A})
def node_header(self, model, node_desc: GraphNodeDescription):
ui.Label(model[node_desc.node].name, style={"font_size": 12, "margin": 6, "color": 0xFFDBA656})
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
color = 0xFFDBA656 if port_desc.connected_target else 0x0
ui.Circle(width=10, style={"background_color": color})
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
color = 0xFFDBA656 if port_desc.connected_source else 0x0
ui.Circle(width=10, style={"background_color": color})
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
ui.Label(model[port_desc.port].name, style={"font_size": 10, "margin": 2})
def connection(self, model, source, target):
"""Called to create the connection between ports"""
# If the connection is reversed, we need to mirror tangents
ui.FreeBezierCurve(target.widget, source.widget, style={"color": 0xFFFFFFFF})
class DelegateHeap(AbstractGraphNodeDelegate):
def get_node_layout(self, model, node_desc: GraphNodeDescription):
return GraphNodeLayout.HEAP
def port_input(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
return ui.Rectangle(style={"background_color": cl(1.0, 0.0, 0.0, 1.0)})
def port_output(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
return ui.Rectangle(style={"background_color": cl(0.0, 1.0, 1.0, 0.5)})
def port(self, model, node_desc: GraphNodeDescription, port_desc: GraphPortDescription):
with ui.HStack():
ui.Spacer(width=BORDER_WIDTH)
with ui.VStack():
ui.Spacer(height=BORDER_WIDTH)
frame = ui.Frame(separate_window=True)
with frame:
with ui.ZStack():
ui.Rectangle(width=32, height=32, style={"background_color": cl(0.75)})
ui.Spacer(height=BORDER_WIDTH)
ui.Spacer(width=BORDER_WIDTH)
return frame
def connection(self, model, source, target):
"""Called to create the connection between ports"""
ui.OffsetLine(
target.widget,
source.widget,
alignment=ui.Alignment.UNDEFINED,
begin_arrow_type=ui.ArrowType.ARROW,
bound_offset=20,
style_type_name_override="Graph.Connection",
style={"color": cl.white, "border_width": 1.0},
)
class TestGraph(OmniUiTest):
"""Testing GraphView"""
async def test_general(self):
"""Testing general properties of GraphView"""
window = await self.create_test_window(512, 256)
with window.frame:
delegate = Delegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, pan_x=400, pan_y=100)
        # Wait several frames to draw and auto-layout.
for i in range(10):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
async def test_zoom(self):
"""Testing zooming of GraphView"""
window = await self.create_test_window(256, 128)
with window.frame:
delegate = Delegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, zoom=0.5, pan_x=200, pan_y=50)
        # Wait several frames to draw and auto-layout.
for i in range(10):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
async def test_focus(self):
"""Testing focusing of GraphView"""
window = await self.create_test_window(512, 256)
with window.frame:
delegate = Delegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style)
# Wait several frames to draw and auto-layout.
for i in range(10):
await omni.kit.app.get_app().next_update_async()
graph_view.focus_on_nodes()
for i in range(10):
await omni.kit.app.get_app().next_update_async()
self.assertAlmostEqual(graph_view.zoom, 1.6606260538101196)
await self.finalize_test()
async def test_focus_with_zoom_limits(self):
"""Testing focusing of GraphView"""
window = ui.Window("test", width=512, height=256)
with window.frame:
delegate = Delegate()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, zoom_max=1.5)
# Wait several frames to draw and auto-layout.
for i in range(10):
await omni.kit.app.get_app().next_update_async()
graph_view.focus_on_nodes()
for i in range(10):
await omni.kit.app.get_app().next_update_async()
self.assertAlmostEqual(graph_view.zoom, 1.5, places=4)
async def test_heap(self):
"""Testing general properties of GraphView"""
window = await self.create_test_window(512, 256)
with window.frame:
delegate = DelegateHeap()
model = Model()
graph_view = GraphView(model=model, delegate=delegate, style=style, pan_x=400, pan_y=100)
        # Wait several frames to draw and auto-layout.
for i in range(10):
await omni.kit.app.get_app().next_update_async()
await self.finalize_test()
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/tests/test_graph_cache.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
from .test_graph import Model
from omni.ui.tests.test_base import OmniUiTest
from ..graph_node_index import GraphNodeIndex
class ModelPlus(Model):
def __init__(self):
super().__init__()
self._inputs = ["First.color1", "First.color2", "Third.size"]
self._outputs = ["First.result", "Second.out", "Third.outcome"]
self._connections = {"First.color1": ["Second.out"], "First.color2": ["Third.outcome"]}
def change(self):
self._inputs = ["First.color1", "First.color2", "Fourth.size"]
self._outputs = ["First.result", "Second.out", "Fourth.outcome"]
self._connections = {"First.color1": ["Second.out"], "First.color2": ["Fourth.outcome"]}
@property
def nodes(self, item=None):
if item:
return
return sorted(set([n.split(".")[0] for n in self._inputs + self._outputs]))
@property
def name(self, item=None):
return item.split(".")[-1]
@property
def ports(self, item=None):
if item in self._inputs:
return
if item in self._outputs:
return
return [n for n in self._inputs + self._outputs if n.startswith(item)]
@property
def inputs(self, item):
if item in self._inputs:
return self._connections.get(item, [])
@property
def outputs(self, item):
if item in self._outputs:
return []
class TestGraphCache(OmniUiTest):
async def test_general(self):
model = ModelPlus()
cache1 = GraphNodeIndex(model, True)
cache2 = GraphNodeIndex(model, True)
diff = cache1.get_diff(cache2)
self.assertTrue(diff.valid)
self.assertTrue(not diff.nodes_to_add)
self.assertTrue(not diff.connections_to_add)
self.assertTrue(not diff.nodes_to_del)
self.assertTrue(not diff.connections_to_del)
model.change()
cache2 = GraphNodeIndex(model, True)
diff = cache1.get_diff(cache2)
self.assertTrue(diff.valid)
# Check the difference
to_add = list(sorted([n.node for n in diff.nodes_to_add]))
self.assertEqual(to_add, ["First", "Fourth"])
to_add = list(sorted([(c.source_port, c.target_port) for c in diff.connections_to_add]))
self.assertEqual(to_add, [("Fourth.outcome", "First.color2"), ("Second.out", "First.color1")])
to_del = list(sorted([n.node for n in diff.nodes_to_del]))
self.assertEqual(to_del, ["First", "Third"])
to_del = list(sorted([(c.source_port, c.target_port) for c in diff.connections_to_del]))
self.assertEqual(to_del, [("Second.out", "First.color1"), ("Third.outcome", "First.color2")])
|
omniverse-code/kit/exts/omni.kit.widget.graph/omni/kit/widget/graph/tests/__init__.py | ## Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
from .test_delegate import TestDelegate
from .test_graph import TestGraph
from .test_graph_cache import TestGraphCache
from .test_graph_with_subports import TestGraphWithSubports
|
omniverse-code/kit/exts/omni.kit.widget.graph/docs/CHANGELOG.md | # Changelog
Omniverse Kit Graph Widget
## [1.5.6-104_2] - 2023-01-11
### Fixed
- Connection or Node is not updated when model changes
## [1.5.4] - 2022-12-19
### Added
- Setting `/exts/omni.kit.widget.graph/raster_nodes` to rasterize the nodes
## [1.5.3] - 2022-10-28
### Fixed
- Crash when moving several nodes
## [1.5.2] - 2022-10-24
### Added
- Overview for extension
## [1.5.1] - 2022-08-31
### Changed
- Make sure rebuild_node only rebuilds the node look but not the ports, so that it does not affect connections
## [1.5.0] - 2022-08-30
### Added
- A rebuild_node hook (rebuilds the node delegate) on GraphModel, so that the user can call it from the model
## [1.4.7] - 2022-07-29
### Changed
- Revert the previous change since it crashes the graph during multiple selection
## [1.4.6] - 2022-07-28
### Fixed
- incorrect node moving in graph (OM-55727)
## [1.4.5] - 2022-05-26
### Added
- The change of node name, description and display_color will trigger the node delegate rebuild
## [1.4.4] - 2022-05-26
### Fixed
- Multi-selection quick drag resets the selection in graph view (OM-48395)
## [1.4.3] - 2022-04-20
### Changed
- A flag to automatically detect which node is added/removed and edit the widget
hierarchy instead of rebuilding everything. `always_force_regenerate=False`
## [1.4.2] - 2022-04-22
### Fixed
- Fix nodes going wild when the zoom level is beyond the zoom limits (OM-48407)
## [1.4.1] - 2022-04-19
### Fixed
- fix focus_on_nodes to consider zoom_min and zoom_max limits
- fix unreliable tests (OM-48945)
## [1.4.0] - 2022-03-31
### Added
- add _on_set_zoom_key_shortcut API for GraphView to allow user set key shortcut to zoom the graph view
## [1.3.8] - 2022-03-17
### Fixed
- Avoid modifiers other than ctrl and shift triggering the selection
## [1.3.7] - 2022-03-16
### Added
- Fix the source_node_id is None loop
- Add tests for graph supporting subport
## [1.3.6] - 2022-03-15
### Fixed
- Fix OM-46181 to consider edges from folded ports in the layout algorithm
## [1.3.5] - 2022-03-14
### Fixed
- The connection issue with RMB click. Clear the connection while mouse release.
## [1.3.4] - 2022-02-18
### Changed
- Fix OM-42502 about drag selection node so that as long as the selection rectangle touches the node, the node is selected
## [1.3.3] - 2022-02-18
### Changed
- Fix OM-45166 about the empty menu box in graph
## [1.3.2] - 2021-12-02
### Added
- The ability to use same side connections
## [1.3.1] - 2021-12-01
### Added
- Update graph when delegate is changed
## [1.3.0] - 2021-11-29
### Added
- GraphModelBatchPosition to simplify multiselection and backdrops
## [1.2.1] - 2021-11-12
### Fixed
- Graph model set size changed during iterating
## [1.2.0] - 2021-07-22
### Added
- GraphNodeLayout.HEAP node layout "put everything to ZStack"
## [1.1.3] - 2021-06-01
### Added
- New properties `preview_state` for the model
## [1.1.2] - 2021-05-12
### Fixed
- Port grouping in compound nodes
## [1.1.1] - 2021-05-05
### Added
- Multiselection (flag `rectangle_selection=True`)
- Ability to draw connections on top (flag `connections_on_top`)
- New properties `icon` and `preview` on the model
### Fixed
- Ability to set `display_color` on the model
## [1.1.0] - 2021-01-25
### Changed
- The signatures of the delegate API. There is no compatibility with the
previous version.
### Added
- Port grouping
- Smooth scrolling
## [1.0.2] - 2020-10-30
### Added
- GraphView has the new property virtual_ports that disables virtual ports.
- New column node layout.
- Ability to make the ports colored depending on the type.
- Ability to check the port type before connection and stick the connection
to the port if it can be accepted.
- Display color.
- IsolationGraphModel isolates any model to show the children of specific
item only.
## [1.0.1] - 2020-10-14
- The ability to disable virtual ports
- Two-column node layout
## [1.0.0] - 2020-10-13
### Added
- CHANGELOG
- Node routing
### Changed
- The signature of `AbstractGraphNodeDelegate.connection`
|
omniverse-code/kit/exts/omni.kit.widget.graph/docs/overview.md | # Overview
Omni.kit.widget.graph provides the base classes for graph node delegates. It also defines the standard interface for graph models. With the layout algorithm, GraphView provides a visualization layer that shows how graph nodes are displayed and how they connect/disconnect with each other to form a graph.
## Delegate
The graph node's delegate defines what the graph node looks like. It supports three node layouts (list, columns and heap) and defines the appearance of the ports, header, footer and connections.
A routing delegate keeps multiple delegates and picks one of them depending on the
routing conditions. A condition can be a type or a lambda expression. Type routing gives a specific kind of node its own look (e.g. a backdrop delegate or a compound delegate), and lambda routing gives a particular node state its own look (e.g. a fully expanded or a collapsed delegate).
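As a rough sketch (using this extension's import paths; the `"Backdrop"` type name and the collapse check are purely illustrative), a custom delegate and a routing table can be set up like this:
```python
import omni.ui as ui
from omni.kit.widget.graph.abstract_graph_node_delegate import AbstractGraphNodeDelegate
from omni.kit.widget.graph.graph_node_delegate_router import GraphNodeDelegateRouter
from omni.kit.widget.graph.graph_model import GraphModel


class MiniDelegate(AbstractGraphNodeDelegate):
    """Illustrative delegate: a colored body, a title label and simple ports."""

    def node_background(self, model, node_desc):
        ui.Rectangle(style={"background_color": 0xFF015C3A})

    def node_header(self, model, node_desc):
        ui.Label(model[node_desc.node].name, style={"margin": 6})

    def port_input(self, model, node_desc, port_desc):
        ui.Circle(width=10, style={"background_color": 0xFFDBA656})

    def port_output(self, model, node_desc, port_desc):
        ui.Circle(width=10, style={"background_color": 0xFFDBA656})

    def port(self, model, node_desc, port_desc):
        ui.Label(model[port_desc.port].name, style={"margin": 2})

    def connection(self, model, source, target):
        ui.FreeBezierCurve(target.widget, source.widget)


# The most recently added route wins; a route without conditions is the default.
router = GraphNodeDelegateRouter()
router.add_route(MiniDelegate())                   # default look
router.add_route(MiniDelegate(), type="Backdrop")  # type routing (illustrative type name)
router.add_route(                                  # state routing via a lambda expression
    MiniDelegate(),
    expression=lambda model, node: model[node].expansion_state == GraphModel.ExpansionState.CLOSED,
)
```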
## Model
GraphModel is the base class for graph models. It defines the standard interface needed to interoperate with the components of the model-view architecture. The model manages two kinds of atomic data elements: nodes and ports.
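A minimal model can look like the sketch below, mirroring the pattern used in this extension's own tests (where the property getters also accept the item being queried); the node and port names are made up:
```python
from omni.kit.widget.graph.graph_model import GraphModel

_inputs = ["Blend.color"]
_outputs = ["Texture.out"]
_connections = {"Blend.color": ["Texture.out"]}


class MiniModel(GraphModel):
    """Illustrative model: two nodes with one connection between them."""

    @property
    def nodes(self, item=None):
        if item:
            return
        return sorted({n.split(".")[0] for n in _inputs + _outputs})

    @property
    def name(self, item=None):
        return item.split(".")[-1]

    @property
    def ports(self, item=None):
        if item in _inputs or item in _outputs:
            return
        return [n for n in _inputs + _outputs if n.startswith(item)]

    @property
    def inputs(self, item):
        if item in _inputs:
            return _connections.get(item, [])

    @property
    def outputs(self, item):
        if item in _outputs:
            return []
```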
## Widget
GraphNode represents the widget for a single node; it uses the model and the delegate to fill in its layout. The overall graph layout follows the method developed by Sugiyama, which computes coordinates for drawing the whole directed graph. GraphView acts as the visualization layer of omni.kit.widget.graph: it behaves like a regular widget and displays the nodes and their connections.
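Putting it together, a view is just an omni.ui widget built from a model and a delegate. The sketch below reuses the illustrative MiniModel and MiniDelegate from above; pan_x/pan_y only shift the initial view:
```python
import omni.ui as ui
from omni.kit.widget.graph.graph_view import GraphView

window = ui.Window("Graph Example", width=800, height=600)
with window.frame:
    graph_view = GraphView(model=MiniModel(), delegate=MiniDelegate(), pan_x=400, pan_y=100)
```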
## Batch Position Helper
This extension also adds support for batch position updates. By inheriting from GraphModelBatchPositionHelper, a model can easily move a collection of nodes together, such as a multi-selection or the nodes inside a backdrop.
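A rough sketch of the wiring, assuming your model exposes position, position_begin_edit and position_end_edit in the pattern used by this extension's tests (SelectionGetter makes the whole selection follow the dragged node):
```python
from omni.kit.widget.graph.graph_model import GraphModel
from omni.kit.widget.graph.graph_model_batch_position_helper import GraphModelBatchPositionHelper
from omni.kit.widget.graph.selection_getter import SelectionGetter


class BatchModel(GraphModel, GraphModelBatchPositionHelper):
    """Illustrative model that moves the whole selection together."""

    def __init__(self):
        GraphModel.__init__(self)
        GraphModelBatchPositionHelper.__init__(self)
        self.__positions = {}
        # Drive the rest of the selection from the node the user drags.
        self.add_get_moving_items_fn(SelectionGetter(self))

    @property
    def position(self, item=None):
        return self.__positions.get(item)

    @position.setter
    def position(self, value, item=None):
        self.__positions[item] = value
        # Forward the move to the other driven nodes.
        self.batch_set_position(value, item)

    def position_begin_edit(self, item):
        # Chain to your base model's hook here if it defines one.
        self.batch_position_begin_edit(item)

    def position_end_edit(self, item):
        self.batch_position_end_edit(item)
```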
## Relationship with omni.kit.graph.editor.core
For users to easily set up a graph framework, we provide another extension `omni.kit.graph.editor.core` which is based on this extension.
omni.kit.graph.editor.core works more at the application level: it gives the graph a catalog view that shows all the available graph nodes and a graph editor view where the actual graph is constructed by dragging nodes from the catalog, while `omni.kit.widget.graph` is the core definition of how graphs work.
The documentation extension `omni.kit.graph.docs` explains how `omni.kit.graph.editor.core` is built up. We also provide a real graph example extension, `omni.kit.graph.editor.example`, based on `omni.kit.graph.editor.core`. It showcases how to build a graph extension by feeding in a customized graph model and how to control the graph's look by switching among different graph delegates.
Here is an example graph built from `omni.kit.graph.editor.example`:

|
omniverse-code/kit/exts/omni.kit.viewport.bundle/PACKAGE-LICENSES/omni.kit.viewport.bundle-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.viewport.bundle/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "104.0.2"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarly for displaying extension info in UI
title = "Viewport Bundle"
description="A bundle of Viewport extensions that creates a baseline interactive Viewport."
# URL of the extension source repository.
repository = ""
# Keywords for the extension
keywords = ["kit", "ui", "viewport", "hydra", "render"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
preview_image = "data/preview.png"
category = "Viewport"
[dependencies]
# Load the actual ViewportWindow extension
"omni.kit.viewport.window" = {}
# Load the camera-manipulator (navigation)
"omni.kit.manipulator.camera" = {}
# Load the prim-manipulator (translate, rotate, scale)
"omni.kit.manipulator.prim" = {}
# Load the selection-manipulator (selecteable prims)
"omni.kit.manipulator.selection" = {}
# Load drag-drop support of external files
"omni.kit.window.drop_support" = {}
# Load the stats HUD (resolution, fps, etc.)
"omni.hydra.engine.stats" = {}
# Load the viewport settings menu
"omni.kit.viewport.menubar.settings" = {}
# Load the renderer selection menu
"omni.kit.viewport.menubar.render" = {}
# Load the view-from-camera menu
"omni.kit.viewport.menubar.camera" = {}
# Load the display type menu
"omni.kit.viewport.menubar.display" = {}
# Load the legacy grid and gizmo drawing for now
"omni.kit.viewport.legacy_gizmos" = {}
# Main python module this extension provides, it will be publicly available as "import omni.kit.viewport.bundle".
# [[python.module]]
# name = "omni.kit.viewport.bundle"
[settings]
# Setup Kit to create 'omni.kit.viewport.window' Windows named Viewport
exts."omni.kit.viewport.window".startup.windowName = "Viewport"
# Setting to disable opening a Window instance when loaded
exts."omni.kit.viewport.window".startup.disableWindowOnLoad = false
# Collapse the additional camera control area
persistent.exts."omni.kit.viewport.menubar.camera".expand = false
# Set the default perspective camera focalLength
persistent.app.primCreation.typedDefaults.camera.focalLength = 18.147562
# Default legacy display options for any consumers (all visible but Skeletons)
persistent.app.viewport.displayOptions = 32255
[[test]]
# This is just a collection of extensions; they should be tested individually for now
waiver = ""
|
omniverse-code/kit/exts/omni.kit.viewport.bundle/docs/CHANGELOG.md | # CHANGELOG
This document records all notable changes to ``omni.kit.viewport.bundle`` extension.
This project adheres to `Semantic Versioning <https://semver.org/>`_.
## [104.0.2] - 2022-09-28
### Added
- New Lighting menu item.
## [104.0.1] - 2022-08-26
### Added
- Default legacy Viewport displayOptions to all visible except Skeletons.
## [104.0.0] - 2022-05-04
### Added
- Initial version
|
omniverse-code/kit/exts/omni.kit.viewport.bundle/docs/README.md | # Viewport Bundle Extension [omni.kit.viewport.bundle]
A bundle of Viewport extensions that creates a baseline interactive Viewport.
|
omniverse-code/kit/exts/omni.kit.viewport.bundle/docs/index.rst | omni.kit.viewport.bundle
###########################
Viewport Bundle
.. toctree::
:maxdepth: 1
README
CHANGELOG
.. automodule:: omni.kit.viewport.bundle
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:
:imported-members:
|
omniverse-code/kit/exts/omni.kit.livestream.native/PACKAGE-LICENSES/omni.kit.livestream.native-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.livestream.native/config/extension.toml | [package]
version = "0.1.5"
title = "Livestream Native Backend"
description = "Enable streaming capabilities, making the application useable remotely."
readme = "docs/README.md"
repository = ""
category = "Rendering"
keywords = ["streaming", "server"]
changelog = "docs/CHANGELOG.md"
icon = "data/icon.png"
preview_image = "data/preview.png"
[[python.module]]
name = "omni.kit.livestream.native"
[dependencies]
"omni.kit.livestream.core" = {}
"omni.kit.streamsdk.plugins" = {}
# Declare an optional dependency on the Streaming Manager, as the extension may
# not be available in the standalone Kit SDK, but be available in Omniverse
# applications consuming other streaming extensions:
"omni.services.streaming.manager" = { optional = true }
[[native.plugin]]
path = "${omni.kit.streamsdk.plugins}/bin/carb.livestream.plugin"
[settings]
app.livestream.port = 48010
app.livestream.proto = "websocket"
app.livestream.ipversion = "auto"
app.livestream.allowResize = true
# Prevent the viewport from being throttled down when using native streaming,
# as it may assume that the application is out of the User's focus during
# streaming sessions:
app.renderer.skipWhileMinimized = false
app.renderer.sleepMsOnFocus = 0
app.renderer.sleepMsOutOfFocus = 0
[[test]]
waiver = "Bundle extension" # OM-48114
|
omniverse-code/kit/exts/omni.kit.livestream.native/omni/kit/livestream/native/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
from .scripts.extension import *
|
omniverse-code/kit/exts/omni.kit.livestream.native/omni/kit/livestream/native/scripts/extension.py | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Native streaming extension."""
import carb
import omni.ext
import omni.kit.livestream.bind
class NativeStreamingExtension(omni.ext.IExt):
"""Native streaming extension."""
def __init__(self) -> None:
super().__init__()
self._stream_interface = None
def on_startup(self) -> None:
self._kit_livestream = omni.kit.livestream.bind.acquire_livestream_interface()
self._kit_livestream.startup()
self._register_streaming_interface()
def on_shutdown(self) -> None:
self._unregister_streaming_interface()
self._kit_livestream.shutdown()
self._kit_livestream = None
def _register_streaming_interface(self) -> None:
"""Register the streaming interface for the extension."""
try:
from omni.services.streaming.manager import get_stream_manager, StreamManager
from .streaming_interface import NativeStreamInterface
self._stream_interface = NativeStreamInterface()
stream_manager: StreamManager = get_stream_manager()
stream_manager.register_stream_interface(stream_interface=self._stream_interface)
stream_manager.enable_stream_interface(stream_interface_id=self._stream_interface.id)
except ImportError:
carb.log_info("The Streaming Manager extension was not available when enabling native streaming features.")
except Exception as exc:
carb.log_error(f"An error occurred while attempting to register the native streaming interface: {str(exc)}.")
def _unregister_streaming_interface(self) -> None:
"""Unregister the streaming interface for the extension."""
if self._stream_interface is None:
# No stream interface was registered when enabling the extension, so there is no need to attempt to unregister
# it and the function can exit early:
return
try:
from omni.services.streaming.manager import get_stream_manager, StreamManager
stream_manager: StreamManager = get_stream_manager()
stream_manager.disable_stream_interface(stream_interface_id=self._stream_interface.id)
stream_manager.unregister_stream_interface(stream_interface_id=self._stream_interface.id)
except ImportError:
carb.log_info("The Streaming Manager extension was not available when disabling native streaming features.")
except Exception as exc:
carb.log_error(f"An error occurred while attempting to unregister the native streaming interface: {str(exc)}.")
|
omniverse-code/kit/exts/omni.kit.livestream.native/omni/kit/livestream/native/scripts/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
|
omniverse-code/kit/exts/omni.kit.livestream.native/omni/kit/livestream/native/scripts/streaming_interface.py | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Native streaming interface."""
import os
from typing import List
import psutil
import carb.settings
from omni.services.streaming.manager import StreamInterface
class NativeStreamInterface(StreamInterface):
"""Native streaming interface."""
@property
def id(self) -> str:
return "Native"
@property
def menu_label(self) -> str:
return "Native"
@property
def module_name(self) -> str:
return __name__
@property
def stream_urls(self) -> List[str]:
return self.local_ips
async def is_healthy(self) -> bool:
"""
Check if the streaming server is in a state that is considered healthy (i.e. that the streaming server listens
for connection requests on the configured port).
Args:
None
Returns:
bool: A flag indicating whether the streaming server is in a healthy state.
"""
# Check whether the superclass has already flagged the stream as being in an unhealthy state, in which case
# we can return early:
is_healthy = await super().is_healthy()
if not is_healthy:
return is_healthy
# Check that the host process has the expected native streaming server port in a "LISTENING" state by inspecting
# its open connections, rather than issuing an actual request against the server:
kit_process = psutil.Process(pid=os.getpid())
expected_server_port = self._get_native_streaming_port_number()
is_listening_on_expected_port = False
for connection in kit_process.connections():
if connection.laddr.port == expected_server_port:
if connection.status is psutil.CONN_LISTEN:
is_listening_on_expected_port = True
break
return is_listening_on_expected_port
def _get_native_streaming_port_number(self) -> int:
"""
Return the port number on which the streaming server is expected to receive connection requests.
Args:
None
Returns:
int: The port number on which the streaming server is expected to receive connection requests.
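Example (illustrative sketch): the value defaults to 48010, as configured by ``app.livestream.port``
in this extension's ``extension.toml``, and can be read directly through the settings interface:

    import carb.settings
    port = carb.settings.get_settings().get_as_int("app/livestream/port")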
"""
settings = carb.settings.get_settings()
native_server_port = settings.get_as_int("app/livestream/port")
return native_server_port
|
omniverse-code/kit/exts/omni.kit.livestream.native/docs/CHANGELOG.md | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.1.5] - 2022-09-06
### Added
- Added streaming manager interface to the extension, in order to handle the lifecycle of other streaming extensions also potentially enabled at the time native streaming is enabled.
- Added ability to perform health checks on the stream based on whether the signaling port of the native extension is set to listen for connections on the Omniverse application process.
## [0.1.4] - 2022-05-30
### Added
- Updated icon and preview of the extension.
## [0.1.3] - 2022-04-25
### Added
- Added test waiver
## [0.1.2] - 2022-03-19
### Added
- Updated metadata information about the extension.
## [0.1.1] - 2022-02-23
### Added
- Added metadata information about the extension.
|
omniverse-code/kit/exts/omni.kit.livestream.native/docs/README.md | # Livestream Native Backend [omni.kit.livestream.native]
Server-side native streaming feature, allowing Users to interact with Omniverse applications over the network or the Internet.
For details about using streaming clients on end-User machines, download and install "Kit Remote" from the Omniverse Launcher. Visit https://docs.omniverse.nvidia.com for additional deployment and configuration options.
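As a minimal sketch of how this extension drives the native stream on startup (mirroring its `scripts/extension.py`, and assuming the `omni.kit.livestream.bind` bindings it uses are available in the running application):

```python
import omni.kit.livestream.bind

# Acquire the native livestream interface and start streaming.
livestream = omni.kit.livestream.bind.acquire_livestream_interface()
livestream.startup()

# ... later, when streaming is no longer needed:
livestream.shutdown()
```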
|
omniverse-code/kit/exts/omni.kit.livestream.native/docs/index.rst | omni.kit.livestream.native
##########################
This section presents the server-side native streaming feature, allowing Users to interact with Omniverse applications from clients over the network or the Internet.
Python API Reference
*********************
.. automodule:: omni.kit.livestream.native
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:imported-members:
|
omniverse-code/kit/exts/omni.kit.stage_templates/PACKAGE-LICENSES/omni.kit.stage_templates-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.stage_templates/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.1.13"
category = "Internal"
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "Stage Templates"
description="Allows custom stage templates to be loaded on omni.usd.get_context().new_stage_async()"
[dependencies]
"omni.kit.commands" = {}
"omni.usd" = {}
"omni.kit.actions.core" = {}
"omni.kit.primitive.mesh" = {}
[settings]
persistent.app.newStage.defaultTemplate = "sunlight"
persistent.app.newStage.templatePath = ["${app_documents}/scripts/new_stage"]
persistent.app.stage.timeCodeRange = [0, 100]
[[python.module]]
name = "omni.kit.stage_templates"
[[test]]
args = [
"--/renderer/enabled=pxr",
"--/renderer/active=pxr",
"--/renderer/multiGpu/enabled=false",
"--/renderer/multiGpu/autoEnable=false", # Disable mGPU with PXR due to OM-51026, OM-53611
"--/renderer/multiGpu/maxGpuCount=1",
"--/app/asyncRendering=false",
"--/app/file/ignoreUnsavedStage=true",
"--/app/window/dpiScaleOverride=1.0",
"--/app/window/scaleToMonitor=false",
"--no-window",
]
dependencies = [
"omni.hydra.pxr",
"omni.kit.renderer.capture",
"omni.kit.mainwindow",
"omni.kit.window.file",
"omni.kit.menu.utils",
"omni.kit.window.preferences",
"omni.kit.property.usd",
"omni.kit.ui_test",
"omni.kit.test_suite.helpers",
"omni.kit.material.library"
]
stdoutFailPatterns.exclude = [
"*HydraRenderer failed to render this frame*", # Can drop a frame or two rendering with OpenGL interop
"*Cannot use omni.hydra.pxr without OpenGL interop*" # Linux TC configs with multi-GPU might not have OpenGL available
]
pyCoverageFilter = ["omni.kit.stage_templates"] # Restrict coverage to this extension
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/__init__.py | # NOTE: all imported classes must have different class names
from .new_stage import *
from .templates import *
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/stage_templates_menu.py | import locale
import carb
import functools
import carb.settings
import omni.kit.app
from functools import partial
def get_action_name(name):
return name.lower().replace('-', '_').replace(' ', '_')
class StageTemplateMenu:
def __init__(self):
pass
def on_startup(self):
self._build_sub_menu()
omni.kit.menu.utils.add_menu_items(self._menu_list, "File")
# update submenu if defaultTemplate changes
self._update_setting = omni.kit.app.SettingChangeSubscription(
"/persistent/app/newStage/defaultTemplate", lambda *_: omni.kit.menu.utils.refresh_menu_items("File")
)
def on_shutdown(self):
self._update_setting = None
omni.kit.menu.utils.remove_menu_items(self._menu_list, "File")
self._menu_list = None
def _build_sub_menu(self):
from omni.kit.menu.utils import MenuItemDescription
def sort_cmp(template1, template2):
return locale.strcoll(template1[0], template2[0])
sub_menu = []
default_template = omni.kit.stage_templates.get_default_template()
stage_templates = omni.kit.stage_templates.get_stage_template_list()
def template_ticked(sn: str) -> bool:
return omni.kit.stage_templates.get_default_template() == sn
for template_list in stage_templates:
for template in sorted(template_list.items(), key=functools.cmp_to_key(sort_cmp)):
stage_name = template[0]
sub_menu.append(
MenuItemDescription(
name=stage_name.replace("_", " ").title(),
ticked=True,
ticked_fn=lambda sn=template[0]: template_ticked(sn),
onclick_action=("omni.kit.stage.templates", f"create_new_stage_{get_action_name(stage_name)}")
)
)
self._menu_list = [
MenuItemDescription(
name="New From Stage Template", glyph="file.svg", appear_after=["Open Recent", "New"], sub_menu=sub_menu
)
]
def _rebuild_sub_menu(self):
if self._menu_list:
omni.kit.menu.utils.remove_menu_items(self._menu_list, "File")
self._build_sub_menu()
omni.kit.menu.utils.add_menu_items(self._menu_list, "File")
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/stage_templates_page.py | import re
import os
import carb.settings
import omni.kit.app
import omni.ui as ui
from functools import partial
from omni.kit.window.preferences import PreferenceBuilder, show_file_importer, SettingType, PERSISTENT_SETTINGS_PREFIX
class StageTemplatesPreferences(PreferenceBuilder):
def __init__(self):
super().__init__("Template Startup")
# update on setting change
def on_change(item, event_type):
if event_type == carb.settings.ChangeEventType.CHANGED:
omni.kit.window.preferences.rebuild_pages()
self._update_setting = omni.kit.app.SettingChangeSubscription(PERSISTENT_SETTINGS_PREFIX + "/app/newStage/templatePath", on_change)
def build(self):
template_paths = carb.settings.get_settings().get("/persistent/app/newStage/templatePath")
default_template = omni.kit.stage_templates.get_default_template()
script_names = []
new_templates = omni.kit.stage_templates.get_stage_template_list()
for templates in new_templates:
for template in templates.items():
script_names.append(template[0])
if len(script_names) == 0:
script_names = ["None##None"]
with ui.VStack(height=0):
with self.add_frame("New Stage Template"):
with ui.VStack():
for index, path in enumerate(template_paths):
with ui.HStack(height=24):
self.label("Path to user templates")
widget = ui.StringField(height=20)
widget.model.set_value(path)
ui.Button(style={"image_url": "resources/icons/folder.png"}, clicked_fn=lambda p=self.cleanup_slashes(carb.tokens.get_tokens_interface().resolve(path)), i=index, w=widget: self._on_browse_button_fn(p, i, w), width=24)
self.create_setting_widget_combo("Default Template", PERSISTENT_SETTINGS_PREFIX + "/app/newStage/defaultTemplate", script_names)
def _on_browse_button_fn(self, path, index, widget):
""" Called when the user picks the Browse button. """
show_file_importer(
"Select Template Directory",
click_apply_fn=lambda p=self.cleanup_slashes(path), i=index, w=widget: self._on_file_pick(
p, index=i, widget=w),
filename_url=path)
def _on_file_pick(self, full_path, index, widget):
""" Called when the user accepts directory in the Select Directory dialog. """
directory = self.cleanup_slashes(full_path, True)
settings = carb.settings.get_settings()
template_paths = settings.get(PERSISTENT_SETTINGS_PREFIX + "/app/newStage/templatePath")
template_paths[index] = directory
settings.set(PERSISTENT_SETTINGS_PREFIX + "/app/newStage/templatePath", template_paths)
widget.model.set_value(directory)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/new_stage.py | import os
import carb
import carb.settings
import omni.ext
import omni.kit.app
import omni.usd
import asyncio
import glob
import omni.kit.actions.core
from inspect import signature
from functools import partial
from pxr import Sdf, UsdGeom, Usd, Gf
from contextlib import suppress
from .stage_templates_menu import get_action_name
_extension_instance = None
_extension_path = None
_template_list = []
_clear_dirty_task = None
def _try_unregister_page(page: str):
with suppress(Exception):
import omni.kit.window.preferences
omni.kit.window.preferences.unregister_page(page)
# must be initialized before templates
class NewStageExtension(omni.ext.IExt):
def on_startup(self, ext_id):
global _extension_path
global _clear_dirty_task
_extension_path = omni.kit.app.get_app().get_extension_manager().get_extension_path(ext_id)
_clear_dirty_task = None
register_template("empty", self.new_stage_empty, 0)
omni.kit.stage_templates.templates.load_templates()
self.load_user_templates()
# as omni.kit.stage_templates loads before the UI, it cannot depend on omni.kit.window.preferences or other extensions.
self._preferences_page = None
self._menu_button1 = None
self._menu_button2 = None
self._hooks = []
manager = omni.kit.app.get_app().get_extension_manager()
self._hooks.append(
manager.subscribe_to_extension_enable(
on_enable_fn=lambda _: self._register_page(),
on_disable_fn=lambda _: self._unregister_page(),
ext_name="omni.kit.window.preferences",
hook_name="omni.kit.stage_templates omni.kit.window.preferences listener",
)
)
self._stage_template_menu = None
self._hooks.append(
manager.subscribe_to_extension_enable(
on_enable_fn=lambda _: self._register_menu(),
on_disable_fn=lambda _: self._unregister_menu(),
ext_name="omni.kit.menu.utils",
hook_name="omni.kit.stage_templates omni.kit.menu.utils listener",
)
)
self._hooks.append(
manager.subscribe_to_extension_enable(
on_enable_fn=lambda _: self._register_property_menu(),
on_disable_fn=lambda _: self._unregister_property_menu(),
ext_name="omni.kit.property.usd",
hook_name="omni.kit.stage_templates omni.kit.property.usd listener",
)
)
global _extension_instance
_extension_instance = self
def on_shutdown(self):
global _extension_instance
global _template_list
global _clear_dirty_task
if _clear_dirty_task:
_clear_dirty_task.cancel()
_clear_dirty_task = None
unregister_template("empty")
self._unregister_page()
self._unregister_menu()
self._unregister_property_menu()
self._hooks = None
_extension_instance = None
_template_list = None
for user_template in self._user_scripts:
del user_template
omni.kit.stage_templates.templates.unload_templates()
self._user_scripts = None
def _register_page(self):
try:
from omni.kit.window.preferences import register_page
from .stage_templates_page import StageTemplatesPreferences
self._preferences_page = register_page(StageTemplatesPreferences())
except ModuleNotFoundError:
pass
def _unregister_page(self):
if self._preferences_page:
try:
import omni.kit.window.preferences
omni.kit.window.preferences.unregister_page(self._preferences_page)
self._preferences_page = None
except ModuleNotFoundError:
pass
def _register_menu(self):
try:
from .stage_templates_menu import StageTemplateMenu
self._stage_template_menu = StageTemplateMenu()
self._stage_template_menu.on_startup()
except ModuleNotFoundError:
pass
def _unregister_menu(self):
if self._stage_template_menu:
try:
self._stage_template_menu.on_shutdown()
self._stage_template_menu = None
except ModuleNotFoundError:
pass
def _has_axis(self, objects, axis):
if not "stage" in objects:
return False
stage = objects["stage"]
if stage:
return UsdGeom.GetStageUpAxis(stage) != axis
else:
carb.log_error("_click_set_up_axis stage not found")
return False
def _click_set_up_axis(self, payload, axis):
stage = payload.get_stage()
if stage:
rootLayer = stage.GetRootLayer()
if rootLayer:
rootLayer.SetPermissionToEdit(True)
with Usd.EditContext(stage, rootLayer):
UsdGeom.SetStageUpAxis(stage, axis)
from omni.kit.property.usd import PrimPathWidget
PrimPathWidget.rebuild()
else:
carb.log_error("_click_set_up_axis rootLayer not found")
else:
carb.log_error("_click_set_up_axis stage not found")
def _register_property_menu(self):
# +add menu item(s)
from omni.kit.property.usd import PrimPathWidget
context_menu = omni.kit.context_menu.get_instance()
if context_menu is None:
self._menu_button1 = None
self._menu_button2 = None
carb.log_error("context_menu is disabled!")
return None
self._menu_button1 = PrimPathWidget.add_button_menu_entry(
"Stage/Set up axis +Y",
show_fn=partial(self._has_axis, axis=UsdGeom.Tokens.y),
onclick_fn=partial(self._click_set_up_axis, axis=UsdGeom.Tokens.y),
add_to_context_menu=False,
)
self._menu_button2 = PrimPathWidget.add_button_menu_entry(
"Stage/Set up axis +Z",
show_fn=partial(self._has_axis, axis=UsdGeom.Tokens.z),
onclick_fn=partial(self._click_set_up_axis, axis=UsdGeom.Tokens.z),
add_to_context_menu=False,
)
def _unregister_property_menu(self):
if self._menu_button1 or self._menu_button2:
try:
from omni.kit.property.usd import PrimPathWidget
if self._menu_button1:
PrimPathWidget.remove_button_menu_entry(self._menu_button1)
self._menu_button1 = None
if self._menu_button2:
PrimPathWidget.remove_button_menu_entry(self._menu_button2)
self._menu_button2 = None
except ModuleNotFoundError:
pass
def new_stage_empty(self, rootname):
pass
def load_user_templates(self):
settings = carb.settings.get_settings()
template_paths = settings.get("/persistent/app/newStage/templatePath")
# Create template directories
original_umask = os.umask(0)
for path in template_paths:
path = carb.tokens.get_tokens_interface().resolve(path)
if not os.path.isdir(path):
try:
os.makedirs(path)
except Exception:
carb.log_error(f"Failed to create directory {path}")
os.umask(original_umask)
# Load template scripts
self._user_scripts = []
for path in template_paths:
for full_path in glob.glob(f"{path}/*.py"):
try:
with open(os.path.normpath(full_path)) as f:
user_script = f.read()
carb.log_verbose(f"loaded new_stage template {full_path}")
exec(user_script)
self._user_scripts.append(user_script)
except Exception as e:
carb.log_error(f"error loading {full_path}: {e}")
def register_template(name, new_stage_fn, group=0, rebuild=True):
"""Register new_stage Template
Args:
param1 (str): template name
param2 (callable): function to create template
param3 (int): group number. Used by the menu to split templates into groups with separators
Returns:
bool: True for success, False when template already exists.
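Example (illustrative sketch; the template name and function below are hypothetical):

    def my_new_stage(rootname, usd_context_name):
        # populate the freshly created stage here
        pass

    omni.kit.stage_templates.register_template("my template", my_new_stage, 0)
    # ... later, to remove it again:
    omni.kit.stage_templates.unregister_template("my template")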
"""
# check if element exists
global _template_list
exists = get_stage_template(name)
if exists:
carb.log_warn(f"template {name} already exists")
return False
try:
exists = _template_list[group]
except IndexError:
_template_list.insert(group, {})
_template_list[group][name] = (name, new_stage_fn)
omni.kit.actions.core.get_action_registry().register_action(
"omni.kit.stage.templates",
f"create_new_stage_{get_action_name(name)}",
lambda t=name: omni.kit.window.file.new(t),
display_name=f"Create New Stage {name}",
description=f"Create New Stage {name}",
tag="Create Stage Template",
)
if rebuild:
rebuild_stage_template_menu()
return True
def unregister_template(name, rebuild: bool=True):
"""Remove registered new_stage Template
Args:
param1 (str): template name
Returns:
nothing
"""
global _template_list
if _template_list is not None:
for templates in _template_list:
if name in templates:
del templates[name]
if rebuild:
rebuild_stage_template_menu()
omni.kit.actions.core.get_action_registry().deregister_action("omni.kit.stage.templates", f"create_new_stage_{get_action_name(name)}")
def rebuild_stage_template_menu() -> None:
if _extension_instance and _extension_instance._stage_template_menu:
_extension_instance._stage_template_menu._rebuild_sub_menu()
def get_stage_template_list():
"""Get list of loaded new_stage templates
Args:
none
Returns:
list: list of groups of new_stage template names & create function pointers
"""
global _template_list
if not _template_list or len(_template_list) == 0:
return None
return _template_list
def get_stage_template(name):
"""Get named new_stage template & create function pointer
Args:
param1 (str): template name
Returns:
tuple: new_stage template name & create function pointer
"""
global _template_list
if not _template_list:
return None
for templates in _template_list:
if name in templates:
return templates[name]
return None
def get_default_template():
"""Get name of default new_stage template. Used when new_stage is called without template name specified
Args:
none
Returns:
str: new_stage template name
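Example (illustrative sketch):

    name = omni.kit.stage_templates.get_default_template()
    # "sunlight" with this extension's default persistent settings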
"""
settings = carb.settings.get_settings()
return settings.get("/persistent/app/newStage/defaultTemplate")
def new_stage_finalize(create_result, error_message, usd_context, template=None, on_new_stage_fn=None):
global _clear_dirty_task
if _clear_dirty_task:
_clear_dirty_task.cancel()
_clear_dirty_task = None
if create_result:
stage = usd_context.get_stage()
# already finalized
if stage.HasDefaultPrim():
return
with Usd.EditContext(stage, stage.GetRootLayer()):
settings = carb.settings.get_settings()
default_prim_name = settings.get("/persistent/app/stage/defaultPrimName")
up_axis = settings.get("/persistent/app/stage/upAxis")
time_codes_per_second = settings.get_as_float("/persistent/app/stage/timeCodesPerSecond")
time_code_range = settings.get("/persistent/app/stage/timeCodeRange")
if time_code_range is None:
time_code_range = [0, 100]
rootname = f"/{default_prim_name}"
# Set up axis
if up_axis == "Y" or up_axis == "y":
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)
else:
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)
# Set timecodes per second
stage.SetTimeCodesPerSecond(time_codes_per_second)
# Start and end time code
if time_code_range:
stage.SetStartTimeCode(time_code_range[0])
stage.SetEndTimeCode(time_code_range[1])
# Create defaultPrim
default_prim = UsdGeom.Xform.Define(stage, Sdf.Path(rootname)).GetPrim()
if not default_prim:
carb.log_error("Failed to create defaultPrim at {rootname}")
return
stage.SetDefaultPrim(default_prim)
if template is None:
template = get_default_template()
# Run script
item = get_stage_template(template)
if item and item[1]:
try:
create_fn = item[1]
sig = signature(create_fn)
if len(sig.parameters) == 1:
create_fn(rootname)
elif len(sig.parameters) == 2:
create_fn(rootname, usd_context.get_name())
else:
carb.log_error(f"template {template} has incorrect parameter count")
except Exception as error:
carb.log_error(f"exception in {template} {error}")
omni.kit.undo.clear_stack()
usd_context.set_pending_edit(False)
# Clear the stage dirty state again, as bound materials can set it
async def clear_dirty(usd_context):
global _clear_dirty_task
import omni.kit.app
await omni.kit.app.get_app().next_update_async()
await omni.kit.app.get_app().next_update_async()
usd_context.set_pending_edit(False)
_clear_dirty_task = None
if on_new_stage_fn:
on_new_stage_fn(create_result, error_message)
_clear_dirty_task = asyncio.ensure_future(clear_dirty(usd_context))
def new_stage(on_new_stage_fn=None, template="empty", usd_context=None):
"""Execute new_stage
Args:
param2 (str): template name, if not specified default name is used
param3 (omni.usd.UsdContext): usd_context, the usd_context to create new stage
Returns:
nothing
"""
if usd_context is None:
usd_context = omni.usd.get_context()
if on_new_stage_fn is not None:
carb.log_warn(
"omni.kit.stage_templates.new_stage(callback, ...) is deprecated. \
Use `omni.kit.stage_templates.new_stage_with_callback` instead."
)
new_stage_with_callback(on_new_stage_fn, template, usd_context)
else:
new_stage_finalize(usd_context.new_stage(), "", usd_context, template=template, on_new_stage_fn=None)
def new_stage_with_callback(on_new_stage_fn=None, template="empty", usd_context=None):
"""Execute new_stage
Args:
param1 (callable): callback when new_stage is created
param2 (str): template name, if not specified default name is used
param3 (omni.usd.UsdContext): usd_context, the usd_context to create new stage
Returns:
nothing
"""
if usd_context is None:
usd_context = omni.usd.get_context()
usd_context.new_stage_with_callback(
partial(new_stage_finalize, usd_context=usd_context, template=template, on_new_stage_fn=on_new_stage_fn)
)
async def new_stage_async(template="empty", usd_context=None):
"""Execute new_stage asynchronously
Args:
param1 (str): template name, if not specified default name is used
param2 (omni.usd.UsdContext): usd_context, the usd_context to create new stage
Returns:
awaitable object until stage is created
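Example (illustrative sketch; "sunlight" is one of the templates bundled with this extension):

    await omni.kit.stage_templates.new_stage_async(template="sunlight")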
"""
if usd_context is None:
usd_context = omni.usd.get_context()
f = asyncio.Future()
def on_new_stage(result, err_msg):
if not f.done():
f.set_result((result, err_msg))
new_stage_with_callback(on_new_stage, template, usd_context)
return await f
def get_extension_path(sub_directory):
global _extension_path
path = _extension_path
if sub_directory:
path = os.path.normpath(os.path.join(path, sub_directory))
return path
def set_transform_helper(
prim_path,
translate=Gf.Vec3d(0, 0, 0),
euler=Gf.Vec3d(0, 0, 0),
scale=Gf.Vec3d(1, 1, 1),
):
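"""Build a transform matrix from translate/euler/scale and apply it with the "TransformPrim" command.

Example (illustrative sketch; the prim path below is hypothetical):

    set_transform_helper("/World/Cube", translate=Gf.Vec3d(0, 50, 0), euler=Gf.Vec3d(0, 45, 0))

Euler rotations are composed in Z * Y * X order, and the final matrix is scale * rotation * translation.
"""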
rotation = (
Gf.Rotation(Gf.Vec3d.ZAxis(), euler[2])
* Gf.Rotation(Gf.Vec3d.YAxis(), euler[1])
* Gf.Rotation(Gf.Vec3d.XAxis(), euler[0])
)
xform = Gf.Matrix4d().SetScale(scale) * Gf.Matrix4d().SetRotate(rotation) * Gf.Matrix4d().SetTranslate(translate)
omni.kit.commands.execute(
"TransformPrim",
path=prim_path,
new_transform_matrix=xform,
)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/templates/default_stage.py | import carb
import omni.ext
import omni.kit.commands
from pxr import UsdLux, UsdShade, Kind, Vt, Gf, UsdGeom, Sdf
from ..new_stage import *
class DefaultStage:
def __init__(self):
register_template("default stage", self.new_stage)
def __del__(self):
unregister_template("default stage")
def new_stage(self, rootname, usd_context_name):
# S3 URLs
carlight_hdr = Sdf.AssetPath("https://omniverse-content-production.s3.us-west-2.amazonaws.com/Assets/Scenes/Templates/Default/SubUSDs/textures/CarLight_512x256.hdr")
grid_basecolor = Sdf.AssetPath("https://omniverse-content-production.s3.us-west-2.amazonaws.com/Assets/Scenes/Templates/Default/SubUSDs/textures/ov_uv_grids_basecolor_1024.png")
# change ambientLightColor
carb.settings.get_settings().set("/rtx/sceneDb/ambientLightColor", (0, 0, 0))
carb.settings.get_settings().set("/rtx/indirectDiffuse/enabled", True)
carb.settings.get_settings().set("/rtx/domeLight/upperLowerStrategy", 0)
carb.settings.get_settings().set("/rtx/post/lensFlares/flareScale", 0.075)
carb.settings.get_settings().set("/rtx/sceneDb/ambientLightIntensity", 0)
# get up axis
usd_context = omni.usd.get_context(usd_context_name)
stage = usd_context.get_stage()
up_axis = UsdGeom.GetStageUpAxis(stage)
with Usd.EditContext(stage, stage.GetRootLayer()):
# create Environment
omni.kit.commands.execute(
"CreatePrim",
prim_path="/Environment",
prim_type="Xform",
select_new_prim=False,
create_default_xform=True,
context_name=usd_context_name
)
prim = stage.GetPrimAtPath("/Environment")
prim.CreateAttribute("ground:size", Sdf.ValueTypeNames.Int, False).Set(1400)
prim.CreateAttribute("ground:type", Sdf.ValueTypeNames.String, False).Set("On")
# create Sky
omni.kit.commands.execute(
"CreatePrim",
prim_path="/Environment/Sky",
prim_type="DomeLight",
select_new_prim=False,
attributes={
UsdLux.Tokens.inputsIntensity: 1,
UsdLux.Tokens.inputsColorTemperature: 6250,
UsdLux.Tokens.inputsEnableColorTemperature: True,
UsdLux.Tokens.inputsExposure: 9,
UsdLux.Tokens.inputsTextureFile: carlight_hdr,
UsdLux.Tokens.inputsTextureFormat: UsdLux.Tokens.latlong,
UsdGeom.Tokens.visibility: "inherited",
} if hasattr(UsdLux.Tokens, 'inputsIntensity') else \
{
UsdLux.Tokens.intensity: 1,
UsdLux.Tokens.colorTemperature: 6250,
UsdLux.Tokens.enableColorTemperature: True,
UsdLux.Tokens.exposure: 9,
UsdLux.Tokens.textureFile: carlight_hdr,
UsdLux.Tokens.textureFormat: UsdLux.Tokens.latlong,
UsdGeom.Tokens.visibility: "inherited",
},
create_default_xform=True,
context_name=usd_context_name
)
prim = stage.GetPrimAtPath("/Environment/Sky")
prim.CreateAttribute("xformOp:scale", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(1, 1, 1))
if up_axis == "Y":
prim.CreateAttribute("xformOp:translate", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 305, 0))
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, -90, -90))
else:
prim.CreateAttribute("xformOp:translate", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 0, 305))
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 0, 0))
prim.CreateAttribute("xformOpOrder", Sdf.ValueTypeNames.String, False).Set(["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"])
# create DistantLight
omni.kit.commands.execute(
"CreatePrim",
prim_path="/Environment/DistantLight",
prim_type="DistantLight",
select_new_prim=False,
attributes={UsdLux.Tokens.inputsAngle: 2.5,
UsdLux.Tokens.inputsIntensity: 1,
UsdLux.Tokens.inputsColorTemperature: 7250,
UsdLux.Tokens.inputsEnableColorTemperature: True,
UsdLux.Tokens.inputsExposure: 10,
UsdGeom.Tokens.visibility: "inherited",
} if hasattr(UsdLux.Tokens, 'inputsIntensity') else \
{
UsdLux.Tokens.angle: 2.5,
UsdLux.Tokens.intensity: 1,
UsdLux.Tokens.colorTemperature: 7250,
UsdLux.Tokens.enableColorTemperature: True,
UsdLux.Tokens.exposure: 10,
UsdGeom.Tokens.visibility: "inherited",
},
create_default_xform=True,
context_name=usd_context_name
)
prim = stage.GetPrimAtPath("/Environment/DistantLight")
if up_axis == "Y":
prim.CreateAttribute("xformOp:translate", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 305, 0))
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(-105, 0, 0))
else:
prim.CreateAttribute("xformOp:translate", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 0, 305))
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(-15, 0, 0))
prim.CreateAttribute("xformOpOrder", Sdf.ValueTypeNames.String, False).Set(["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"])
# Material "Grid"
mtl_path = omni.usd.get_stage_next_free_path(stage, "/Environment/Looks/Grid", False)
omni.kit.commands.execute("CreateMdlMaterialPrim", mtl_url="OmniPBR.mdl", mtl_name="Grid", mtl_path=mtl_path, context_name=usd_context_name)
mat_prim = stage.GetPrimAtPath(mtl_path)
shader = UsdShade.Material(mat_prim).ComputeSurfaceSource("mdl")[0]
shader.SetSourceAssetSubIdentifier("OmniPBR", "mdl")
# set inputs
omni.usd.create_material_input(mat_prim, "albedo_add", 0, Sdf.ValueTypeNames.Float)
omni.usd.create_material_input(mat_prim, "albedo_brightness", 0.52, Sdf.ValueTypeNames.Float)
omni.usd.create_material_input(mat_prim, "albedo_desaturation", 1, Sdf.ValueTypeNames.Float)
omni.usd.create_material_input(mat_prim, "project_uvw", False, Sdf.ValueTypeNames.Bool)
omni.usd.create_material_input(mat_prim, "reflection_roughness_constant", 0.333, Sdf.ValueTypeNames.Float)
omni.usd.create_material_input(mat_prim, "diffuse_texture", grid_basecolor, Sdf.ValueTypeNames.Asset, def_value=Sdf.AssetPath(""), color_space="sRGB")
omni.usd.create_material_input(mat_prim, "texture_rotate", 0, Sdf.ValueTypeNames.Float, def_value=Vt.Float(0.0))
omni.usd.create_material_input(mat_prim, "texture_scale", Gf.Vec2f(0.5, 0.5), Sdf.ValueTypeNames.Float2, def_value=Gf.Vec2f(1, 1))
omni.usd.create_material_input(mat_prim, "texture_translate", Gf.Vec2f(0, 0), Sdf.ValueTypeNames.Float2, def_value=Gf.Vec2f(0, 0))
omni.usd.create_material_input(mat_prim, "world_or_object", False, Sdf.ValueTypeNames.Bool, def_value=Vt.Bool(False))
# Ground
ground_path = "/Environment/ground"
omni.kit.commands.execute(
"CreateMeshPrimWithDefaultXform",
prim_path=ground_path,
prim_type="Plane",
select_new_prim=False,
prepend_default_prim=False,
context_name=usd_context_name
)
prim = stage.GetPrimAtPath(ground_path)
prim.CreateAttribute("xformOp:translate", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 0, 0))
prim.CreateAttribute("xformOp:scale", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(1, 1, 1))
prim.CreateAttribute("xformOpOrder", Sdf.ValueTypeNames.String, False).Set(["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"])
if up_axis == "Y":
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, -90, -90))
else:
prim.CreateAttribute("xformOp:rotateXYZ", Sdf.ValueTypeNames.Double3, False).Set(Gf.Vec3d(0, 0, 0))
mesh = UsdGeom.Mesh(prim)
mesh.CreateSubdivisionSchemeAttr("none")
mesh.GetFaceVertexCountsAttr().Set([4])
mesh.GetFaceVertexIndicesAttr().Set([0, 1, 3, 2])
mesh.GetPointsAttr().Set(Vt.Vec3fArray([(-50, -50, 0), (50, -50, 0), (-50, 50, 0), (50, 50, 0)]))
mesh.GetNormalsAttr().Set(Vt.Vec3fArray([(0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1)]))
mesh.SetNormalsInterpolation("faceVarying")
# https://github.com/PixarAnimationStudios/USD/commit/592b4d39edf5daf0534d467e970c95462a65d44b
# UsdGeom.Imageable.CreatePrimvar deprecated in v19.03 and removed in v22.08
primvar = UsdGeom.PrimvarsAPI(prim).CreatePrimvar("st", Sdf.ValueTypeNames.TexCoord2fArray, UsdGeom.Tokens.faceVarying)
primvar.Set(Vt.Vec2fArray([(0, 0), (1, 0), (1, 1), (0, 1)]))
# Create a ground plane collider
# NOTE: replace with USD plane prim after the next USD update
try:
from pxr import UsdPhysics
colPlanePath = "/Environment/groundCollider"
planeGeom = UsdGeom.Plane.Define(stage, colPlanePath)
planeGeom.CreatePurposeAttr().Set("guide")
planeGeom.CreateAxisAttr().Set(up_axis)
colPlanePrim = stage.GetPrimAtPath(colPlanePath)
UsdPhysics.CollisionAPI.Apply(colPlanePrim)
except ImportError:
carb.log_warn("Failed to create a ground plane collider. Please load the omni.physx.bundle extension and create a new stage from this template if you need it.")
# bind "Grid" to "Ground"
omni.kit.commands.execute("BindMaterialCommand", prim_path=ground_path, material_path=mtl_path, strength=None, context_name=usd_context_name)
# update extent
attr = prim.GetAttribute(UsdGeom.Tokens.extent)
if attr:
bounds = UsdGeom.Boundable.ComputeExtentFromPlugins(UsdGeom.Boundable(prim), Usd.TimeCode.Default())
if bounds:
attr.Set(bounds)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/templates/sunlight.py | import carb
import omni.ext
import omni.kit.commands
from pxr import UsdLux
from ..new_stage import *
class SunlightStage:
def __init__(self):
register_template("sunlight", self.new_stage)
def __del__(self):
unregister_template("sunlight")
def new_stage(self, rootname, usd_context_name):
# Create basic DistantLight
usd_context = omni.usd.get_context(usd_context_name)
stage = usd_context.get_stage()
with Usd.EditContext(stage, stage.GetRootLayer()):
# create Environment
omni.kit.commands.execute(
"CreatePrim",
prim_path="/Environment",
prim_type="Xform",
select_new_prim=False,
create_default_xform=True,
context_name=usd_context_name
)
omni.kit.commands.execute(
"CreatePrim",
prim_path="/Environment/defaultLight",
prim_type="DistantLight",
select_new_prim=False,
# https://github.com/PixarAnimationStudios/USD/commit/b5d3809c943950cd3ff6be0467858a3297df0bb7
attributes={UsdLux.Tokens.inputsAngle: 1.0, UsdLux.Tokens.inputsIntensity: 3000} if hasattr(UsdLux.Tokens, 'inputsIntensity') else \
{UsdLux.Tokens.angle: 1.0, UsdLux.Tokens.intensity: 3000},
create_default_xform=True,
context_name=usd_context_name
)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/templates/__init__.py | from .sunlight import SunlightStage
from .default_stage import DefaultStage
new_stage_template_list = None
def load_templates():
global new_stage_template_list
new_stage_template_list = [SunlightStage(), DefaultStage()]
def unload_templates():
global new_stage_template_list
new_stage_template_list = None
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/test_new_stage_menu.py | import omni.kit.test
import asyncio
import carb
import omni.usd
import omni.ui as ui
from omni.kit import ui_test
from omni.kit.menu.utils import MenuItemDescription, MenuLayout
class TestMenuFile(omni.kit.test.AsyncTestCase):
async def setUp(self):
# wait for material to be preloaded so create menu is complete & menus don't rebuild during tests
await omni.kit.material.library.get_mdl_list_async()
await ui_test.human_delay()
self._future_test = None
self._required_stage_event = -1
self._stage_event_sub = omni.usd.get_context().get_stage_event_stream().create_subscription_to_pop(self._on_stage_event, name="omni.usd.menu.file")
async def tearDown(self):
pass
def _on_stage_event(self, event):
if self._future_test and int(self._required_stage_event) == int(event.type):
self._future_test.set_result(event.type)
async def reset_stage_event(self, stage_event):
self._required_stage_event = stage_event
self._future_test = asyncio.Future()
async def wait_for_stage_event(self):
async def wait_for_event():
await self._future_test
try:
await asyncio.wait_for(wait_for_event(), timeout=30.0)
except asyncio.TimeoutError:
carb.log_error(f"wait_for_stage_event timeout waiting for {self._required_stage_event}")
self._future_test = None
self._required_stage_event = -1
async def test_file_new_from_stage_template_empty(self):
stage = omni.usd.get_context().get_stage()
layer_name = stage.GetRootLayer().identifier if stage else "None"
await self.reset_stage_event(omni.usd.StageEventType.OPENED)
menu_widget = ui_test.get_menubar()
await menu_widget.find_menu("File").click()
await menu_widget.find_menu("New From Stage Template").click()
# select empty and wait for stage open
await menu_widget.find_menu("Empty").click()
await self.wait_for_stage_event()
# verify Empty stage
stage = omni.usd.get_context().get_stage()
self.assertFalse(stage.GetRootLayer().identifier == layer_name)
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(prim_list == ['/World'])
async def test_file_new_from_stage_template_sunlight(self):
stage = omni.usd.get_context().get_stage()
layer_name = stage.GetRootLayer().identifier if stage else "None"
await self.reset_stage_event(omni.usd.StageEventType.OPENED)
menu_widget = ui_test.get_menubar()
await menu_widget.find_menu("File").click()
await menu_widget.find_menu("New From Stage Template").click()
# select Sunlight and wait for stage open
await menu_widget.find_menu("Sunlight").click()
await self.wait_for_stage_event()
# verify Sunlight stage
stage = omni.usd.get_context().get_stage()
self.assertFalse(stage.GetRootLayer().identifier == layer_name)
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(prim_list == ['/World', '/Environment', '/Environment/defaultLight'])
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/__init__.py | from .test_commands import *
from .test_new_stage import *
from .test_new_stage_menu import *
from .test_templates import *
from .test_preferences import *
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/test_new_stage.py | import unittest
import carb.settings
import omni.kit.app
import omni.kit.test
import omni.kit.stage_templates
from pxr import UsdGeom
class TestNewStage(omni.kit.test.AsyncTestCase):
async def setUp(self):
self._staged_called = 0
self._stage_name = "$$_test_stage_$$"
async def tearDown(self):
pass
def new_stage_test(self, rootname):
self._staged_called += 1
async def test_dirty_status_after_new_stage(self):
await omni.kit.stage_templates.new_stage_async(template="sunflowers")
self.assertFalse(omni.usd.get_context().has_pending_edit())
# Puts some delay to make sure no pending events to handle.
await omni.kit.app.get_app().next_update_async()
await omni.kit.app.get_app().next_update_async()
self.assertFalse(omni.usd.get_context().has_pending_edit())
async def test_command_new_stage(self):
# check template does not already exist
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item == None)
# add new template
omni.kit.stage_templates.register_template(self._stage_name, self.new_stage_test, 0)
# check new template exists
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item != None)
self.assertTrue(item[0] == self._stage_name)
self.assertTrue(item[1] == self.new_stage_test)
# run template & check was called once
await omni.kit.stage_templates.new_stage_async(self._stage_name)
self.assertTrue(self._staged_called == 1)
stage = omni.usd.get_context().get_stage()
settings = carb.settings.get_settings()
default_prim_name = settings.get("/persistent/app/stage/defaultPrimName")
rootname = f"/{default_prim_name}"
# create cube
cube_path = omni.usd.get_stage_next_free_path(stage, "{}/Cube".format(rootname), False)
omni.kit.commands.execute(
"CreatePrim",
prim_path=cube_path,
prim_type="Cube",
select_new_prim=False,
attributes={UsdGeom.Tokens.size: 100, UsdGeom.Tokens.extent: [(-50, -50, -50), (50, 50, 50)]},
)
prim = stage.GetPrimAtPath(cube_path)
self.assertTrue(prim, "Cube Prim exists")
# create sphere
sphere_path = omni.usd.get_stage_next_free_path(stage, "{}/Sphere".format(rootname), False)
omni.kit.commands.execute("CreatePrim", prim_path=sphere_path, prim_type="Sphere", select_new_prim=False)
prim = stage.GetPrimAtPath(sphere_path)
self.assertTrue(prim, "Sphere Prim exists")
# delete template
omni.kit.stage_templates.unregister_template(self._stage_name)
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item == None)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/test_preferences.py | ## Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
import omni.kit.test
import carb
import omni.usd
import omni.kit.app
from omni.kit.test.async_unittest import AsyncTestCase
from omni.kit import ui_test
# duplicate of kit\source\extensions\omni.kit.window.preferences\python\omni\kit\window\preferences\tests\test_pages.py
# added here for coverage
class PreferencesTestPages(AsyncTestCase):
# Before running each test
async def setUp(self):
omni.kit.window.preferences.show_preferences_window()
async def test_show_pages(self):
pages = omni.kit.window.preferences.get_page_list()
page_names = [page._title for page in pages]
# check the list is alphabetically sorted; don't compare against a fixed list as members can change
self.assertEqual(page_names, sorted(page_names))
for page in pages:
omni.kit.window.preferences.select_page(page)
await ui_test.human_delay(50)
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/test_templates.py | import asyncio
import unittest
import carb.settings
import omni.kit.app
import omni.kit.test
import omni.kit.stage_templates
# also see kit\source\extensions\omni.kit.menu.file\python\omni\kit\menu\file\tests\test_func_templates.py as a similar test
# added here for coverage
class TestNewStageTemplates(omni.kit.test.AsyncTestCase):
async def setUp(self):
pass
async def tearDown(self):
pass
async def test_stage_template_empty(self):
await omni.kit.stage_templates.new_stage_async(template="empty")
# verify Empty stage
stage = omni.usd.get_context().get_stage()
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(prim_list == ['/World'])
async def test_stage_template_sunlight(self):
await omni.kit.stage_templates.new_stage_async(template="sunlight")
# verify Sunlight stage
stage = omni.usd.get_context().get_stage()
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(prim_list == ['/World', '/Environment', '/Environment/defaultLight'])
async def test_stage_template_default_stage(self):
await omni.kit.stage_templates.new_stage_async(template="default stage")
# verify Default stage
stage = omni.usd.get_context().get_stage()
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(set(prim_list) == set([
'/World', '/Environment', '/Environment/Sky',
'/Environment/DistantLight', '/Environment/Looks',
'/Environment/Looks/Grid', '/Environment/Looks/Grid/Shader',
'/Environment/ground', '/Environment/groundCollider']), prim_list)
async def test_stage_template_noname(self):
await omni.kit.stage_templates.new_stage_async()
# verify Empty stage
stage = omni.usd.get_context().get_stage()
prim_list = [prim.GetPath().pathString for prim in stage.TraverseAll()]
self.assertTrue(prim_list == ['/World'])
|
omniverse-code/kit/exts/omni.kit.stage_templates/omni/kit/stage_templates/tests/test_commands.py | import unittest
import carb.settings
import omni.kit.test
import omni.kit.stage_templates
from pxr import UsdGeom
class TestCommands(omni.kit.test.AsyncTestCase):
async def setUp(self):
self._staged_called = 0
self._stage_name = "$$_test_stage_$$"
async def tearDown(self):
pass
def new_stage_test(self, rootname):
self._staged_called += 1
async def test_command_new_stage(self):
# check template does not already exist
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item == None)
# add new template
omni.kit.stage_templates.register_template(self._stage_name, self.new_stage_test, 0)
# check new template exists
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item != None)
self.assertTrue(item[0] == self._stage_name)
self.assertTrue(item[1] == self.new_stage_test)
# run template & check was called once
await omni.kit.stage_templates.new_stage_async(self._stage_name)
self.assertTrue(self._staged_called == 1)
stage = omni.usd.get_context().get_stage()
settings = carb.settings.get_settings()
default_prim_name = settings.get("/persistent/app/stage/defaultPrimName")
rootname = f"/{default_prim_name}"
# create cube
cube_path = omni.usd.get_stage_next_free_path(stage, "{}/Cube".format(rootname), False)
omni.kit.commands.execute(
"CreatePrim",
prim_path=cube_path,
prim_type="Cube",
select_new_prim=False,
attributes={UsdGeom.Tokens.size: 100, UsdGeom.Tokens.extent: [(-50, -50, -50), (50, 50, 50)]},
)
prim = stage.GetPrimAtPath(cube_path)
self.assertTrue(prim, "Cube Prim exists")
# create sphere
sphere_path = omni.usd.get_stage_next_free_path(stage, "{}/Sphere".format(rootname), False)
omni.kit.commands.execute("CreatePrim", prim_path=sphere_path, prim_type="Sphere", select_new_prim=False)
prim = stage.GetPrimAtPath(sphere_path)
self.assertTrue(prim, "Sphere Prim exists")
# delete template
omni.kit.stage_templates.unregister_template(self._stage_name)
item = omni.kit.stage_templates.get_stage_template(self._stage_name)
self.assertTrue(item == None)
|
omniverse-code/kit/exts/omni.kit.widget.inspector/config/extension.toml | [package]
title = "UI Inspector (Preview)"
description = "Inspect your UI Elements"
version = "1.0.3"
category = "Developer"
authors = ["NVIDIA"]
repository = ""
keywords = ["stage", "outliner", "scene"]
changelog = "docs/CHANGELOG.md"
readme = "docs/README.md"
preview_image = "data/inspector_full.png"
icon = "data/clouseau.png"
[package.writeTarget]
kit = true
[dependencies]
"omni.ui" = {}
"omni.kit.pip_archive" = {}
[[native.library]]
path = "bin/${lib_prefix}omni.kit.widget.inspector${lib_ext}"
[[python.module]]
name = "omni.kit.widget.inspector"
[[test]]
args = []
stdoutFailPatterns.exclude = [
] |
omniverse-code/kit/exts/omni.kit.widget.inspector/omni/kit/widget/inspector/__init__.py | ## Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
##
## NVIDIA CORPORATION and its licensors retain all intellectual property
## and proprietary rights in and to this software, related documentation
## and any modifications thereto. Any use, reproduction, disclosure or
## distribution of this software and related documentation without an express
## license agreement from NVIDIA CORPORATION is strictly prohibited.
##
"""
InspectorWidget
---------------
InspectorWidget provides a way to display the internals of omni.ui widgets.
:class:`.InspectorWidget`
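Example (illustrative sketch, mirroring this extension's test suite):

.. code-block:: python

    import omni.ui as ui
    from omni.kit.widget.inspector import InspectorWidget, PreviewMode

    inspector = InspectorWidget()
    inspector.widget = ui.Label("Inspect me")
    inspector.preview_mode = PreviewMode.WIRE_ONLY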
"""
# Import the inspector widget bindings right away
from ._inspector_widget import *
from .scripts.extension import *
|
omniverse-code/kit/exts/omni.kit.widget.inspector/omni/kit/widget/inspector/scripts/extension.py | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import carb
import omni.ext
import omni.ui
import omni.kit.app
import omni.kit.widget.inspector
class InspectorWidgetExtension(omni.ext.IExt):
def __init__(self):
super().__init__()
pass
def on_startup(self, ext_id):
pass
# extension_path = omni.kit.app.get_app().get_extension_manager().get_extension_path(ext_id)
# omni.kit.widget.inspector.startup(extension_path)
def on_shutdown(self):
pass
# omni.kit.widget.inspector.shutdown()
|
omniverse-code/kit/exts/omni.kit.widget.inspector/omni/kit/widget/inspector/tests/test_widget.py | import omni.kit.test
from omni.kit.widget.inspector import PreviewMode, InspectorWidget
import functools
class TestInspectorWidget(omni.kit.test.AsyncTestCase):
async def setUp(self):
self._widget = InspectorWidget()
def test_widget_init(self):
self.assertTrue(self._widget)
def test_widget_get_set_widget(self):
widget = self._widget.widget
self.assertEqual(widget, None) # Check default
label = omni.ui.Label("Blah")
self._widget.widget = label
self.assertEqual(self._widget.widget, label)
def test_widget_get_set_preview(self):
preview_mode = self._widget.preview_mode
self.assertEqual(preview_mode, PreviewMode.WIREFRAME_AND_COLOR) # Check default
self._widget.preview_mode = PreviewMode.WIRE_ONLY
self.assertEqual(self._widget.preview_mode, PreviewMode.WIRE_ONLY)
def test_widget_get_set_selected_widget(self):
self.assertEqual(self._widget.selected_widget, None) # Check default
label = omni.ui.Label("Blah")
self._widget.selected_widget = label
self.assertEqual(self._widget.selected_widget, label)
def test_widget_get_set_start_depth(self):
start_depth = self._widget.start_depth
self.assertEqual(start_depth, 0) # Check default
self._widget.start_depth = 25
self.assertEqual(self._widget.start_depth, 25)
def test_widget_get_set_end_depth(self):
end_depth = self._widget.end_depth
self.assertEqual(end_depth, 65535) # Check default
self._widget.end_depth = 25
self.assertEqual(self._widget.end_depth, 25)
def test_widget_set_widget_changed_fn(self):
def selection_changed(the_list, widget):
the_list.append(widget)
the_list = []
self._widget.set_selected_widget_changed_fn(functools.partial(selection_changed, the_list))
label = omni.ui.Label("Blah")
self.assertTrue(len(the_list) == 0)
self._widget.selected_widget = label
self.assertTrue(len(the_list) == 1)
self.assertEqual(the_list[0], label)
def test_widget_set_preview_mode_changed_fn(self):
def preview_mode_value_changed(the_list, value):
the_list.append(value)
the_list = []
self._widget.set_preview_mode_change_fn(functools.partial(preview_mode_value_changed, the_list))
self.assertTrue(len(the_list) == 0)
self._widget.preview_mode = PreviewMode.WIRE_ONLY
self.assertTrue(len(the_list) == 1)
self.assertTrue(the_list[0] == PreviewMode.WIRE_ONLY)
|
omniverse-code/kit/exts/omni.kit.widget.inspector/omni/kit/widget/inspector/tests/__init__.py | from .test_widget import *
|
omniverse-code/kit/exts/omni.kit.widget.inspector/docs/CHANGELOG.md | # Changelog
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [1.0.3] - 2022-10-17
### Changes
- New mode COLOR_NO_NAVIGATION
## [1.0.2] - 2022-05-16
### Changes
- Added tests
## [1.0.1] - 2021-11-24
### Changes
- Update to latest omni.ui_query
## [1.0.0] - 2021-10-07
- Initial Release
|
omniverse-code/kit/exts/omni.kit.widget.inspector/docs/README.md | # UI Inspector Widget [omni.kit.widget.inspector]
|
omniverse-code/kit/exts/omni.kit.property.usd/PACKAGE-LICENSES/omni.kit.property.usd-LICENSE.md | Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited. |
omniverse-code/kit/exts/omni.kit.property.usd/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "3.18.17"
category = "Internal"
feature = true
# Lists people or organizations that are considered the "authors" of the package.
authors = ["NVIDIA"]
# The title and description fields are primarily for displaying extension info in UI
title = "USD Property Window Widgets"
description="Property Window widgets that displays USD related information."
# URL of the extension source repository.
repository = ""
# Preview image. Folder named "data" automatically goes in git lfs (see .gitattributes file).
preview_image = "data/preview.png"
# Icon is shown in Extensions window, it is recommended to be square, of size 256x256.
icon = "data/icon.png"
# Keywords for the extension
keywords = ["kit", "usd", "property"]
# Location of change log file in target (final) folder of extension, relative to the root.
# More info on writing changelog: https://keepachangelog.com/en/1.0.0/
changelog="docs/CHANGELOG.md"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
[dependencies]
"omni.client" = {}
"omni.kit.commands" = {}
"omni.timeline" = {}
"omni.usd" = {}
"omni.ui" = {}
"omni.kit.window.property" = {}
"omni.kit.widget.stage" = {}
"omni.kit.clipboard" = {}
"omni.kit.context_menu" = {}
"omni.kit.window.file_importer" = {}
"omni.kit.window.content_browser" = {optional = true}
"omni.kit.widget.versioning" = {optional = true}
"omni.kit.usd.layers" = {}
"omni.kit.notification_manager" = {}
[settings]
persistent.exts."omni.kit.property.usd".large_selection = 100
persistent.exts."omni.kit.property.usd".raw_widget_multi_selection_limit = 1
# Main python module this extension provides, it will be publicly available as "import omni.kit.property.usd".
[[python.module]]
name = "omni.kit.property.usd"
[[test]]
timeout = 1200
args = [
"--/rtx/materialDb/syncLoads=true",
"--/omni.kit.plugin/syncUsdLoads=true",
"--/rtx/hydra/materialSyncLoads=true",
"--/renderer/enabled=pxr",
"--/renderer/active=pxr",
"--/renderer/multiGpu/enabled=false",
"--/renderer/multiGpu/autoEnable=false", # Disable mGPU with PXR due to OM-51026, OM-53611
"--/renderer/multiGpu/maxGpuCount=1",
"--/app/asyncRendering=false",
"--/app/window/dpiScaleOverride=1.0",
"--/app/window/scaleToMonitor=false",
"--/app/file/ignoreUnsavedOnExit=true",
"--/persistent/app/stage/dragDropImport='reference'",
"--/persistent/app/material/dragDropMaterialPath='relative'",
"--/persistent/app/omniverse/filepicker/options_menu/show_details=false",
"--no-window"
]
dependencies = [
"omni.usd",
"omni.kit.renderer.capture",
"omni.kit.mainwindow",
"omni.kit.material.library", # for omni_pbr...
"omni.kit.window.preferences",
"omni.kit.window.content_browser",
"omni.kit.window.stage",
"omni.kit.property.material",
"omni.kit.window.status_bar",
"omni.kit.ui_test",
"omni.kit.test_suite.helpers",
"omni.kit.window.file",
"omni.hydra.rtx",
"omni.kit.hydra_texture",
"omni.kit.window.viewport",
]
#pyCoverageIncludeDependencies = false
stdoutFailPatterns.exclude = [
"*HydraRenderer failed to render this frame*", # Can drop a frame or two rendering with OpenGL interop
"*Cannot use omni.hydra.pxr without OpenGL interop*" # Linux TC configs with multi-GPU might not have OpenGL available
]
|
omniverse-code/kit/exts/omni.kit.property.usd/omni/kit/property/usd/usd_model_base.py | # Copyright (c) 2020-2021, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
from typing import List
import carb
import copy
import carb.profiler
import omni.kit.commands
import omni.kit.undo
import omni.timeline
import omni.usd
import omni.kit.notification_manager as nm
from omni.kit.usd.layers import get_layers, LayerEventType, get_layer_event_payload
from pxr import Ar, Gf, Sdf, Tf, Trace, Usd, UsdShade
from .control_state_manager import ControlStateManager
from .placeholder_attribute import PlaceholderAttribute
class UsdBase:
def __init__(
self,
stage: Usd.Stage,
object_paths: List[Sdf.Path],
self_refresh: bool,
metadata: dict = {},
change_on_edit_end: bool = False,
treat_array_entry_as_comp: bool = False,
**kwargs,
):
self._control_state_mgr = ControlStateManager.get_instance()
self._stage = stage
self._usd_context = None
self._object_paths = object_paths
self._object_paths_set = set(object_paths)
self._metadata = metadata
self._change_on_edit_end = change_on_edit_end
# Whether each array entry should be treated as a comp, if this property is of an array type.
# If set to True, each array entry will have its ambiguous and different-from-default state checked. This may
# have a huge perf impact if the array size is big.
# It is useful if you need to build each array entry as a single widget with a mixed/non-default indicator;
# an example would be SdfAssetPathAttributeModel and _sdf_asset_path_array_builder.
self._treat_array_entry_as_comp = treat_array_entry_as_comp
self._dirty = True
self._value = None # The value to be displayed on widget
self._has_default_value = False
self._default_value = None
self._real_values = [] # The actual values in usd, might be different from self._value if ambiguous.
self._connections = []
self._is_big_array = False
self._prev_value = None
self._prev_real_values = []
self._editing = 0
self._ignore_notice = False
self._ambiguous = False
# "comp" is the first dimension of an array (when treat_array_entry_as_comp is enabled) or vector attribute.
# - If attribute is non-array vector type, comp indexes into the vector itself. e.g. for vec3 type, comp=1 means
# the 2nd channel of the vector vec3[1].
# - If attribute is an array type:
# - When treat_array_entry_as_comp is enabled) is enabled:
# - If the attribute is an array of scalar (i.e. float[]), `comp`` is the entry index. e.g. for
# SdfAssetArray type, comp=1 means the 2nd path in the path array.
# - If the attribute is an array of vector (i.e. vec3f[]), `comp` only indexes the array entry, it does
# not support indexing into the channel of the vector within the array entry. i.e a "2D" comp is not
# supported yet.
# - When treat_array_entry_as_comp is enabled) is disabled:
# - comp is 0 and the entire array is treated as one scalar value.
#
# This applies to all `comp` related functionality in this model base.
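# Illustrative example (hypothetical values): for a point3f attribute holding Gf.Vec3f(1.0, 2.0, 3.0),
# comp=1 addresses the 2.0 channel; for an asset[] attribute with two entries and
# treat_array_entry_as_comp=True, comp=1 addresses the second asset path.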
self._comp_ambiguous = []
self._might_be_time_varying = False  # Inaccurately named; it really means the attribute has more than 0 time samples
self._timeline = omni.timeline.get_timeline_interface()
self._current_time = self._timeline.get_current_time()
self._timeline_sub = None
self._on_set_default_fn = None
self._soft_range_min = None
self._soft_range_max = None
# get soft_range userdata settings
attributes = self._get_attributes()
if attributes:
attribute = attributes[-1]
if isinstance(attribute, Usd.Attribute):
soft_range = attribute.GetCustomDataByKey("omni:kit:property:usd:soft_range_ui")
if soft_range:
self._soft_range_min = soft_range[0]
self._soft_range_max = soft_range[1]
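# Sketch, for illustration only: the custom data read above is what set_soft_range_userdata() below writes,
# e.g. a widget persisting a custom slider range would author something equivalent to
#   attribute.SetCustomDataByKey("omni:kit:property:usd:soft_range_ui", Gf.Vec2f(0.0, 10.0))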
# Hard range for the value. For vector type, range value is a float/int that compares against each component individually
self._min = kwargs.get("min", None)
self._max = kwargs.get("max", None)
# Invalid range
if self._min is not None and self._max is not None and self._min >= self._max:
self._min = self._max = None
# The state of the icon on the right side of the line with widgets
self._control_state = 0
# Callback when the control state is changing. We use it to redraw UI
self._on_control_state_changed_fn = None
# Callback when the value is reset. We use it to redraw UI
self._on_set_default_fn = None
# True if the attribute has the default value and the current value is not default
self._different_from_default = False
# Per component different from default
self._comp_different_from_default = []
# Whether the UsdModel should self-register a Tf.Notice listener or let UsdPropertiesWidget inform it of property changes
if self_refresh:
self._listener = Tf.Notice.Register(Usd.Notice.ObjectsChanged, self._on_usd_changed, self._stage)
else:
self._listener = None
# Notification handler to throttle notifications.
self._notification = None
usd_context = self._get_usd_context()
layers = get_layers(usd_context)
self._spec_locks_subscription = layers.get_event_stream().create_subscription_to_pop(
self._on_spec_locks_changed, name="Property USD"
)
@property
def control_state(self):
"""Returns the current control state, it's the icon on the right side of the line with widgets"""
return self._control_state
@property
def stage(self):
return self._stage
@property
def metadata(self):
return self._metadata
def update_control_state(self):
control_state, force_refresh = self._control_state_mgr.update_control_state(self)
# Redraw control state icon when the control state is changed
if self._control_state != control_state or force_refresh:
self._control_state = control_state
if self._on_control_state_changed_fn:
self._on_control_state_changed_fn()
def set_on_control_state_changed_fn(self, fn):
"""Callback that is called when control state is changed"""
self._on_control_state_changed_fn = fn
def set_on_set_default_fn(self, fn):
"""Callback that is called when value is reset"""
self._on_set_default_fn = fn
def clean(self):
self._notification = None
self._timeline_sub = None
self._stage = None
self._spec_locks_subscription = None
if self._listener:
self._listener.Revoke()
self._listener = None
def is_different_from_default(self) -> bool:
"""Returns True if the attribute has the default value and the current value is not default"""
self._update_value()
# soft_range has been overridden
if self._soft_range_min != None and self._soft_range_max != None:
return True
return self._different_from_default
def might_be_time_varying(self) -> bool:
self._update_value()
return self._might_be_time_varying
def is_ambiguous(self) -> bool:
self._update_value()
return self._ambiguous
def is_comp_ambiguous(self, index: int) -> bool:
self._update_value()
comp_len = len(self._comp_ambiguous)
if comp_len == 0 or index < 0:
return self.is_ambiguous()
if index < comp_len:
return self._comp_ambiguous[index]
return False
def is_array_type(self) -> bool:
return self._is_array_type()
def get_all_comp_ambiguous(self) -> List[bool]:
"""Empty array if attribute value is a scalar, check is_ambiguous instead"""
self._update_value()
return self._comp_ambiguous
def get_attribute_paths(self) -> List[Sdf.Path]:
return self._object_paths
def get_property_paths(self) -> List[Sdf.Path]:
return self.get_attribute_paths()
def get_connections(self):
return self._connections
def set_default(self, comp=-1):
"""Set the UsdAttribute default value if it exists in metadata"""
self.set_soft_range_userdata(None, None)
if self.is_different_from_default() is False or self._has_default_value is False:
if self._soft_range_min != None and self._soft_range_max != None:
if self._on_set_default_fn:
self._on_set_default_fn()
self.update_control_state()
return
current_time_code = self.get_current_time_code()
with omni.kit.undo.group():
# TODO clear timesample
# However, when a value is timesampled, the "default" button is overridden by timesampled button,
# so there's no way to invoke this function
for attribute in self._get_attributes():
if isinstance(attribute, Usd.Attribute):
current_value = attribute.Get(current_time_code)
if comp >= 0:
# OM-46294: Make a value copy to avoid changing with reference.
default_value = copy.copy(current_value)
default_value[comp] = self._default_value[comp]
else:
default_value = self._default_value
self._change_property(attribute.GetPath(), default_value, current_value)
# We just set all the properties to the same value, it's no longer ambiguous
self._ambiguous = False
self._comp_ambiguous.clear()
self._different_from_default = False
self._comp_different_from_default.clear()
if self._on_set_default_fn:
self._on_set_default_fn()
def _create_placeholder_attributes(self, attributes):
# NOTE: PlaceholderAttribute.CreateAttribute cannot throw exceptions
for index, attribute in enumerate(attributes):
if isinstance(attribute, PlaceholderAttribute):
self._editing += 1
attributes[index] = attribute.CreateAttribute()
self._editing -= 1
def set_value(self, value, comp: int = -1):
if self._min is not None:
if hasattr(value, "__len__"):
for i in range(len(value)):
if value[i] < self._min:
value[i] = self._min
else:
if value < self._min:
value = self._min
if self._max is not None:
if hasattr(value, "__len__"):
for i in range(len(value)):
if value[i] > self._max:
value[i] = self._max
else:
if value > self._max:
value = self._max
if not self._ambiguous and not any(self._comp_ambiguous) and value == self._value:
return False
if self.is_instance_proxy():
self._post_notification("Cannot edit attributes of instance proxy.")
self._update_value(True) # reset value
return False
if self.is_locked():
self._post_notification("Cannot edit locked attributes.")
self._update_value(True) # reset value
return False
if self._might_be_time_varying:
self._post_notification("Setting time varying attribute is not supported yet")
return False
self._value = value if comp == -1 else UsdBase.update_value_by_comp(value, self._value, comp)
attributes = self._get_attributes()
if len(attributes) == 0:
return False
self._create_placeholder_attributes(attributes)
if self._editing:
for i, attribute in enumerate(attributes):
self._ignore_notice = True
if comp == -1:
self._real_values[i] = self._value
if not self._change_on_edit_end:
attribute.Set(self._value)
else:
# Only update a single component of the value (for vector type)
value = self._real_values[i]
self._real_values[i] = self._update_value_by_comp(value, comp)
if not self._change_on_edit_end:
attribute.Set(value)
self._ignore_notice = False
else:
with omni.kit.undo.group():
for i, attribute in enumerate(attributes):
self._ignore_notice = True
# begin_edit is not called for certain widgets (like Checkbox), so issue the command directly
if comp == -1:
self._change_property(attribute.GetPath(), self._value, None)
else:
# Only update a single component of the value (for vector type)
value = self._real_values[i]
self._real_values[i] = self._update_value_by_comp(value, comp)
self._change_property(attribute.GetPath(), value, None)
self._ignore_notice = False
if comp == -1:
# We just set all the properties to the same value, it's no longer ambiguous
self._ambiguous = False
self._comp_ambiguous.clear()
else:
self._comp_ambiguous[comp] = False
self._ambiguous = any(self._comp_ambiguous)
if self._has_default_value:
self._comp_different_from_default = [False] * self._get_comp_num()
if comp == -1:
self._different_from_default = value != self._default_value
if self._different_from_default:
for comp in range(len(self._comp_different_from_default)):
self._comp_different_from_default[comp] = not self._compare_value_by_comp(
value, self._default_value, comp
)
else:
self._comp_different_from_default[comp] = not self._compare_value_by_comp(
value, self._default_value, comp
)
self._different_from_default = any(self._comp_different_from_default)
else:
self._different_from_default = False
self._comp_different_from_default.clear()
self.update_control_state()
return True
def _is_prev_same(self):
return self._prev_real_values == self._real_values
def begin_edit(self):
self._editing = self._editing + 1
self._prev_value = self._value
self._save_real_values_as_prev()
def end_edit(self):
self._editing = self._editing - 1
if self._is_prev_same():
return
attributes = self._get_attributes()
self._create_placeholder_attributes(attributes)
with omni.kit.undo.group():
self._ignore_notice = True
for i in range(len(attributes)):
attribute = attributes[i]
self._change_property(attribute.GetPath(), self._real_values[i], self._prev_real_values[i])
self._ignore_notice = False
# Set flags. It calls _on_control_state_changed_fn when the user finished editing
self._update_value(True)
def _post_notification(self, message):
if not self._notification or self._notification.dismissed:
status = nm.NotificationStatus.WARNING
self._notification = nm.post_notification(message, status=status)
carb.log_warn(message)
def _change_property(self, path: Sdf.Path, new_value, old_value):
# OM-75480: For props inside the session layer, this will always change specs in the session layer to
# avoid shadowing. Only specs with a def specifier are considered because the session layer is currently
# used for several kinds of runtime data, such as built-in cameras and MDL material params, and not all
# of them create runtime prims inside the session layer. For those that are not defined inside the
# session layer, we should avoid leaving deltas inside other sublayers, as they are shadowed and useless
# after the stage is closed.
target_layer, _ = omni.usd.find_spec_on_session_or_its_sublayers(
self._stage, path.GetPrimPath(), lambda spec: spec.specifier == Sdf.SpecifierDef
)
if not target_layer:
target_layer = self._stage.GetEditTarget().GetLayer()
omni.kit.commands.execute(
"ChangeProperty", prop_path=path,
value=new_value, prev=old_value,
target_layer=target_layer,
usd_context_name=self._stage
)
def get_value_by_comp(self, comp: int):
self._update_value()
if comp == -1:
return self._value
return self._get_value_by_comp(self._value, comp)
def _save_real_values_as_prev(self):
# It's like copy.deepcopy but not all USD types support pickling (e.g. Gf.Quat*)
self._prev_real_values = [type(value)(value) for value in self._real_values]
def _get_value_by_comp(self, value, comp: int):
if value.__class__.__module__ == "pxr.Gf":
if value.__class__.__name__.startswith("Quat"):
if comp == 0:
return value.real
else:
return value.imaginary[comp - 1]
elif value.__class__.__name__.startswith("Matrix"):
dimension = len(value)
row = comp // dimension
col = comp % dimension
return value[row, col]
elif value.__class__.__name__.startswith("Vec"):
return value[comp]
else:
if comp < len(value):
return value[comp]
return None
def _update_value_by_comp(self, value, comp: int):
"""update value from self._value"""
return UsdBase.update_value_by_comp(self._value, value, comp)
@staticmethod
def update_value_by_comp(from_value, to_value, comp: int):
"""update value from from_value to to_value"""
if from_value.__class__.__module__ == "pxr.Gf":
if from_value.__class__.__name__.startswith("Quat"):
if comp == 0:
to_value.real = from_value.real
else:
imaginary = from_value.imaginary
imaginary[comp - 1] = from_value.imaginary[comp - 1]
to_value.SetImaginary(imaginary)
elif from_value.__class__.__name__.startswith("Matrix"):
dimension = len(from_value)
row = comp // dimension
col = comp % dimension
to_value[row, col] = from_value[row, col]
elif from_value.__class__.__name__.startswith("Vec"):
to_value[comp] = from_value[comp]
else:
to_value[comp] = from_value[comp]
return to_value
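# Usage sketch (hypothetical values): copy only the 2nd channel from one vector into another, e.g.
#   UsdBase.update_value_by_comp(Gf.Vec3f(0, 5, 0), Gf.Vec3f(1, 2, 3), 1)  # returns Gf.Vec3f(1, 5, 3)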
def _compare_value_by_comp(self, val1, val2, comp: int):
return self._get_value_by_comp(val1, comp) == self._get_value_by_comp(val2, comp)
def _get_comp_num(self):
# TODO: any better way to do this?
# Checks if the value type is a vector type
if self._value.__class__.__module__ == "pxr.Gf":
if self._value.__class__.__name__.startswith("Quat"):
return 4
elif self._value.__class__.__name__.startswith("Matrix"):
mat_dimension = len(self._value)
return mat_dimension * mat_dimension
elif hasattr(self._value, "__len__"):
return len(self._value)
elif self._is_array_type() and self._treat_array_entry_as_comp:
return len(self._value)
return 0
@Trace.TraceFunction
def _on_usd_changed(self, notice, stage):
if stage != self._stage:
return
if self._editing > 0:
return
if self._ignore_notice:
return
for path in notice.GetResyncedPaths():
if path in self._object_paths_set:
self._set_dirty()
return
for path in notice.GetChangedInfoOnlyPaths():
if path in self._object_paths_set:
self._set_dirty()
return
def _on_dirty(self):
pass
def _set_dirty(self):
if self._editing > 0:
return
self._dirty = True
self._on_dirty()
def _get_type_name(self, obj=None):
if obj:
if hasattr(obj, "GetTypeName"):
return obj.GetTypeName()
elif hasattr(obj, "typeName"):
return obj.typeName
else:
return None
else:
type_name = self._metadata.get(Sdf.PrimSpec.TypeNameKey, "unknown type")
return Sdf.ValueTypeNames.Find(type_name)
def _is_array_type(self, obj=None):
type_name = self._get_type_name(obj)
if isinstance(type_name, Sdf.ValueTypeName):
return type_name.isArray
else:
return False
def _get_obj_type_default_value(self, obj):
type_name = self._get_type_name(obj)
if isinstance(type_name, Sdf.ValueTypeName):
return type_name.defaultValue
else:
return None
def _update_value(self, force=False):
return self._update_value_objects(force, self._get_attributes())
def _update_value_objects(self, force: bool, objects: list):
if (self._dirty or force) and self._stage:
with Ar.ResolverContextBinder(self._stage.GetPathResolverContext()):
with Ar.ResolverScopedCache():
carb.profiler.begin(1, "UsdBase._update_value_objects")
current_time_code = self.get_current_time_code()
self._might_be_time_varying = False
self._value = None
self._has_default_value = False
self._default_value = None
self._real_values.clear()
self._connections.clear()
self._ambiguous = False
self._comp_ambiguous.clear()
self._different_from_default = False
self._comp_different_from_default.clear()
for index, object in enumerate(objects):
value = self._read_value(object, current_time_code)
self._real_values.append(value)
if isinstance(object, Usd.Attribute):
self._might_be_time_varying = self._might_be_time_varying or object.GetNumTimeSamples() > 0
self._connections.append(
[conn for conn in object.GetConnections()] if object.HasAuthoredConnections() else []
)
# only need to check the first prim. All other prims are supposed to be the same
if index == 0:
self._value = value
if self._value and self._is_array_type(object) and len(self._value) > 16:
self._is_big_array = True
comp_num = self._get_comp_num()
self._comp_ambiguous = [False] * comp_num
self._comp_different_from_default = [False] * comp_num
# Loads the default value
self._has_default_value, self._default_value = self._get_default_value(object)
elif self._value != value:
self._value = value
self._ambiguous = True
comp_num = len(self._comp_ambiguous)
if comp_num > 0:
for i in range(comp_num):
if not self._comp_ambiguous[i]:
self._comp_ambiguous[i] = not self._compare_value_by_comp(
value, self._real_values[0], i
)
if self._has_default_value:
comp_num = len(self._comp_different_from_default)
if comp_num > 0:
for i in range(comp_num):
if not self._comp_different_from_default[i]:
self._comp_different_from_default[i] = not self._compare_value_by_comp(
value, self._default_value, i
)
self._different_from_default |= any(self._comp_different_from_default)
else:
self._different_from_default |= value != self._default_value
self._dirty = False
self._timeline_sub = self._timeline.get_timeline_event_stream().create_subscription_to_pop(
self._on_timeline_event
)
self.update_control_state()
carb.profiler.end(1)
return True
return False
def _get_default_value(self, property):
default_values = {"xformOp:scale": (1.0, 1.0, 1.0), "visibleInPrimaryRay": True, "primvars:multimatte_id": -1}
if isinstance(property, Usd.Attribute):
prim = property.GetPrim()
if prim:
custom = property.GetCustomData()
if "default" in custom:
# This is not the standard USD way to get default.
return True, custom["default"]
elif "customData" in self._metadata:
# This is to fetch default value for custom property.
default_value = self._metadata["customData"].get("default", None)
if default_value:
return True, default_value
else:
prim_definition = prim.GetPrimDefinition()
prop_spec = prim_definition.GetSchemaPropertySpec(property.GetPath().name)
if prop_spec and prop_spec.default != None:
return True, prop_spec.default
if property.GetName() in default_values:
return True, default_values[property.GetName()]
# If we still don't find default value, use type's default value
value_type = property.GetTypeName()
default_value = value_type.defaultValue
return True, default_value
elif isinstance(property, PlaceholderAttribute):
return True, property.Get()
return False, None
def _get_attributes(self):
if not self._stage:
return []
attributes = []
if not self._stage:
return attributes
for path in self._object_paths:
prim = self._stage.GetPrimAtPath(path.GetPrimPath())
if prim:
attr = prim.GetAttribute(path.name)
if attr:
if not attr.IsHidden():
attributes.append(attr)
else:
attr = PlaceholderAttribute(name=path.name, prim=prim, metadata=self._metadata)
attributes.append(attr)
return attributes
def _get_objects(self):
objects = []
if not self._stage:
return objects
for path in self._object_paths:
obj = self._stage.GetObjectAtPath(path)
if obj and not obj.IsHidden():
objects.append(obj)
return objects
def _on_timeline_event(self, e):
if e.type == int(omni.timeline.TimelineEventType.CURRENT_TIME_TICKED) or e.type == int(
omni.timeline.TimelineEventType.CURRENT_TIME_CHANGED
):
current_time = e.payload["currentTime"]
if current_time != self._current_time:
self._current_time = current_time
if self._might_be_time_varying:
self._set_dirty()
elif e.type == int(omni.timeline.TimelineEventType.TENTATIVE_TIME_CHANGED):
tentative_time = e.payload["tentativeTime"]
if tentative_time != self._current_time:
self._current_time = tentative_time
if self._might_be_time_varying:
self._set_dirty()
def _read_value(self, object: Usd.Object, time_code: Usd.TimeCode):
carb.profiler.begin(1, "UsdBase._read_value")
val = object.Get(time_code)
if val is None:
result, val = self._get_default_value(object)
if not result:
val = self._get_obj_type_default_value(object)
carb.profiler.end(1)
return val
def _on_spec_locks_changed(self, event: carb.events.IEvent):
payload = get_layer_event_payload(event)
if payload and payload.event_type == LayerEventType.SPECS_LOCKING_CHANGED:
self.update_control_state()
def _get_usd_context(self):
if not self._usd_context:
self._usd_context = omni.usd.get_context_from_stage(self._stage)
return self._usd_context
def set_locked(self, locked):
usd_context = self._get_usd_context()
if not usd_context:
carb.log_warn(f"Current stage is not attached to any usd context.")
return
if locked:
omni.kit.usd.layers.lock_specs(usd_context, self._object_paths, False)
else:
omni.kit.usd.layers.unlock_specs(usd_context, self._object_paths, False)
def is_instance_proxy(self):
if not self._object_paths:
return False
path = Sdf.Path(self._object_paths[0]).GetPrimPath()
prim = self._stage.GetPrimAtPath(path)
return prim and prim.IsInstanceProxy()
def is_locked(self):
usd_context = self._get_usd_context()
if not usd_context:
carb.log_warn(f"Current stage is not attached to any usd context.")
return
for path in self._object_paths:
if not omni.kit.usd.layers.is_spec_locked(usd_context, path):
return False
return True
def has_connections(self):
return bool(len(self._connections[-1]) > 0)
def get_value(self):
self._update_value()
return self._value
def get_current_time_code(self):
return Usd.TimeCode(omni.usd.get_frame_time_code(self._current_time, self._stage.GetTimeCodesPerSecond()))
def set_soft_range_userdata(self, soft_range_min, soft_range_max):
# set soft_range userdata settings
for attribute in self._get_attributes():
if isinstance(attribute, Usd.Attribute):
if soft_range_min == None and soft_range_max == None:
attribute.SetCustomDataByKey("omni:kit:property:usd:soft_range_ui", None)
else:
attribute.SetCustomDataByKey(
"omni:kit:property:usd:soft_range_ui", Gf.Vec2f(soft_range_min, soft_range_max)
)
self._soft_range_min = soft_range_min
self._soft_range_max = soft_range_max
|
omniverse-code/kit/exts/omni.kit.property.usd/omni/kit/property/usd/relationship.py | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import weakref
from functools import partial
from typing import Callable, List, Optional
import omni.kit.commands
import omni.ui as ui
from omni.kit.widget.stage import StageWidget
from pxr import Sdf
class SelectionWatch(object):
def __init__(self, stage, on_selection_changed_fn, filter_type_list, filter_lambda, tree_view=None):
self._stage = weakref.ref(stage)
self._last_selected_prim_paths = None
self._filter_type_list = filter_type_list
self._filter_lambda = filter_lambda
self._on_selection_changed_fn = on_selection_changed_fn
self._targets_limit = 0
if tree_view:
self.set_tree_view(tree_view)
def reset(self, targets_limit):
self._targets_limit = targets_limit
self.clear_selection()
def set_tree_view(self, tree_view):
self._tree_view = tree_view
self._tree_view.set_selection_changed_fn(self._on_widget_selection_changed)
self._last_selected_prim_paths = None
def clear_selection(self):
if not self._tree_view:
return
self._tree_view.model.update_dirty()
self._tree_view.selection = []
if self._on_selection_changed_fn:
self._on_selection_changed_fn([])
def _on_widget_selection_changed(self, selection):
stage = self._stage()
if not stage:
return
prim_paths = [str(item.path) for item in selection if item]
# Deselect instance proxy items if they were selected
selection = [item for item in selection if item and not item.instance_proxy]
# Although the stage view has a filter, you can still select the ancestors of filtered prims, which might not match the type.
if len(self._filter_type_list) != 0:
filtered_selection = []
for item in selection:
prim = stage.GetPrimAtPath(item.path)
if prim:
for type in self._filter_type_list:
if prim.IsA(type):
filtered_selection.append(item)
break
if filtered_selection != selection:
selection = filtered_selection
# or the ancestor might not match the lambda filter
if self._filter_lambda is not None:
selection = [item for item in selection if self._filter_lambda(stage.GetPrimAtPath(item.path))]
# Deselect if over the limit
if self._targets_limit > 0 and len(selection) > self._targets_limit:
selection = selection[: self._targets_limit]
if self._tree_view.selection != selection:
self._tree_view.selection = selection
prim_paths = [str(item.path) for item in selection]
if prim_paths == self._last_selected_prim_paths:
return
self._last_selected_prim_paths = prim_paths
if self._on_selection_changed_fn:
self._on_selection_changed_fn(self._last_selected_prim_paths)
def enable_filtering_checking(self, enable: bool):
"""
It is used to prevent selecting the prims that are filtered out but
still displayed when such prims have filtered children. When `enable`
is True, SelectionWatch should consider filtering when changing Kit's
selection.
"""
pass
def set_filtering(self, filter_string: Optional[str]):
pass
class RelationshipTargetPicker:
def __init__(
self, stage, relationship_widget, filter_type_list, filter_lambda, on_add_targets: Optional[Callable] = None
):
self._weak_stage = weakref.ref(stage)
self._relationship_widget = relationship_widget
self._filter_lambda = filter_lambda
self._selected_paths = []
self._filter_type_list = filter_type_list
self._on_add_targets = on_add_targets
def on_window_visibility_changed(visible):
if not visible:
self._stage_widget.open_stage(None)
else:
# Only attach the stage when picker is open. Otherwise the Tf notice listener in StageWidget kills perf
self._stage_widget.open_stage(self._weak_stage())
self._window = ui.Window(
"Select Target(s)",
width=400,
height=400,
visible=False,
flags=0,
visibility_changed_fn=on_window_visibility_changed,
)
with self._window.frame:
with ui.VStack():
with ui.Frame():
self._stage_widget = StageWidget(None, columns_enabled=["Type"])
self._selection_watch = SelectionWatch(
stage=stage,
on_selection_changed_fn=self._on_selection_changed,
filter_type_list=filter_type_list,
filter_lambda=filter_lambda,
)
self._stage_widget.set_selection_watch(self._selection_watch)
def on_select(weak_self):
weak_self = weak_self()
if not weak_self:
return
pending_add = []
for relationship in weak_self._relationship_widget._relationships:
if relationship:
existing_targets = relationship.GetTargets()
for selected_path in weak_self._selected_paths:
selected_path = Sdf.Path(selected_path)
if selected_path not in existing_targets:
pending_add.append((relationship, selected_path))
if len(pending_add):
omni.kit.undo.begin_group()
for add in pending_add:
omni.kit.commands.execute("AddRelationshipTarget", relationship=add[0], target=add[1])
omni.kit.undo.end_group()
if self._on_add_targets:
self._on_add_targets(pending_add)
weak_self._window.visible = False
with ui.VStack(
height=0, style={"Button.Label:disabled": {"color": 0xFF606060}}
): # TODO consolidate all styles
self._label = ui.Label("Selected Path(s):\n\tNone")
self._button = ui.Button(
"Add",
height=10,
clicked_fn=partial(on_select, weak_self=weakref.ref(self)),
enabled=False,
identifier="add_button"
)
def clean(self):
self._window.set_visibility_changed_fn(None)
self._window = None
self._selection_watch = None
self._stage_widget.open_stage(None)
self._stage_widget.destroy()
self._stage_widget = None
self._filter_type_list = None
self._filter_lambda = None
self._on_add_targets = None
def show(self, targets_limit):
self._targets_limit = targets_limit
self._selection_watch.reset(targets_limit)
self._window.visible = True
if self._filter_lambda is not None:
self._stage_widget._filter_by_lambda({"relpicker_filter": self._filter_lambda}, True)
if self._filter_type_list:
self._stage_widget._filter_by_type(self._filter_type_list, True)
self._stage_widget.update_filter_menu_state(self._filter_type_list)
def _on_selection_changed(self, paths):
self._selected_paths = paths
if self._button:
self._button.enabled = len(self._selected_paths) > 0
if self._label:
text = "\n\t".join(self._selected_paths)
label_text = "Selected Path(s)"
if self._targets_limit > 0:
label_text += f" ({len(self._selected_paths)}/{self._targets_limit})"
label_text += f":\n\t{text if len(text) else 'None'}"
self._label.text = label_text
class RelationshipEditWidget:
def __init__(self, stage, attr_name, prim_paths, additional_widget_kwargs=None):
self._id_name = f"{prim_paths[-1]}_{attr_name}".replace('/', '_')
self._relationships = [stage.GetPrimAtPath(path).GetRelationship(attr_name) for path in prim_paths]
self._additional_widget_kwargs = additional_widget_kwargs if additional_widget_kwargs else {}
self._targets_limit = self._additional_widget_kwargs.get("targets_limit", 0)
self._button = None
self._target_picker = RelationshipTargetPicker(
stage,
self,
self._additional_widget_kwargs.get("target_picker_filter_type_list", []),
self._additional_widget_kwargs.get("target_picker_filter_lambda", None),
self._additional_widget_kwargs.get("target_picker_on_add_targets", None),
)
self._frame = ui.Frame()
self._frame.set_build_fn(self._build)
self._on_remove_target = self._additional_widget_kwargs.get("on_remove_target", None)
self._enabled = self._additional_widget_kwargs.get("enabled", True)
self._shared_targets = None
def clean(self):
self._target_picker.clean()
self._target_picker = None
self._frame = None
self._button = None
self._label = None
self._on_remove_target = None
self._enabled = True
def is_ambiguous(self) -> bool:
return self._shared_targets is None
def get_all_comp_ambiguous(self) -> List[bool]:
return []
def get_relationship_paths(self) -> List[Sdf.Path]:
return [rel.GetPath() for rel in self._relationships]
def get_property_paths(self) -> List[Sdf.Path]:
return self.get_relationship_paths()
def get_targets(self) -> List[Sdf.Path]:
return self._shared_targets
def set_targets(self, targets: List[Sdf.Path]):
with omni.kit.undo.group():
for relationship in self._relationships:
if relationship:
omni.kit.commands.execute("SetRelationshipTargets", relationship=relationship, targets=targets)
def set_value(self, targets: List[Sdf.Path]):
self.set_targets(targets)
def _build(self):
self._shared_targets = None
for relationship in self._relationships:
targets = relationship.GetTargets()
if self._shared_targets is None:
self._shared_targets = targets
elif self._shared_targets != targets:
self._shared_targets = None
break
with ui.VStack(spacing=2):
if self._shared_targets is not None:
for target in self._shared_targets:
with ui.HStack(spacing=2):
ui.StringField(name="models", read_only=True).model.set_value(target.pathString)
def on_remove_target(weak_self, target):
weak_self = weak_self()
if weak_self:
with omni.kit.undo.group():
for relationship in weak_self._relationships:
if relationship:
omni.kit.commands.execute(
"RemoveRelationshipTarget", relationship=relationship, target=target
)
if self._on_remove_target:
self._on_remove_target(target)
ui.Button(
"-",
enabled=self._enabled,
width=ui.Pixel(14),
clicked_fn=partial(on_remove_target, weak_self=weakref.ref(self), target=target),
identifier=f"remove_relationship{target.pathString.replace('/', '_')}"
)
def on_add_target(weak_self):
weak_self = weak_self()
if weak_self:
weak_self._target_picker.show(weak_self._targets_limit - len(weak_self._shared_targets))
within_target_limit = self._targets_limit == 0 or len(self._shared_targets) < self._targets_limit
button = ui.Button(
"Add Target(s)",
width=ui.Pixel(30),
clicked_fn=partial(on_add_target, weak_self=weakref.ref(self)),
enabled=within_target_limit and self._enabled,
identifier=f"add_relationship{self._id_name}"
)
if not within_target_limit:
button.set_tooltip(
f"Targets limit of {self._targets_limit} has been reached. To add more target(s), remove current one(s) first."
)
else:
ui.StringField(name="models", read_only=True).model.set_value("Mixed")
def _set_dirty(self):
self._frame.rebuild()
|
omniverse-code/kit/exts/omni.kit.property.usd/omni/kit/property/usd/usd_property_widget.py | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
#
import asyncio
import traceback
from collections import defaultdict
from typing import Any, DefaultDict, Dict, List, Sequence, Set, Tuple
import carb
import carb.events
import carb.profiler
import omni.kit.app
import omni.kit.context_menu
import omni.ui as ui
import omni.usd
from omni.kit.window.property.templates import SimplePropertyWidget, build_frame_header
from pxr import Sdf, Tf, Trace, Usd
from .usd_model_base import UsdBase
from .usd_property_widget_builder import *
from .message_bus_events import ADDITIONAL_CHANGED_PATH_EVENT_TYPE
# Clipboard to store copied group properties
__properties_to_copy: Dict[Sdf.Path, Any] = {}
def get_group_properties_clipboard():
return __properties_to_copy
def set_group_properties_clipboard(properties_to_copy: Dict[Sdf.Path, Any]):
global __properties_to_copy
__properties_to_copy = properties_to_copy
class UsdPropertyUiEntry:
def __init__(
self,
prop_name: str,
display_group: str,
metadata,
property_type,
build_fn=None,
display_group_collapsed: bool = False,
prim_paths: List[Sdf.Path] = None,
):
"""
Constructor.
Args:
prop_name: name of the Usd Property. This is not the display name.
display_group: group of the Usd Property when displayed on UI.
metadata: metadata associated with the Usd Property.
property_type: type of the property. Either Usd.Property or Usd.Relationship.
build_fn: a custom build function to build the UI. If not None the default builder will not be used.
display_group_collapsed: if the display group should be collapsed. Group only collapses when ALL its contents request such.
prim_paths: to override what prim paths this property will be built upon. Leave it to None to use default (currently selected paths, or last selected path if multi-edit is off).
"""
self.prop_name = prop_name
self.display_group = display_group
self.display_group_collapsed = display_group_collapsed
self.metadata = metadata
self.property_type = property_type
self.prim_paths = prim_paths
self.build_fn = build_fn
def add_custom_metadata(self, key: str, value):
"""
If value is not None, add it to the custom data of the metadata using the specified key. Otherwise, remove that
key from the custom data.
Args:
key: the key that should contain the custom metadata value
value: the value that should be added to the custom metadata if not None
"""
custom_data = self.metadata.get(Sdf.PrimSpec.CustomDataKey, {})
if value is not None:
custom_data[key] = value
elif key in custom_data:
del custom_data[key]
self.metadata[Sdf.PrimSpec.CustomDataKey] = custom_data
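# Usage sketch (hypothetical key/value): this stores the value under
# metadata[Sdf.PrimSpec.CustomDataKey]["range_hint"], or removes that key again when value is None, e.g.
#   ui_prop.add_custom_metadata("range_hint", (0.0, 1.0))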
def override_display_group(self, display_group: str, collapsed: bool = False):
"""
Overrides the display group of the property. It only affects UI and DOES NOT write back DisplayGroup metadata to USD.
Args:
display_group: new display group to override to.
collapsed: if the display group should be collapsed. Group only collapses when ALL its contents request such.
"""
self.display_group = display_group
self.display_group_collapsed = collapsed
def override_display_name(self, display_name: str):
"""
Overrides the display name of the property. It only affects UI and DOES NOT write back DisplayName metadata to USD.
Args:
display_name: new display name to override to.
"""
self.metadata[Sdf.PropertySpec.DisplayNameKey] = display_name
# for backward compatibility
def __getitem__(self, key):
if key == 0:
return self.prop_name
elif key == 1:
return self.display_group
elif key == 2:
return self.metadata
return None
@property
def attr_name(self):
return self.prop_name
@attr_name.setter
def attr_name(self, value):
self.prop_name = value
def get_nested_display_groups(self):
if len(self.display_group) == 0:
return []
# Per USD documentation nested display groups are separated by colon
return self.display_group.split(":")
def __eq__(self, other):
return (
type(self) == type(other)
and self.prop_name == other.prop_name
and self.display_group == other.display_group
and self._compare_metadata(self.metadata, other.metadata)
and self.property_type == other.property_type
and self.prim_paths == other.prim_paths
)
def _compare_metadata(self, meta1, meta2) -> bool:
ignored_metadata = {"default", "colorSpace"}
for key, value in meta1.items():
if key not in ignored_metadata and (key not in meta2 or meta2[key] != value):
return False
for key, value in meta2.items():
if key not in ignored_metadata and (key not in meta1 or meta1[key] != value):
return False
return True
class UiDisplayGroup:
def __init__(self, name):
self.name = name
self.sub_groups = DefaultDict()
self.props = []
class UsdPropertiesWidget(SimplePropertyWidget):
"""
UsdPropertiesWidget provides functionality to automatically populate UsdProperties on given prim(s). The UI
and models are generated according to each UsdProperty's value type. Multi-prim editing works for Properties
shared between all selected prims if instantiated with multi_edit = True.
"""
def __init__(self, title: str, collapsed: bool, multi_edit: bool = True):
"""
Constructor.
Args:
title: title of the widget.
collapsed: whether the collapsable frame should be collapsed for this widget.
multi_edit: whether multi-editing is supported.
If False, properties will only be collected from the last selected prim.
If True, shared properties among all selected prims will be collected.
"""
super().__init__(title=title, collapsed=collapsed)
self._multi_edit = multi_edit
self._models = defaultdict(list)
self._message_bus = omni.kit.app.get_app().get_message_bus_event_stream()
self._bus_sub = None
self._listener = None
self._pending_dirty_task = None
self._pending_dirty_paths = set()
self._group_menu_entries = []
def clean(self):
"""
See PropertyWidget.clean
"""
self.reset_models()
super().clean()
def reset(self):
"""
See PropertyWidget.reset
"""
self.reset_models()
super().reset()
def reset_models(self):
# models can be shared among multiple prims. Only clean once!
unique_models = set()
if self._models is not None:
for models in self._models.values():
for model in models:
unique_models.add(model)
for model in unique_models:
model.clean()
self._models = defaultdict(list)
if self._listener:
self._listener.Revoke()
self._listener = None
self._bus_sub = None
if self._pending_dirty_task is not None:
self._pending_dirty_task.cancel()
self._pending_dirty_task = None
self._pending_dirty_paths.clear()
for entry in self._group_menu_entries:
entry.release()
self._group_menu_entries.clear()
def get_additional_kwargs(self, ui_prop: UsdPropertyUiEntry):
"""
Override this function if you want to supply additional arguments when building the label or ui widget.
"""
additional_label_kwargs = None
additional_widget_kwargs = None
return additional_label_kwargs, additional_widget_kwargs
def build_property_item(self, stage, ui_prop: UsdPropertyUiEntry, prim_paths: List[Sdf.Path]):
"""
Override this function to customize property building.
"""
# override prim paths to build if UsdPropertyUiEntry specifies one
if ui_prop.prim_paths:
prim_paths = ui_prop.prim_paths
build_fn = ui_prop.build_fn if ui_prop.build_fn else UsdPropertiesWidgetBuilder.build
additional_label_kwargs, additional_widget_kwargs = self.get_additional_kwargs(ui_prop)
models = build_fn(
stage,
ui_prop.prop_name,
ui_prop.metadata,
ui_prop.property_type,
prim_paths,
additional_label_kwargs,
additional_widget_kwargs,
)
if models:
if not isinstance(models, list):
models = [models]
for model in models:
for prim_path in prim_paths:
self._models[prim_path.AppendProperty(ui_prop.prop_name)].append(model)
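# Sketch of a custom per-property build function (hypothetical), matching the call signature used above.
# It can be supplied via UsdPropertyUiEntry(..., build_fn=my_build_fn) and should return the model(s) it
# creates (or an empty list / None when there is nothing to track):
#   def my_build_fn(stage, prop_name, metadata, property_type, prim_paths,
#                   additional_label_kwargs, additional_widget_kwargs):
#       with ui.HStack():
#           ui.Label(prop_name)
#       return []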
def build_nested_group_frames(self, stage, display_group: UiDisplayGroup):
"""
Override this function to build group frames differently.
"""
if self._multi_edit:
prim_paths = self._payload.get_paths()
else:
prim_paths = self._payload[-1:]
def build_props(props):
collapse_frame = len(props) > 0
# we need to build those properties in 2 different possible locations
for prop in props:
self.build_property_item(stage, prop, prim_paths)
collapse_frame &= prop.display_group_collapsed
return collapse_frame
def get_sub_props(group: UiDisplayGroup, sub_prop_list: list):
if len(group.sub_groups) > 0:
for name, sub_group in group.sub_groups.items():
for sub_prop in sub_group.props:
sub_prop_list.append(sub_prop)
get_sub_props(sub_group, sub_prop_list)
def build_nested(display_group: UiDisplayGroup, level: int, prefix: str):
# Only create a collapsable frame if the group is not "" (for root level group)
if len(display_group.name) > 0:
id = prefix + ":" + display_group.name
frame = ui.CollapsableFrame(
title=display_group.name,
build_header_fn=lambda collapsed, text, id=id: self._build_frame_header(collapsed, text, id),
name="subFrame",
)
else:
id = prefix
frame = ui.Frame(name="subFrame")
prop_list = []
sub_props = get_sub_props(display_group, prop_list)
with frame:
with ui.VStack(height=0, spacing=5, name="frame_v_stack"):
for name, sub_group in display_group.sub_groups.items():
build_nested(sub_group, level + 1, id)
# Only do "Extra Properties" for root level group
if len(display_group.sub_groups) > 0 and len(display_group.props) > 0 and level == 0:
extra_group_name = "Extra Properties"
extra_group_id = prefix + ":" + extra_group_name
with ui.CollapsableFrame(
title=extra_group_name,
build_header_fn=lambda collapsed, text, id=extra_group_id: self._build_frame_header(
collapsed, text, id
),
name="subFrame",
):
with ui.VStack(height=0, spacing=5, name="frame_v_stack"):
build_props(display_group.props)
self._build_header_context_menu(
group_name=extra_group_name, group_id=extra_group_id, props=sub_props
)
else:
collapse = build_props(display_group.props)
if collapse and isinstance(frame, ui.CollapsableFrame):
frame.collapsed = collapse
# if level is 0, this is the root level group, and we use the self._title for its name
self._build_header_context_menu(
group_name=display_group.name if level > 0 else self._title, group_id=id, props=sub_props
)
build_nested(display_group, 0, self._title)
# if we have subFrame we need the main frame to assume the groupFrame styling
if len(display_group.sub_groups) > 0:
# here we reach into the Parent class frame
self._collapsable_frame.name = "groupFrame"
def build_items(self):
"""
See SimplePropertyWidget.build_items
"""
self.reset()
if len(self._payload) == 0:
return
last_prim = self._get_prim(self._payload[-1])
if not last_prim:
return
stage = last_prim.GetStage()
if not stage:
return
self._listener = Tf.Notice.Register(Usd.Notice.ObjectsChanged, self._on_usd_changed, stage)
self._bus_sub = self._message_bus.create_subscription_to_pop_by_type(
ADDITIONAL_CHANGED_PATH_EVENT_TYPE, self._on_bus_event
)
shared_props = self._get_shared_properties_from_selected_prims(last_prim)
if not shared_props:
return
shared_props = self._customize_props_layout(shared_props)
grouped_props = UiDisplayGroup("")
for prop in shared_props:
nested_groups = prop.get_nested_display_groups()
sub_group = grouped_props
for sub_group_name in nested_groups:
sub_group = sub_group.sub_groups.setdefault(sub_group_name, UiDisplayGroup(sub_group_name))
sub_group.props.append(prop)
self.build_nested_group_frames(stage, grouped_props)
def _build_header_context_menu(self, group_name: str, group_id: str, props: List[UsdPropertyUiEntry] = None):
"""
Override this function to build the context menu when right click on a Collapsable group header
Args:
group_name: Display name of the group. If it's not a subgroup, it's the title of the widget.
group_id: A unique identifier for group context menu.
props: Properties under this group. It contains all properties in its subgroups as well.
"""
self._build_group_builtin_header_context_menu(group_name, group_id, props)
self._build_group_additional_header_context_menu(group_name, group_id, props)
def _build_group_builtin_header_context_menu(
self, group_name: str, group_id: str, props: List[UsdPropertyUiEntry] = None
):
from .usd_attribute_model import GfVecAttributeSingleChannelModel
prop_names: Set[str] = set()
if props:
for prop in props:
prop_names.add(prop.prop_name)
def can_copy(object):
# Only support single selection copy per OM-20206
return len(self._payload) == 1
def on_copy(object):
visited_models = set()
properties_to_copy: Dict[Sdf.Path, Any] = dict()
for models in self._models.values():
for model in models:
if model in visited_models:
continue
visited_models.add(model)
# Skip "Mixed"
if model.is_ambiguous():
continue
paths = model.get_property_paths()
if paths:
# Copy from the last path
# In theory if the value is not mixed all paths should have same value, so which one to pick doesn't matter
last_path = paths[-1]
# Only copy from the properties from this group
if props is not None and last_path.name not in prop_names:
continue
if issubclass(type(model), UsdBase):
# No need to copy single channel model. Each vector attribute also has a GfVecAttributeModel
if isinstance(model, GfVecAttributeSingleChannelModel):
continue
properties_to_copy[paths[-1]] = model.get_value()
elif isinstance(model, RelationshipEditWidget):
properties_to_copy[paths[-1]] = model.get_targets().copy()
if properties_to_copy:
set_group_properties_clipboard(properties_to_copy)
menu = {
"name": f'Copy All Property Values in "{group_name}"',
"show_fn": lambda object: True,
"enabled_fn": can_copy,
"onclick_fn": on_copy,
}
self._group_menu_entries.append(self._register_header_context_menu_entry(menu, group_id))
def on_paste(object):
properties_to_copy = get_group_properties_clipboard()
if not properties_to_copy:
return
unique_model_prim_paths: Set[Sdf.Path] = set()
for prop_path in self._models:
unique_model_prim_paths.add(prop_path.GetPrimPath())
with omni.kit.undo.group():
try:
for path, value in properties_to_copy.items():
for prim_path in unique_model_prim_paths:
# Only paste to the properties in this group
if props is not None and path.name not in prop_names:
continue
paste_to_model_path = prim_path.AppendProperty(path.name)
models = self._models.get(paste_to_model_path, [])
for model in models:
if isinstance(model, GfVecAttributeSingleChannelModel):
continue
else:
model.set_value(value)
except Exception as e:
carb.log_warn(traceback.format_exc())
menu = {
"name": f'Paste All Property Values to "{group_name}"',
"show_fn": lambda object: True,
"enabled_fn": lambda object: get_group_properties_clipboard(),
"onclick_fn": on_paste,
}
self._group_menu_entries.append(self._register_header_context_menu_entry(menu, group_id))
def can_reset(object):
visited_models = set()
for models in self._models.values():
for model in models:
if model in visited_models:
continue
visited_models.add(model)
paths = model.get_property_paths()
if paths:
last_path = paths[-1]
# Only reset from the properties from this group
if props is not None and last_path.name not in prop_names:
continue
if issubclass(type(model), UsdBase):
if model.is_different_from_default():
return True
return False
def on_reset(object):
visited_models = set()
for models in self._models.values():
for model in models:
if model in visited_models:
continue
visited_models.add(model)
paths = model.get_property_paths()
if paths:
last_path = paths[-1]
# Only reset from the properties from this group
if props is not None and last_path.name not in prop_names:
continue
if issubclass(type(model), UsdBase):
model.set_default()
menu = {
"name": f'Reset All Property Values in "{group_name}"',
"show_fn": lambda object: True,
"enabled_fn": can_reset,
"onclick_fn": on_reset,
}
self._group_menu_entries.append(self._register_header_context_menu_entry(menu, group_id))
def _build_group_additional_header_context_menu(
self, group_name: str, group_id: str, props: List[UsdPropertyUiEntry] = None
):
"""
Override this function to build the additional context menu to Kit's built-in ones when right click on a Collapsable group header
Args:
group_name: Display name of the group. If it's not a subgroup, it's the title of the widget.
group_id: A unique identifier for group context menu.
props: Properties under this group. It contains all properties in its subgroups as well.
"""
...
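# Override sketch (hypothetical menu entry), following the same pattern as the built-in entries above:
#   def _build_group_additional_header_context_menu(self, group_name, group_id, props=None):
#       menu = {"name": f'Log "{group_name}"', "show_fn": lambda object: True,
#               "onclick_fn": lambda object: carb.log_info(group_name)}
#       self._group_menu_entries.append(self._register_header_context_menu_entry(menu, group_id))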
def _register_header_context_menu_entry(self, menu: Dict, group_id: str):
"""
Registers a menu entry to Collapsable group header
Args:
menu: The menu entry to be registered.
group_id: A unique identifier for group context menu.
Return:
The subscription object of the menu entry to be kept alive during menu's life span.
"""
return omni.kit.context_menu.add_menu(menu, "group_context_menu." + group_id, "omni.kit.window.property")
def _filter_props_to_build(self, props):
"""
When deriving from UsdPropertiesWidget, override this function to filter properties to build.
Args:
props: List of Usd.Property on a selected prim.
"""
return [prop for prop in props if not prop.IsHidden()]
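# Override sketch in a subclass (hypothetical filter): keep only transform-related properties, e.g.
#   def _filter_props_to_build(self, props):
#       return [prop for prop in props if not prop.IsHidden() and prop.GetName().startswith("xformOp")]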
def _customize_props_layout(self, props):
"""
When deriving from UsdPropertiesWidget, override this function to reorder/regroup properties to build.
To reorder the properties display order, reorder entries in props list.
To override display group or name, call prop.override_display_group or prop.override_display_name respectively.
If you want to hide/add certain property, remove/add them to the list.
NOTE: All above changes won't go back to USD, they're pure UI overrides.
Args:
props: List of Tuple(property_name, property_group, metadata)
Example:
for prop in props:
# Change display group:
prop.override_display_group("New Display Group")
# Change display name (you can change other metadata, it won't be write back to USD, only affect UI):
prop.override_display_name("New Display Name")
# add additional "property" that doesn't exist.
props.append(UsdPropertyUiEntry("PlaceHolder", "Group", { Sdf.PrimSpec.TypeNameKey: "bool"}, Usd.Property))
"""
return props
@Trace.TraceFunction
def _on_usd_changed(self, notice, stage):
carb.profiler.begin(1, "UsdPropertyWidget._on_usd_changed")
try:
if stage != self._payload.get_stage():
return
if not self._collapsable_frame:
return
if len(self._payload) == 0:
return
# Widget is pending rebuild, no need to check for dirty
if self._pending_rebuild_task is not None:
return
dirty_paths = set()
for path in notice.GetResyncedPaths():
if path in self._payload:
self.request_rebuild()
return
elif path.GetPrimPath() in self._payload:
prop = stage.GetPropertyAtPath(path)
# If prop is added or removed, rebuild frame
# TODO only check against the properties this widget cares about
if (not prop.IsValid()) != (self._models.get(path) is None):
self.request_rebuild()
return
# else trigger existing model to reload the value
else:
dirty_paths.add(path)
for path in notice.GetChangedInfoOnlyPaths():
dirty_paths.add(path)
self._pending_dirty_paths.update(dirty_paths)
if self._pending_dirty_task is None:
self._pending_dirty_task = asyncio.ensure_future(self._delayed_dirty_handler())
finally:
carb.profiler.end(1)
def _on_bus_event(self, event: carb.events.IEvent):
# TODO from C++?
# stage = event.payload["stage"]
# if stage != self._payload.get_stage():
# return
if not self._collapsable_frame:
return
if len(self._payload) == 0:
return
# Widget is pending rebuild, no need to check for dirty
if self._pending_rebuild_task is not None:
return
path = event.payload["path"]
self._pending_dirty_paths.add(Sdf.Path(path))
if self._pending_dirty_task is None:
self._pending_dirty_task = asyncio.ensure_future(self._delayed_dirty_handler())
def _get_prim(self, prim_path):
if prim_path:
stage = self._payload.get_stage()
if stage:
return stage.GetPrimAtPath(prim_path)
return None
def _get_shared_properties_from_selected_prims(self, anchor_prim):
shared_props_dict = None
if self._multi_edit:
prim_paths = self._payload.get_paths()
else:
prim_paths = self._payload[-1:]
for prim_path in prim_paths:
prim = self._get_prim(prim_path)
if not prim:
continue
props = self._filter_props_to_build(prim.GetProperties())
prop_dict = {}
for prop in props:
if prop.IsHidden():
continue
prop_dict[prop.GetName()] = UsdPropertyUiEntry(
prop.GetName(), prop.GetDisplayGroup(), prop.GetAllMetadata(), type(prop)
)
if shared_props_dict is None:
shared_props_dict = prop_dict
else:
# Find intersection of the dicts
intersect_shared_props = {}
for prop_name, prop_info in shared_props_dict.items():
if prop_dict.get(prop_name) == prop_info:
intersect_shared_props[prop_name] = prop_info
if len(intersect_shared_props) == 0:
# No intersection, nothing to build
# early return
return
shared_props_dict = intersect_shared_props
shared_props = list(shared_props_dict.values())
# Sort properties according to the property order of the anchor (last selected) prim
order = anchor_prim.GetPropertyOrder()
shared_prop_order = []
shared_prop_unordered = []
for prop in shared_props:
if prop[0] in order:
shared_prop_order.append(prop[0])
else:
shared_prop_unordered.append(prop[0])
shared_prop_order.extend(shared_prop_unordered)
shared_props.sort(key=lambda x: shared_prop_order.index(x[0]))
return shared_props
def request_rebuild(self):
# If widget is going to be rebuilt, _pending_dirty_task does not need to run.
if self._pending_dirty_task is not None:
self._pending_dirty_task.cancel()
self._pending_dirty_task = None
self._pending_dirty_paths.clear()
super().request_rebuild()
async def _delayed_dirty_handler(self):
while True:
# Do not refresh UI until visible/uncollapsed
if not self._collapsed:
break
await omni.kit.app.get_app().next_update_async()
# Clear the pending dirty task BEFORE dirtying the models.
# A dirtied model may trigger additional USD notices that need to be scheduled.
self._pending_dirty_task = None
if self._pending_dirty_paths:
# Make a copy of the paths. It may change if USD edits are made during iteration
pending_dirty_paths = self._pending_dirty_paths.copy()
self._pending_dirty_paths.clear()
carb.profiler.begin(1, "UsdPropertyWidget._delayed_dirty_handler")
# Multiple paths can share the same model. Only dirty each model once!
dirtied_models = set()
for path in pending_dirty_paths:
models = self._models.get(path)
if models:
for model in models:
if model not in dirtied_models:
model._set_dirty()
dirtied_models.add(model)
carb.profiler.end(1)
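
# The _filter_props_to_build and _customize_props_layout hooks documented above are the intended customization
# points for subclasses. The sketch below is illustrative only and is not part of this module's API; the class
# name, title, and property names are hypothetical, and it relies on this module's existing imports.
class ExampleVisibilityWidget(UsdPropertiesWidget):
    def __init__(self):
        super().__init__(title="Example Visibility", collapsed=False)

    def _filter_props_to_build(self, props):
        # Only build the visibility-related properties, skipping hidden ones.
        wanted = {"visibility", "purpose"}
        return [prop for prop in props if prop.GetName() in wanted and not prop.IsHidden()]

    def _customize_props_layout(self, props):
        # Pure UI overrides; nothing here is written back to USD.
        for prop in props:
            prop.override_display_group("Example Visibility")
        return props
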
class SchemaPropertiesWidget(UsdPropertiesWidget):
"""
SchemaPropertiesWidget filters properties and only shows the ones from a given IsA schema or applied API schema.
"""
def __init__(self, title: str, schema, include_inherited: bool):
"""
Constructor.
Args:
title (str): Title of the widgets on the Collapsable Frame.
schema: The USD IsA schema or applied API schema to filter properties.
include_inherited (bool): Whether the filter should include inherited properties.
"""
super().__init__(title, collapsed=False)
self._title = title
self._schema = schema
self._include_inherited = include_inherited
def on_new_payload(self, payload):
"""
See PropertyWidget.on_new_payload
"""
if not super().on_new_payload(payload):
return False
if not self._payload or len(self._payload) == 0:
return False
for prim_path in self._payload:
prim = self._get_prim(prim_path)
if not prim:
return False
is_api_schema = Usd.SchemaRegistry().IsAppliedAPISchema(self._schema)
if not (is_api_schema and prim.HasAPI(self._schema) or not is_api_schema and prim.IsA(self._schema)):
return False
return True
def _filter_props_to_build(self, props):
"""
See UsdPropertiesWidget._filter_props_to_build
"""
if len(props) == 0:
return props
if Usd.SchemaRegistry().IsMultipleApplyAPISchema(self._schema):
prim = props[0].GetPrim()
schema_instances = set()
schema_type_name = Usd.SchemaRegistry().GetSchemaTypeName(self._schema)
for schema in prim.GetAppliedSchemas():
if schema.startswith(schema_type_name):
schema_instances.add(schema[len(schema_type_name) + 1 :])
filtered_props = []
api_path_func_name = f"Is{schema_type_name}Path"
api_path_func = getattr(self._schema, api_path_func_name)
# self._schema.GetSchemaAttributeNames caches the query result in a static variable, and any new instance
# token passed into it won't change the cached property names. This can potentially cause problems, as other
# code calling the same function with a different instance name will get the wrong result.
#
# There is another function, IsSchemaPropertyBaseName, but it is not implemented on all applied schemas
# (not implemented in a few PhysicsSchema classes, for example).
#
# If include_inherited is True, it returns SchemaTypeName:BaseName for properties; otherwise it only returns
# BaseName.
schema_attr_names = self._schema.GetSchemaAttributeNames(self._include_inherited, "")
for prop in props:
if prop.IsHidden():
continue
prop_path = prop.GetPath().pathString
for instance_name in schema_instances:
instance_seg = prop_path.find(":" + instance_name + ":")
if instance_seg != -1:
api_path = prop_path[0 : instance_seg + 1 + len(instance_name)]
if api_path_func(api_path):
base_name = prop_path[instance_seg + 1 + len(instance_name) + 1 :]
if base_name in schema_attr_names:
filtered_props.append(prop)
break
return filtered_props
else:
schema_attr_names = self._schema.GetSchemaAttributeNames(self._include_inherited)
return [prop for prop in props if prop.GetName() in schema_attr_names]
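
# A SchemaPropertiesWidget is normally instantiated per schema and registered with the property window by the
# extension that owns it. The sketch below is an assumption about typical usage (the "prim" scheme, the widget
# name, and the UsdGeom.Camera schema are illustrative) and is not invoked by this module.
def _example_register_camera_widget():
    import omni.kit.window.property as property_window_ext
    from pxr import UsdGeom

    window = property_window_ext.get_window()
    if window:
        # Show only UsdGeom.Camera properties in a "Camera" collapsable frame for prim selections.
        window.register_widget("prim", "example_camera", SchemaPropertiesWidget("Camera", UsdGeom.Camera, include_inherited=False))
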
class MultiSchemaPropertiesWidget(UsdPropertiesWidget):
"""
MultiSchemaPropertiesWidget filters properties and only shows the ones from a given IsA schema or a list of schema subclasses.
"""
__known_api_schemas = set()
def __init__(
self,
title: str,
schema,
schema_subclasses: list,
include_list: list = [],
exclude_list: list = [],
api_schemas: Sequence[str] = None,
group_api_schemas: bool = False,
):
"""
Constructor.
Args:
title (str): Title of the widgets on the Collapsable Frame.
schema: The USD IsA schema or applied API schema to filter properties.
schema_subclasses (list): list of subclasses
include_list (list): list of additional schema attribute names to add
exclude_list (list): list of schema attribute names to remove
api_schemas (sequence): a sequence of AppliedAPI schema names that this widget handles
group_api_schemas (bool): whether to create default groupings for any AppliedSchemas on the Usd.Prim
"""
super().__init__(title=title, collapsed=False)
self._title = title
self._schema = schema
# Build the base set of schema attribute names from the schema and its subclasses
self._schema_attr_base = schema.GetSchemaAttributeNames(False)
for subclass in schema_subclasses:
self._schema_attr_base += subclass.GetSchemaAttributeNames(False)
self._schema_attr_base += include_list
self._schema_attr_base = set(self._schema_attr_base) - set(exclude_list)
# Setup the defaults for handling applied API schemas
self._applied_schemas = {}
self._schema_attr_names = None
self._group_api_schemas = group_api_schemas
# Save any custom Applied Schemas and mark them to be ignored when building default widget
self._custom_api_schemas = api_schemas
if self._custom_api_schemas:
MultiSchemaPropertiesWidget.__known_api_schemas.update(self._custom_api_schemas)
def __del__(self):
# Mark any custom Applied Schemas to start being handled by the default widget
if self._custom_api_schemas:
MultiSchemaPropertiesWidget.__known_api_schemas.difference_update(self._custom_api_schemas)
def clean(self):
"""
See PropertyWidget.clean
"""
self._applied_schemas = {}
self._schema_attr_names = None
super().clean()
def on_new_payload(self, payload):
"""
See PropertyWidget.on_new_payload
"""
if not super().on_new_payload(payload):
return False
if not self._payload or len(self._payload) == 0:
return False
used = []
schema_reg = Usd.SchemaRegistry()
self._applied_schemas = {}
# Build out our _schema_attr_names variable to include properties from the base schema and any applied schemas
self._schema_attr_names = self._schema_attr_base
for prim_path in self._payload:
prim = self._get_prim(prim_path)
if not prim or not prim.IsA(self._schema):
return False
# If the base-schema has requested auto-grouping of all Applied Schemas, handle that grouping now
if self._group_api_schemas:
# XXX: Should this be delayed until _customize_props_layout ?
for api_schema in prim.GetAppliedSchemas():
# Ignore any API schemas that are already registered as custom widgets
if api_schema in MultiSchemaPropertiesWidget.__known_api_schemas:
continue
# Skip over any API schemas that USD doesn't actually know about
prim_def = schema_reg.FindAppliedAPIPrimDefinition(api_schema)
if not prim_def:
continue
api_prop_names = prim_def.GetPropertyNames()
self._schema_attr_names = self._schema_attr_names.union(api_prop_names)
for api_prop in api_prop_names:
display_group = prim_def.GetPropertyMetadata(api_prop, "displayGroup")
prop_grouping = self._applied_schemas.setdefault(api_schema, {}).setdefault(display_group, [])
prop_grouping.append((api_prop, prim_def.GetPropertyMetadata(api_prop, "displayName")))
used += self._filter_props_to_build(prim.GetProperties())
return used
def _filter_props_to_build(self, props):
"""
See UsdPropertiesWidget._filter_props_to_build
"""
return [prop for prop in props if prop.GetName() in self._schema_attr_names and not prop.IsHidden()]
def _customize_props_layout(self, attrs):
# If no applied schemas, just use the base class' layout.
if not self._applied_schemas:
return super()._customize_props_layout(attrs)
from omni.kit.property.usd.custom_layout_helper import (
CustomLayoutFrame,
CustomLayoutGroup,
CustomLayoutProperty,
)
# We can't really escape the parent group, so the default/base properties and all applied schemas end up under a common group
frame = CustomLayoutFrame(hide_extra=False)
with frame:
# Add all base properties at the top
base_attrs = [attr for attr in attrs if attr.prop_name in self._schema_attr_base]
if base_attrs:
with CustomLayoutGroup(self._title):
for attr in base_attrs:
CustomLayoutProperty(attr.prop_name, attr.display_group)
attr.override_display_group(self._title)
# Now create a master-group for each applied schema, and possibly sub-groups for its properties
# Here's where we may want to actually escape the parent and create a totally new group
with frame:
for api_schema, api_schema_groups in self._applied_schemas.items():
with CustomLayoutGroup(api_schema):
for prop_group, props in api_schema_groups.items():
with CustomLayoutGroup(prop_group):
for prop in props:
CustomLayoutProperty(*prop)
return frame.apply(attrs)
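
# MultiSchemaPropertiesWidget is usually constructed with a base schema plus the subclasses whose attributes
# should be folded into the same frame. The schemas, names, and include/exclude lists below are illustrative
# assumptions only; they are not part of this module's behaviour.
def _example_build_geometry_widget():
    from pxr import UsdGeom

    return MultiSchemaPropertiesWidget(
        "Example Geometry",
        UsdGeom.Imageable,
        [UsdGeom.Gprim, UsdGeom.PointInstancer],
        include_list=[],
        exclude_list=["proxyPrim"],
        group_api_schemas=True,
    )
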
class RawUsdPropertiesWidget(UsdPropertiesWidget):
MULTI_SELECTION_LIMIT_SETTING_PATH = "/persistent/exts/omni.kit.property.usd/raw_widget_multi_selection_limit"
MULTI_SELECTION_LIMIT_DO_NOT_ASK_SETTING_PATH = (
"/exts/omni.kit.property.usd/multi_selection_limit_do_not_ask" # session only, not persistent!
)
def __init__(self, title: str, collapsed: bool, multi_edit: bool = True):
super().__init__(title=title, collapsed=collapsed, multi_edit=multi_edit)
self._settings = carb.settings.get_settings()
self._settings.set_default(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_DO_NOT_ASK_SETTING_PATH, False)
self._skip_multi_selection_protection = False
def on_new_payload(self, payload):
if not super().on_new_payload(payload):
return False
for prim_path in self._payload:
if not prim_path.IsPrimPath():
return False
self._skip_multi_selection_protection = False
return bool(self._payload and len(self._payload) > 0)
def build_impl(self):
# rebuild frame when frame is opened
super().build_impl()
self._collapsable_frame.set_collapsed_changed_fn(self._on_collapsed_changed)
def build_items(self):
# only show raw items if frame is open
if self._collapsable_frame and not self._collapsable_frame.collapsed:
if not self._multi_selection_protected():
super().build_items()
def _on_collapsed_changed(self, collapsed):
if not collapsed:
self.request_rebuild()
def _multi_selection_protected(self):
if self._no_multi_selection_protection_this_session():
return False
def show_all(do_not_ask: bool):
self._settings.set(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_DO_NOT_ASK_SETTING_PATH, do_not_ask)
self._skip_multi_selection_protection = True
self.request_rebuild()
multi_select_limit = self._settings.get(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_SETTING_PATH)
if multi_select_limit and len(self._payload) > multi_select_limit and not self._skip_multi_selection_protection:
ui.Separator()
ui.Label(
f"You have selected {len(self._payload)} Prims, to preserve fast performance the Raw Usd Properties Widget is not showing above the current limit of {multi_select_limit} Prims. Press the button below to show it anyway but expect potential performance penalty.",
width=omni.ui.Percent(100),
alignment=ui.Alignment.CENTER,
name="label",
word_wrap=True,
)
button = ui.Button("Skip Multi Selection Protection")
with ui.HStack(width=0):
checkbox = ui.CheckBox()
ui.Spacer(width=5)
ui.Label("Do not ask again for current session.", name="label")
button.set_clicked_fn(lambda: show_all(checkbox.model.get_value_as_bool()))
return True
return False
def _no_multi_selection_protection_this_session(self):
return self._settings.get(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_DO_NOT_ASK_SETTING_PATH)
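
# The multi-selection protection above is driven entirely by carb.settings. A minimal sketch of tuning it at
# runtime follows (the value 20 is an arbitrary illustration); it is not executed by this module.
def _example_tune_multi_selection_limit():
    settings = carb.settings.get_settings()
    # Hide the raw widget once more than 20 prims are selected; setting 0 disables the protection.
    settings.set(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_SETTING_PATH, 20)
    # Suppress the "show anyway" prompt for the rest of the session.
    settings.set(RawUsdPropertiesWidget.MULTI_SELECTION_LIMIT_DO_NOT_ASK_SETTING_PATH, True)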
|
omniverse-code/kit/exts/omni.kit.property.usd/omni/kit/property/usd/property_preferences_page.py | import carb.settings
import omni.ui as ui
from omni.kit.window.preferences import PreferenceBuilder, SettingType, PERSISTENT_SETTINGS_PREFIX
class PropertyUsdPreferences(PreferenceBuilder):
def __init__(self):
super().__init__("Property Widgets")
def build(self):
with ui.VStack(height=0):
with self.add_frame("Property Window"):
with ui.VStack():
w = self.create_setting_widget(
"Large selections threshold. This prevents slowdown/stalls on large selections, after this number of prims is selected most of the property window will be hidden.\n\nSet to zero to disable this feature\n\n",
PERSISTENT_SETTINGS_PREFIX + "/exts/omni.kit.property.usd/large_selection",
SettingType.INT,
height=20
)
w.identifier = "large_selection"
w = self.create_setting_widget(
"Raw Usd Properties Widget multi-selection limit. This prevents slowdown/stalls on large selections, after this number of prims is selected content of Raw Usd Properties Widget will be hidden.\n\nSet to zero to disable this feature\n\n",
PERSISTENT_SETTINGS_PREFIX + "/exts/omni.kit.property.usd/raw_widget_multi_selection_limit",
SettingType.INT,
height=20
)
w.identifier = "raw_widget_multi_selection_limit"
|